In this chapter, you will learn about
• Identity and access management
• Security controls
• Privacy by design
• Integrating privacy into business processes
• Data retention and destruction
This chapter covers Certified Information Privacy Manager job practice IV, “Privacy Operational Lifecycle: Protect.” This domain represents approximately 20 percent of the CIPM examination.
The protection of personal information is foundational in any privacy program. Customers, employees, and constituents expect organizations to protect their information from compromise, damage, loss, and theft. Although information protection practices are often the role of the cybersecurity leader, privacy leaders also need to be familiar with these practices because they are fundamental to a privacy program.
Information privacy is wholly dependent upon cybersecurity for the protection of personal information. Cybersecurity is not generally performed by privacy managers, but they should be familiar with numerous information security practices that support their programs.
At the highest level, information security is governed through an information security management system (ISMS), a set of processes that provides management with governance capabilities to manage the entire information security program.
Identity and access management in an organization comprises a collection of activities concerned with controlling and monitoring individuals’ access to information systems containing sensitive and personal information. Foundational activities include the following:
• Management of an accurate inventory of workers in the organization, whether full-time employees, part-time employees, temporary workers, contractors, consultants, or employees of other organizations performing services requiring access to networks, systems, or data
• Management of all of these workers’ access rights into networks, systems, data, applications, and places where business operations take place
Identity and access management is getting more difficult. As organizations shift from on-premises to cloud-based computing, the traditional fallback controls of building access and network firewalls are no longer relevant. Only identity and access management processes are available to distinguish persons and devices authorized to access systems and data from those who are not.
Part of the duality of privacy is security. Increasingly, identity and access management is becoming central to security and, therefore, to privacy as well.
Access controls are used to determine whether and how subjects (usually persons, but also running programs and computers) are able to access objects (usually systems and/or data). Logical access controls work two ways:
• Subject access A logical access control uses some means to determine the identity of the subject requesting access. Once the subject’s identity is known and verified beyond a reasonable doubt, the access control performs a function to determine whether the subject should be allowed to access the object. If the access is permitted, the subject can proceed; if the access is denied, the subject cannot proceed. An example of this type of access control is an application that first authenticates a user by requiring a user ID and password before enabling access to the application.
• Service access A logical access control is used to control the types of messages that are allowed to pass through a control point. The logical access control is designed to permit or deny messages of specific types (and may possibly permit or deny based upon origin and destination) to pass. Examples of this type of access control include a firewall, a screening router, an intrusion prevention system (IPS), a web content filter, and a cloud access security broker (CASB) that makes pass/block decisions based upon the type of traffic, its content, its origin, and its destination.
These two types of access are like a concert hall with a parking garage. The parking garage (the service access) permits cars, trucks, and motorcycles to enter but denies oversized vehicles from entering. Upstairs at the concert box office (the subject access), persons are admitted with photo identification if their names match those on a list of prepaid attendees. Further, certain persons are granted “backstage access” if they possess the required credentials and are not carrying dangerous objects such as weapons.
Access Control Concepts In discussions about access control, security and privacy professionals often use terms that are not used in other disciplines, including these:
• Subject, object In access control situations, a subject is usually a person, but it could also be a running program, a device, or a computer. In typical security parlance, a subject is someone (or some thing) that wants to access something. An object (which could be a computer, an application, a database, a file, a record, or another resource) is the thing that the subject wants to access.
• Fail open, fail closed These terms refer to the behaviors of controls when they experience a failure of some kind. For instance, if power is removed from a keycard-based building access control system, will all doors be locked or unlocked? The term fail closed means that all accesses will be denied if the access control system fails; the term fail open means that all accesses will be permitted upon its failure. Generally, security and privacy professionals prefer that access control systems fail closed because it is safer to admit no one than to admit everyone. But there will be exceptions now and then where fail open might be better—for example, building access control systems may need to fail open to facilitate the emergency evacuation of personnel or entrance of emergency services personnel. Fail open and fail closed can also apply to a manual control. For instance, if a building entrance is manned by a security guard who leaves her post, the control would fail closed if the guard is responsible for unlocking the entrance door to admit personnel. It would fail open if the guard merely observes people coming and going.
• Least privilege An individual user should have the lowest (or least amount of) privilege possible that will still enable him or her to perform required tasks.
• Segregation of duties One individual should not have combinations of privileges that would permit him or her to conduct high-value operations alone. The classic example is a business accounting department, where the functions of creating a payee, requesting a payment, approving a payment, and making a payment should rest with two or more separate individuals to prevent any one person from being able to embezzle funds from the organization without notice. In the context of information technology, functions such as requesting user accounts and provisioning user accounts should reside with two different persons so that one individual could not request and provision user accounts alone.
• Split custody This is the concept of splitting knowledge of a specific object or task between two or more persons. One example is splitting the password for a critical encryption key between two parties: one person has the first half, and the other has the second half. Similarly, the combination to a bank vault could be split so that two persons have the first half of the combination and two others have the second half. In some industries, this practice is known as dual control.
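Split custody can be illustrated with a simple XOR-based secret-splitting sketch in Python. The function names are illustrative, and a production implementation would use a vetted secret-sharing library rather than hand-rolled code:

```python
import secrets

def split_secret(secret: bytes):
    """Split a secret into two shares; neither share alone reveals anything."""
    # A random one-time pad the same length as the secret becomes share 1;
    # XORing the pad with the secret produces share 2.
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine_shares(share1: bytes, share2: bytes) -> bytes:
    """Recombine the two shares to recover the original secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Only when both custodians present their shares can the original value (for example, the password to a critical encryption key) be reconstructed.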
Access Control Threats Because access controls are often the only means of protection between protected assets and users, they are often vigorously attacked. Indeed, the majority of attacks against computers and networks containing valuable assets are against access controls in attempts to trick, defeat, or bypass them. A threat represents the intent and ability to do harm to an asset. In the context of privacy, threats represent the desire for an adversary to access personal information to steal it, expose it, corrupt it, or destroy it.
Threats against access controls include social engineering, malware, eavesdropping, logic bombs, back doors, and vulnerability scanning.
Social Engineering Is the Preferred Initial Attack Vector
Research and numerous surveys reveal that more than 90 percent of successful cyberattacks begin with social engineering—when personnel in an organization are tricked into performing actions that enable an adversary to attack the organization successfully. The most common form of social engineering is phishing, but several other techniques are used as well. Attacks are almost always aided by an initial social engineering attack that gives the adversary the beachhead needed to break into the environment.
Access Control Vulnerabilities Vulnerabilities are the weaknesses present in a system that enable a threat to be more easily carried out or to have greater impact. Vulnerabilities alone do not bring about actual harm. Instead, threats and vulnerabilities work together. Most often, a threat exploits a vulnerability, because it is easier to attack a system at its weakest point. Following are some common vulnerabilities:
• Unpatched systems Security patches are designed to remove specific vulnerabilities. A system that is not patched still has vulnerabilities, some of which are easily exploited. Attackers can easily enter and take over systems that lack important security patches.
• Default system settings Default settings often include unnecessary services that increase the chances that an attacker can find a way to break into a system. The practice of system hardening removes all unnecessary services and changes system security configurations to make the system as secure as possible.
• Default passwords Some systems are shipped with default administrative passwords that make it easy for a new customer to configure the system. Often, the organization fails to change these default passwords. Hackers have access to extensive lists of default passwords for practically every kind of computer and device that can be connected to a network.
• Incorrect permissions settings If the permissions for access to files, directories, databases, application servers, or software programs are incorrectly set, this could permit access—and even modification or damage—by persons who should not have access.
• Vulnerabilities in utilities and applications System utilities, tools, and applications that are not a part of the base operating system may have exploitable weaknesses that could enable an attacker to compromise a system successfully.
• Faulty application logic Software applications—especially those that are accessible via the Internet—that contain inadequate session management, resource management, and input testing controls can potentially permit an intruder to take over a system and steal or damage information.
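As an illustration of faulty application logic, the classic case is inadequate input handling. The following sketch (table and function names are hypothetical) contrasts a vulnerable query built by string interpolation with a corrected parameterized query:

```python
import sqlite3

def lookup_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input such as "' OR '1'='1" returns every row in the table.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '%s'" % username).fetchall()

def lookup_user_safe(conn, username):
    # Corrected: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()
```

The same principle — validate and parameterize all externally supplied input — applies to session identifiers and resource references as well.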
Remote access is defined as the means of providing remote connectivity to a corporate LAN through a logical data link. Remote access is provided by many organizations so that employees who are temporarily or permanently working offsite can access internal LAN-based resources from their remote locations.
Remote access was initially provided using dial-up modems that included authentication. Although remote dial-up is still provided in some instances, most remote access is provided over the Internet and typically uses an encrypted tunnel, or virtual private network (VPN), to protect transmissions from any eavesdroppers. VPNs are so prevalent in remote access technology that the terms VPN and remote access have become synonymous. Remote access architectures are depicted in Figure 4-1.
Figure 4-1 Remote access architectures
Two security controls are essential for remote access:
• Authentication It is necessary to know who is requesting access to the corporate LAN, and with what device. Authentication may consist of the same user ID and password that personnel use when working onsite, or multifactor authentication may be required. Authentication may also include a digital certificate or other means for authenticating the device, thereby preventing remote access from assets not owned by the organization.
• Encryption Many onsite network applications do not encrypt sensitive traffic because it is all contained within the physically and logically protected corporate LAN. However, because remote access provides the same function as the corporate LAN, and because the applications themselves sometimes do not provide encryption, the remote access service itself usually provides encryption. Encryption may use Transport Layer Security (TLS; the successor to Secure Sockets Layer, or SSL), Internet Protocol Security (IPsec), Layer 2 Tunneling Protocol (L2TP), or Point-to-Point Tunneling Protocol (PPTP), although PPTP is now considered insecure.
These controls are needed because they are substitutes (or compensating controls) for the physical access controls that are usually present to control which personnel may enter the building to use the onsite corporate LAN. When personnel are onsite, their identities are confirmed through keycards or other physical access controls. Because the organization cannot “see” offsite personnel who gain remote access, the authentication used is the next best thing.
The migration of corporate resources from internal networks to cloud-based networks is changing the notion of remote access. Organizations are incorporating multifactor authentication for access to the organization’s cloud-based resources, regardless of the users’ location—whether they are on a corporate LAN, at home, in the field, or traveling.
The New Remote Access Paradigm
As organizations migrate their business applications to colocation centers and XaaS providers, and after the last internal resource is moved to the cloud, what is the point of remote access? Remote access to what?
If we think about this in terms of VPNs and the protection afforded through encryption, VPNs still make good business sense for protecting network traffic from potential eavesdroppers (whether the human or malware variety). For this reason, it is preferable to speak of “VPN” rather than “remote access.”
Organizations still need to address several subtopics when considering their VPN architectures in light of cloud migration, such as split tunneling and Internet backhauling, and whether VPN should always automatically activate on workstations away from internal corporate networks.
Identification, Authentication, and Authorization Access to computing resources is protected by mechanisms that ensure that only authorized subjects are permitted to access protected information. Generally, these mechanisms first identify who (or what) wants to access the resource, and then they determine whether the subject is permitted to access the resource and either grant or deny the access.
Several terms, including identification, authentication, and authorization, are used to describe various activities and are explained here.
Identification Identification is the act of asserting an identity without providing any proof of it. This is analogous to one person walking up to another and saying, “Hello, my name is ______.” Because it requires no proof, identification is not usually used alone to protect high-value assets or functions.
Identification is often used by web sites to remember someone’s profile or preferences. For example, a bank’s web application may use a cookie to store the name of the city in which the customer lives. When the customer returns to the web site, the application will display some photo or news that is related to the customer’s location. But when the customer is ready to perform online banking, this simple identification is insufficient to prove the customer’s actual identity.
Identification is just the first step in the process of gaining entry to a system or application. The next steps are authentication and authorization.
Authentication Authentication is similar to identification, where a subject asserts an identity. In identification, no proof of identity is requested or provided, but with authentication, some form of proof of the subject’s identity is required. That proof is usually provided in the form of a secret password or some means of higher sophistication and security, such as a token, biometric, smart card, or digital certificate. Each of these is discussed later in this section.
When the user presents a user ID plus a second factor, whether a password, token, biometric, or something else, the system will determine whether the login request will be granted or denied. Regardless of the outcome, the system will record the login event in an event log.
Authorization After a subject has been authenticated, the next step is authorization. This is the process by which the system determines whether the subject should be permitted to access the requested resource in the requested manner. To determine whether the subject is permitted to access the resource, the system will perform some type of lookup or other reference to a business rule. For instance, an access control table associated with the requested resource may have a list of users who are permitted to access it. The system will read through the table, and if the subject’s identity is included (and if the type of requested access matches the type permitted in the table), the system will permit the subject to access the resource. If the user’s identity is not listed in the table, he or she will be denied access. Whether the access attempt is successful or not, a record of the attempt (and its disposition) is recorded in an event log.
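The table-lookup logic described above can be sketched as follows. The access control table, its contents, and the function names are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access")

# Hypothetical access control table: resource -> {user: permitted access types}
ACCESS_TABLE = {
    "payroll_db": {"alice": {"read"}, "bob": {"read", "write"}},
}

def authorize(user: str, resource: str, access_type: str) -> bool:
    """Look up the business rule and record the attempt either way."""
    permitted = access_type in ACCESS_TABLE.get(resource, {}).get(user, set())
    # Whether granted or denied, the attempt and its disposition are logged
    log.info("user=%s resource=%s type=%s result=%s",
             user, resource, access_type,
             "granted" if permitted else "denied")
    return permitted
```

Note that an unknown user or resource simply falls through to a denial — the lookup fails closed.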
Typically, permissions are centrally stored by the operating system and administered by system administrators, although some environments enable the owners of resources to administer user access.
User IDs and Passwords User IDs and passwords are the most common means for users to authenticate to a resource—whether a network, a server, or an application. In most environments, a user’s ID will not be a secret; in fact, user IDs may be a derivation of the user’s name or an identification number. Some of the common forms of a user ID include combinations of the user’s first and last name or an employee ID number.
Whereas a user ID is not necessarily kept confidential, a password always is. A password (or its longer form, a passphrase) is a secret combination of letters, numbers, and other symbols known only to the actual user. End users are typically given the following advice about passwords:
• Select a strong password or passphrase that is easy to remember but difficult for others to guess.
• Passwords must never be shared or used by others.
• Passwords must never be sent in plaintext over any network (for example, in e-mail or text messages).
• Passwords should be stored in a secure password vault.
• Each system should have a unique password.
• Passwords used for personal accounts should not be used for any work-related account.
User Account Provisioning When a user is issued a new computer or system user account, he or she needs to know the password to access the resource. Generating and transmitting an initial password to a user can be tricky, because passwords should never be sent in an e-mail message. A sound practice for initial user account provisioning would involve the use of a limited time, one-time password that would be securely provided to the user; upon first use, the system would require that the user change the password to a value that no one else would know.
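A minimal sketch of this provisioning flow follows. The one-hour validity window and the function names are illustrative assumptions:

```python
import secrets
import time

OTP_TTL_SECONDS = 3600  # assumed one-hour validity window for the initial password

def issue_initial_password():
    """Generate a limited-time, one-time initial password record."""
    return {"password": secrets.token_urlsafe(12),
            "issued_at": time.time(),
            "must_change": True,   # force a password change on first use
            "used": False}

def validate_initial_login(record, supplied):
    """Accept the initial password only once, and only within its window."""
    if record["used"] or time.time() - record["issued_at"] > OTP_TTL_SECONDS:
        return False
    if not secrets.compare_digest(record["password"], supplied):
        return False
    record["used"] = True  # one-time: a replayed password is rejected
    return True
```

After a successful first login, the system would require the user to set a new password known to no one else.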
Risks with User IDs and Passwords Password-based authentication is among the oldest methods used in information systems. Although password authentication is still quite prevalent, a number of risks are associated with its use, owing to the many ways in which passwords can be discovered and reused by others, including the following:
• Finding a password written down
• Finding a stored password
• Exploiting a browser’s password store
These follow the same theme: user IDs and passwords are static and, if discovered, can be used by others. For this reason, other, more secure means of authentication have been developed, including biometrics, tokens, smart cards, and certificates, which can be combined with passwords to form multifactor authentication.
Multifactor Authentication Multifactor authentication (MFA) is so-called because it relies not only on “something you know” (namely, a user ID and password), but also “something you have” (such as a key card or smart card) and/or “something you are” (such as a fingerprint). MFA requires a user ID and password, but the user must also possess something or use a biometric to form a part of the authentication. Several technologies are used for MFA, including tokens, soft tokens, SMS tokens, smart cards, digital certificates, and biometrics.
Users of MFA systems need to be trained on their proper use. For example, they need to be told not to store their tokens or smart cards with their computers and to keep their smartphones or mobile devices locked except when in use.
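Soft tokens commonly implement the time-based one-time password (TOTP) algorithm standardized in RFC 6238, which derives a short code from a shared secret and the current time. A compact sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds, a captured value is useless to an attacker moments later — which is precisely the weakness of static passwords that MFA addresses.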
Biometrics A number of different biometrics authentication technologies have a common theme: all use some way of measuring a unique physical characteristic of the person who is authenticating. Some of the technologies in use are
• Voice recognition
• Iris scan
• Facial scan
Reduced Sign-On In a reduced sign-on environment, several applications use a centralized directory service such as LDAP (Lightweight Directory Access Protocol), RADIUS (Remote Authentication Dial-in User Service), Diameter protocol, or Microsoft Active Directory for authentication. The term comes from the result of changing each application’s authentication from stand-alone to centralized and the resulting reduction in the number of user ID–password pairs that each user is required to remember.
Single Sign-On In a single sign-on (SSO) interconnected environment, applications are logically connected to a centralized authentication server that is aware of the logged-in/logged-out status of each user. At the start of the workday, when a user logs in to an application, he or she will be prompted for login credentials. When the user logs in to another application, the application will consult the central authentication server to determine whether the user is logged in, and, if so, the second application will not require the user’s credentials. The term refers to the fact that a user needs to log in only one time, even in a multiple-application environment.
SSO is more complicated than reduced sign-on. In an SSO environment, each participating application must be able to communicate with a centralized authentication controller and act accordingly by requiring a new user to log in, or not.
Access Control Lists Access control lists (ACLs) are a common means to administer access controls. ACLs are used by many operating systems and other devices such as routers as a simple means to control access to a resource such as a server or a network.
On many devices and systems, the list of packet-filtering rules (which give a router many of the characteristics of a firewall) is known as an ACL. In the Unix operating system, for instance, ACLs can control which users are permitted to access files and directories and run tools and programs. ACLs in these and other contexts are often simple text files that can be edited with a text editor.
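The packet-filtering form of an ACL can be made concrete with a small sketch. The rules are evaluated top-down, first match wins, with an implicit "deny all" at the end; the rule set and field names here are illustrative, not any particular vendor's syntax:

```python
import ipaddress

# Hypothetical packet-filtering ACL: permit HTTPS from anywhere,
# permit SSH only from the internal 10.0.0.0/8 network.
ACL = [
    {"action": "permit", "proto": "tcp", "dst_port": 443},
    {"action": "permit", "proto": "tcp", "dst_port": 22, "src": "10.0.0.0/8"},
    # implicit deny-all follows the last rule
]

def evaluate(packet):
    """Return the action of the first matching rule, or deny by default."""
    for rule in ACL:
        if rule.get("proto") and rule["proto"] != packet["proto"]:
            continue
        if rule.get("dst_port") and rule["dst_port"] != packet["dst_port"]:
            continue
        if rule.get("src") and ipaddress.ip_address(
                packet["src"]) not in ipaddress.ip_network(rule["src"]):
            continue
        return rule["action"]
    return "deny"  # fail closed: anything not explicitly permitted is denied
```

The implicit final deny is the fail-closed behavior discussed earlier in this chapter.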
Access Control Processes Sound business processes must be in place for access controls to manage user access effectively and protect critical systems and sensitive information. These business processes should be documented and detailed business records kept that document all related activities. Formal roles and responsibilities must be defined so that only authorized persons may perform various functions.
Access control processes generally fall into two categories: the processing of access requests and periodic access reviews.
Access Requests Formal access request processes should be used to control the provisioning of user access. Using the principle of separation of duties, the actions of requesting access, approving access, and providing access should be performed by three different individuals.
Each step in an access request process should be recorded. These business records permit audits of access request processes to confirm that only properly issued and processed access requests result in the granting of user access.
Access Reviews The rate of change in organizations creates the need for periodic reviews of access rights to ensure that all subjects that have access to systems and sensitive data still require that access. Several types of reviews ensure that provisioning and deprovisioning processes are effective, accurate, and timely.
Reviews are warranted even in organizations with automation and workflow in their identity and access management processes. Reviews are even more critical in organizations that use manual processes. The objectives of access reviews are to ensure that access management processes remain effective and accurate and that access rights remain valid and justified.
Several types of access reviews are performed:
• Access certifications System owners review access rights for subjects and confirm that each subject still requires access rights. Any subjects that no longer require access rights are flagged and their accesses are removed.
• Provisioning certifications Access management personnel examine subjects’ access rights and confirm that there is valid evidence of properly executed requests, reviews, approvals, and execution for each.
• Deprovisioning certifications Security personnel obtain lists of terminated personnel from human resources and confirm that deprovisioning was executed properly and in a timely manner for each.
• Activity reviews Security personnel examine systems to determine whether subjects have logged into them recently. Inactive user accounts can be flagged for removal if subjects have not logged into them for extended periods of time, which indicates that they probably do not require access.
• Segregation of duties (SOD) matrix reviews Periodic reviews of SOD matrices help to determine whether all disallowed combinations of access are represented in SOD matrices. This is a review of the roles themselves, not the persons who have the roles.
• Segregation of duties reviews Reviews of subject accesses to detect SOD exceptions confirm whether any persons have access rights that violate the segregation of duties policy.
• Temporary worker reviews In organizations lacking centralized management of temporary workers, additional reviews will be needed to ensure that no active user accounts exist for temporary workers who are no longer active in the organization.
• Privileged account reviews All of the reviews listed here should be performed at a higher frequency for privileged accounts. This is warranted because of the additional powers associated with privileged accounts and the greater damage that may result in cases of abuse and compromise.
• Service account reviews These reviews determine whether service accounts are still being used, where and how they are used, and who manages them to ensure that there is no unauthorized use of service accounts.
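The activity review in the list above can be sketched as a simple inactivity check. The 90-day threshold is an assumed policy value, and the function name is illustrative:

```python
from datetime import datetime, timedelta

INACTIVITY_THRESHOLD = timedelta(days=90)  # assumed review policy

def flag_inactive(accounts, now=None):
    """Return the accounts to flag for removal.

    accounts: dict of username -> datetime of last login (None = never logged in).
    """
    now = now or datetime.utcnow()
    return sorted(user for user, last_login in accounts.items()
                  if last_login is None
                  or now - last_login > INACTIVITY_THRESHOLD)
```

Flagged accounts would then go to the access-certification step for confirmation before removal, rather than being deleted automatically.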
In the absence of access governance tools, some of these reviews may be labor intensive. Because of this, organizations will perform these reviews on a risk basis, in which reviews of more critical systems will be performed more frequently than others.
Access Monitoring Because many cyberattacks begin with attempts to compromise individual user and system accounts, continuous monitoring of user and system account activity is considered an essential practice in cybersecurity. This monitoring is typically achieved through the real-time transmission of user account events to a centralized log server, or better yet to a security information and event management (SIEM) system, so that alerts on suspicious behaviors can be created and such matters investigated.
The events that should be sent to a log server or SIEM include the following:
• All successful logins Should include the originating IP address and geolocation of the login event
• All unsuccessful logins Also needs to include IP address and location
• All user account permission changes Should include the IP address and/or user account that performed the change
• All user account creations Should include IP address and user account performing the change
• Privileged account changes Includes permission changes and password changes
• Service account changes Includes the creation of and modification of any service account
Organizations need to develop “use cases” in their SIEM systems to alert security personnel of events that warrant investigation and action. For instance, if a user account logs in from the United States, and then a short time later there is a login for the same user account in another country, an investigation should immediately commence to determine whether the user account has been compromised, resulting in an adversary logging into the account from the foreign location.
While identity and access management is a critical activity with dire consequences for mismanagement, several other controls are also considered foundational to information security.
Vulnerability management is the practice of periodically examining information systems (including but not limited to operating systems, subsystems such as database management systems, applications, and network devices) for the purpose of discovering exploitable vulnerabilities, conducting related analysis, and making decisions about remediation. Organizations employ vulnerability management as a primary activity to reduce the likelihood of successful attacks on their IT environments.
Often, one or more scanning tools are used to scan target systems in the search for vulnerabilities:
• Network device identification
• Open port identification
• Software version identification
• Exploitable vulnerability identification
• Web application vulnerability identification
• Source code defect identification
Security managers generally employ several of these tools for routine and nonroutine vulnerability management tasks. Routine tasks include scheduled scans of specific IT assets, while nonroutine tasks include troubleshooting and various types of investigations.
A typical vulnerability management process includes these activities:
• Periodic scanning One or more tools will be used to scan assets in the organization in the search for vulnerabilities.
• Analysis of scan results A security manager will examine the results of a vulnerability scan to ensure there are no false-positive results. This analysis often includes a risk analysis to understand an identified vulnerability in the context of the asset, its role, and its criticality. Scanning tools generally include a criticality level or score for an identified vulnerability so that personnel can begin to understand the severity of the vulnerability. Most tools utilize the Common Vulnerability Scoring System (CVSS) method.
After noting the CVSS score of a specific vulnerability, a security manager will analyze the vulnerability to establish the contextual criticality of the vulnerability. For example, a vulnerability in the Server Message Block (SMB) service on Microsoft Windows servers may be rated as critical. A security manager may downgrade the risk in the organization if SMB services are not accessible over the Internet. In another example, a security manager may raise the severity of a vulnerability if the organization lacks detective controls that would alert the organization that the vulnerable component has been attacked and compromised.
• Delivery of scan results to asset owners The security manager will deliver the report to the owners or custodians of affected assets so that those people can begin planning remediation activities.
• Remediation Asset owners will make changes to affected assets, typically through the installation of one or more security patches or through the implementation of one or more security configuration changes. Often, risk analysis is performed to determine the risks associated with proposed remediation plans.
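The contextual adjustment described earlier (downgrading vulnerabilities that are not reachable from the Internet, upgrading those where detective controls are absent) might be sketched as follows. The adjustment amounts are illustrative assumptions, not values from any standard:

```python
def contextual_severity(cvss_base: float, internet_facing: bool,
                        has_detective_controls: bool) -> float:
    """Adjust a scanner-reported CVSS base score for local context."""
    score = cvss_base
    if not internet_facing:
        score -= 2.0   # assumed downgrade: not reachable from the Internet
    if not has_detective_controls:
        score += 1.0   # assumed upgrade: compromise could go unnoticed
    return max(0.0, min(10.0, score))  # clamp to the CVSS 0.0-10.0 range
```

In practice the adjustment would be documented as part of the risk analysis so that auditors can see why a remediated score differs from the scanner's output.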
Organizations often establish service level agreements (SLAs) that specify the maximum amount of time allowed for remediation of identified vulnerabilities. Table 4-1 shows a typical SLA example.
Table 4-1 Typical Vulnerability Management Remediation SLA
Common Vulnerability Scoring System The CVSS is an open framework that is used to provide a common methodology for scoring vulnerabilities. CVSS employs a standard methodology for examining and scoring a vulnerability based on the exploitability of the vulnerability, the impact of exploitation, and the complexity of the vulnerability. The CVSS has made it possible for organizations to adopt a consistent approach for the analysis and remediation of vulnerabilities. Specifically, organizations can develop SLAs that determine the speed by which an organization will remediate vulnerabilities.
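CVSS severity bands map naturally onto remediation SLAs. The following is a minimal sketch of that mapping; the severity ranges follow the CVSS v3 qualitative rating scale, but the SLA day counts are invented for illustration and are not taken from Table 4-1.

```python
# Sketch: mapping a CVSS v3 base score to a severity band and a
# hypothetical remediation SLA (the day counts are illustrative).
def severity(score: float) -> str:
    """Return the CVSS v3 qualitative severity rating for a base score."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical SLA: maximum days allowed to remediate, by severity band.
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}

def remediation_deadline_days(score: float):
    band = severity(score)
    return SLA_DAYS.get(band)  # None for severity "None": nothing to fix
```

A vulnerability scored 9.8 would fall in the "Critical" band and, under this hypothetical SLA, would need remediation within seven days.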
Vulnerability Identification Techniques Several techniques are used for identifying vulnerabilities in target systems:
• Security scan This involves the use of one or more vulnerability scanning tools to help identify easily found vulnerabilities in target systems. A security scan will identify a vulnerability in one of two ways: by confirming that the version of a target system or program is known to be vulnerable, or by attempting to prove the existence of a vulnerability by testing a system’s response to specific stimuli.
• Penetration test This test involves the use of a security scan plus additional manual tests that security scanning tools do not employ. A pen test is considered a realistic simulation of an attacker who intends to break into a target system. A pen test of an organization’s production environment may fall somewhat short of the techniques used by an actual attacker. A pen tester is careful not to exploit vulnerabilities that could result in a malfunction of the target system. Often, an actual attacker will not take this precaution unless he or she wants to attack a system without being noticed. For this reason, it is sometimes desirable to conduct a pen test of nonproduction infrastructure; however, nonproduction environments are often not identical to their production counterparts.
• Social engineering assessment This is an assessment of the judgment of personnel in the organization to see how well they are able to recognize various ruses used by attackers in an attempt to trick users into performing tasks or providing information. Several means are used, including e-mail, telephone calls, and in-person encounters. Social engineering assessments help organizations identify training and improvement opportunities.
Social engineering attacks can have a high impact on an organization. A particular form of social engineering known as business e-mail compromise (BEC), CEO fraud, or wire transfer fraud consists of a ruse where an attacker sends an e-mail that pretends to originate from a CEO to the chief financial officer (CFO), claiming that a secret merger or acquisition proceeding requires that a wire transfer for a significant sum be sent to a specific offshore account. According to the Federal Bureau of Investigation (FBI), aggregate losses resulting from BEC fraud in the year 2019 are estimated to exceed $1.7 billion.
Patch Management Closely related to vulnerability management, patch management ensures that IT systems, tools, and applications have consistent version and patch levels. In all but the smallest organizations, patch management can be successful only through the use of tools that are used to automate the deployment of patches to target systems. Without automated tools, patch management is labor intensive and prone to errors that are often unnoticed, resulting in systems that remain vulnerable to exploitation even when IT and security staff believe they are protected.
Patch management is related to other IT processes, including change management and configuration management, which are discussed later in this chapter.
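The compliance check that patch management tools automate can be sketched as follows. The hostnames, package names, and version numbers here are purely illustrative; a real deployment would pull this data from a patch management tool rather than a hand-built dictionary.

```python
# Sketch: a minimal patch-compliance check across a host inventory.
# All hostnames, packages, and versions are invented for illustration.
REQUIRED = {"openssl": (3, 0, 12), "kernel": (5, 15, 140)}

def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(p) for p in version.split("."))

def noncompliant(inventory: dict) -> dict:
    """Return, per host, the packages below the required patch level."""
    gaps = {}
    for host, packages in inventory.items():
        stale = [pkg for pkg, ver in packages.items()
                 if pkg in REQUIRED and parse(ver) < REQUIRED[pkg]]
        if stale:
            gaps[host] = stale
    return gaps

inventory = {
    "web01": {"openssl": "3.0.12", "kernel": "5.15.120"},
    "db01":  {"openssl": "3.0.8",  "kernel": "5.15.140"},
}
```

Running the check surfaces exactly the systems that staff might otherwise wrongly believe are protected.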
Event monitoring is a set of activities that focus on the collection of security- and privacy-related events, the correlation of events, and the generation of alerts when actionable events occur. Like vulnerability management, event monitoring and anomaly detection are critical to an organization’s defenses.
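A toy example of the correlation step: the rule below, of the kind an event monitoring pipeline applies, raises an alert when a single account accumulates too many failed logins inside a sliding time window. The threshold and window values are illustrative assumptions.

```python
# Sketch: correlating individual failure events into an actionable alert.
from collections import defaultdict

WINDOW_SECONDS = 300   # illustrative sliding window
THRESHOLD = 5          # illustrative failure count

def failed_login_alerts(events):
    """events: list of (epoch_seconds, account, outcome) tuples.
    Returns the set of accounts whose failures breached the threshold."""
    failures = defaultdict(list)
    alerts = set()
    for ts, account, outcome in sorted(events):
        if outcome != "failure":
            continue
        times = failures[account]
        times.append(ts)
        # drop failures that have aged out of the window
        while times and ts - times[0] > WINDOW_SECONDS:
            times.pop(0)
        if len(times) >= THRESHOLD:
            alerts.add(account)
    return alerts
```

Production systems apply many such rules at once and feed the resulting alerts into the incident response process described next.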
Incident response represents a set of activities that are initiated when certain events or conditions occur. The practice of incident response includes the creation of a formal incident response plan, with detailed playbooks to be used when specific types of events occur and training and exercises to ensure that incident responders are familiar with the plans and playbooks.
IT service management (ITSM) activities ensure that the delivery of IT services is efficient and effective through active management and the continuous improvement of processes. ITSM is defined in the IT Infrastructure Library (ITIL) process framework, a well-recognized standard managed by AXELOS. ITSM processes can be audited and registered to the ISO/IEC 20000-1:2011 standard, the international standard for ITSM.
ITSM consists of several distinct activities:
• IT service desk
• Incident management
• Problem management
• Change management
• Configuration management
• Release management
• Service-level management
• Financial management
• Capacity management
• Service continuity management
• Availability management
• Asset management
Each of these activities is described in detail in this section.
Why ITSM Matters to Privacy and Security
At first glance, ITSM and information risk may not appear to be related. However, information risk and information security rely a great deal on effective ITSM for the following reasons:
• In the absence of effective change management and configuration management, the configuration of IT systems will be inconsistent, in many cases resulting in exploitable vulnerabilities that could lead to security incidents.
• In the absence of effective release management, security defects may persist in production environments, possibly resulting in vulnerabilities and incidents.
• In the absence of effective capacity management, system and application malfunctions could occur, resulting in unscheduled downtime and data corruption.
• Without effective financial management, IT organizations may have insufficient funds for important security initiatives.
IT Service Desk Often known as the help desk, the IT service desk function handles incidents and service requests on behalf of customers by acting as a single point of contact. The service desk performs end-to-end management of incidents and service requests (at least from the perspective of the customer) and is also responsible for communicating status reports to customers.
The service desk can also serve as a collection point for other ITSM processes, such as change management, configuration management, service-level management, availability management, and other ITSM functions. A typical service desk function consists of frontline analysts who take calls from users, perform basic triage, and are often trained to handle routine tasks such as resetting passwords, troubleshooting hardware and software issues, and assisting users with questions and problems with software programs. When frontline analysts are unable to assist a user, the matter is typically escalated to a subject-matter expert who can provide assistance.
Incident Management ITIL defines an incident as “an unplanned interruption to an IT service or reduction in the quality of an IT service. Failure of a configuration item that has not yet affected service is also an incident—for example, failure of one disk from a mirror set.” ISO/IEC 20000-1:2011 defines an incident as an “unplanned interruption to a service, a reduction in the quality of a service or an event that has not yet impacted the service to the customer.”
Thus, an incident may be any of the following:
• Service outage
• Service slowdown
• Software bug
Regardless of the cause, incidents are a result of failures or errors in any component or layer in IT infrastructure.
In ITIL terminology, if the incident has been seen before and its root cause is known, this is a known error. If the service desk is able to access the catalog of known errors, this may result in a more rapid resolution of incidents and less downtime and inconvenience. The change management and configuration management processes are used to modify the system to fix it temporarily or permanently. If the root cause of the incident is not known, the incident may be escalated to a problem, which is discussed in the next section.
Problem Management When several incidents have occurred that appear to have the same or a similar root cause, a problem is occurring. ITIL defines a problem as “a cause of one or more incidents.” ISO/IEC 20000-1:2011 defines a problem as the “root cause of one or more incidents” and continues, “the root cause is not usually known at the time a problem record is created and the problem management process is responsible for further investigation.”
The overall objective of problem management is the reduction in the number and severity of incidents. Problem management can also include some proactive measures, including system monitoring to measure system health and capacity management that will help management to forestall capacity-related incidents.
Examples of problems include the following:
• A server that has exhausted available resources, resulting in multiple similar errors (which, in ITSM terms, are known as incidents)
• A software bug in a service that is noticed by and affecting many users
• A chronically congested network that causes the communications between many IT components to fail
Similar to incidents, when the root cause of a problem has been identified, the change management and configuration management processes will be enacted to make temporary and permanent fixes.
Change Management Change management is the set of processes that ensures all changes performed in an IT environment are controlled and performed consistently. ITIL defines change management as follows: “The goal of the change management process is to ensure that standardized methods and procedures are used for efficient and prompt handling of all changes, in order to minimize the impact of change-related incidents upon service quality, and consequently improve the day-to-day operations of the organization.”
The main purpose of change management is to ensure that all proposed changes to an IT environment are vetted for suitability and risk and that changes will not interfere with each other or with other planned or unplanned activities. To be effective, every stakeholder should examine each proposed change so that it is assessed from all relevant perspectives.
A typical change management process is a formal “waterfall” process that includes the following steps:
• Proposal or request The person or group performing the change announces the proposed change. Typically, a change proposal contains a description of the change, the change procedure, the IT components that are expected to be affected by the change, a verification procedure to ensure that the change was applied properly, a back-out procedure in the event the change cannot be applied (or failed verification), and the results of tests that were performed in a test environment. The proposal should be distributed to all stakeholders several days prior to its review.
• Review This is typically a meeting or discussion about the proposed change, where the personnel who will be performing the change can discuss the change and answer stakeholders’ questions. Since the change proposal was sent out earlier, each stakeholder should have had an opportunity to read about the proposed change in advance of the review. Stakeholders can discuss any aspect of the change during the review. The stakeholders may agree to approve the change, or they may request that it be deferred or that some aspect of the proposed change be altered.
• Approval When a change has been formally approved in the review step, the person or group responsible for change management recordkeeping will record the approval, including the names of the individuals who consented to the change. If, however, a change has been deferred or denied, the person or group that proposed the change will need to make alterations to the proposed change so that it will be acceptable, or they can withdraw the change altogether.
• Implementation The actual change is implemented per the procedure described in the change proposal. Here, the personnel identified as the change implementers perform the actual change to the IT systems identified in the approved change procedure.
• Verification After the implementers have completed the change, they will perform the verification procedure to make sure that the change was implemented correctly and that it produces the desired result. Generally, the verification procedure will include one or more steps that include the gathering of evidence (and directions for confirming correct versus incorrect change) that shows the change was performed correctly. This evidence will be filed with other records related to the change and may be useful in the future if there is any problem with the system where this change is suspected as part of the root cause.
• Post-change review Some or all changes in an IT organization will be reviewed after the change is implemented. In this activity, the persons who made the change discuss the change with other stakeholders to learn more about the change and whether any updates to future changes may be needed.
These activities should be part of a change control board (CCB) or change advisory board (CAB), a group of stakeholders from IT and every group that is affected by changes in IT applications and supporting infrastructure.
Change Management Records Most or all of the activities related to a change should include updates to business records so that all of the facts related to each change are captured for future reference. In even the smallest IT organization, there are too many changes taking place over time to expect that anyone will later be able to recall facts about each change. Records that are related to each change serve as a permanent record.
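The waterfall steps above can be modeled as a small state machine, which is also a convenient way to keep the records just described: every transition is appended to a history. The state names mirror the text; the record fields themselves are illustrative.

```python
# Sketch: a change record whose status may only advance through the
# waterfall steps described above. Transitions are the assumption here.
ALLOWED = {
    "proposed":    {"review"},
    "review":      {"approved", "deferred", "withdrawn"},
    "deferred":    {"review", "withdrawn"},
    "approved":    {"implemented"},
    "implemented": {"verified"},
    "verified":    {"closed"},   # closed after the post-change review
}

class ChangeRecord:
    def __init__(self, description):
        self.description = description
        self.status = "proposed"
        self.history = ["proposed"]   # permanent record of every transition

    def advance(self, new_status):
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.status = new_status
        self.history.append(new_status)
```

Attempting to jump straight from "proposed" to "approved" raises an error, which is exactly the control the process is meant to enforce: no implementation without review and approval.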
Emergency Changes While most changes can be planned in advance using the change management process described here, there are times when IT systems need to be changed right away. Most change management processes include a process for emergency changes that details most of the steps in the nonemergency change management process, but they are performed out of order. The steps for emergency changes are as follows:
• Emergency approval When an emergency situation arises, the staff members attending to the emergency should seek management approval for the proposed change. This approval may be done by phone, in person, or in writing (typically, e-mail). If the approval was by phone or in person, e-mail or other follow-up is usually performed. Certain members of management should be designated in advance who can approve these emergency changes.
• Implementation The staff members perform the change.
• Verification Staff members verify that the change produced the expected result. This may involve other staff members from other departments or end users.
• Review The emergency change is formally reviewed. This review may be performed alongside nonemergency changes with the CCB, the same group of individuals who discuss nonemergency changes.
Like nonemergency changes, emergency changes should have a full set of records available for future reference.
Configuration Management Configuration management (CM) is the process of recording and maintaining the configuration of IT systems. Each asset being configured is known in ITSM parlance as a configuration item (CI). CIs usually include the following:
• Hardware complement This includes the hardware specifications of each system (such as CPU speed, amount of memory, firmware version, adapters, and peripherals).
• Hardware configuration Settings at the hardware level may include boot settings, adapter configuration, and firmware settings.
• Operating system version and configuration These include versions, patches, and many operating system configuration items that have an impact on system performance and functionality.
• Software versions and configuration Software components such as database management systems, application servers, and integration interfaces often have many configuration settings of their own.
Organizations that have many IT systems may automate the CM function with tools that are used to record and change configuration settings automatically. These tools help to streamline IT operations and make it easier for IT systems to be more consistent with one another. The database of system configurations is called a configuration management database (CMDB).
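One of the payoffs of a CMDB is drift detection: comparing a CI's recorded baseline configuration against its observed state. The sketch below assumes invented attribute names and values; real tools compare far richer configuration trees.

```python
# Sketch: detecting configuration drift between a CMDB baseline and
# an observed system state. Attributes here are illustrative only.
def drift(baseline, observed):
    """Return attributes whose observed value differs from the baseline,
    as {attribute: (baseline_value, observed_value)}."""
    keys = set(baseline) | set(observed)
    return {k: (baseline.get(k), observed.get(k))
            for k in keys
            if baseline.get(k) != observed.get(k)}

cmdb = {
    "web01": {"os": "Ubuntu 22.04", "ssh_root_login": "no", "ntp": "on"},
}
```

A nonempty result means the CI has been changed outside the change management process, or the CMDB record is stale; either way, someone needs to reconcile the two.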
Release Management Release management is the ITIL term used to describe the portion of the SDLC where changes in applications are made available to end users. Release management is used to control the changes that are made to software programs, applications, and environments.
The release process is used for several types of changes to a system, including the following:
• Incidents and problem resolution Casually known as bug fixes, these types of changes are made in response to an incident or problem, where it has been determined that a change to application software is the appropriate remedy.
• Enhancements New functions in an application are created and implemented. These enhancements may have been requested by customers, or they may be a part of the long-range vision on the part of the designers of the software program.
• Subsystem patches and changes Changes in lower layers in an application environment may require a level of testing similar to testing used when changes are made to the application itself. Examples of changes are patches, service packs, and version upgrades to operating systems, database management systems, application servers, and middleware.
The release process is a sequential process—that is, each change that is proposed to a software program will be taken through each step in the release management process. In many applications, changes are usually assembled into a “package” for process efficiency purposes: it is more effective to discuss and manage groups of changes than it would be to manage individual changes.
The steps in a typical release process are preceded by typical SDLC process steps, which are as follows:
• Feasibility study This includes activities that seek to determine the expected benefits of a program, project, or change to a system.
• Requirements definition Each software change is described in terms of a feature description and requirements. The feature description is a high-level description of a change to software that may explain the change in business terms. Requirements are the detailed statements that describe a change in enough detail for a developer to make changes and additions to application code that will provide the desired functionality. Often, end users will be involved in the development of requirements so that they may verify that the proposed software change is actually what they desire.
• Design After requirements have been developed, a programmer/analyst or application designer will create a formal design. For an existing software application, this will usually involve changes to existing design documents and diagrams, but for new applications, designs will need to be created from scratch or copied from similar designs and modified. Regardless, the design will have a sufficient level of detail to permit a programmer or software engineer to complete development without having to discern the meaning of requirements or design.
• Development When requirements and design have been completed, reviewed, and approved, programmers or software engineers begin development. This involves actual coding in the chosen computer language with approved development tools, as well as the creation or update to ancillary components, such as a database design or application programming interface (API). Developers will often perform their own unit testing, where they test individual modules and sections of the application code to make sure that it works properly.
• Testing When the developers have finished coding and unit testing, a more formal and comprehensive test phase is performed. Here, analysts, dedicated software testers, and perhaps end users will test all of the new and changed functionality to confirm whether it is performing according to requirements. Depending on the nature of the changes, some amount of regression testing is also performed; this means that functions that were confirmed to be working properly in prior releases are tested again to make sure that they continue to work as expected. Testing is performed according to formal, written test plans that are designed to confirm that every requirement is fulfilled. Formal test scripts are used, and the results of all tests should be recorded and archived. The testing that users perform is usually called user acceptance testing (UAT). Often, automated test tools are used, which can make testing more accurate and efficient. After testing is completed, a formal review and approval are required before the process is allowed to continue.
• Implementation When testing has been completed, the software is implemented on production systems. Here, developers hand off the completed software to operations personnel who install it according to instructions created by developers. This could also involve the use of tools to make changes to data and database design to accommodate changes in the software. When changes are completed and tested, the release itself is carried out with these last two steps:
• Release preparation When UAT and regression testing have been completed, reviewed, and approved, a release management team will begin to prepare the new or changed software for release. Depending upon the complexity of the application and of the change itself, release preparation may involve not only software installation but also the installation or change to database design, and perhaps even changes to customer data. Hence, the software release may involve the development and testing of data conversion tools and other programs that are required so that the new or changed software will operate properly. As with testing and other phases, full records of testing and implementation of release preparation details need to be captured and archived.
• Release deployment When release preparation is completed (and perhaps reviewed and approved), the release is installed on the target systems. Personnel deploying the release will follow the release procedure, which may involve the use of tools that will make changes to the target system at the operating system, database, or other level; any required data manipulation or migration; and the installation of the actual software. The release procedure will also include verification steps that will be used to confirm the correct installation of all components.
• Post-implementation After the software has been implemented, a post-implementation review takes place to examine matters of system adequacy, security, return on investment (ROI), and any issues encountered during implementation.
Utilizing a Gate Process Many organizations utilize a “gate process” approach in their release management process. This means that each step of the process undergoes formal review and approval before the next step is allowed to begin. For example, a formal design review will be performed and attended by end users, personnel who created requirements and feature description documents, developers, and management. If the design is approved, development may begin. But if questions or concerns are raised in the design review, the design may need to be modified and reviewed again before development is allowed to begin.
Agile processes utilize gates as well, although the flow of agile processes is often parallel rather than sequential. The concept of formal reviews is the same, regardless of the SDLC process in use.
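A gate process can be reduced to a simple invariant: work on a phase may not begin until every earlier gate has been passed. The sketch below simplifies each gate's formal review to a single boolean; the phase names follow the release steps above.

```python
# Sketch: a gate process where each SDLC phase must be approved
# before the next begins. Approval is simplified to one boolean per gate.
PHASES = ["feasibility", "requirements", "design", "development",
          "testing", "implementation"]

def next_phase(approvals):
    """approvals maps phase name -> approved (bool). Returns the first
    phase whose gate has not been passed (i.e., the work in progress),
    or None when every gate has been passed."""
    for phase in PHASES:
        if not approvals.get(phase, False):
            return phase
    return None
```

If the design review raises concerns, its gate stays closed, and `next_phase` keeps returning "design" until a revised design is approved, which matches the re-review loop described above.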
Service-Level Management Service-level management is composed of the set of activities that confirms whether information services (IS) operations are providing adequate services to customers. This is achieved through continuous monitoring and periodic review of IT service delivery.
An IS department often plays two different roles in service-level management. As a provider of service to its own customers, the IS department will measure and manage the services that it provides directly. Also, many IT departments directly or indirectly manage services that are provided by external service providers. Thus, many IT departments are both service provider and customer, and often the two are interrelated, as depicted in Figure 4-2.
Figure 4-2 The different perspectives of the delivery of IT services
Financial Management IT financial management is the portion of IT management that takes into account the financial value of IT services that support organizational objectives. Financial management for IT services consists of several activities, including the following:
• Capital investment
• Expense management
• Project accounting and project ROI
Capacity Management Capacity management is a set of activities that confirms there is sufficient capacity in IT systems and IT processes to meet service needs. Primarily, an IT system or process has sufficient capacity if its performance falls within an acceptable range, as specified in SLAs.
Capacity management is not just a concern for current needs; it must also be concerned about meeting future needs. This is attained through several activities, including the following:
• Periodic measurements Systems and processes need to be regularly measured so that trends in usage can be used to predict future capacity needs.
• Considering planned changes Planned changes to processes and IT systems may have an impact on the predicted workload.
• Understanding long-term strategies Changes in the organization, including IT systems, business processes, and organizational objectives, may have an impact on workloads, requiring more (or less) capacity than would be extrapolated through simpler trend analysis.
• Changes in technology Several factors may influence capacity plans, including the expectation that computing and network technologies will deliver better performance in the future and that trends may influence how end users use technology.
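The "periodic measurements" activity above feeds a simple forecasting step, which can be sketched as a linear extrapolation of utilization samples. The numbers are illustrative, and as the other bullets note, real capacity planning must also fold in planned changes, strategy, and technology shifts that a straight-line trend cannot see.

```python
# Sketch: extrapolating periodic utilization samples with a
# least-squares linear fit to estimate when capacity runs out.
def periods_until_full(samples, capacity=100.0):
    """samples: utilization per period (e.g., % disk used each month).
    Returns periods remaining until the trend line reaches capacity,
    or None if the trend is flat or declining."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den            # utilization growth per period
    if slope <= 0:
        return None              # no exhaustion on the current trend
    return (capacity - samples[-1]) / slope
```

Disk utilization growing 10 percent per month from 40 percent would, on this trend, exhaust capacity in about six months, giving management lead time to act.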
Service Continuity Management Service continuity management includes the activities concerned with the ability of the organization to continue providing services, primarily in the event that a natural or manmade disaster has occurred. Service continuity management is ITIL parlance for the more common terms business continuity planning and disaster recovery planning.
Availability Management The goal of availability management is the sustainment of IT service availability in support of organizational objectives and processes. The availability of IT systems is governed by the following:
• Effective change management When changes to systems and infrastructure are properly vetted through a change management process, changes are less likely to result in unanticipated downtime.
• Effective application testing When changes to applications are made according to a set of formal requirements, review, and testing, the application is less likely to fail and become unavailable.
• Resilient architecture When the overall architecture of an application environment is designed from the beginning to be highly reliable, it will be more resilient and more tolerant of individual faults and component failures.
• Serviceable components When the individual components of an application environment can be effectively serviced by third-party service organizations, those components will be less likely to fail unexpectedly.
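Availability management ultimately tracks a simple metric, availability as a percentage of scheduled time, against an objective. A minimal sketch, in which the 99.9 percent target and the 30-day month are assumptions:

```python
# Sketch: computing service availability from outage minutes and
# checking it against an illustrative 99.9% monthly objective.
def availability_pct(total_minutes, downtime_minutes):
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 in an assumed 30-day month

def meets_objective(downtime_minutes, target_pct=99.9):
    return availability_pct(MINUTES_PER_MONTH, downtime_minutes) >= target_pct
```

At 99.9 percent, the monthly downtime budget is about 43 minutes; one more minute of outage misses the objective, which is why the four controls above all aim at preventing unanticipated downtime.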
Asset Management Asset management is the collection of activities used to manage the inventory, classification, use, and disposal of assets. Asset management is a foundational activity, without which several other activities could not be effectively managed, including vulnerability management, device hardening, incident management, data security, and some aspects of financial management.
In information security, asset management is critical to the success of vulnerability management. If assets are not known to exist, they may be excluded from processes used to identify and remediate vulnerabilities. Similarly, it will be impossible to harden assets if their existence is not known. And if an unknown asset is attacked, the organization may have no way of directly knowing this in a timely manner; instead, if an attacker compromises an unknown device, the attack may not be known until the attacker pivots and selects additional assets to compromise. This time lag could prove crucial to the impact of the incident.
Asset Identification A security management program’s main objective (whether formally stated or not) is the protection of the organization’s assets. These assets may be tangible or intangible, physical, logical, or virtual. Here are some examples of assets:
• Buildings and property These assets include real estate, structures, and other improvements.
• Equipment This includes machinery, vehicles, and office equipment such as copiers, printers, and scanners.
• IT equipment This includes computers, printers, scanners, tape libraries (the devices that create backup tapes, not the tapes themselves), storage systems, network devices, and phone systems.
• Virtual assets In addition to the tangible IT equipment cited, virtual assets include virtual machines and software running on them.
• Supplies and materials These include office supplies as well as materials that are used in manufacturing.
• Records These include business records, such as contracts, video surveillance tapes, visitor logs, and far more.
• Information This includes data in software applications, documents, e-mail messages, and files of every kind on workstations, servers, and in the cloud.
• Intellectual property This includes an organization’s designs, architectures, patents, software source code, processes, and procedures.
• Personnel In a real sense, an organization’s personnel are the organization. Without its staff, the organization cannot perform or sustain its processes.
• Reputation One of the intangible characteristics of an organization, reputation is the individual and collective opinion about an organization in the eyes of its customers, competitors, shareholders, and the community.
• Brand equity Similar to reputation, this is the perceived or actual market value of an individual brand of product or service that is produced by the organization.
Asset Data Sources An organization that is building or improving its security management program may need to build its asset inventory from scratch. Management will need to determine where this initial asset data will originate. Sources include the following:
• Financial system asset inventory An organization that keeps all of its assets on the books will have a wealth of asset inventory information. However, it may not be entirely useful: asset lists often do not include the location or purpose of the asset and whether it is still in use. Correlating a financial asset inventory to assets in actual use may consume more effort than the other methods for creating the initial asset list. However, for organizations that have a relatively small number of highly valued assets (for instance, an ore crusher in a gold mine or a mainframe computer in a small company), knowing the precise financial value of an asset is highly useful because the actual depreciated value of the asset is used in the risk analysis phase of risk management. Knowing the depreciated value of other assets is also useful, because this will figure into the risk treatment choices that will be identified later.
• Interviews Discussions with key personnel for purposes of identifying assets are usually the best approach. However, to be effective, several people usually need to be interviewed to ensure that all relevant assets are included.
• IT systems portfolio A well-managed IT organization will have formal documents and records for its major applications. Although this information may not encompass every IT asset in the organization, it can provide information on the assets supporting individual applications or geographic locations.
• Online data An organization with a large number of IT assets (systems, network devices, and so on) can sometimes use online data sources to identify those assets. An organization with cloud-based assets can use the asset management portion of the cloud services dashboard to determine the number and type of assets in use there. Also, a systems or network management system often includes a list of managed assets, which can be a good starting point when creating the initial asset list.
• Security scans An organization that has security scanning tools can use them to identify network assets. This technique will identify authorized as well as unauthorized assets.
• Asset management system Larger organizations may find it more cost-effective to use an asset management application dedicated to this purpose rather than rely on lists of assets from other sources.
None of these sources should be considered accurate or complete. Instead, as a formal asset inventory is being assembled, the security manager should continue to explore other sources of assets.
Note that it is rarely possible to take (or create) a list of assets from a single source. Rather, more than one source of information is often needed to be sure that the risk management program has identified at least the important, in-scope assets that it needs to worry about.
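The act of combining multiple asset sources can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the source names, the field names, and the use of a lowercase hostname as the common key are illustrative, not a prescribed schema.

```python
# Sketch: merging asset records from multiple sources into one inventory.
# Source names and field names are illustrative assumptions.

def merge_inventories(*sources):
    """Merge asset lists keyed by a common identifier (here, 'hostname').

    Later sources fill in fields missing from earlier ones, so no single
    source needs to be complete.
    """
    inventory = {}
    for source in sources:
        for record in source:
            key = record["hostname"].lower()
            merged = inventory.setdefault(key, {})
            for field, value in record.items():
                merged.setdefault(field, value)  # first value seen wins
    return inventory

# Hypothetical inputs: a financial asset list and a security scan result.
finance = [{"hostname": "ORE-CRUSH-01", "book_value": 250_000}]
scan = [
    {"hostname": "ore-crush-01", "ip": "10.1.4.7"},
    {"hostname": "rogue-ap-3", "ip": "10.1.4.99"},  # unauthorized asset
]

inv = merge_inventories(finance, scan)
print(len(inv))  # 2 distinct assets, including one found only by scanning
```

Note that the scan contributes an asset the financial system knows nothing about, which is exactly why multiple sources are needed.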
It is usually useful to organize or classify assets. This will help to get identified assets into smaller chunks that can be analyzed more effectively. There is no single way to organize assets, but here are a few ideas:
• Geography A widely dispersed organization may want to classify its assets according to their location. This will aid risk managers during the risk analysis phase, since many risks are geographic in nature, particularly natural hazards.
• Service provider An organization utilizing one or more infrastructure as a service (IaaS) providers can group its assets by service provider.
• Business process Because some organizations rank the criticality of their individual business processes, it can be useful to group assets according to the business processes they support. This helps the risk analysis and risk treatment phases because assets supporting individual processes can be associated with business criticality and treated appropriately.
• Organizational unit In larger organizations, it may be easier to classify assets according to the organizational unit they support.
• Sensitivity Usually ascribed to information, sensitivity relates to the nature and content of that information. Sensitivity usually applies in two ways: to an individual, where the information is considered personal or private, and to an organization, where the information may be considered a trade secret. Sometimes sensitivity is somewhat subjective and arbitrary, but often it is defined in laws and regulations.
• Regulation For organizations that are required to follow government and other legal obligations regarding the processing and protection of information, it will be useful to include data points that indicate whether specific assets are considered in scope for specific regulations. This is important because some regulations specify how assets should be protected, so it’s useful to be aware of this during risk analysis and risk treatment.
There is no need to choose which of these methods will be used to classify assets. Instead, an IT analyst should collect several points of metadata about each asset (including location, process supported, and organizational unit supported). This will enable the security manager to sort and filter the list of assets in various ways to understand which assets are in a given location or which support a particular process or part of the business.
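The sort-and-filter approach described above can be sketched as follows. The asset records, metadata fields, and values here are hypothetical examples, assuming each asset is represented as a simple dictionary of metadata.

```python
# Sketch: filtering an asset list by metadata attributes. All records,
# fields, and values are illustrative assumptions.
assets = [
    {"name": "hr-db", "location": "Frankfurt", "process": "payroll",
     "org_unit": "HR", "regulations": {"GDPR"}},
    {"name": "web-01", "location": "Dallas", "process": "e-commerce",
     "org_unit": "Sales", "regulations": {"PCI DSS"}},
    {"name": "file-srv", "location": "Frankfurt", "process": "payroll",
     "org_unit": "HR", "regulations": set()},
]

def assets_where(assets, **criteria):
    """Return assets whose metadata matches every keyword criterion."""
    return [a for a in assets
            if all(a.get(k) == v for k, v in criteria.items())]

# The same list can be sliced by location, process, or regulation scope.
frankfurt = assets_where(assets, location="Frankfurt")
gdpr_scope = [a for a in assets if "GDPR" in a["regulations"]]
```

Because each asset carries several points of metadata, the same inventory answers geographic, process-based, and regulatory questions without being reorganized.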
Why Asset Management Is Control #1
The well-known control framework Critical Security Controls, produced by the Center for Internet Security (commonly known as the CIS 20), lists hardware asset inventory as the first control. I believe there is a specific purpose to this: an organization cannot protect assets that it does not know about.
Within an information security program, administrative safeguards take the form of information security policy, security and IT standards, and other statements that define security-related roles, responsibilities, and other expected outcomes.
Information security policy is a foundational component of any organization’s security program and a necessary prerequisite to an organization’s privacy program. Security policy defines the principles and required actions for the organization to protect its assets and personnel properly.
The audience for security policy is the organization’s personnel—not only full-time and part-time employees, but also temporary workers, including contractors and consultants. Security policy must be easily accessible by all personnel so that they can never offer ignorance as an excuse for violating policy. To this point, many organizations require all personnel to acknowledge the existence of, and their understanding of, the organization’s security policy at the time of hire and annually thereafter.
Security policy cannot be developed in a vacuum. Instead, it needs to align with a number of internal and external factors. The development of policy needs to incorporate several considerations, including the following:
• Applicable laws, regulations, standards, and other legal obligations
• Risk tolerance
• Organizational culture
Alignment with Controls Security policy and controls need to be in alignment. This is not to say that there must be a control for every policy or a policy for every control. However, policies and controls must not contradict each other. For example, if one control states that no personally owned mobile devices may connect to internal networks, then another policy cannot state that those devices may be used provided no corporate information is stored on them.
Alignment with the Audience Security policy needs to align with the audience. In most organizations, this means that policy statements need to be understood by the majority of workers. A common mistake in the development of security policy is the inclusion of highly technical policies such as permitted encryption algorithms or statements about the hardening of servers. Such topics are irrelevant to most workers. The danger of including policies that are irrelevant to most workers is that they are likely to “tune out” and not pay attention to those policies that are applicable to them. In other words, security policy should have a high signal-to-noise ratio.
In organizations with extensive technology use, one avenue is to create a general security policy intended for all workers (technical and nontechnical) and a separate policy for technical workers who design, build, and maintain information systems. Another alternative is to create a general security policy for all workers that includes a statement that all controls are mandatory. Either approach can be sufficient, provided messages about policy are aligned with their respective audiences.
Security Policy Structure Several different topics are included in a security policy, including the following:
• Acceptable use of organization assets
• Mobile devices
• Protection of information and assets
• Access control and passwords
• Personally owned devices
• Security incidents
• E-mail and other communications
• Social media
• Ethics and applicable laws
• Workplace safety
• Consequences of noncompliance
• Cloud computing
• Data exchange with third parties
Security managers are free to choose how to package these and other security policies. For example, they may exist in separate documents or all together in one document. There is no right or wrong here: a security manager should figure out what would work best in the organization by observing how other policies are structured and published.
Security policy statements should be general in nature and not cite specific devices, technologies, or configurations. Policy statements should state what is to be done (or not done) but not how. This way, security policies will be durable and will need to be changed infrequently. On the other hand, security standards and procedures may change more frequently as practices, techniques, and technologies change.
Policy Distribution and Acknowledgment Security policy—indeed, all organization policy—should be well known and easily accessible by all workers. It may be published on a corporate intranet or other online location where workers go to obtain information about internal operations.
All workers need to be informed of the presence of the organization’s security policy. The best method in most organizations is for a high-ranking executive to write a memo or an e-mail to all workers stating the importance of information security in the organization and informing them that the information security policy describes required behavior on the part of all workers. Another effective tactic is to have the senior executive record a message outlining the need for and importance of security policy. Additionally, the message should state that the executive leadership team has reviewed and fully supports the policies.
Executives need to be mindful that they lead by example. If executives carve out exceptions for themselves (for example, if an executive insists on using a personal tablet computer for company business when policy forbids it), other workers are apt to notice and take their own shortcuts wherever they’re able. If executives visibly comply with security policy, others will too. Organizational culture includes behavior such as compliance to policy or a tendency for workers to skirt policy whenever possible.
An organization’s security and IT standards describe, in detail, the methods, techniques, technologies, specifications, brands, and configurations to be used throughout the organization.
As with the security policy, it is important that the privacy and security managers understand each standard's breadth of coverage, its strictness, the level of compliance with it, and when it was last reviewed and updated. These characteristics tell the managers the extent to which an organization's security standards are actually used, if at all.
In addition to the aforementioned characteristics, it is important to know how the organization's standards were developed and how good they are. For instance, if device-hardening standards exist, it is useful to know whether they are aligned with or derived from industry-recognized standards such as those from the Center for Internet Security (CIS), the National Institute of Standards and Technology (NIST), the European Union Agency for Network and Information Security (ENISA), the Defense Information Systems Agency Security Technical Implementation Guides (DISA STIGs), or others.
Similarly, it is important to know whether standards are highly detailed (configuration item by configuration item) or whether they are principle-based. If they are the latter, engineers may exercise potentially wide latitude when implementing these standards. You should understand that highly detailed standards are not necessarily better than principle-based standards; their worth depends on the nature of the organization, its risk tolerance, and its maturity.
Guidelines are statements that help organization personnel better understand how to implement or comply with policies and standards. Although an organization's guidelines are not "the law" per se, their presence may signal higher-than-average maturity. Many organizations never get further than creating policies and standards, so the presence of proper guidelines suggests that the organization has (or once had) sufficient resources and prioritization to consider documented guidance on its policies important enough to produce.
By their very nature, guidelines are typically written for personnel who need a little extra help adhering to policies.
Like other types of security program documents, guidelines should be reviewed and updated regularly. Because guidelines bridge rarely changing policy with frequently changing technologies and practices, anyone examining a set of guidelines should expect to find them updated often; if they are not, they may have become irrelevant. Stale guidelines may be evidence of a past attempt to improve maturity or communications (or both) that lacked long-term commitment.
Privacy and security by design is a concept that reinforces the need to incorporate privacy and security considerations into systems and applications as a standard practice. In other words, the product is designed with privacy and security as a priority, along with whatever other functional purposes the product delivers.
Privacy by design is based on seven foundational principles:
• Proactive, not reactive; preventive, not remedial The privacy by design approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy-invasive events before they happen. Privacy by design does not wait for privacy risks to materialize, and it does not offer remedies for resolving privacy infractions once they have occurred; instead, it aims to prevent them from occurring.
• Privacy embedded into design Privacy by design is embedded into the design and architecture of IT systems as well as business practices. It is not bolted on as an add-on, after the fact. The result is that privacy becomes an essential component of the core functionality being delivered. Privacy is integral to the system without diminishing its functionality.
• Privacy as the default setting Privacy by default seeks to deliver the maximum degree of privacy by ensuring that personal data is automatically protected in any given IT system or business practice. If an individual does nothing to protect her privacy, it should still remain intact. No action should be required on the part of the individual to protect her individual privacy.
• Full functionality—positive-sum, not zero-sum Privacy by design seeks to accommodate all legitimate interests and objectives in a positive-sum “win-win” manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made. Privacy by design avoids the pretense of false dichotomies, such as privacy versus security, demonstrating that it is possible to have both.
• End-to-end security—full life-cycle protection Privacy by design, having been embedded into the system prior to the first element of information being collected, extends securely throughout the entire life cycle of the data involved—strong security measures are essential to privacy, from start to finish. This ensures that all data is securely retained and then securely destroyed at the end of the process, in a timely fashion. Thus, privacy by design ensures cradle-to-grave, secure life-cycle management of information, end to end.
• Visibility and transparency—keep it open Privacy by design seeks to assure all stakeholders that whatever the business practice or technology involved, it is, in fact, operating according to the stated promises and objectives, subject to independent verification. Its component parts and operations remain visible and transparent to users and providers alike.
• Respect for user privacy—keep it user-centric Privacy by design requires architects and administrators to keep the interests of the individual uppermost by offering such measures as strong privacy defaults, appropriate notice, and user-friendly empowerment options.
It is essential that business application architectures, designs, data flows, and configurations all contribute to and support privacy and security. The software development life cycle (SDLC) represents the business processes used to develop, maintain, and operate business applications, and it must include steps to ensure that new information systems and changes to existing information systems do not impact security and privacy in unexpected ways. Policies and standards must also contribute to and support security and privacy principles to ensure that personal information is adequately protected and properly used. Ongoing operations must likewise ensure that the security and privacy of systems are not compromised and that continuous monitoring is employed to detect security and privacy incidents early so that they may be contained.
Privacy principles and practices need to permeate the organization in the form of a culture of awareness and protection of personal information. While privacy plays a dominant role in IT and information security functions, it's vital elsewhere as well. This section describes privacy practices throughout the organization.
Information security is a foundational element of privacy. Without it, privacy has only the "proper use" principle without the "information protection" principle.
Information security must be privacy-aware in the following key areas:
• Logging and monitoring Event logs created by business applications, database management systems, and operating systems should not contain full data subject information except where specifically required. For instance, full bank account or credit card numbers should not be included in security event alerts sent to a security information and event management (SIEM) system.
• Security incident response Incident response proceedings and records should not contain full data subject details except where required by law; otherwise, the business records created in the security incident response process would themselves be subject to privacy requirements.
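As one illustration of privacy-aware logging, the following Python sketch masks card-number-like values in a log message before it is forwarded to a SIEM. The pattern and function names are assumptions for illustration; production systems typically combine pattern matching with a Luhn check to reduce false positives.

```python
import re

# Sketch: redacting candidate card numbers (16-19 digits) in log messages
# before forwarding to a SIEM. A simplifying assumption for illustration;
# real redaction pipelines add a Luhn check and format-aware parsing.
PAN_RE = re.compile(r"\b(\d{12,15})(\d{4})\b")

def redact(message: str) -> str:
    """Replace all but the last four digits of a candidate card number."""
    return PAN_RE.sub(lambda m: "*" * len(m.group(1)) + m.group(2), message)

print(redact("declined card 4111111111111111 for user 1842"))
# → declined card ************1111 for user 1842
```

Short numeric identifiers (such as the four-digit user ID above) are left intact, while the full card number never reaches the alert stream.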
IT is the core of information processing in organizations. Although the IT department itself may be distributed, and though there may be a degree of shadow IT and citizen IT, business rules in the form of privacy and security requirements must be a core tenet of the development and operations of all IT systems. If privacy and security by design are not a part of IT culture, then IT will either spend significant resources retrofitting systems or the organization will find itself in a sorry state of noncompliance with privacy laws, security laws, and societal norms.
Security and privacy requirements must align with applicable laws and other legal obligations, and they must be a part of every systems acquisition and development effort. Doing any less will send the organization on a slippery backslide into noncompliance and its consequences.
Business continuity and disaster recovery (BCDR) planning work hand-in-hand to ensure the organization’s survival despite minor or major disruptive events. At its heart, BCDR ensures business resilience through the implementation of resilient infrastructure and emergency response and operations plans to keep key business processes operating before, during, and after a disaster.
BCDR requirements and operations cannot surrender ground when it comes to the protection and proper use of personal information. Although key business processes and information systems may operate with reduced capacity or features during a disaster, the protection of personal information cannot be reduced. Audits of BCDR plans and records will help to confirm whether BCDR plans ensure the protection of personal information as they should.
Mergers, acquisitions, and divestitures (MAD) are transactions in which one organization is combining with another (in the case of a merger or acquisition) or breaking into two or more organizations (in the case of a divestiture). These transactions are generally orchestrated by the organization’s most senior executives and board members. Privacy and security leaders are often not involved until later in transaction negotiations, and they are sometimes not informed until the transaction has closed.
Privacy and security leaders need to be in a position of influence in all MAD transactions. Doing so requires a high level of trust among other executives.
Human resources, or HR, is a corporate function responsible for the administration of hiring, internal transfers, and termination of employees. HR departments generally perform the roles of performance review, salary administration, benefits administration, career development, and training. All of the records for these and other activities are managed by HR in one or more systems known as a human resources information system (HRIS) or a human capital management (HCM) system.
Whether stored piecemeal or in an HRIS or HCM, human resources has under its control a significant amount of sensitive information about all of the employees in an organization, including
• Contact information Residential address, phone numbers, personal e-mail
• Government IDs Driver’s license or passport number
• Financial Compensation information, bank account numbers
• Behavioral Records concerning performance and discipline
• Medical Records concerning healthcare benefits and possibly the healthcare records themselves
• Background Records concerning education, prior employment, financial history, references, and criminal activity
Organizations must take great care to protect these records from misuse and compromise.
Prior to the enactment of modern privacy laws, many organizations implemented protection and proper-use principles concerning their HR records that were not unlike the provisions of privacy laws. In many cases, organizations need look no further than HR to observe privacy principles in practice.
The accuracy, completeness, and integrity of HR information are key for information security, primarily its identity and access management processes and systems. Information security considers the HRIS or HCM system as the official system of record for the organization’s workforce and will often integrate HRIS/HCM systems with identity management systems for efficiency and accuracy.
At the time of hire and periodically thereafter, an organization’s HR department will administer employees’ written acknowledgment of key policies, including ethics, harassment, privacy, and security. Information security and privacy leaders also count on HR to carry out disciplinary action when security, privacy, and other policies are violated in order to maintain a practice of consistent and fair policy enforcement. HR generally maintains discipline-related records.
On account of the 1999 Microsoft contingent workers' lawsuit, HR organizations often refuse to be the steward of business records concerning temporary workers, contractors, and other workers who are not full-time employees. This complicates security-related processes such as identity and access management and policy enforcement.
HRIS and HCM systems are now capable of managing temporary workers separately from employees. As a result, HR departments are once again considering managing temporary workers' records. Because a generation of HR leaders has worked under a regime in which temporary workers are managed outside of HR, this practice is gaining ground only slowly.
Compliance and ethics functions in organizations intersect in two ways. First, the compliance function is often the driving force that facilitates efforts to comply with various laws and regulations, including privacy and information security. Second, the ethics function often maintains an information store that includes the record of employees’ acknowledgment to ethics or “code of conduct” policies, as well as recordkeeping for “whistleblower” events. All such records are highly sensitive (sometimes meeting the definition of personally identifiable information) and must be closely managed.
The audit function in an organization examines selected business processes to ensure that they are being managed properly and that they are effective. An organization’s audit leader will determine which business processes to examine in any given year based upon several factors, primarily compliance and risk. The compliance driver is associated with ensuring that an organization is compliant with specific industry-related regulations, and the risk driver brings audit scrutiny to those business processes that warrant more attention.
Audit records are almost always considered highly sensitive, and often they contain personally identifiable information when auditors collect business records as evidence in the course of their audits. Auditors generally maintain audit records on purpose-built information systems or on tightly controlled storage systems.
An organization’s marketing function undertakes various efforts, called campaigns, to inform persons and organizations of its goods and services. A marketing department will perform many activities, including
• Development of product or service names and descriptions
• Determining product or service pricing and terms
• Creating advertising content
Marketing and privacy intersect where marketing is developing direct marketing campaigns in which the organization sends information to selected individuals. Direct marketing involves several channels, including but not limited to telephone, texts, e-mail, web site advertisements, flyers, and postal mail.
The recipients of direct marketing messaging consist of current customers and other individuals who have been preselected. Current customers generally provide their contact information as a part of purchase transactions.
To reach individuals who are not current customers, organizations generally obtain lists of desired customers from information brokers that sell these lists based on targeted characteristics of individuals, such as age, address, income, and others.
Prior to the enactment of privacy laws, marketing departments had nearly carte blanche to develop any and every kind of direct marketing campaign. Because privacy laws in part target direct marketing, organizations need to review applicable privacy laws and their own privacy policies carefully to ensure the compliance of all direct marketing activities.
An organization’s business development function is tasked with identifying and developing new business opportunities in the form of relationships and partnerships to ensure the continued growth of the organization. Business development often performs feasibility studies to determine the potential benefit of a new product or service developed internally or with an external party.
Privacy and security leaders should be involved in the development or review of feasibility studies to ensure that privacy and security risks are identified. Involving privacy and security early reduces the probability of privacy- or security-related surprises that may influence or invalidate a business case.
Privacy and security leaders need to earn a high level of trust in order to be involved in business development. Successful involvement comes in the form of collaboration and exploration to figure out how to ensure the success of a new business endeavor. This stands in contrast to the reputation some privacy and security professionals have earned by attempting to halt specific business development efforts because of the presence of privacy- or security-related risks.
An organization’s public relations (PR) function is tasked with keeping customers, regulators, partners, and the general public informed of activities and developments in the organization. The mission of PR is to influence the perception of the organization.
A PR department often maintains a list of contacts to which communications are directed. Such a list would include journalists, regulators, and other interested parties. Often these lists will be considered in scope for privacy laws, requiring their protection and the management of their use.
An organization’s procurement and sourcing functions are responsible for managing the acquisition of products and services. These functions perform a number of activities, including the following:
• Issuing requests for information and requests for proposals Requests for information (RFIs) and requests for proposals (RFPs) are solicitations to product and service providers to help the organization better understand the providers' products and services. Procurement and sourcing will often include functional and nonfunctional requirements on many topics, including information security and privacy.
• Management of requirements As discussed earlier, procurement and sourcing will collect and manage requirements on many topics to be used when tasked with acquiring new products and services. Privacy and security requirements should align with applicable privacy and security laws, as well as with internal policies and controls, so that new products and services contribute to the organization’s compliance.
• Contract negotiations Procurement and sourcing is often involved in the parts of contract negotiations that are related to product or service terms, conditions, pricing, and compliance with privacy and security requirements. The legal department usually takes the lead on contracts and will manage all other components of a contract.
An organization’s legal department consists of one or more attorneys and has responsibility for several important functions, including these:
• Interpretation of laws and regulations A primary role of a legal department is the interpretation of laws and regulations, including the determination of applicability.
• Policy The legal department generally approves all corporate policies developed by different departments. In the context of privacy, this will include security policy as well as internal and external privacy policies.
• Legal contracts The legal department takes the lead on all contracts between the organization and outside parties. Procurement and sourcing, when involved, will have managed pricing as well as terms and conditions related to the product or service being acquired.
• Insurance The legal department often leads efforts related to insurance policies, including cyber-risk policies.
• Compliance Through its involvement with policy, contracts, and insurance, the legal department is often considered to play a key role in an organization's overall compliance.
• Business risk The legal department is often thought of as an organization’s business risk manager, through the management of policies, contracts, and insurance.
• Incident response The legal department plays a key role in privacy and security incident response, primarily in decisions on whether and when (and how) to involve external parties, including regulators, law enforcement, and affected persons.
Many organizations lack legal expertise on all relevant topics. A common approach involves the establishment of legal retainer relationships with outside legal counsel with expertise in areas of occasional concern to the organization.
An organization’s security and emergency services department is concerned with the protection of work centers and processing centers, as well as response to security-related matters that occur there. Generally, the nexus of privacy with these services is related to two main areas: contact information for security and emergency response personnel, and business records containing incident descriptions and the names of involved persons. Although the volume of information related to these activities may be low in comparison to the organization’s workforce and its customers, the personal information held in security and emergency services business records must be kept confidential and used for no other purposes.
An organization’s financial records will contain the names of accounting department personnel as well as outsiders, including persons to whom payments are sent and those who work in vendor organizations. As is the case with business records in other parts of an organization, the identities of persons whose names and other information appears in financial records should be kept confidential and used for no other purposes.
In most circumstances, organizations are required by law to retain detailed financial records for many years. The names of persons are a part of these records. Subject data requests concerning financial records may request corrections or even removal; in most cases, these requests cannot be granted as doing so would violate laws concerning the completeness, integrity, and retention of financial records.
Organizations often have business activities in areas or functions not listed in this section. Business records throughout the organization may contain subject names for one reason or another. In all cases, the principles of privacy and security apply: protection and proper use. An organization’s security and privacy policies should apply to information created or used in all departments regardless of its use. All such data stores should be included in an information catalog so that privacy, security, and information management personnel include them in routine data management and protection activities.
Privacy managers need to understand the full range of information protection techniques. A key tenet of data protection is the concept of keeping data for the shortest possible period of time, thereby reducing overall risk in terms of the number of records present at any given time. And when information is no longer needed, destruction techniques ensure that data cannot be reconstituted by anyone.
This section continues with discussions about organizations and individuals sharing data with others and the need for visibility and control of these sharing events, and concludes with a discussion on quantifying the costs of information protection.
Organizations are accustomed to retaining data for very long periods, often in perpetuity. For generations, the risks associated with long-term data retention have been quite low. Digital transformation has changed all of that: with business functions implemented in a workflow and records retained online, sensitive data such as personal information can be used, misused, abused, stolen, and subsequently monetized in numerous ways by cybercriminals, bringing potential and real harm to affected persons. The practice of data retention involves the management of a data retention schedule that determines the amount of time that various types of business records should be retained in an organization.
Today, the liability of excess data retention is being felt: executives are beginning to understand that retaining certain types of records represents risk. This realization results in a greater emphasis on establishing data retention schedules to limit how long organizations retain sensitive information.
Privacy and security professionals need to work with business unit leaders and legal counsel to determine appropriate data retention periods for various data types. Often, laws applicable to the data in question must be identified and understood. For instance, national, state, or provincial laws may specify that organizations retain HR employment records throughout an employee’s length of employment, plus several years afterward. Financial services and banking laws impose similar minimum retention periods for financial transaction data.
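The idea of a retention schedule can be sketched as a simple lookup from record type to minimum retention period. The following Python sketch is illustrative only: the record types and the periods attached to them are assumptions for the example, and a real schedule must come from applicable laws and legal counsel.

```python
from datetime import date, timedelta

# Illustrative retention schedule: record type -> retention period.
# These periods are assumptions for this sketch, not legal guidance.
RETENTION_SCHEDULE = {
    "hr_employment": timedelta(days=7 * 365),          # after end of employment
    "financial_transaction": timedelta(days=10 * 365),
    "marketing_consent": timedelta(days=2 * 365),
}

def is_past_retention(record_type: str, anchor_date: date, today: date) -> bool:
    """Return True once the record's retention period has fully elapsed."""
    return today - anchor_date > RETENTION_SCHEDULE[record_type]
```

A scheduled job could flag records for which `is_past_retention` returns `True` as candidates for archiving or destruction, subject to review.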
The General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and other privacy laws give data subjects a “right to be forgotten.” Generally, data subjects can request that their data be removed. Still, such requests can be granted only to the extent that an organization can remove such information without violating other laws that specify that information be retained. For example, Fred was unceremoniously fired from his job at Best State Bank. Thinking he could improve his employment prospects elsewhere, Fred asked Best State Bank to remove his employment record data under the “right to be forgotten” concept in applicable privacy laws. However, Best State Bank refused to fulfill the request, citing employment laws that require that all employment records be retained for the length of employment, plus seven years after the end of employment relationships.
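Fred’s scenario can be mirrored in a small decision sketch: before honoring an erasure request, check whether a legal retention period still applies. The record types and the seven-year period below are assumptions taken from the example in the text, not statements of law.

```python
from datetime import date

# Assumed legal minimum retention periods, in years, keyed by record type.
# "hr_employment": 7 mirrors the seven-year example in the text.
LEGAL_MINIMUM_YEARS = {"hr_employment": 7, "financial_transaction": 10}

def handle_erasure_request(record_type: str, retention_start: date, today: date) -> str:
    """Grant erasure only if no legal retention period remains in effect."""
    years_required = LEGAL_MINIMUM_YEARS.get(record_type, 0)
    years_elapsed = (today - retention_start).days / 365.25
    if years_elapsed < years_required:
        return "denied: legal retention period still in effect"
    return "granted: record eligible for erasure"
```

Under this sketch, Fred’s request would be denied until seven years after his employment ended, at which point the record becomes eligible for erasure.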
In the early years of computing, archiving meant that data was copied to magnetic tapes, and those tapes were kept in a vault for many years. Nowadays, with low data storage costs, organizations are apt to retain some data in perpetuity, particularly business transactions and records about employees and customers. However, the risks associated with long-term retention have compelled organizations to consider alternatives, including data archiving, the process of preparing data for long-term storage.
In instances where organizations are bound by specific laws to retain data for many years, archiving is a viable option for moving data out of online transaction systems to other systems or media.
The terms “online,” “nearline,” and “offline” connote approaches to archiving. Data that is online remains in a primary processing system along with current records. Data that is nearline may exist in a different system such as a data warehouse and can be accessed in the data warehouse or even returned to the primary transaction processing and storage system for a time. Offline data generally resides in another form such as backup media, whether tapes, a virtual tape library (VTL), or disk storage of some sort—but not in a form that is immediately available for normal processing.
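As a rough sketch, the three tiers can be modeled as an age-based classification. The one-year and seven-year thresholds below are assumptions chosen for illustration; real tiering decisions depend on access patterns, system capabilities, and legal requirements.

```python
# Illustrative storage-tier classifier; the one-year and seven-year
# thresholds are assumptions, not recommendations.
def storage_tier(record_age_days: int) -> str:
    if record_age_days <= 365:
        return "online"    # lives in the primary transaction system
    if record_age_days <= 7 * 365:
        return "nearline"  # e.g., a data warehouse, retrievable on demand
    return "offline"       # backup media such as tape or a VTL
```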
Archiving may be a viable option for risk-averse organizations that want to be rid of their older records systems but where applicable laws require long-term storage of original records. In this case, techniques such as pseudonymization and anonymization may not be available options, since organizations are generally obligated to retain original information in business transactions, including subject names and other personal information.
The Data Retention Tug of War
Privacy professionals and business leaders are sometimes at odds with one another, particularly on data retention. The core of the conflict is this: Business leaders want to keep information for as long as possible, to mine every last bit of value from collected information. On the other side, privacy professionals want data retained for the shortest possible time, if it is retained at all.
Neither side is entirely correct. Instead, business leaders and privacy professionals need to understand the facts, applicable laws, use cases, and options available to enable the organization to derive maximum value from its information, while applying techniques to reduce risks as much as possible.
For all of an organization’s efforts to ensure data availability, even in adverse conditions, an equal amount of effort is required when data no longer needs to be retained. Data destruction is the purposeful act of destroying data so that it cannot be recovered. Data destruction is invoked as a part of two processes: data classification and handling, and data retention.
Data destruction policy should include directives for the safe removal of data in numerous use cases, including
• Electronic storage on a laptop computer, desktop computer, tablet computer, smartphone, or USB drive
• Electronic storage on a server
• Electronic storage on a file server
• Electronic storage on a hard disk drive (HDD) or a solid-state drive (SSD)
• Stored as a record in a database management system
• Stored as a record in a business application
• Stored on backup media
• Stored on printed paper
The rigor used to destroy data safely should depend upon the sensitivity of the data being destroyed and a risk or threat assessment that helps determine the extent to which an adversary would attempt to reconstitute discarded or destroyed data.
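One way to operationalize this proportionality is a lookup from data sensitivity to a sanitization approach, loosely modeled on the Clear/Purge/Destroy categories of NIST SP 800-88. The sensitivity labels themselves are assumptions for this sketch; an organization would substitute its own classification levels.

```python
# Sensitivity -> sanitization approach, loosely modeled on the
# Clear / Purge / Destroy categories of NIST SP 800-88.
# The sensitivity labels are assumptions for this sketch.
DESTRUCTION_METHODS = {
    "low": "clear",        # logical deletion or a simple overwrite
    "moderate": "purge",   # cryptographic erase or firmware secure-erase
    "high": "destroy",     # physical shredding or degaussing of the media
}

def destruction_method(sensitivity: str) -> str:
    """Return the sanitization approach appropriate for the sensitivity level."""
    return DESTRUCTION_METHODS[sensitivity]
```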
It is said that sensitive information does not provide value until it is collected, used, and reused; that use and reuse occur when information is shared. However, privacy laws and emerging privacy practices are putting controls around the sharing and subsequent use of information. It is still possible, and often profitable, to share and reuse information, but in light of privacy laws, this sharing and reuse must be monitored and controlled so that management has a say in these activities.
Data sharing takes place on two levels:
• Sharing structured data in intra- and inter-company arrangements Management makes business decisions to share specific data sets within the organization or externally to other organizations. These decisions are often motivated by monetization objectives to contribute to an organization’s bottom line. Intercompany data sharing is a prime focus of privacy laws, so all such arrangements should require approval at the highest levels.
• Sharing of data by individual workers Employees and other personnel share files, directories, and other content with internal and external parties, often through file storage and collaboration systems. These sharing events should be logged and monitored so that management retains visibility and control over them.
Management may, at times, wonder about the costs expended to protect information and ensure its proper handling: Is our security or privacy program worth its cost? What does it cost our organization to comply with specific privacy or security laws? These questions are often easier asked than answered for reasons that include the following:
• Blended costs Any given software as a service (SaaS) application includes privacy and security features and controls such as event monitoring, forensics, and response. The trouble is that SaaS services rarely, if ever, break out the costs of privacy and security in their fees. If they did, would you use those figures or other arbitrary amounts?
• Bundled features Numerous IT products, including but not limited to network devices, operating systems, and database management systems, include features that facilitate privacy and security. However, like SaaS applications, the proportion of total cost that represents privacy or security is elusive.
• Feature utilization Continuing the preceding point, if it were possible to know the proportion of the cost of a product allocated to security or privacy, the next hurdle in this thought experiment is related to the use of security and privacy features in IT products. For example, if an organization arbitrarily allocates 20 percent of the cost of a network router to security, and the organization emphasizes security in its network, then should the 20 percent figure be increased to some higher value?
• Privacy and security controls Another thought experiment targeting the costs allocated to privacy and security is related to the effort expended to operate privacy and security controls. Take, for example, controls related to identity and access management. As an extreme example, an organization that believes that it spends zero dollars on security still has to spend time creating and managing user accounts, because information systems are designed in a way that requires identity and access management efforts to create user accounts. Thus, can an organization that has applications with user accounts really say it spends no money on security?
• Staff compensation Measuring the time that IT and other personnel spend on privacy and security is an achievable endeavor, but it requires effort to reach the correct conclusions. This is partly because security and privacy are parts of many workers’ jobs, whether or not “security” and “privacy” are mentioned in their job descriptions. For instance, security is probably not 100 percent of a security engineer’s job, nor is security 0 percent of a system engineer’s job. Realistic estimates are only a little better than outright guesses and may be clouded by biases as well as arbitrary statements about which activities are security-related or privacy-related.
• Benchmarking Many organizations wonder what percentage of total IT spend is devoted to security or to privacy. In the past, this was easier to answer. Nowadays, with organizations using SaaS and other cloud services, many IT costs aren’t included in IT’s budgets, and many security and privacy costs are even harder to identify and allocate. Today, many departments “do IT” but don’t have an IT budget per se. Instead, their IT work is buried (not intentionally) within other operational budgets.
What is possible in an organization is the tracking of costs for specific activities and services, including tracking spending from one budget year to the next. It may be possible to understand whether the costs of security and privacy measures are increasing if consistent means are used to track different kinds of costs.
Management can also ask the opposite question: What is the potential cost if we don’t implement privacy or security in the organization? This is no easier to determine than the costs of controls, because estimating the consequences of lacking controls also requires estimating the probability of the costly events that their absence would permit. For instance, what is the statistical probability of a ransomware attack occurring in different types of organizations? The risk factors are numerous and difficult to quantify.
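A common way to frame such an estimate, drawn from classical risk analysis rather than this chapter, is the annualized loss expectancy (ALE) model: the estimated cost of a single event multiplied by its expected yearly frequency. The dollar figure and frequency below are hypothetical.

```python
# Annualized loss expectancy: ALE = SLE * ARO, a standard (and admittedly
# simplistic) risk-quantification model from classical risk analysis.
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """Expected yearly loss from an event with the given cost and frequency."""
    return single_loss * annual_rate

# A hypothetical ransomware event estimated at $500,000, expected once
# every five years (ARO = 0.2), yields an ALE of $100,000 per year.
```

The difficulty the text describes lies precisely in estimating the inputs: both the single-loss figure and the annual rate of occurrence are hard to quantify with confidence.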
It should be clear by now that privacy is not just an IT and information security function. Management and execution of a privacy program and its activities require buy-in and support from all levels of an organization. As the speed of business continues to accelerate with the use of technology, and as privacy regulations continue to evolve, the ability to identify and control personal information will remain at the forefront of business concerns, enabling organizations to build sustainable privacy and security programs that can adapt to future requirements.
Information privacy is wholly dependent upon cybersecurity for the protection of personal information. While cybersecurity is generally performed by others, privacy managers should be familiar with numerous information security practices that support their programs.
Identity and access management comprises a collection of activities in an organization that is concerned with controlling and monitoring individuals’ access to information systems containing sensitive and personal information.
Vulnerability management is the practice of periodically examining information systems to discover exploitable vulnerabilities, analyzing them, and making decisions about their remediation.
The practice of patch management, a part of vulnerability management, ensures that IT systems, tools, and applications have consistent version and patch levels.
Incident response represents a set of activities that are initiated when certain events or conditions occur.
IT service management (ITSM) is the set of activities that ensures that the delivery of IT services is efficient and effective, through active management and the continuous improvement of processes.
Information security policy is a foundational component of any organization’s security program and a necessary prerequisite to an organization’s privacy program. Security policy defines the principles and required actions for the organization to protect its assets and personnel properly.
An organization’s security and IT standards describe, in detail, the methods, techniques, technologies, specifications, brands, and configurations to be used throughout the organization.
Guidelines are statements that help organization personnel better understand how to implement or comply with policies and standards.
Privacy and security by design is a concept that reinforces the need to have privacy and security considerations incorporated into systems and applications as a standard practice.
Key areas where information security must be privacy-aware include logging and monitoring, software and systems development, security incident response, identity and access management, and business continuity and disaster recovery planning.
Privacy principles and policies should be integrated into many facets of organizations and their activities, including human resources, compliance and ethics, audit, marketing and business development, public relations, procurement and sourcing, legal and contracts, security and emergency services, finance, and mergers, acquisitions, and divestitures.
The practice of data retention involves the management of a data retention schedule that determines the amount of time that various types of business records should be retained in an organization.
Data destruction is the purposeful act of destroying data so that it cannot be recovered.
Organizations should develop the ability to monitor and control data sharing and data disclosure events.
• More than 90 percent of successful cyberattacks begin with social engineering, when personnel in an organization are tricked into performing actions that enable an adversary to attack the organization successfully.
• Vulnerabilities are the weaknesses that may be present in a system and that enable a threat to be more easily carried out or to have greater impact.
• Privacy managers need to understand the differences in meaning among the terms identification, authentication, and authorization.
• The common vulnerability scoring system (CVSS) is an open framework that is used to provide a common methodology for scoring vulnerabilities.
• Privacy principles and practices need to permeate throughout the organization in the form of a lifestyle of awareness and protection of personal information. Though privacy plays a dominant role in IT and information security functions, it’s vital elsewhere as well.
• Organizations need to reconcile data retention and data minimization practices.
• Determination of the cost of privacy and security controls is difficult because information systems and business processes blend privacy, security, and operations activities in varying proportions.
1. The purpose of vulnerability management is to:
A. Identify and remediate vulnerabilities in all systems.
B. Transfer vulnerabilities to low-risk systems.
C. Identify exploitable vulnerabilities in all systems.
D. Transfer vulnerabilities to third parties.
2. A privacy manager has written a procedure in which the password for a system has been divided into two parts, each given to separate persons. This practice is an example of:
A. Split custody
B. Dual control
C. Segregation of duties
D. Twin secrecy
3. All of the following threats are reduced through the use of multifactor authentication except:
A. Dictionary attacks
B. Password replay attacks
C. Keylogger attacks
D. User ID replay attacks
4. In an organization whose IT environment is entirely cloud-based, what purpose does a VPN serve?
A. Blocks command-and-control traffic
B. Prevents malware from executing
C. Provides defense in depth authentication
D. Protects traffic from eavesdroppers
5. When logging in to an application, a user has provided a user ID and clicked the Next button. What has the user performed at this point?
6. The activity consisting of the use of tools to identify security defects on systems is known as:
A. Vulnerability management
B. Vulnerability scanning
C. Patch management
D. Penetration testing
7. The IT process concerned with the review and approval of alterations to IT systems is known as:
A. Change management
B. Problem management
C. Configuration management
D. Defect management
8. The IT process that is often considered the cornerstone of tactical security management is:
A. Change management
B. Asset management
C. Configuration management
D. Incident management
9. All of the following are deemed administrative safeguards except:
A. Privileged access controls
B. Security policy
D. Security standards
10. The practice of incorporating security and privacy into business processes is known as:
A. Security and privacy requirements development
B. Security and privacy awareness training
D. Security and privacy by design
11. By integrating privacy and security into business continuity planning, an organization ensures that:
A. Processes related to personal information are given priority for restoration.
B. Personal information protection and valid use continues to be the norm.
C. Processes related to personal information are more resilient.
D. Privacy and security are the most important characteristics of business processes.
12. Which of the following statements regarding information in an HRIS is true?
A. Data in an HRIS system should be classified at the lowest level.
B. Data in an HRIS system should be classified at the highest level.
C. Data in an HRIS system is generally considered in scope for privacy laws.
D. Data in an HRIS system is generally considered out of scope for privacy laws.
13. In the context of privacy laws, the purpose of data retention schedules is:
A. Develop means for automatic removal of personal information.
B. Retain personal information for the longest possible time.
C. Retain personal information for the shortest possible time.
D. Develop procedures for manual removal of personal information.
14. Which of the following techniques is most suitable for the removal of expired records in a large database management system?
A. Removal of specific rows
15. In a large multiuser file storage system, instances of directories and files shared by users with other parties should be:
1. A. The purpose of vulnerability management is to identify vulnerabilities in information systems and devices and to remediate vulnerabilities on a schedule according to risk.
2. A. Split custody is an arrangement whereby one person controls part of a password, and another person controls another part. Each person is required to perform a procedure that completes the entire password.
3. D. Multifactor authentication does not reduce the threat of user ID replay attacks. It reduces or eliminates the threats of dictionary attacks, password replay attacks, and keyloggers.
4. D. A VPN connection is encrypted, thus preventing any party from eavesdropping on the connection.
5. C. When a user has presented only a user ID, the user is said to have identified himself. The user is not authenticated until he successfully presents a correct password and potentially performs an additional step of multifactor authentication.
6. B. Vulnerability scanning is the use of tools to perform scans to identify security defects—often in the form of missing patches and configuration errors—on systems in a network.
7. A. Change management is the IT process whereby changes to IT systems are proposed, reviewed, analyzed, approved, performed, verified, and recorded.
8. B. Asset management is considered a cornerstone process, without which other processes such as vulnerability management, event management, and change management cannot be effective.
10. D. Security and privacy by design is the culture of including security and privacy in business processes, particularly those in which innovation occurs.
11. B. When privacy and security are incorporated into business continuity and disaster recovery planning, the protection of personal information and safeguards to enforce proper use of personal information will be a part of business contingency and disaster recovery procedures.
12. C. Data in a human resources information system (HRIS) includes numerous types of personal information about an organization’s workforce. For the most part, privacy laws consider this personal information to be subject to those laws.
13. C. Data retention schedules specify the period of time that various types of business records should be retained. Generally, data containing personal information should be kept for only as long as necessary and no longer.
14. A. The only practical way to remove expired data from a large database management system is to delete the rows that have expired. Other techniques are ineffective or not applicable.
15. D. File and directory sharing actions in a file storage system should be logged and monitored for violations of policy. Many sharing actions will be considered appropriate, but some may warrant investigation.