Chapters 1 and 2 covered the general data security obligations that all U.S. companies face under Section 5 of the FTC Act, state data security laws, and common law torts that could lead to class action lawsuits and other litigation. These requirements apply equally to companies regardless of their industry.
In addition to these general data security requirements, companies that handle particularly sensitive information or operate in industries that carry particularly high national security risks face more stringent requirements. This chapter will cover six such prominent legal requirements for sensitive information: (1) the Gramm-Leach-Bliley Act Safeguards Rule for financial institutions, (2) the Red Flags Rule for certain creditors and financial institutions, (3) the Payment Card Industry Data Security Standard (PCI DSS) for credit and debit card information, (4) the Health Insurance Portability and Accountability Act Security Rule for certain health-related information, (5) Federal Energy Regulatory Commission guidelines for electric grid cybersecurity, and (6) Nuclear Regulatory Commission cybersecurity requirements for nuclear reactor licensees.
Keep in mind that the general cybersecurity requirements described in Chapters 1 and 2 also apply to these industries, unless there is an exception for companies that comply with industry-specific laws and regulations. Moreover, it is increasingly common for companies that provide highly sensitive information to certain contractors, such as law firms and accountants, to contractually require additional cybersecurity protections.
In 1999, Congress enacted the Gramm-Leach-Bliley Act, a comprehensive overhaul of financial regulation in the United States. Many of the most controversial portions of the act, which relaxed decades-old ownership restrictions on financial institutions, are outside of the scope of this book. For the purposes of cybersecurity, the most relevant section is known as the Safeguards Rule, which requires federal regulators to adopt data security standards for the financial institutions that they regulate.
The Gramm-Leach-Bliley Act requires the agencies to adopt administrative, technical, and physical safeguards:
The statute only applies to “nonpublic personal information,” which it defines as personally identifiable financial information that is (1) provided by a consumer to a financial institution, (2) resulting from any transaction with the consumer or any service performed for the consumer, or (3) otherwise obtained by the financial institution.2
A number of agencies regulate financial institutions, and they have taken slightly different approaches to developing regulations under the GLBA Safeguards Rule. The remainder of this section examines the primary regulations issued by the various agencies.
Agencies that regulate banks and related financial institutions have collaborated to develop Interagency Guidelines to implement the Safeguards Rule. The agencies that have adopted the Interagency Guidelines into their regulations, and the types of institutions that they regulate, are as follows:
The Interagency Guidelines require covered institutions to implement a “comprehensive written information security program” to safeguard nonpublic personal information.4 The agencies stated that financial institutions must take the following steps while developing and implementing their programs:
The Interagency Guidelines further require financial institutions to maintain incident response programs for sensitive customer information, which the guidelines define as a customer's name, address, or phone number in combination with at least one of the following:
Sensitive customer information also includes any additional information that would enable an unauthorized user to access a customer's account (e.g., a username and password).
Incident response programs for sensitive information must contain procedures to:
The notices must contain: a general description of the data breach, the types of information that were accessed without authorization, and mitigation steps taken by the financial institution; a telephone number for further information about the breach; and a reminder to “remain vigilant” and report apparent identity theft.7
Although the Interagency Guidelines are comprehensive, the banking regulators have not focused on enforcement of their data security regulations as much as many other regulators, such as the Securities and Exchange Commission and the FTC.
The Securities and Exchange Commission's Regulation S-P sets the GLBA Safeguards Rule requirements for brokers, dealers, investment companies, and investment advisers that are registered with the SEC.8 The SEC's version of the Safeguards Rule is not as detailed as the Interagency Guidelines, though the SEC has been fairly aggressive in its enforcement of the rule in recent years.
The SEC's regulations broadly require institutions to adopt written information security policies and procedures that contain administrative, technical, and physical safeguards that meet the three goals of the GLBA Safeguards Rule: ensuring the security and confidentiality of customer information, protecting the information from anticipated threats or hazards, and protecting the information from unauthorized access that could substantially harm or inconvenience the customer.9 Regulation S-P also requires institutions to properly dispose of consumer report information and take steps to protect against unauthorized access.10
Despite the relative lack of specificity in the SEC's version of the Safeguards Rule, the agency has indicated that cybersecurity is a high priority, and that it will use the regulation to pursue institutions that do not adequately protect customer information. In September 2015, the SEC announced a settlement of an administrative proceeding with R.T. Jones Capital Equities Management, an investment adviser that experienced a data breach, originating in China, that exposed the personal information of approximately 100,000 people.11 Although R.T. Jones notified affected individuals and there were no reports of harm, the SEC brought the administrative action because the company did not have a written information security program. In the settlement order, the SEC noted that the company failed to conduct periodic risk assessments, use a firewall to protect the web server that contained client information, encrypt customer information, or develop an incident response plan.12 The no-fault settlement required the company to cease future violations of the SEC's Safeguards Rule and to pay a $75,000 penalty. In announcing the settlement, Marshall S. Sprung, Co-Chief of the SEC Enforcement Division's Asset Management Unit, warned that firms “must adopt written policies to protect their clients' private information and they need to anticipate potential cybersecurity events and have clear procedures in place rather than waiting to react once a breach occurs.”13
The FTC regulates financial institutions that are not regulated by one of the banking agencies or the SEC. Among the types of financial institutions that the FTC regulates are consumer reporting agencies, retailers that offer credit to customers, and mortgage brokers.
Like the SEC, the FTC did not pass an incredibly detailed Safeguards Rule. Nonetheless, the FTC has been quite aggressive in its enforcement of the Safeguards Rule, partly due to the key role that customer information plays for consumer reporting agencies and other financial institutions regulated by the FTC.
The FTC's Safeguards Rule, like those of the other agencies, requires financial institutions to develop, implement, and maintain a comprehensive written information security program that contains administrative, technical, and physical safeguards that meet the GLBA Safeguards Rule's three key objectives listed at the start of this chapter.
The FTC's regulations require information security programs to be carried out and protected as follows:
The FTC has brought a number of enforcement actions against companies that failed to develop information security programs that meet these requirements. Typically, the FTC brings cases after a financial institution has experienced a data breach. The summaries that follow are a few of the most prominent settlements of enforcement actions that the FTC has brought under the Safeguards Rule.
Data breaches often trigger FTC scrutiny of a financial institution's compliance with the Safeguards Rule. ACRAnet assembles consumer reports for the three major consumer reporting agencies, Equifax, Experian, and TransUnion. The reports contain a great deal of sensitive and nonpublic information, such as consumers' names, addresses, Social Security numbers, birth dates, and work histories. The company sells these reports to mortgage brokers, and therefore is a financial institution subject to the FTC's Safeguards Rule. In 2007 and 2008, hackers accessed nearly 700 consumer reports due to vulnerabilities in the networks of ACRAnet's clients. After the breach, the FTC stated, ACRAnet did not take steps to prevent similar breaches by, for instance, requiring clients to demonstrate that their computer networks were free of security threats. The FTC asserted that ACRAnet violated the Safeguards Rule by failing to:
James B. Nutter & Co. makes and services residential loans, and is therefore covered by the FTC Safeguards Rule. The company collects a great deal of highly sensitive information, including employment history, credit history, Social Security numbers, and driver's license numbers. It uses its website and computer network to obtain personal information from customers, store data, and otherwise conduct its lending business. An unauthorized individual managed to hack into the company's network and send spam. Although there was no evidence of theft of customer information, the FTC stated in its complaint that the hacker “could have accessed personal information without authorization.” The FTC claimed that the company violated the Safeguards Rule by failing to:
Superior Mortgage Corporation, a residential mortgage direct lender, collects sensitive information, such as credit card account numbers and Social Security numbers, during the mortgage application process. The FTC brought a complaint against the company for violating the Safeguards Rule. Although the complaint did not mention a specific data breach or other attack on the company's systems, it noted that the company's website encrypted sensitive customer information only while in transit, not while at rest. The decrypted customer information allegedly was then emailed in clear text to the company's headquarters and branch offices, even though the company's online privacy policy claimed that “[a]ll information submitted is handled by SSL encryption[.]” The FTC alleged that Superior Mortgage violated the Safeguards Rule by, among other things, failing to:
The FTC also expects companies to adequately oversee their employees' handling of personal information. In one such case, employees of Goal Financial, a marketer and originator of student loans, transferred more than 7,000 consumer files to third parties. Additionally, a Goal Financial employee sold hard drives that had not yet been wiped of approximately 34,000 customers' sensitive personal information. In its complaint against Goal Financial, the FTC alleged that the company violated the Safeguards Rule by failing to: identify reasonably foreseeable risks, design and implement safeguards to control those risks, develop a written information security program, and require contractors to safeguard customer information.
In 2003, amid growing concern about identity theft, Congress passed the Fair and Accurate Credit Transaction Act of 2003. Among other provisions, the statute required banking regulators and the FTC to develop regulations that require financial institutions and creditors that offer covered accounts to develop “reasonable policies and procedures” to prevent identity theft of their account holders.14
The Red Flag Rule only applies to companies that (1) are financial institutions or creditors and (2) offer “covered accounts” to individuals. To determine whether the Red Flag Rule applies, companies must analyze the definition of both terms.
The FTC and banking regulators issued their first iteration of the Red Flag regulations in 2007, but implementation was delayed after an outcry from the business community about the regulations' lack of clarity. Although “financial institution” is clearly defined, the regulations contained a broad definition of “creditor” that could have included professionals such as doctors and lawyers because they bill clients after performing services. Many such professionals argued that their operations do not pose a substantial risk of identity theft, and therefore they should not be required to develop comprehensive identity theft prevention programs.
Congress responded to the industry concerns in 2010 by passing the Red Flag Program Clarification Act of 2010.15 The law defines “creditor” as a company that, in the ordinary course of business:
The Clarification Act explicitly states that the term “creditor” does not include an entity that “advances funds on behalf of a person for expenses incidental to a service provided by the creditor to that person.”22
The new definition clarifies that the Red Flag Rule applies to financial institutions; companies that obtain, use, or provide information for credit reports; or companies that lend money to people, provided that the loan is for something other than the lender's own services. Accordingly, under the clarified Red Flag Rule, a doctor or lawyer does not become subject to the Red Flag Rule merely by billing a customer after providing the service.
Not all financial institutions and creditors are covered by the Red Flag Rule. The requirements only apply if the company offers a “covered account.” The Red Flag Rule regulations define “covered accounts” as including two types of accounts:
To determine whether an account falls within either definition, the regulations instruct the financial institution or creditor to consider the methods that the company provides to open its accounts, the methods that the company provides to access the accounts, and the company's previous experience with identity theft.24 Keep in mind that the regulations apply as long as the financial institution or creditor has at least one covered account.
In other words, financial institutions and creditors must conduct a balancing test to determine whether the risk of identity theft to their customers is reasonably foreseeable. They are required to develop an identity theft prevention program only if they determine that the risk is reasonably foreseeable and, therefore, that they offer covered accounts. The regulators expect companies to periodically reassess this risk, and companies should make an honest assessment: if a company obtains highly sensitive personal information via an unencrypted Internet connection, it is difficult to conceive how the company could find that there is not a reasonably foreseeable risk of identity theft. It is a best practice to document the reasoning behind the determination of whether a company offers a covered account.
The Red Flag regulations require financial institutions and creditors that offer at least one covered account to develop a written identity theft prevention program designed “to detect, prevent, and mitigate identity theft in connection with the opening of a covered account or any existing covered account.”25
The written program must explain how the financial institution or creditor will accomplish four goals:
Companies that accept or use credit or debit cards (including, but not limited to, retailers) are required to comply with the Payment Card Industry Data Security Standard (PCI DSS), an extensive set of operational and technical rules that are intended to protect payment card numbers and associated data. The goal of the rules is to reduce the chances of the data being stolen and used for identity theft.
The PCI DSS standards are adopted not by courts or legislatures but by an organization comprised of the major credit card companies (American Express, Discover Financial Services, JCB, MasterCard, and Visa).
The PCI Security Standards Council has developed detailed technical guidance for businesses of varying sizes to comply with the standards, available on its website, www.pcisecuritystandards.org. In short, PCI DSS consists of six goals and twelve requirements:
The credit card companies individually enforce these requirements by contractually imposing them on the banks, which in turn impose the requirements on the merchants and others that accept and use their credit cards. The credit card companies and banks can impose substantial fines on retailers that fail to comply with PCI DSS, but the amount of those fines is not publicly disclosed.
Additionally, two state laws refer to PCI DSS:
Even in states that have not adopted laws that incorporate PCI DSS, the standards could help determine the general standard of care in common-law tort and contract claims. For example, in the Hannaford case discussed in Chapter 2, involving the breach of a grocery chain's payment card systems, the district court concluded that it is possible that retailers have an implied contract with their consumers to incorporate industry data security standards with their payment card data:
If a consumer tenders a credit or debit card as payment, I conclude that a jury could find certain other implied terms in the grocery purchase contract: for example, that the merchant will not use the card data for other people's purchases, will not sell or give the data to others (except in completing the payment process), and will take reasonable measures to protect the information (which might include meeting industry standards), on the basis that these are implied commitments that are “absolutely necessary to effectuate the contract,” and “indispensable to effectuate the intention of the parties.” A jury could reasonably find that customers would not tender cards to merchants who undertook zero obligation to protect customers' electronic data. But in today's known world of sophisticated hackers, data theft, software glitches, and computer viruses, a jury could not reasonably find an implied merchant commitment against every intrusion under any circumstances whatsoever (consider, for example, an armed robber confronting the merchant's computer systems personnel at gunpoint).32
In short, PCI DSS has become the de facto standard of care for all companies – large and small – that accept, use, process, or store credit or debit card information. Companies are wise to keep informed about the PCI Council's latest guidance regarding PCI DSS compliance.
Certain health-related providers and companies are required to comply with an extensive series of regulations for the security of health data. Under its authority from the Health Insurance Portability and Accountability Act, the Department of Health and Human Services has promulgated regulations known as the HIPAA Security Rule.
The HIPAA Security Rule applies to two types of entities: “covered entities” and “business associates.” Other companies, even if they handle health information, are not subject to HIPAA, unless required by a contract. A “covered entity” is a health plan, healthcare clearinghouse, or a healthcare provider who transmits health information in electronic form. A “business associate” is a provider of “data transmission services” to a covered entity, a person who offers a personal health record to individuals on behalf of a covered entity, or a subcontractor that “creates, receives, maintains, or transmits protected health information on behalf of the business associate.”33 Examples of business associates include attorneys who require access to protected health information to provide services and medical transcriptionist services.
The HIPAA Security Rule only applies to “protected health information” that is collected from an individual and is created or received by a covered entity, and relates to “the past, present, or future physical or mental health or condition of an individual; the provision of health care to an individual; or the past, present, or future payment for the provision of health care to an individual.”34 Information is only protected health information if it directly identifies an individual or if there is a reasonable basis to believe that it could identify an individual.35
The HIPAA Security Rule requires covered entities and business associates to ensure the confidentiality, integrity, and availability of electronic protected health information and take steps to protect against reasonably anticipated threats.36 As with the GLBA Safeguards Rule, the HIPAA Security Rule is not a one-size-fits-all approach, and instead states that covered entities and business associates may “use any security measures that allow the covered entity or business associate to reasonably and appropriately implement the standards and implementation specification[.]”37 The regulations instruct covered entities and business associates to consider their size, complexity, and capabilities, technical infrastructure, costs of security measures, and likelihood and magnitude of potential information security risks.38
Despite its flexible approach, the HIPAA Security Rule imposes a number of administrative, physical, technical, and organizational standards that covered entities and business associates must adopt. The following are the requirements from the current HIPAA regulations, located at 45 CFR Part 164, edited here for clarity and brevity:
Administrative safeguards.39
Physical safeguards.40
Technical safeguards.41
Organizational safeguards.42
The Department of Health and Human Services also has developed a detailed set of regulations that require covered entities to notify affected individuals and regulators about data breaches of unsecured protected health information. If business associates experience a breach, they are required to notify the covered entity within sixty days, and the covered entity is obligated to inform individuals.43
The breach notification requirement does not apply if all of the protected health information has been “secured” pursuant to guidance from the Department of Health and Human Services or if there is a “low probability” of compromise.44 The Department states that protected health information can be secured through an encryption method that has been validated by the National Institute of Standards and Technology, or if the media on which the protected health information is stored has been properly destroyed (e.g., by shredding paper, film, or other hard-copy media, or by destroying electronic media). Redaction alone does not constitute “securing” data, according to the Department.45
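The safe-harbor analysis described above can be sketched as a simple decision helper. This is an illustrative Python sketch, not legal advice: the function name and boolean inputs are assumptions, and the underlying determinations (NIST-validated encryption, a documented low-probability-of-compromise finding) require careful legal and technical judgment.

```python
def notification_required(secured_per_hhs_guidance: bool,
                          low_probability_of_compromise: bool) -> bool:
    """Sketch of the HIPAA breach-notification safe harbor.

    secured_per_hhs_guidance: True if the PHI was encrypted with a
        NIST-validated method or the storage media was properly destroyed.
    low_probability_of_compromise: True if a documented risk assessment
        found a "low probability" that the PHI was compromised.
    """
    # Notification is excused if the PHI was "secured" or if there is
    # a low probability of compromise; otherwise notice is required.
    return not (secured_per_hhs_guidance or low_probability_of_compromise)

# Unsecured data with no low-probability finding triggers notice.
print(notification_required(False, False))  # True
# Properly encrypted (secured) data falls within the safe harbor.
print(notification_required(True, False))   # False
```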
Unless law enforcement requests a delay for investigative purposes, covered entities must provide breach notifications to affected individuals without unreasonable delay and no later than sixty calendar days after first discovering the breach.46
HIPAA requires notices to contain many of the same elements as the notices required in the state data breach statutes, discussed in Chapter 1. Keep in mind that many of the state breach notice laws contain safe harbors that allow HIPAA-covered entities to satisfy the state breach notice requirements by complying with HIPAA's notice procedures. HIPAA breach notifications must contain the following:
The notification must be provided in writing to each individual's last known mailing address, or to an email address if the individual had agreed to electronic notice and had not revoked consent.48 If the covered entity is aware that the individual is deceased, and has a mailing address for the next of kin or personal representative of the affected individual, the covered entity should send the notification to that address via first-class mail.49
If there is not sufficient contact information to send written notifications to individuals via postal mail, covered entities may use a substitute notice process. If there is insufficient contact information for fewer than ten individuals, then covered entities can provide an alternative form of written notice, notice by telephone, or other means. If there is insufficient or out-of-date contact information for ten or more people, the substitute notification must (1) be a conspicuous posting on the covered entity's website for ninety days, or a conspicuous notice in major local print or broadcast media, and (2) include a toll-free number, active for at least ninety days, to provide individuals with more information about whether they were affected by the breach.50
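The substitute-notice thresholds above lend themselves to a short decision sketch. The helper name and return strings are hypothetical; the ten-person threshold and ninety-day requirements come from the regulation as described in this paragraph.

```python
def substitute_notice_method(num_unreachable: int) -> str:
    """Choose a HIPAA substitute-notice channel when written notice
    cannot be delivered by postal mail (illustrative helper only)."""
    if num_unreachable == 0:
        return "no substitute notice needed"
    if num_unreachable < 10:
        # Fewer than ten unreachable individuals: an alternative form
        # of written notice, telephone, or other means suffices.
        return "alternative written notice, telephone, or other means"
    # Ten or more: conspicuous 90-day website posting (or major local
    # print/broadcast media) plus a toll-free number active >= 90 days.
    return "90-day website posting or media notice, plus toll-free number"

print(substitute_notice_method(3))
print(substitute_notice_method(25))
```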
If the covered entity determines that there is an urgent need to notify individuals, the entity may also notify the individuals by telephone and other means, in addition to written notice.51
If a breach involves the unsecured protected health information of more than 500 residents of a single state or jurisdiction, the covered entity must notify prominent outlets in the state or jurisdiction within sixty calendar days of discovery of the breach, and the content of the notification should be the same as in the individual notifications.52
The regulations also require notification to the Department of Health and Human Services. If the breach involves 500 or more individuals, a covered entity must inform the Department at the same time that it notifies individuals.53 If the breach involves fewer than 500 individuals, the covered entity must maintain a log of breaches and, within sixty days after the end of each calendar year, provide the Department with the log of all breaches from the preceding calendar year.54 The Department of Health and Human Services' website contains instructions for the manner in which to notify the department of both categories of breaches.55
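The two HHS-notification tracks in this paragraph can be expressed as a small scheduling sketch. The function name is hypothetical, and the deadlines are computed mechanically from the sixty-day periods described above; actual reporting must follow the Department's posted instructions.

```python
import datetime

def hhs_notification_plan(individuals_affected: int,
                          discovery_date: datetime.date) -> str:
    """Sketch of when a covered entity must notify HHS of a breach
    (illustrative helper based on the 500-individual threshold)."""
    if individuals_affected >= 500:
        # 500 or more: notify HHS at the same time as individuals,
        # i.e., without unreasonable delay and within 60 days.
        deadline = discovery_date + datetime.timedelta(days=60)
        return f"notify HHS now (no later than {deadline.isoformat()})"
    # Fewer than 500: log the breach and report all logged breaches
    # within 60 days after the end of the calendar year.
    year_end = datetime.date(discovery_date.year, 12, 31)
    deadline = year_end + datetime.timedelta(days=60)
    return f"log breach; annual report due by {deadline.isoformat()}"

print(hhs_notification_plan(600, datetime.date(2024, 1, 15)))
print(hhs_notification_plan(40, datetime.date(2024, 1, 15)))
```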
The Department of Health and Human Services' Office for Civil Rights enforces the HIPAA privacy and security regulations. In 2014, the most recent year for which data was available, the Office investigated 427 data breaches, and required that covered entities take corrective action in 415 of those cases.
Unlike the FTC, the Department of Health and Human Services does not publicly release the full text of its investigative complaints and settlements. However, on its website, the Department summarized some cases without specifying the identities of the covered entities:
Among the many potential cyber threats, an attack on the nation's electric grid is one of the most frequently discussed. A cyberattack that causes large metropolitan areas to go dark could have devastating effects on national security and the economy.
Accordingly, the Federal Energy Regulatory Commission, which regulates national electric utilities, has increasingly focused on cybersecurity. In January 2016, FERC adopted seven critical infrastructure protection reliability standards that originated from the North American Electric Reliability Corp., a nonprofit organization. Unlike many of the other industry-specific laws and regulations, such as GLBA and HIPAA, the FERC standards are not only concerned with the confidentiality of data but also with preventing any disruptions due to cyberattacks.
This section contains a summary of key provisions of each of the seven standards, but utilities should review the complete standards to ensure compliance.
At least every fifteen months, utilities' senior managers should approve cybersecurity policies that address:
Utilities should name a responsible manager for leading the implementation of the cybersecurity standards, who is permitted to delegate authority to other employees, provided that this delegation has been approved by a senior manager of the utility. In practice, it is common for the responsible manager to be a Chief Information Security Officer or equivalent.
Utilities should implement quarterly training for security awareness that “reinforces cybersecurity practices (which may include associated physical security practices) for the [utilities'] personnel who have authorized electronic or authorized unescorted physical access” to the utilities' systems. These training sessions should be designed for individual jobs. For instance, a supervisor's training likely will differ from that of a line worker.
Utilities should review employees' criminal history at least once every seven years, and conduct other “personnel risk assessment” programs for individuals who need access to utilities' cyber systems.
In addition to training, utilities should ensure that employees do not have access to cyber systems when they no longer need to have access (e.g., if they leave their jobs). Utilities also should develop processes to timely revoke access to cyber systems.
This guideline requires utilities to develop a comprehensive plan for the physical security of facilities that house the utilities' cyber systems. These plans should include controls such as intrusion alarms and logs of physical entries. The policies should require “continuous escorted access of visitors” within the physical perimeter of the utility's facilities, except under exceptional circumstances.
To minimize the attack surface, when technically feasible, utilities should enable only the logical network accessible ports that are needed for the utilities' operations. The utilities also should implement a patch management process. At least once every thirty-five days, the utilities should evaluate new security patches and take other steps to reduce the likelihood of harm from malicious code.
CIP-007-6 suggests that utilities maintain audit logs of failed log-in attempts, malicious code, and other potential cybersecurity events. Utilities should develop a process that alerts them to such events.
The guidelines also require utilities to pay close attention to log-in credentials. Utilities should inventory user accounts, change default passwords, establish standards for minimum password length, and, when possible, require authorized users to change passwords at least once every fifteen months. Utilities also should either impose a maximum number of failed login attempts, or implement a system that alerts the information security staff of unsuccessful login attempts.
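The log-in controls described above (a maximum number of failed attempts, or alerting security staff) can be sketched as a small account-lockout monitor. This is an illustrative sketch only: the class name, five-attempt default, and alert format are assumptions, not part of the CIP standards.

```python
class LoginMonitor:
    """Sketch of a CIP-007-style log-in credential control: lock an
    account after a maximum number of consecutive failed attempts and
    record an alert for information security staff."""

    def __init__(self, max_failures: int = 5):
        self.max_failures = max_failures  # assumed threshold
        self.failures: dict[str, int] = {}  # account -> consecutive failures
        self.alerts: list[str] = []         # events for security staff

    def record_failure(self, account: str) -> bool:
        """Record a failed log-in; return True if the account is now locked."""
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_failures:
            self.alerts.append(f"lockout: {account}")
            return True
        return False

    def record_success(self, account: str) -> None:
        """A successful log-in resets the consecutive-failure counter."""
        self.failures.pop(account, None)

monitor = LoginMonitor(max_failures=3)
monitor.record_failure("operator1")
monitor.record_failure("operator1")
locked = monitor.record_failure("operator1")
print(locked)  # True
```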
CIP-009-6 provides a framework for utilities to create plans that enable them to respond to cyber incidents. Utilities should develop recovery plans that designate specific responsibilities for responders, describe how data will be stored, and provide plans for backing up and preserving data after an incident. At least once every fifteen months, utilities should test recovery plans by recovering from an incident that has occurred during that time period, conducting a paper drill or tabletop exercise, or conducting an operational exercise. The utilities should test the recovery plans at least once every thirty-six months through an “operational exercise of the recovery plans.”
Within ninety days of a recovery plan test or actual recovery, utilities should document “lessons learned,” update the recovery plan, and notify relevant individuals of the updates.
Utilities must develop configuration change management processes to “prevent unauthorized modifications” to cyber systems. Change management processes should include a “baseline configuration” that identifies operating systems, installed software, accessible ports, and security patches. The processes also should authorize and document any changes that deviate from this baseline configuration.
At least once every thirty-five days, utilities should monitor for deviations from the baseline configuration. At least once every fifteen months, they should conduct a vulnerability assessment to ensure proper implementation of cybersecurity controls. At least once every thirty-six months, when technically feasible, the utility should conduct an active vulnerability assessment based on this baseline configuration.
Utilities should authorize the use of transient cyber assets (e.g., removable media), except in exceptional circumstances. The authorization should specify the users, locations, defined acceptable use, operating system, firmware, and software on the removable media. Utilities must determine how to minimize threats to these transient assets. Within thirty-five days before use of a transient cyber asset, utilities must ensure that security patches to all transient cyber assets are updated.
Utilities should implement information protection programs that include procedures for securely handling information, whether the data is at rest or in transit. Utilities should prevent the “unauthorized retrieval” of information from their systems and ensure that information is securely disposed of.
Just as policy makers are concerned about a cyberattack threatening the electric grid, they also are deeply concerned about the prospect of a cyberattack on a U.S. nuclear power facility. Such an attack could have devastating national security implications. Accordingly, in 2009, the U.S. Nuclear Regulatory Commission adopted a thorough cybersecurity regulation for licensees of nuclear power reactors. In 2013, the NRC created a Cybersecurity Directorate, which oversees the cybersecurity of the nuclear industry and works with FERC, the Department of Homeland Security, and others that oversee the cybersecurity of the nation's power system.
The NRC's cybersecurity rule56 requires nuclear licensees to protect their computer and communication systems with safety-related and important-to-safety functions, security functions, emergency preparedness functions, and support systems and equipment that, if compromised, would harm safety, security, or emergency preparedness.57 The NRC regulations require nuclear licensees to protect these systems and networks from cyberattacks that would harm the integrity or confidentiality of data or software; deny access to the systems, services, or data; and harm the operation of the systems, network, and equipment.58 The NRC's regulations broadly require nuclear operators to develop cybersecurity programs to implement security controls that protect nuclear facilities from cyberattacks, reduce the likelihood of cyber incidents, and mitigate harm caused by cyber incidents.59 The regulations provide a great deal of flexibility for nuclear licensees to determine how to develop and draft these plans.
To implement the cybersecurity program, the NRC regulations require licensees to ensure that nuclear licensee employees and contractors receive appropriate cybersecurity training, properly manage cybersecurity risks, incorporate cybersecurity into any considerations of modifications to cyber assets, and properly notify regulators of cybersecurity incidents.60
The NRC requires licensees to develop a written cybersecurity plan that implements the program. The plan must describe how the licensee will implement the program, and account for relevant site-specific conditions. The cybersecurity plan also must provide an incident response and recovery plan that describes the capability for detection and response, mitigation, correcting exploited vulnerabilities, and restoring affected systems.61