Chapter 8
Continuous Monitoring

We have described some common due diligence efforts: Intake, Ongoing, and On‐site assessments, which are point‐in‐time due diligence activities. In normal circumstances at KC Enterprises, a high‐risk vendor has a physical on‐site assessment only once a year, which leaves the other 364 days for something to go wrong. It is not that KC expects a vendor to undo any security controls once an evaluation is completed; the concern is that a lot can happen in the days between point‐in‐time appraisals. The development of a Continuous Monitoring (CM) program was a logical next step for finding ways to engage with vendors when risks were observed between normal visits.

What Is Continuous Monitoring?

KC's Cybersecurity team developed this program around the concept that they would operate like a team of cyber threat analysts. Their roles contrast with those of the other analysts on the Cyber Third‐Party Risk team. At most companies, cyber threat analysts are trained to look for cyber risks internally; here, they are trained to look externally at vendors for the same risks. This may require an additional set of tools because KC cannot run scans or vulnerability tests against vendors directly. Let's look at the tools used as we discuss how the team performs its work.

Vendor Security‐Rating Tools

Vendor security‐rating tools are a relatively new capability and have not been on the market for long; the earliest software makers entered the market around 2011. These security‐rating tools provide scores and details on how a vendor's attack surface relates to their overall cybersecurity posture. Most are provided as Software‐as‐a‐Service (SaaS), wherein vendors can be added through a web front end to view their scores. How these systems produce the data varies, but many use sinkholes (silent vacuum cleaners on the internet that sweep up the data coming from the vendor's network) to find information on key security controls like patching cadence, open server ports, botnet detections, spam propagation, and so on.

These tools provide one useful data point for KC's Continuous Monitoring staff. All the vendors that meet the criteria for due diligence (data in the top three data classifications or a connection to the KC network) are loaded into the vendor security‐rating software. Within the tool, vendors are sorted according to their risk levels (i.e., high, moderate, and low) and are given an overall score or grade. However, that overall number is not useful for this due diligence effort. Instead, KC has designed a number of triggers and thresholds in the tool to indicate that further action is needed.

Open‐Server Ports  Open‐server ports are TCP and UDP ports that are set to accept packets (network traffic). (TCP is a connection‐oriented protocol and UDP is a connection‐less protocol: TCP establishes a connection between a sender and receiver before data can be sent; UDP does not.) When ports are set to accept traffic, it is easy for an attacker to scan and find them open. The vendor security‐rating tools describe in detail which ports are left open on the vendor's externally facing servers. Sometimes, these ports are open for valid reasons. A mail server, for example, should have the common mail protocol ports open. However, it should not have an open FTP port. Sometimes, a vendor fails to close all of its unused ports on servers, which can be exploited by hackers. The vendor security‐rating tool tells the team if there are any unexpected ports left open that need to be closed or disabled.
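As a rough sketch of this kind of check, the open ports observed on a server can be compared against an allowlist of ports expected for the server's role. The role names and port lists below are illustrative assumptions, not any rating tool's actual configuration:

```python
# A minimal sketch: compare a server's observed open ports against the
# ports expected for its role. Roles and port lists are illustrative.

EXPECTED_PORTS = {
    # Common mail protocol ports (SMTP, submission, IMAPS, POP3S)
    "mail_server": {25, 465, 587, 993, 995},
    "web_server": {80, 443},
}

def unexpected_open_ports(role: str, observed_ports: set[int]) -> set[int]:
    """Return open ports that are not expected for the server's role."""
    return observed_ports - EXPECTED_PORTS.get(role, set())

# A mail server with FTP (port 21) left open should be flagged.
flagged = unexpected_open_ports("mail_server", {25, 587, 993, 21})
print(sorted(flagged))  # [21]
```

A real rating tool would populate the observed ports from external scan data rather than a hand-built set.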

Patching Cadence  Patching cadence is how often an organization reviews its systems, applications, and networks for necessary updates that fix security vulnerabilities. The vendor security‐rating tools communicate to KC analysts which of a vendor's external‐facing systems require remediation; those systems are the ones most often used by an attacker to perform breaches. If these systems are not patched properly, then an unpatched security flaw may become a point of attack. If the patching lag rises to a critical level, an alert is raised for that vendor, signifying that attention is needed.

SPAM Propagation  SPAM, or the sending of unsolicited email, is a problem for all organizations. The worst SPAM producers are often hijacked, unsecured systems that send out thousands or millions of emails to their victims. If the vendor security‐rating software detects this type of email spewing from a vendor, it can indicate that a system has been compromised. This alert increases the likelihood that something else is insecure, as it suggests the vendor is unaware of the SPAM being sent and/or of a bot infection.

Botnet Infections  A botnet is a network of hijacked computers used to carry out cybercriminal activities. Hackers use botnets to grow, automate, and speed up their attack capabilities. Malware is used to make a computer a bot, which then carries out data theft, malware deployment, and access disruption. Alerts of botnet infections, much like SPAM propagation, indicate that systems within the vendor's network have been compromised.

File Sharing  File sharing is the act of distributing or sharing access to digital media. Torrents are a good file‐sharing example. When a vendor security‐rating tool indicates file sharing is occurring, it indicates that a vendor's internal systems are possibly not well controlled. File‐sharing services are sometimes referred to as peer‐to‐peer (P2P) applications, which share music, movies, and other copyrighted material (e.g., Napster). Nearly every firm blocks these types of activities as they generally violate acceptable use policies. Usually, no justification can be provided for having a file‐sharing service running at a business. The following risks make file sharing high‐risk for any organization.

Installation of Malware

The installation of malware is highly possible when utilizing these applications. When using a P2P application, downloading software or entertainment files opens a user and their enterprise to malware. Nothing guarantees that the downloaded files are free of malicious code, and hackers often use these platforms specifically to distribute malware.

Exposure of Sensitive Information

Exposing sensitive information can be easily done with these P2P products. This software often requires a user to open ports and share directories in order to work. Because there's no way to know who else is accessing those shared files and folders or how many have accessed them, your personal data can be relatively easy for an attacker to expose when you're using a file‐sharing application.

Denial of Service

While not common, file sharing can produce a denial of service by clogging the network with unnecessary uploads and downloads. Because these programs provide no business value or need, they unnecessarily congest normal, required business traffic and can impact a business's productivity and availability.

Legal Trouble

Legal trouble is also a likely outcome stemming from the use of P2P software. Because much of these applications are used for distributing copyrighted entertainment, pirated software, and pornography, there's a high likelihood that the copyright holder may pursue legal recourse. In addition, pornography distribution is another legal and human resources (HR) issue that can be avoided by stopping the use of the file‐sharing software.

Much Easier Attacks

When P2P software is being used by a potential victim, hacker attacks are much easier. As noted previously, most of these applications require the user to open ports and loosen their restrictions on sharing. Some advanced P2P programs even have the ability to alter firewalls and penetrate them without a user's knowledge.

Exposed Credentials  Exposed credentials are mentioned here because it's important to note how problematic such alerts can be. As discussed previously, the number of exposed records and compromised credentials is in the billions. There is likely not a person on Earth who doesn't have an old password and user ID floating around for sale on the internet. So, what should we do with these alerts? Focus on any that can be identified as administrator or privileged accounts. Although exposed credentials are difficult to remediate, this trigger is still very useful to monitor as a risk.

The alerts and thresholds can vary from each software maker for the vendor security‐rating tools. What other companies choose depends on which tool they have selected and how it is used. At KC Enterprises, the team leveraged the tool's application programming interface (API) to tie it to the vendor management system of record so that confirmed alerts go into the vendor's record.
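A hedged sketch of the API tie‐in described above: a confirmed alert is serialized into a payload suitable for posting into the vendor's record in the system of record. The field names, vendor ID format, and alert types below are hypothetical; each vendor management product defines its own schema:

```python
# Sketch of preparing a confirmed alert for a vendor management system
# of record via its API. All field names here are illustrative; an HTTP
# client would POST the resulting JSON to the product's real endpoint.
import json
from datetime import datetime, timezone

def build_alert_record(vendor_id: str, alert_type: str, detail: str) -> str:
    """Serialize a confirmed alert into a JSON payload for the system of record."""
    payload = {
        "vendor_id": vendor_id,
        "alert_type": alert_type,   # e.g., "botnet", "spam_propagation"
        "detail": detail,
        "confirmed": True,          # only confirmed alerts go into the record
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

record = build_alert_record("VEN-0042", "botnet", "Botnet beacon from 203.0.113.7")
```

The value of the integration is exactly this: a confirmed alert lands in the vendor's record automatically instead of living only in the rating tool.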

The vendor security‐rating tools are just one tool in the Continuous Monitoring toolbox, and they have some challenges. First, most of them derive their intelligence from public records. The most important public record they use is the known IP ranges of the companies monitored. These IP addresses are publicly available in several locations searchable on the internet. However, they are not always accurate. Companies grow and shrink, selling or buying IP ranges along the way, and they do not always go back and update the public records of these changes. As the alerts come in and a threshold is triggered to engage with the vendor, there is a need to confirm that the vendor owns the IP range that generated the alert.

Internal Due Diligence  At KC Enterprises, every vendor that falls into the cybersecurity criteria for the Cyber Third‐Party Risk team has had some internal due diligence done. Whether it's only an intake assessment or has been expanded into ongoing and/or on‐site assessments, data exists in the system of record on the security of the vendor. If there is a trigger or alert from the vendor security‐rating tool, the CM threat analyst reviews several items:

  • What is the vendor's risk level and information? If a vendor is in the high‐risk category, this gives them priority over a low‐ or moderate‐risk vendor for further investigation. Look into the system of record for how much data they process or hold. The high‐risk category covers a range of data volumes; find out the specific number of records for a closer quantitative analysis.
  • Are there any open findings or Risk Acceptances (RAs)? Have any of the due diligence efforts produced the finding of a security gap that has yet to be remediated? Does the third party have an RA that could be related to the alert?
  • What kind of connectivity data does the vendor have? If the vendor has a connection, are there any risks or findings on that connectivity that combined with the alert raise concerns?

This second step of internal due diligence data gathering enables the threat analyst to connect any dots between the alert and known risks or security gaps for the vendor. Thresholds are set for when further action is required and when the risk is low enough that no action is needed. As an example, say a vendor is in the low‐risk category and an alert is received about exposed credentials on an FTP server. Reviews of due diligence show no open findings or RAs. The threshold here is not met per the process at KC: the exposed credentials are not confirmed to be administrator level, due diligence efforts have not produced any gaps, and it is a low‐risk vendor.
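The threshold logic in this example can be sketched as a simple decision function. The rules below are a simplified illustration of the process described, not KC's verbatim policy:

```python
# A simplified sketch of the engagement-threshold decision for an
# exposed-credentials alert. Rules are illustrative, not a real policy.

def engage_vendor(risk_level: str, admin_credentials: bool,
                  open_findings: int, risk_acceptances: int) -> bool:
    """Return True when the alert meets the threshold for vendor engagement."""
    if risk_level == "high":
        return True                 # high-risk vendors get priority
    if admin_credentials:
        return True                 # privileged accounts always warrant action
    # Otherwise, engage only if due diligence already surfaced gaps.
    return (open_findings + risk_acceptances) > 0

# The low-risk FTP example from the text: no admin creds, no findings or RAs.
print(engage_vendor("low", False, 0, 0))   # False
print(engage_vendor("high", False, 0, 0))  # True
```

In practice, each trigger type (botnet, SPAM, file sharing, and so on) would carry its own rule set, but the shape of the decision is the same.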

Cyber Intelligence Forums  The last step in the CM process before engaging with the vendor is to perform some research on the specified threat. KC has found that going to a third party with specific data on the type of threat and the risk it poses aids communications with the vendor's cybersecurity teams. Approaching the vendor with “your vendor security rating is terrible” is non‐specific and does not produce a lot of information for them to remediate it.

KC's cybersecurity operations team has existing contracts with a number of cybersecurity intelligence forums and companies. These service offerings provide data about how widespread the threat might be, how long it has been around, what risks it poses, and the known threat actors who use it.

As the team gathers detailed information about the threat, this information is combined with information from the previous two steps to provide a fuller picture of the risk to the analyst and eventually to the vendor.

Continuous Monitoring Vendor Engagement  Once all three steps have been completed (i.e., the alert from the vendor security software meets the threshold, the due diligence points to potential risks with open findings, and threat intelligence determines that the alerts or threat all rise to the level that the vendor needs to be informed of the risk), different phases of the CM engagement begin. Note, there are four phases in this process: discovery, investigation, reporting, and closure (see Figure 8.1). The best way to demonstrate them here is to perform a mock engagement for KC Enterprises.

Schematic illustration of the Continuous Monitoring Process.

FIGURE 8.1 The Continuous Monitoring Process

The Discovery Phase

In KC's discovery phase, data is collected to discover if the threat (a detected botnet) meets the threshold to engage. However, it has not yet been confirmed as a risk to the vendor because KC has not received confirmation from the vendor that they own that IP range. The analyst puts together an artifact to be shared with the third party through the supplier manager at KC. The vendor is in the high‐risk category and has, on average, 100,000 records of protected data (personally identifiable information [PII]). There is an open finding that they allow end users to have administrator control on their corporate laptops. As the team investigates the botnet, they discover that this botnet comes in the form of a browser add‐on for Windows machines. Further, this browser add‐on is capable of key logging.

The analyst gathers all the data together into a document sent to the vendor through the supplier manager. In this case, this document describes an alert from KC's vendor security software, which detected a botnet coming from an IP address that is tied to the vendor. Upon investigation, the CM team again notes that the botnet has a potential to key log, and it is a browser add‐on. Looking at the timeline of the botnet alerts, the team discovers it began in mid‐March 2020 when numerous workers were sent to work from home (WFH). The third party is asked if they can confirm that the IP address noted for the botnet infection is tied to their owned range.

From KC's perspective, it appears that because the vendor does not prohibit end users from installing software, and the time the botnet began was when their workforce was sent to WFH, it's likely one or more users have installed this malware.

The Investigation Phase

Once KC's vendor receives the information about the alerts and research collected on the threat, KC's CM team follows up with them based upon the risk. In this instance, because the botnet is a known key‐logger and they have a lot of data located at this third party, the CM team follows up daily until an answer is produced or the alert stops. Often, the alerted supplier uses the data to fix the issue without replying, and the CM team simply notes that the alert has stopped subsequent to the vendor's notification. In this example, the alerted vendor replies that they are investigating the data and do own the IP range noted on the alert. Within a day, the vendor returns a response on the secure vendor portal to the supplier manager that they found the infected laptops, quarantined them, and had the end users send them in to be analyzed and subsequently wiped.

The Reporting and Closure Phases

The Reporting Phase is where the threat analyst records the outcome. In cases where a vendor does not acknowledge the alert but it stopped after the vendor was notified, it is reasonable to assume cause and effect. The CM team then updates the vendor's file, noting that the risk has been reduced because the botnet is no longer being seen. If the third party confirms that they do not own the IP range, it's noted in the vendor's record that the alert was a false positive, and the reason why is given. In this example, the vendor acknowledged the security issue, took steps to remediate it, and resolved the case.

One more item the CM team must follow up on with this vendor is the issue of end users having administrative rights on their laptops. Allowing users to install any software is an unnecessary risk that clearly presented itself as an issue with this third party. A discussion of their progress toward closing this gap is warranted and is an appropriate part of the engagement.

KC Enterprises decided that point‐in‐time assessments were valuable and developed a robust risk‐based program for the intake, ongoing, and on‐site security evaluations. However, when a cybersecurity incident arose in the early days of the pandemic, it became clear that something needed to be done to monitor and reduce risk between these important activities. CM was found to be so successful that the management team decided to grow the capability for high‐risk vendors into a process called Enhanced Continuous Monitoring.

Inside Look: Health Share of Oregon's Breach

The State of Oregon's largest Medicaid organization was the victim of a large data breach (over 650,000 PII records) when their third party, GridWorks, had a laptop stolen that contained the data. (GridWorks provided medical transportation to Health Share of Oregon.) The stolen data included names, addresses, phone numbers, Social Security numbers, and birth dates: over 650,000 instances of enough data to open fraudulent credit accounts and put stolen personal details to other uses. To make matters worse, the data was located on a laptop stolen during a break‐in, and the hard drive was unencrypted.

A key takeaway: always ask third parties if they encrypt the hard drives and memory of mobile devices and laptops. What could have been a small expense for GridWorks snowballed into a major data breach, not for the third party, but for the Medicaid company. The loss of nearly three‐quarters of a million personal data records on a laptop could have been an easily avoidable mistake if Health Share of Oregon had had the proper Continuous Monitoring in place.

Enhanced Continuous Monitoring

Normally, KC Enterprises has around 50 high‐risk vendors. In addition, TPRM keeps a list of vendors identified as systemically critical (also known simply as critical). These third parties are defined as critical because if one of them were to go offline, KC Enterprises' main operations would cease either instantly or quickly enough that a replacement service or product could not be found in time. This list of systemically critical vendors averages about half the size of the high‐risk list, or roughly 25 vendors. The critical list is a subset of the high‐risk list, and all critical vendors are also high risk, which is normal and anticipated because they are the large relationships that move the firm day to day and year after year.

The critical vendor list was discussed several times in KC's recent risk committee meetings at the Board level. The Board posed a question to the Cyber Third‐Party Risk leadership: should they perform more due diligence on this list to lower the risk of a breach even further? Reviewing the other due diligence efforts, the team found little to add to the intake, ongoing, or on‐site assessments except to ask more questions. These programs were well developed but still focused on a point in time. With the pandemic, the landscape had changed, and a huge increase in cybercrime meant they needed to focus more on the Continuous Monitoring side.

So the team created a program called Enhanced Continuous Monitoring, which focused on four key areas of risk (software vulnerabilities, fourth‐party risk, data location, and connections) based upon the history of third‐party breaches and typical weak security points.

Software Vulnerabilities/Patching Cadence

As seen in some of the breach case studies in this book, the KC team noticed that many breaches were caused by vendors not updating software per their own patching and vulnerability management policies. There is a need to understand what software and versions vendors are operating in order to provide service to the company. While this meant KC's Cybersecurity team taking on an element of vulnerability management for the vendors, the risk was elevated enough to make it worth the investment. Any vendor that is systemically critical to your company must provide a list of the software used in providing the service to it.

Fourth‐Party Risk

Fourth‐party risk is real, and KC had observed instances of its competitors and other companies being breached through fourth parties. However, there was a problem with the scope of managing fourth parties. At KC Enterprises, there are hundreds of vendors, each with potentially hundreds of their own vendors (i.e., fourth parties to KC): too much data for an approach that is not risk‐based. KC decided the systemically critical category was small enough, and important enough, to justify investing in learning what fourth parties these vendors use to provide their services or products to KC.

Third parties listed as systemically critical to operations are required to list all the vendors (i.e., KC's fourth parties) needed to provide service to KC. This list is validated at several points in the due diligence cycle: it is gathered during the Intake process and validated at the on‐site assessment as well as during any Continuous Monitoring engagement. In addition, the Master Services Agreement (MSA) contractual obligations for these vendors require them to update KC when any material change is made to a fourth party.

Data Location

When it comes to data, it's all about its location, where it travels, and how it is protected. Throughout the due diligence process at KC, the question of data location is asked, and these critical vendors are subject to more extensive monitoring around this risk item. For vendors hosted at CSPs, there is an attempt to leverage the CSPs' available monitoring APIs to alert KC's third‐party risk staff. At a minimum, the vendor must supply a security configuration report on a quarterly basis through the secure portal. If the data is located in a vendor data center or co‐location facility, then the vendor is required to have the on‐site team perform a data center physical and logical security evaluation annually.

Connectivity Security

Not all critical vendors have network connections to KC, but those that do will have quarterly checks of their software running on the connectivity hardware. In addition, the vendor must turn over the logs for both ends of the connections during each quarterly software version check. This transparency is viewed as critical to ensure that these vendor connections are not being compromised.

The KC Enterprises Board approved and funded the request as some additional resources are required for this additional due diligence. The cybersecurity team hired another four cyber threat analysts to focus on these critical vendors full time. Working with the supplier managers for these 25 critical vendors, they developed and delivered an online questionnaire on the vendor portal.

By having a relationship with your vendor and holding conversations at each due diligence step, your company is able to grow that trust and partnership. A checklist does not build trust, nor does it provide any growth in a business relationship; it is something that must be done, that's all. KC's TPRM leadership, supplier managers, and Cyber Third‐Party Risk team created a slide deck and pitched it to each of these critical vendors. Part of the deck played to the vendor's ego: how important they are to KC Enterprises' success and how both companies benefit. The other part explained how strong the partnership is and how it extends into the Cybersecurity and Third‐Party Risk space. Lastly, the environment that both vendor and customer now operated in was a dangerous one from a cybersecurity perspective. It required KC and its critical vendors to partner on how they could help each other.

In some cases, KC's vendors wanted a bit more money to cover the additional oversight, and if the request was within a range that did not overstate the risk, it was approved. Given that it was just over two dozen vendors, it only took a few weeks to accomplish. The real work started when the data was collected a month later.

Production Deployment

During the CM production deployment, KC's CM team collects all the data from the critical vendors and places it into a database. The typical vendor management system of record, unfortunately, did not have the capabilities to store and report on the data in the ways the team had planned, so they opted to deploy front‐end business intelligence software to present the data. This detailed data is now tied to the vendor security software tool, the vendor risk management software, and other data sources as triggers.

Software patching and vulnerability data requires the CM team to find triggers and thresholds for the software reported by the critical vendors. In most cases, the software is common across them (i.e., Microsoft Server, Unix Server, Oracle databases, Apache Web Servers), so they tie significant and zero‐day alerts from these software makers directly into the business intelligence engine to match any of the maker and version information. Other less common software was harder to automate, but these were found to be few in number. As the significant or zero‐day alerts are announced by the makers, notifications go to a distribution list to take action and investigate.
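The matching step can be sketched as follows; the advisory and inventory formats are illustrative assumptions, and real version matching is usually more involved than exact string comparison:

```python
# Sketch of matching a software maker's zero-day advisory against the
# software inventories reported by critical vendors. Data shapes here
# are simplified illustrations of what a BI engine would store.

def affected_vendors(advisory: dict, inventories: dict[str, list[dict]]) -> list[str]:
    """Return the vendors running the product/version named in the advisory."""
    hits = []
    for vendor, software in inventories.items():
        for item in software:
            if (item["product"] == advisory["product"]
                    and item["version"] in advisory["affected_versions"]):
                hits.append(vendor)
                break  # one match is enough to flag this vendor
    return hits

advisory = {"product": "Apache HTTP Server",
            "affected_versions": {"2.4.49", "2.4.50"}}
inventories = {
    "VEN-001": [{"product": "Apache HTTP Server", "version": "2.4.49"}],
    "VEN-002": [{"product": "Apache HTTP Server", "version": "2.4.54"}],
}
print(affected_vendors(advisory, inventories))  # ['VEN-001']
```

Each hit would then feed the distribution-list notification described above.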

Fourth‐party risk data can be tackled by counting the number of total fourth parties submitted by critical vendors. Recall the question was about what fourth parties the third party uses to provide services or products to KC. Each vendor was expected to list a few dozen at most, and across the 25 vendors the total came to about 250. The decision was made to add licenses to the vendor security tool so that these fourth parties could be monitored along with the rest of the vendor pool. The licenses are held in a separate folder in the software and are monitored in the CM program like any other vendor.

Data location trigger data is tied to the cloud solution that the vendor chooses. As described previously, some vendors who used CSPs were courted to collaborate and use APIs for monitoring the security health of the instance. These triggers will be explained in Chapter 10, ‘‘Cloud Security,’’ but they enabled both the vendor and KC to alert on specific criteria, such as MFA for root access no longer being required or encryption of the instance being disabled.

In addition, the CM team notes that the data's location is in the cloud (e.g., if it is in AWS, it's noted if it is US‐EAST or US‐WEST), which allows the team to do two things: First, the team can alert when one or more of the zones or data centers experiences high latency or is offline. Second, the team can look at the concentration risk. Concentration risk, in this case, is how many of KC's critical vendors are concentrated (i.e., located) in the same data center(s). If the concentration risk is too high, the team can work with the respective third parties to find ways to dissipate the concentration.
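The concentration-risk check can be sketched as a simple count of critical vendors per cloud region; the threshold value here is an illustrative assumption, since what counts as "too high" is a business decision:

```python
# Sketch of a concentration-risk check: count how many critical vendors
# sit in the same cloud region and flag any region over a threshold.
from collections import Counter

def concentration_risk(vendor_regions: dict[str, str], threshold: int) -> dict[str, int]:
    """Return regions hosting more critical vendors than the threshold allows."""
    counts = Counter(vendor_regions.values())
    return {region: n for region, n in counts.items() if n > threshold}

regions = {"VEN-001": "us-east-1", "VEN-002": "us-east-1",
           "VEN-003": "us-east-1", "VEN-004": "us-west-2"}
print(concentration_risk(regions, threshold=2))  # {'us-east-1': 3}
```

A flagged region is the signal to start the conversation about dissipating the concentration, not an automatic action.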

Connectivity security alerts and triggers are tied to significant and zero‐day notices from equipment manufacturers of the connectivity hardware. Because KC's CM survey of critical vendors collected the hardware and software versions, this provides an ability to get urgent notices of security patches specific to these hardware and software versions. Also, the team can set up “patch management” triggers in the database so that once the software running on the hardware is of sufficient age, it warrants a patch according to policy.
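The "patch management" trigger can be sketched as a date comparison; the 90‑day policy window below is an illustrative assumption, not KC's stated policy:

```python
# Sketch of a patch-age trigger: flag connectivity software whose
# installed version is older than the policy's maximum age.
from datetime import date, timedelta

def needs_patch(release_date: date, today: date, max_age_days: int = 90) -> bool:
    """Return True when the running software version exceeds the policy age."""
    return (today - release_date) > timedelta(days=max_age_days)

print(needs_patch(date(2020, 1, 1), date(2020, 6, 1)))   # True
print(needs_patch(date(2020, 5, 15), date(2020, 6, 1)))  # False
```

In a real deployment, the release date would come from the quarterly version check and the trigger would fire from the database rather than an ad hoc call.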

As the KC Cybersecurity team and TPRM leadership looked to mature their program, they found a need to lower the risk that systemically critical vendors presented to their operations and security. Always looking to take a risk‐based approach, they identified four critical risk areas for these third parties to provide more transparency and collaboration on. It took some convincing of certain vendors, but the partnership that the team developed over the preceding years of due diligence finally paid off. It merely required an investment by the senior leadership and the operational managers to develop the program. However, the additional capabilities on a small but important vendor community allowed the team to lower the risk.

Continuous Monitoring Cybersecurity Personnel

The staffing requirements for this team are very different from those of the other teams. Here, skilled personnel are focused on hunting. They look at screens and searches to locate potential risks and then, like a good hunter, stalk the prey until it is taken down; in this metaphor, the hunt ends when the risk is reduced or the analyst confirms the risk does exist. The teams that typically make up Governance, Risk, and Compliance are fending off regulators or audits and ensuring the rest of the company follows cybersecurity policy and standards. At KC, these work streams are important, but the purpose of the CM team is to find problems, not prevent them.

Typical candidates for this CM team role are cyber threat analysts, networking security experts, or cloud security experts. Years of experience will vary depending on the needs of the team, but at least 2–3 years is preferred so the staff can hit the ground running. KC Enterprises has found that the most successful CM cyber threat analysts were threat analysts in their previous jobs, as they are trained to seek out and find clues to security breaches or incidents. Managers look to hire analysts who have the tenacity to find evidence and are constantly curious.

Third‐Party Breaches and the Incident Process

It's never a question of “if” there will be a breach by a third party, but when. Just as you would want to ensure that vendors have a plan to handle security incidents, so, too, must the Cyber Third‐Party Risk team have a plan of how to be alerted, and to investigate and close any incident involving a supplier. At KC Enterprises, this activity is handled by the CM team who are trained and focused on continuous engagement, as opposed to the other due diligence teams who are more point‐in‐time focused. The Third‐Party Incident Management (TPIM) process is documented, followed, and validated to ensure that the team does not make a mistake on this important activity.

Third‐Party Incident Management

The first step in a TPIM process is to create a playbook detailing the end‐to‐end process. In most organizations, the cybersecurity team handles internal incident management, and they are owners of the end‐to‐end process for Incident Management. At KC, the Cyber Third‐Party Risk team engages with the Incident Management team to build a separate process for when an incident involves a third party. While the Incident Management team still owns the Incident process, the Cyber Third‐Party Risk team manages the process as they investigate and resolve vendor‐related incidents.

An incident alert can come into the organization in multiple ways. Regardless of how it arrives, the process is broken down into four parts: discovery, investigation, reporting, and closing. Similar to the four phases of CM vendor engagement, these four parts of TPIM enable the team to ensure that each part is successfully completed before they move on to the next one. The communication and artifacts involved in each part of the process are all stored in the system of record, while a workflow tool is used to provide smooth hand‐offs from the third‐party risk team to the Incident Management teams within cybersecurity. A reporting requirement is built into any incident management process, along with escalation points when certain conditions are met.

The Discovery Phase  The Discovery Phase involves how the incident is brought to the company's attention. The Cyber Incident Management Team (CIMT) has a number of threat intelligence and other sources it combs regularly for signs of a potential vendor compromise. Listings of exposed credentials or data for sale on the Dark Web are often the first indicators of an incident. When such things are discovered, the CIMT team performs a cursory search in the vendor management system of record to see if there is a match. If the supplier is listed, then they notify the CM team to go to the Investigation phase.

Another avenue for suspected incidents is the CM team itself. The vendor security tool has the ability to report when a breach has been publicly announced and alerts the team. Just as the CIMT team reviews threat intelligence forums and the Dark Web, the CM team has access to the same tool sets and also looks for keywords that match the vendor list they manage.

The last avenue is notification from the vendors themselves, which typically arrives in the form of a letter from their legal representation. A breach discovered through this official channel doesn't require much Discovery phase work, except to note the source of origin. It can also shorten the Investigation phase, as the vendor might have already determined the root cause and damage.

The Investigation Phase  The Investigation phase is designed to confirm the security incident and scope of damage. If the vendor has not self‐reported the breach, then the CM team assigns an analyst to the investigation who opens a ticket in the workflow tool for tracking and reporting. All these investigations are the highest priority for the analyst as time is crucial in these events. The analyst has a prescribed list of actions to take and artifacts to use to ensure consistency and accuracy.

First, they contact the supplier relationship manager listed in the system. They are required to receive a response from this manager within four hours of the initial notification. Initial notice can be sent via email, but if the manager does not respond within two hours, the directive is to call them at their listed cell phone number. If there is still no response after four hours, the analyst escalates to the vendor manager's manager until an acknowledgment is provided. The email describes the information received internally thus far, including the potential breach source, and requests the vendor's direct contact information, including phone numbers.

While this communication occurs, the analyst queries the systems of record for the vendor's risk data. They pull any existing due diligence reports (i.e., Intake and Ongoing assessments) and review any open findings or RAs for relevance to the potential incident. A high number of records on a vendor, or a high risk sensitivity level, signals to the analyst that the issue demands immediate attention.

Once the vendor's contact information is acquired, the analyst begins a similar process for response and escalation (if necessary) with the vendor contact. First, they email, then follow up with phone calls as appropriate to obtain responses. Initial contact requires a response within 24 hours. The escalation paths and times are clearly noted in the vendor's documentation: If there is no response within the required time, then the analyst's request is escalated from the vendor's manager to senior contacts at the supplier as required.
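The notification‐and‐escalation clocks described above (a two‐hour follow‐up call and four‐hour internal escalation, and a 24‐hour vendor response window) could be sketched as follows. This is a simplified illustration, assuming Python; the function name and return strings are hypothetical:

```python
from datetime import datetime, timedelta

# Thresholds taken from the process described in the text.
INTERNAL_CALL_AT = timedelta(hours=2)       # no email reply: call the cell phone
INTERNAL_ESCALATE_AT = timedelta(hours=4)   # still silent: go to their manager
VENDOR_ESCALATE_AT = timedelta(hours=24)    # vendor contact response window

def next_action(sent_at: datetime, now: datetime, internal: bool) -> str:
    """Return the analyst's next step for a notification that is still unanswered."""
    elapsed = now - sent_at
    if internal:
        if elapsed >= INTERNAL_ESCALATE_AT:
            return "escalate to manager"
        if elapsed >= INTERNAL_CALL_AT:
            return "call listed cell phone"
        return "wait for email response"
    if elapsed >= VENDOR_ESCALATE_AT:
        return "escalate to senior vendor contact"
    return "wait for email response"
```

In practice these timers would live in the workflow tool as SLA rules rather than in hand‐written code; the sketch just shows how the escalation ladder is deterministic rather than left to analyst judgment.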

The vendor receives a PDF questionnaire containing the following questions to which they are expected to respond.

Vendor Incident Management Questions:

  • Does your organization acknowledge the security incident or breach?
  • Did you engage your Incident Management process?
  • Was there any impact to KC Enterprises' data?
    • If yes, what is the data type and how much was potentially or confirmed as exposed?
  • Was there any operational impact to the services provided to KC Enterprises?
  • Has the incident's root cause been determined?
    • If the root cause has been determined, what corrective actions are being planned or taken? If planned, when will they take effect?

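The questionnaire's completeness check (the exit criterion for the Investigation phase) could be represented as a small data structure. This is a hypothetical sketch in Python; the field names are illustrative, not KC's actual form fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentQuestionnaire:
    """Hypothetical capture of the vendor's answers; None means unanswered."""
    acknowledges_incident: Optional[bool] = None
    im_process_engaged: Optional[bool] = None
    kc_data_impacted: Optional[bool] = None
    data_type_and_volume: Optional[str] = None  # required only if data impacted
    operational_impact: Optional[bool] = None
    root_cause_determined: Optional[bool] = None
    corrective_actions: Optional[str] = None    # required only if root cause known

def is_complete(q: IncidentQuestionnaire) -> bool:
    """True when every required answer is present, including conditional follow-ups."""
    required = [q.acknowledges_incident, q.im_process_engaged,
                q.kc_data_impacted, q.operational_impact, q.root_cause_determined]
    if q.kc_data_impacted:
        required.append(q.data_type_and_volume)
    if q.root_cause_determined:
        required.append(q.corrective_actions)
    return all(answer is not None for answer in required)
```

Note how the two conditional questions only become required when their parent question is answered “yes,” mirroring the nesting of the list above.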
These questions are succinctly designed to obtain the necessary information without delaying the vendor's response. Additional questions can follow in the coming days and weeks, but more urgency is placed on determining whether the incident is significant enough to require the Reporting phase to be initiated. When a small number of records is exposed (fewer than 50), whether to report is left to the discretion of executives or regulators. Typically, these are instances when one of KC's processing companies accidentally switches recipients on an email or postal letter for billing on a few customers. While not ideal, these incidents are handled with direct communication with the handful of customers affected.

The Investigation phase concludes when the team has confirmed that a security incident took place. If the vendor confirms a security breach, the final step is to ensure that all the information requested on the questionnaire is complete.

The Reporting Phase  The Reporting phase involves the required updates being sent to executive leadership and any regulatory supervisory agencies. This phase also includes the legal partners. Reporting to executives, regulatory bodies, and potentially affected customers is best left to the Legal team to ensure that actions do not open the company up to any further risks or issues.

Executive leadership is made aware if the breach or incident confirms a loss of data that regulators and customers must be notified about. Regulatory notification is handled by the Legal team as well, with all communications channeled through them.

Reporting continues until all notifications to regulators and executives are complete. The length of the Reporting phase depends on the number of records and the amount of damage. For a large event, reporting can last until all affected customers are notified and any promised compensation and/or additional monitoring services have been delivered.

The Closing Phase  The Closing phase involves updating any system of record with the final information about the incident. In addition, due diligence efforts may be required as a result of the breach. An on‐site assessment should be scheduled as soon as possible to physically validate the security controls that required remediation due to the incident.

There are potential contractual implications for any vendor who is breached at KC Enterprises. First, all vendor contracts that meet the criteria of having protected data or a connection have language that enables severing of the contract in the event of a breach; severing the contract is not required but gives KC the option to do so if they choose. At the Closing phase, the Cyber Third‐Party Risk team, CIMT, business leadership for the areas affected by the breach, and appropriate senior leadership have a process to complete that decides if the contract is terminated or continues. If the decision is to terminate the contract, Legal takes over those steps to offboard them. If the decision is to continue the relationship, then additional due diligence is added for a prescribed time to lower the risk of another incident.

A vendor who has been breached or had a serious security incident is 30‐percent more likely to experience another breach or incident within the next year. As a result, KC Enterprises designates any supplier who has been breached or had a serious security incident as “high risk” to ensure they receive the appropriate due diligence for at least two years past the event. These vendors also enter an annual review process dictated by TPRM to review their progress toward remediation and overall security control adherence.

Inside Look: Uber's Delayed Data Breach Reporting

In 2016, a cyberattack on Uber netted hackers the personal details of 57 million users, including 600,000 drivers. This information included full names, email addresses, and phone numbers. The breach occurred because company developers had published code containing their usernames and passwords to the software repository site GitHub. The credentials belonged to privileged accounts and allowed the attackers to access Uber's AWS servers where the data resided.

What made the situation worse was that Uber hid it for over a year before disclosing the incident. The breach occurred in October of 2016, but it was not revealed until November of 2017. Uber paid the hackers a $100,000 ransom to keep them quiet about the breach and to delete the data they had taken. Uber failed to disclose the breach, which violated federal and state disclosure laws.

The failure to disclose the breach was the most egregious act, and Uber paid dearly for the mistake. The company was fined $1.7 million by the British and Dutch privacy authorities. According to ICO Director of Investigations Steve Eckersley,

This was not only a serious failure of data security on Uber's part, but a complete disregard for the customers and drivers whose personal information was stolen. At the time, no steps were taken to inform anyone affected by the breach, or to offer help and support. That left them vulnerable.

Uber should consider itself very lucky that the breach occurred in 2016, before GDPR came into effect, or the fines could have reached the tens or hundreds of millions. GDPR allows fines of up to 4 percent of global annual revenue or 20 million euros, whichever is greater.
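The GDPR ceiling is a simple maximum of the two figures. As a quick worked illustration (the revenue numbers below are arbitrary, not Uber's):

```python
def gdpr_fine_ceiling(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    4% of global annual revenue or EUR 20 million, whichever is greater."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000.0)

# A company with EUR 100M revenue: 4% is EUR 4M, so the EUR 20M floor applies.
# A company with EUR 2B revenue: 4% is EUR 80M, which exceeds the floor.
```

The "whichever is greater" clause means small companies face a disproportionately large ceiling relative to revenue, while large companies cannot hide behind the flat figure.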

Inside Look: Nuance Breach

In December of 2017, Nuance, a Massachusetts‐based company that provides speech‐recognition software, was hacked by an unauthorized third party that accessed and exposed 45,000 PHI records. Worse, this followed a breach caused by the NotPetya malware in June of the same year. Nuance's SEC filing for 2017 indicated that the NotPetya attack caused $92 million in damage, and the company lost about $68 million in revenue due to service disruptions and refund credits related to the malware. Another $24 million was spent on remediation and restoration efforts. Nearly $200 million in one year is enough to ruin most companies.

The perpetrator was thought to be a former employee who broke into Nuance's systems and exposed the data, which included names, birth dates, medical records, and other sensitive information. The malicious former employee is a prime example of an insider threat. Because employees have knowledge and time, a malicious insider can potentially exfiltrate data undetected. Logging and monitoring, access reviews, and enforcement of least privilege are all controls that should be checked to reduce this risk.

Conclusion

Continuous Monitoring is an important ongoing due diligence activity designed to bridge the gap between scheduled point‐in‐time activities. While it can present some challenges in how to engage a vendor with this process, there is software that can provide alerts, and existing due diligence and cyber intelligence forums that can be leveraged to perform this Continuous Monitoring.

Enhanced Continuous Monitoring builds on the success of CM to cover specific high‐risk vendors that management identifies as needing additional oversight. This process enables CM teams to focus on an expanded set of vulnerabilities within a smaller set of critical third parties, as identified by business and cybersecurity. Insight into fourth parties, software vulnerabilities, data location, and connectivity details helps the CM team focus on the key risk control areas that are most concerning. This level of engagement with a vendor requires a partnership, but if they are a critical third party, then that should be a goal in itself: a partnership to lower risk mutually.

Third‐party breaches are going to occur. A defined process, whether owned by a CIMT team or by the third‐party risk team directly, is important to avoid delays in notification and resolution. Many states and governments require notification within a specific period, such as 48 or 72 hours of the company becoming aware. Failure to report an incident can result in heavy fines. More importantly from a cybersecurity perspective, the sooner the work is done, the more quickly damage can be assessed, customers notified, and steps taken to address the breach.
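Because the notification clock starts when the company becomes aware, not when the breach occurred, the deadline is a straightforward calculation. A minimal sketch, assuming Python and treating the 48‐ or 72‐hour window as a parameter since it varies by jurisdiction:

```python
from datetime import datetime, timedelta

def notification_deadline(aware_at: datetime, window_hours: int = 72) -> datetime:
    """Deadline to notify a regulator, counted from the moment the company
    becomes aware of the breach. The 72-hour default mirrors common
    breach-notification windows; some jurisdictions require 48 hours."""
    return aware_at + timedelta(hours=window_hours)
```

This is also why the escalation timers in the Investigation phase are so aggressive: every hour spent chasing an unresponsive contact is an hour taken out of the regulatory window.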
