Chapter 11
Early Warnings
Something Bad Is on the Way

One thorn of experience is worth a whole wilderness of warning.

James Russell Lowell

Early warnings come in a variety of forms. Some are technical signals; others are behavioral. The quote by poet and diplomat James Russell Lowell is dead-on accurate. History really is one of the best early warning indicators: The pain emanating from that “thorn of experience” is telling.

Perhaps the best early warning system is the media. Looking online or reading newspapers the old-fashioned way provides clear evidence that the cyber attack problem is worsening and is not likely to improve in the near future. Even though only a minority of cyber breaches are reported in the media, the numbers are compelling and the impact is often substantial, even devastating. Expecting a cyber attack is an appropriate posture.

There are a number of early warnings that can signal a problem. These aren’t the obvious ones, like the computer that’s running slowly because it may be infected with a virus. They are signals that companies often ignore or whose significance they simply fail to recognize.

It is no secret that some of the better-protected companies are those that have felt the pain of a prior breach. Depending on a number of conditions associated with the company and its attackers, that pain may have been significant, resulting in legal, regulatory, financial, and reputation risk impact. With an average data breach cost of more than $7 million, according to the Ponemon Institute, the financial hit can be especially painful. Many breaches cost far, far more. Given the mega breaches of late 2013 and 2014, there's no telling what the total breach cost will turn out to be for the affected parties. One thing is certain, though: The value of reputation is immeasurable. The sooner an attack is identified, the better.

Not all companies that have experienced data breaches have listened well to the voice of experience within their own walls, but they should have. Many companies have reported multiple breaches. So either those companies failed to learn from their experience, or the cyber attackers were ingenious. Maybe both are true simultaneously. As history will clearly illustrate to anyone willing to examine it, data breaches, like the stingray, have a long tail that can deliver a painful strike and a memorable legacy, none of it good. For many companies, the process of responding to a data breach is very distracting. It sidetracks a company from its primary mission and necessarily switches it to a subsidiary mission—managing the event. That is why those that have been attacked generally have no interest in a repeat performance.

Curiously, some organizations have been the target of numerous breaches, yet prevention never became important enough to act upon. History wasn’t much of an early warning. In these circumstances, a number of factors come into play:

  • The company doesn't have the money to invest more in managing operational risk, the result of narrow profit margins, higher costs of doing business, a beleaguered economy, or other financial issues.
  • The board and the senior management team have accepted breaches as a cost of doing business and have made a conscious or unconscious decision to roll the dice and roll with the punches, as they assuredly will come.
  • The board and senior management lack awareness. This sounds improbable given a breach history. But often these executives fail to connect the dots regarding security, risk, privacy, regulatory compliance, and data breaches.
  • Some board members and senior executives perceive that data breaches are applicable only to personal information and not to business proprietary information such as intellectual property and trade secrets.
  • There is also a lack of awareness about what hackers can do over the Internet, ranging from various types of denial-of-service attacks to Web-based financial scams and frauds.
  • Many lack awareness of nation-state espionage and transnational organized crime networks.
  • Although it seems unlikely given the history of cyber attacks, many executives simply do not believe that they will be targeted. “We're too small to be noticed.” “No one knows we even exist.” “We don't have anything anyone else would want.” “We couldn't stop these attacks if we wanted to.”
  • Many companies remain unaware of regulatory data protection and reporting requirements. The executives in these companies simply fail to do what the statutes and regulations say they must do. Overall, compliance with such statutes and regulations is relatively low, with one highly regulated state estimating that compliance is in the single digits.

For these reasons and others known only to those inside some enterprises, early warnings, at least in the form of historical precedent, remain unacknowledged or ignored. But there are other early warning signals.

Technical Signals Are There—But You've Got to Look

Internet Protocol (IP) addresses are unique numbers that identify any device connecting to the Internet. This includes computers, tablets, even printers and copiers. DARPA, the Defense Advanced Research Projects Agency, one of the developers of the Internet, defined an IP address this way: “A name indicates what we seek. An address indicates where it is. A route indicates how to get there.”

Depending on the type of business, many companies have several kinds of IP addresses in the enterprise. Let’s face it, a business today without any IP addresses is a dead business. But IP addresses are not a black-and-white matter. Many executives don’t even know what an IP address is or what the “I” and the “P” stand for. Literally! This often reflects the gap between the senior management team, security, and the technology people. Not knowing what kinds of IP addresses are in the environment, though, quickly becomes a management concern.

Among IP addresses, there are authorized IP addresses and unauthorized IP addresses. The former is one way in which much business is conducted. The latter is how businesses are quite often breached.

Legitimate IP addresses make the wheels of commerce turn. The problem is that it is not always easy to make distinctions between what is a legitimate IP address and what is an IP address of hostile intent. It isn't so much that this process is extremely difficult. It's just that going through the exercise is another process to add to an already burdensome list of things to do. But there's also another concern.

A company was under attack. The cyber assault turned out to be significant, its origins offshore. In fact, upon examining the range of IP addresses, it turned out the attacks were originating from multiple locations in Eastern Europe and several cities in China. The IP addresses were associated with transnational organized criminal operations and the usual range of criminal pursuits, from human trafficking to narcotics distribution. Knowing that the IP addresses are toxic provides great incentive to mitigate the associated risks—or at least that is how it is supposed to work. In this case, though, it got a bit more complicated. Some of the toxic IP addresses came not directly from criminals but from the victim company’s customers, similar to what was discussed in Chapter 2. The targeted company was doing business with customers that had been successfully penetrated by criminals. Doing business with those infected companies thereby infected the targeted company, so now there were at least two victims. Ordinarily, the prudent advice and subsequent action would be to block the hostile or toxic IP addresses and notify the other company that it had been targeted. That’s what should have been done. But this can often be confusing and is not without some degree of risk of irritating the customer, or so some executives believe.

The targeted company, when advised to notify its corporate customer that it had been compromised, declined to do so. It didn't want to raise such a sensitive issue with a customer. The concern was that the customer company would take offense, that it might overreact and cancel its contract, that its reputation would be tarnished and it would blame the messenger. But this was absolutely the wrong approach to take, and there are several reasons why.

Even if the customer does blame the messenger, it would be counterproductive to ignore the concern. A compromised customer increases its own liability by potentially infecting other organizations. It’s like spreading the flu: Most of those exposed are going to contract it, accept it into the enterprise, and suffer the consequences. So telling the customer that it is transmitting toxic IP addresses is important. It helps everyone. It helps the customer reduce its liability, and it helps its network of companies avoid getting hit. Chances are, no one in the customer company is aware of the problem, and they are likely to be grateful for the heads-up. The longer the condition goes unrecognized within that organization, the greater the liability.

Also, consider that the customer company unknowingly inserting toxic IP addresses into your enterprise may hold it against you if you fail to notify it that it is harboring those addresses and spreading them indiscriminately. Failure to notify the customer company may increase your liability!

These toxic IP addresses can be identified before they do too much damage, but someone’s got to check. One way is simply to test the environment: Take a sample of the IP addresses observed in the enterprise and sort out which are authorized. For any unauthorized addresses, investigate where they came from and whether they are toxic or benign. Check the authorized addresses, too, to make sure they are not toxic, and work with the IT security team to determine toxicity. It’s important to remember that not all authorized IP addresses are benign, and not all unauthorized IP addresses are toxic. But managing risk is much harder without knowing what is in the environment. Unfortunately, too many organizations assume that all of the IP addresses in their environment must be okay. This approach has led to many disappointing results.
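To make the exercise concrete, here is a minimal sketch of that triage in Python. It assumes three illustrative inputs a security team could plausibly assemble: a sampled list of IP addresses observed in the environment (say, exported from firewall logs), the list of authorized addresses maintained by IT, and a list of known-bad addresses from a threat-intelligence feed. The file names, formats, and helper names are assumptions for illustration only, not a prescribed tool.

    import ipaddress

    def load_ips(path):
        """Read one IP address per line, skipping blanks and comments."""
        ips = set()
        with open(path) as handle:
            for line in handle:
                line = line.strip()
                if line and not line.startswith("#"):
                    ips.add(ipaddress.ip_address(line))
        return ips

    def triage(observed, authorized, known_bad):
        """Sort observed addresses into the categories discussed above."""
        report = {"unauthorized": [], "unauthorized_and_toxic": [], "authorized_but_toxic": []}
        for ip in observed:
            toxic = ip in known_bad
            if ip not in authorized:
                report["unauthorized_and_toxic" if toxic else "unauthorized"].append(ip)
            elif toxic:
                # Authorized does not mean benign: flag for the IT security team.
                report["authorized_but_toxic"].append(ip)
        return report

    observed = load_ips("observed_sample.txt")    # sample pulled from firewall logs (assumed file)
    authorized = load_ips("authorized_ips.txt")   # allowlist maintained by IT (assumed file)
    known_bad = load_ips("threat_feed.txt")       # threat-intelligence export (assumed file)
    for category, addresses in triage(observed, authorized, known_bad).items():
        for ip in sorted(addresses, key=str):
            print(category + ": " + str(ip))

The value is in the categories, not the tooling: An address can be authorized yet toxic, or unauthorized yet benign, and each combination warrants a different follow-up.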

Another potential early warning signal is the Internet service provider (ISP). Unfortunately, ISPs are sometimes selected for the wrong reasons: The price was right, the location was right, the terms were right, and so on. But the telling fact is that ISPs are not created equal. Some ISPs fail to monitor traffic responsibly, allowing suspect transmissions that may involve criminal activity. This happens a lot; it is usually a violation of the ISP’s governance and should be a violation of the contracting company’s governance. It is important to conduct formal due diligence on ISPs. When a breach occurs, and if it involves an offshore ISP, things may get complicated, the ISP may be less responsive, and the damage associated with the breach may continue to proliferate until cooperation is forced. Many breach investigations have yielded information implicating ISPs, including some in the United States. Look carefully at ISP track records. If the ISP is located in a higher-risk, corruption-prone country, look twice. If necessary, manage the risk by selecting another ISP.

The important thing to remember is that ISPs are part of any enterprise. What they do and how they do it matters. Make sure that any ISP that is going to become part of the enterprise is fully vetted. Yes, it is an extra step, and yes, it can add to an already burdensome workload, but it is definitely worth the effort.

Know Who's Inside the Enterprise

This sounds pretty simple, but it is not. It is critical to understand which IP addresses in the environment may be toxic and whether responsible ISPs are being engaged, but it is just as important to understand which employees, along with the employees of any external vendors, are inside the walls. It matters because, once inside the walls, there’s a conveyance of trust.

Here's how that conclusion was made: background investigations. Background investigations can be somewhat like medical examinations, but with one big difference. The physician conducting the physical is (or at least should be) licensed to practice medicine. Conducting background investigations doesn't always require the same degree of expertise and licensing. Depending on a lot of factors, a physical examination can simply amount to a doctor looking at a patient's throat, ears, eyes, and so on, in a process that may take only a few minutes. Alternatively, some physical examinations are intensive and can take more than a day of patient-doctor time, plus the time of technicians, nurses, and other staff. There is also more expanded use of technology to conduct full-body scans, as well as any localized areas of concern. These exams are obviously more detailed, render greater details about the patient's health condition, and of course cost more money. It may also be argued that such an approach has greater value to the patient, to the attending physicians and staff, and to any interested third parties, such as a board of directors that is looking to make certain determinations about, say, hiring a CEO or extending the contract of the current one.

Background investigations are extremely variable, just as medical checkups are, and the results are equally variable. Like the medical physical, background investigations can provide signals or indicators of certain behaviors. The greater the level of detail about a particular illness, the more effective the management of the disorder. The more that is known about the background of an employee, the better the potential for future predictability. Although background checks are not foolproof (and neither are medical physicals), the key concept here is early warning. If there is a financial fraud inside the company, it would be useful to know that, say, one of the employees there, with virtually unrestricted access to certain data, had filed for bankruptcy, was deeply in debt, and had previously been convicted of a financial fraud. While that would not necessarily prove that the employee was part of the fraud, such findings would trigger the need for additional examination of the person's background. At least it's a clue. Knowing such information in advance would potentially result in an early warning indicator, causing a review of certain behaviors and conditions.

Understanding the background of every employee is invaluable. Ensuring that external vendors are doing the same for their employees is equally valuable. Here's an early warning signal that is not always evident but can be if you negotiate it into your service level agreements. It's simply this: If there is a breach in the external vendor's environment, whether or not your data is involved, you need a heads-up. Period. They don't have to disclose confidential information. They don't have to violate anyone's trust. But they do need to let you know if something is going on that could potentially impact your organization. And that's not all.

Make sure that the external provider is obligated to inform you when any of its employees with access to your data comes under scrutiny warranting additional background investigations or drug tests. There have been cases where an employee was under suspicion by the employer, the employer conducted additional background checks and even additional drug tests, yet the employee was still allowed to access sensitive customer data as part of the job. As has been stated so eloquently, “That dog don’t hunt!” It’s important information to know. Breaches have happened in exactly this way: The external vendor had suspicions about an employee, conducted one or more additional background investigations and drug tests, the findings were inconclusive, the employee was allowed to continue with access to sensitive customer data, and the external vendor’s corporate customer was never notified of the suspicion. The next thing you know, there’s been a breach.

Yes, there are complications that can occur for the external vendor. Yes, the employee may protest and may threaten legal action. And there is always the chance that the external vendor is wrong about the employee. But here’s the thing: The potential risk is huge and costly. It may result in regulatory impairments and civil or even criminal litigation. It may end up being reported in the media, broadcast across the massive social media landscape, resulting in a lot of negative publicity. The bottom line? Companies need to require third-party vendors by contract to agree to this point. If the vendor doesn’t agree and such an arrangement is not up for negotiation, then consider another vendor.

There's always pushback on this advice. Companies like to work with the same vendors they've been using for a long time. That's understandable. It can be time-consuming and disruptive to change vendors, no doubt about that. But before rejecting this out of hand, consider this: If there is going to be a data breach in your company, there is a better than reasonable likelihood that the breach will come via a third-party vendor.

Here's another tip. Monitor what employees are actually doing, especially those with access to sensitive data. Web surfing is often monitored, for example. Employees are restricted from, say, visiting pornography sites. Some companies employ e-mail monitoring programs to see what employees are sending out of the enterprise. But there is one area that some companies ignore and that has resulted in data breaches.

In one case, an employee with extensive access had downloaded a number of software licenses that would enable criminals in other countries, for a fee paid to the employee, to steal critical information. So monitoring what employees may be downloading from even legitimate web sites is one way of detecting breach potential. Question: If the company isn't doing business in, say, Finland, why would an employee download a Finnish license for a software program giving someone in Finland access to company computers? Answer: There's no good reason. More than likely, this is an early warning signal. Check into it!
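A similarly lightweight check can surface the kind of download described above. The sketch below is offered as an assumption about how such monitoring might start, not as any particular product’s method: It scans an exported log of employee downloads and flags entries tied to countries where the company does no business. The file name, column names, and country list are all illustrative.

    import csv

    # Assumption: the countries where the company actually operates.
    BUSINESS_COUNTRIES = {"US", "CA", "GB"}

    def flag_unexpected_downloads(log_path):
        """Yield download events whose country falls outside the business footprint."""
        with open(log_path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row["country"].strip().upper() not in BUSINESS_COUNTRIES:
                    yield row

    # Assumption: a CSV export with "user", "url", and "country" columns.
    for event in flag_unexpected_downloads("download_log.csv"):
        # An early warning signal, not proof of wrongdoing: check into it.
        print(event["user"] + " downloaded " + event["url"] + " (" + event["country"] + ")")

Flags like these are early warning signals, not verdicts; each one still needs the human follow-up described above.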

What a Web We Weave… When Surfing

At home during the weekend, the executive was surfing the Web. Typing his name into a search engine, he was, as many do, looking to see what was being said about him. Maybe kudos for a speech he had given, perhaps a snide remark by a competitor, possibly an article in the local newspaper or even one of the national business publications. It was then that he discovered a web site that featured his name and his company's name. He also discovered information about his personal life, finances, and family on another web site, a scam web site in the business of bilking investors. It turns out the scam had been going on for several years, but no one had discovered it.

His discovery prompted the other senior executives in the organization to start their own Web surfing ventures in an effort to see if their boss was the only victim. As it turns out, he was. Hopefully, they continue to check the Web from time to time, part of a monitoring practice that is important in the business of identifying early warning signals that can damage reputations and more.

True, there are services and software that will monitor periodically or continuously. Some are good, while others are next to worthless and actually do damage by engendering a false sense of confidence—and that's always bad. Cost is a factor, too. The better solutions can be expensive, so many executives dismiss the need or practicality of them. “Besides, it won't happen to me,” is a frequent refrain. But executed efficiently and effectively, such solutions can provide early warnings and therefore value through improved risk management.

Sometimes the security organization will want to conduct this service internally. While that can be cost-efficient, make sure someone is watching the watchers. There have been occasions where insiders were responsible for the attacks and ended up extorting, or planning to extort, the victims. Also, executives tracking their own histories on the Web may not be an effective use of their time. Plus, they may not be very good at it. The Web is vast and intricate, and searching it thoroughly and regularly does not always yield satisfying results.

Recommendation: Try if you want, but best to leave it to the professionals and consider it a cost of doing business in the cyber-intensive twenty-first century. The chances are that these types of attacks, which are really the compromise of intellectual property and brand value, are going to increase, and significantly if not dramatically. The reason is that these crimes are relatively easy to commit, the financial payoff is substantial, and the risk to the criminals is low. This is a bad combination.

As in everyday life, the familiar sometimes—often, actually—becomes, well, familiar. The result is that familiarity breeds acceptance, even trust. Working around others in the same company breeds familiarity, often followed by trust. If someone is hired, many colleagues confer on that person a degree of trust, assuming that there is every reason to trust their colleague and no reason not to. That’s when early warning signals are sometimes ignored. Some employees may even feel disloyal or paranoid for experiencing these early warning signals. Trust makes people feel good, and life is tough enough. Work can be challenging. We want to accept and be accepted. But sometimes that’s a mistake. And sometimes that early warning signal is not just paranoia.

Other signals are ignored, too. Ever take a walk at night in a strange part of town? Did you feel on edge, perhaps a bit nervous, even uncertain about what could happen? You see people you don’t know. Maybe they are following you. You think they may represent a threat. Things race through your mind. Are they really threatening, or is it just a fertile imagination at work? You shake it off. Nothing to worry about. But then you are attacked—physically assaulted.

Early warning signals exist, in nature and in the workplace. It’s important to recognize them and to act on them, in the workplace and elsewhere. In human beings, the warning is actually biological and chemical, but it is often ignored. Ironically, because of the desire to trust, early warning signals are not trusted, and the sensations get dismissed. But that doesn’t diminish their importance. These are the signals that kept early man alive, when receiving such a signal triggered fight or flight, simple internal reactions that made a difference.

Early warning signals of every kind have value. Recognizing them, understanding them, and acting upon them is the key. Remember, it's not paranoia if it's really happening. Given the volume, the types, and the severity of information breaches, the evidence suggests this is not about paranoia.

Companies with prior breaches often had early warnings. The signals failed to garner much attention. Some of the signals were simply not observed—invisible signals. Some were observed but ignored. Of course, sometimes there are no early warning signals. The attacks just happen. But not always.

Here's another early warning signal that, surprisingly, many companies miss. It's not a technical signal, nor is it a behavioral one. When vetting external vendors that are going to have access to sensitive data, take a look at the third-party vendor's history. Not only should that company be queried regarding its breach history, but that history should be independently verified. It's possible that employees involved in the negotiation may not be aware of certain breaches, or they may fail to disclose the breaches.

One vendor, unbeknownst to some of its customers, had a breach history stretching over more than a decade. Either the customer companies didn't look or they didn't care. But the failure to identify the prior breach history was a missed early warning signal, one that resulted in a serious data breach. Think about this for a moment. Would you engage a vendor that had a record of more than a decade of serious data breaches? How would that be justified? Would this pass a risk committee of the board? Would it pass a vendor management committee? Maybe it would pass due diligence, based on mitigating actions undertaken by the vendor, cost of services, and a variety of other factors. There may be valid reasons to select that vendor or, in the case of an existing vendor, to continue to use its services. But the real issue would be if the damning information had never been identified. And this is an early warning indicator that would cost virtually nothing. Type the vendor's name into a search engine and see what pops up.

With so much publicly available information on public- and private-sector web sites, there’s really no excuse for not conducting some level of due diligence before selecting a vendor. Still, this does happen. Frequently, that early warning signal is already on the Web, posted on multiple web sites. You’ve just got to search for it.

Failing to uncover serious breaches, and especially problematic breach histories, never looks good when the board of directors and executive management question what went wrong. “How could this have happened? That company’s history of data breaches is all over the Internet!”

These are the words no one wants to hear. But more and more, these words are heard, and the consequences are never pleasant.

Ignoring early warning signals can prove costly. But just like taking that annual physical at the doctor's office, it is far preferable to detect any malady before it can take hold and cause real damage.

There are many challenges ahead; that much is clear. But what is to be done about it? There's ample reason to be optimistic about the cyber future, not because cyber attacks are going to stop, but because the quest to more effectively manage the cyber future will hopefully result in a more trustworthy enterprise and a more robust community of transactional commerce. But a more trustworthy cyber environment will not just happen. It will not result solely from voluntary participation, nor will it result exclusively through regulating what we must protect and how we must protect it. The challenge ahead is uphill, even daunting, but it is not impossible.

Consider the irony that the Internet was devised by optimists in pursuit of avoiding unparalleled disaster. More of that thinking is needed. This is the thinking of an optimist. It is the optimist who examines the threat of a cyber breach and concludes that there is an opportunity to improve the organization’s condition, reinforce its reputation and brand, and thereby design its future. An optimist will assess the risk and act upon it in confidence, knowing that this is an opportunity to persevere, to prepare, and to invest in the values that are so vital to the preservation of trust and the future.

We do not know what the future holds because it isn’t here yet. We get to design it, or at least elements of it, and we do that because we believe there is merit and obligation in doing so. The Brazilian novelist Paulo Coelho wrote that “None of us knows what might happen even the next minute, yet still we go forward. Because we trust. Because we have Faith.” Despite the rampant cyber lawlessness and crime that threatens the integrity of seemingly every aspect of commerce and privacy and humanity, we must rise above these threats. Crime will continue, as it has since the beginning of time. Technology will continue to evolve, and companies will continue to adapt to the new ways and means of doing things that are enabled by increasingly complex technology that is supposed to make our lives and our work simpler.

The cyber war is a winnable war, although not one without casualties; the evidence of loss is all around us. Large ships turn slowly, while agile threats move at the speed of light and with near invisibility. We have to turn this massive global vessel rich with the assets resulting from secure commerce and face the cyber threat head-on. We simply have to commit to aggressively addressing the cyber threat and skillfully manage the risks that come with it. Serious choices are demanded of us, and serious consequences will accompany inaction. We must have faith in our resolve and in its result, and we must act now. Mark Twain wrote, "God created war so that Americans would learn geography." Maybe the cyber threat was invented so that we will learn the limits of technology and wake up to its risk.
