Chapter 7: Difficulty Securing the Modern Enterprise (with Solutions!)

The first three chapters defined the problems facing information security teams. The second three chapters described strategic solutions at a high level. The last three chapters will focus on very specific solutions to very specific problems. A common question I get from business executives is, "If we continue to spend more on security every year, why do we continue to see more breaches?" Part of the answer was covered in the first chapter: the economics for the attacker are not static, and while the cost of cybercrime is rising, the benefit to the attacker is rising faster. Outside the pure economics of the situation, though, there are several common challenges organizations face.

In this chapter, we will identify some of the most pressing challenges along with solutions I have found to be effective in my career. One of the things that makes cybersecurity so interesting is the ability to solve novel problems in interesting ways. Therefore, this is not an exhaustive list and is not intended to stifle creativity. Rather, it is an example of solutions that exist that may help spark some ideas for how you could solve problems that have not yet been identified.

In this chapter, we will talk about the following problems, each with some solutions I have found to be successful through the course of my career:

  • Cybersecurity talent shortage
  • Too much technology with too little process
  • What are we trying to accomplish?
  • Lack of continuing education

Cybersecurity talent shortage

The cybersecurity talent shortage is massive and growing. Cybersecurity is one of the best career fields for young people to enter. It is not uncommon for entry-level cybersecurity specialists to get several raises in their first year and be making a six-figure income within 3 years. There are few, if any, career fields outside of cybersecurity that offer that level of advancement and income prospects. Yet, cybersecurity continues to fail to attract a talent pool that can keep up with the demand for professionals.

There are many potential underlying factors. Cybersecurity is still not as diverse as it should be. Cybersecurity teams rarely reflect the entirety of the community. Solving this problem will require cybersecurity as a discipline to appeal to more people. Part of that appeal is representation in leadership and outreach. Part of it is changing the public perception that cybersecurity is a technical field with only engineering roles.

There are many people who have been successful on my teams throughout my career who have no technical background whatsoever. After entering cybersecurity, some gained a more technical skillset, but some did not. You can be successful in cybersecurity without ever getting into technology. We need critical thinkers to become analysts. We need strategic thinkers to help us build processes and programs and help us stay ahead of the attackers. These roles can be exciting and challenging and can appeal to people who would have never considered a career in cybersecurity. If you are reading this because you are considering a career in cybersecurity, I encourage you to join us. Cybersecurity has been the best thing that happened to me personally and professionally. If you know someone who is trying to decide what they would like to do for their career, please tell them about cybersecurity. We need all the talent we can muster to meet the current and future challenges we will face. Defending our way of life by protecting information and systems is vitally important, and we need you.

With respect to the skills gap, we will first define the problem and talk about meaningful ways that some are trying to solve it.

Not enough people!

Attracting, training, and retaining cybersecurity talent is a critical part of any program. It can be very difficult to accomplish because there are far more cybersecurity jobs than there are qualified people to fill them. I generally hesitate to use statistics because they become outdated quickly. However, the statistics in this space are sobering. Here are some that help define the human resources crisis facing cybersecurity leadership:

  • The International Information System Security Certification Consortium (ISC2) estimates that there are currently 2.8 million cybersecurity professionals employed in 11 major world economies. The same article estimates the gap at 4 million professionals. Put another way, the total need for cybersecurity professionals globally is 6.8 million, meaning we have fewer than half of the professionals we need. (HDI, 2020)
  • 88% of chief information security officers report very high levels of job-related stress. Many of them cite physical and mental health issues as a result. Their average tenure is 26 months. (Cimpanu, 2020)
  • The average tenure for an IT security specialist is less than a year. The average tenure of an information security analyst is 1–3 years. (National Cybersecurity Training Academy, 2021)

These numbers are staggering and getting worse. Think about it from the perspective of a CISO. You are facing a tremendous amount of pressure and job-related stress. You lose an analyst or a specialist and go into a market where fewer than half of the job openings can be filled. It understandably takes a long time to find a qualified candidate. The workload of the person who left does not disappear; instead, it is passed on to the remaining team members. They become overworked, burn out, and start to quit. The cycle repeats itself and ultimately the CISO quits.

This is a macro problem, and solving it will require big solutions: a partnership between the public sector and the private sector as well as a change in the way that we approach education. None of these solutions will make an immediate impact. In the next section, we will talk about services and how they can provide immediate relief, but embracing them simply passes the problem to a service provider who can focus exclusively on it. Services alone will not solve the skills gap.

I have been fortunate in my career to build cybersecurity teams in several jurisdictions, and I have had opportunities to work with security teams around the world. There are some examples of programs that work as a public-private partnership. In fact, the United States has the fewest programs designed to help solve this problem of all the countries that I have operated in. In the United Kingdom, the government has sponsored programs where they will train and place professionals in our operations centers as apprentices. We have the ability to train them further and advance them through our organization. In Saipan, there is a similar government-sponsored training program to ensure we have a pipeline of qualified candidates. In the United States, we have similar programs, but we must source the entry-level talent ourselves. All these approaches, however, require the ability to take entry-level professionals and train them so they can advance. Most security teams cannot do that; they need skilled resources now, and those do not exist in the quantities needed. Cybersecurity professionals with 3 years of experience have very little difficulty finding their next job. However, finding your first job can be very challenging. We need to solve that problem to help get more talent into the field gaining experience.

However, there is a bigger problem earlier in the pipeline. Too few kids are interested in cybersecurity. Cybersecurity is an exciting field that offers amazing career advancement and very good earnings potential. However, too few young people choose to get into the field. Diversity is a major problem.

According to the career analytics organization Zippia, the cybersecurity field is 77.9% male and 72.6% white (Zippia, 2021). We will not fill the cybersecurity talent gap with white males alone. Cybersecurity professionals and education institutions must do more to interest a more diverse set of candidates in the discipline. We need to partner with colleges and universities and their associated student groups to have conversations about the job prospects in a cybersecurity field. We need to talk to our kids about cybersecurity for many reasons. Not only can we help them live a safer life online, but we may be able to interest them in a lucrative career in a rapidly growing field.

The reality of the cybersecurity skills gap is that it is too large to close during the current generation. Even if you paired every skilled cybersecurity professional with an apprentice and had them train their apprentice in everything they knew, we would still not have enough professionals. Solving the cybersecurity skills gap requires an investment in the next generation and encouraging them to get interested in the field now so we will have newly trained skilled professionals in the next 20 years. For now, companies struggling with the skills gap will need to look elsewhere for solutions.

If you are currently struggling with the cybersecurity skills gap, there are essentially three options. First, you could continue to compete on the open market for the few professionals who are available. You can expect to pay increasingly more to hire these people every year, and you can expect to spend a lot of energy and time training and retaining them. Second, you could automate as much as possible. You should automate as much as you can, and we will talk about that in a subsequent section, but it is infeasible to automate all or even most of your operations using currently available technology. Third, you can turn to a service provider for help.

Services can help!

For those who do not want to participate in the escalating race for a diminishing supply of cybersecurity talent, there are solutions that can help. Good service providers, especially Managed Security Service Providers (MSSPs), take on the burden of talent management on behalf of their customers. An MSSP is not immune to the cybersecurity skills gap or the lack of available talent, but acquiring and retaining talent is their core business, which gives them an advantage over a team of cybersecurity professionals in an organization whose focus is elsewhere. Also, service providers can use resources more efficiently by leveraging their pools of talent across multiple customers, and they are more resilient when an employee leaves because of their scale in terms of security professionals. That scale also helps create the infrastructure to bring unskilled talent in at entry level and give them the skills necessary to be contributing team members.

From the perspective of a security professional, a service provider offers much better career advancement opportunities when compared to working for a business that is focused on something other than security. Also, those professionals are the star of the show at a service provider, whereas their entire department may be an afterthought in many organizations. As a result, in a market where professionals can choose what job they want, it is no surprise that many prefer security services companies, especially early in their careers. Service providers therefore often train new security professionals and have the incumbent advantage in retaining the most talented among their staff. When professionals do choose to leave, it is often for senior leadership opportunities. The result is that many security leaders have experience with service providers and are increasingly turning to these providers for help when they struggle to staff their teams.

All these factors are leading to rapid growth in the services sector of the cybersecurity market. Many organizations that would not have considered an MSSP previously are increasingly turning to them to help solve the skills gap. Not all service providers are created equal, though, and many customers have had a bad experience with a service provider in the past. It is important to thoroughly vet any services partner to ensure they have the specific expertise that is needed and that the terms of the service will meet your objectives.

While hiring a service provider is fundamentally different from hiring an employee, the process should be similar. A service provider will be solving problems that an employee might have solved if professionals were in sufficient supply. Many organizations do not have an effective process to define their needs and hire the appropriate service provider; they are more comfortable hiring traditional employees. To solve staff shortage problems using services, you must develop a process for identifying your needs and selecting the right service provider to fill them.

There are several types of service providers, with different core expertise and different approaches. Following is my list of the types of service providers available. This list is based on my experience and helps me compartmentalize the offerings I see:

  • Managed Service Providers (MSPs) offering security services: Often, broad MSPs offer additional services in the security space. Some have deep security expertise and offer high-quality services. Many have a small number of security professionals and the service levels associated with security are much less robust than their general information technology skillsets.
  • Managed Security Service Providers (MSSPs): MSSPs are a broad category of organizations. I have put some sub-categories, as follows, because it is difficult to generalize MSSPs in terms of capabilities. What they have in common is they all provide managed security services for customers:
    • Product-neutral generalists: Product-neutral generalists is my term for MSSPs that offer a broad range of services across multiple security disciplines and vendors. These are generally the largest MSSPs, and among the most challenging to evaluate. In many cases, they have core expertise in some products and disciplines, and shallow expertise in others. The challenge is it is difficult to distinguish which disciplines are core and which are ancillary.
    • Product-specific generalists: Product-specific generalists cover a broad range of security disciplines, but only for a specific vendor. These are generally large security vendors such as Cisco, Symantec/Broadcom, or Microsoft. The advantage to using them is they generally have a strong relationship and deep support from their partners. The disadvantage is if you choose to switch technologies, you also must switch service providers.
    • Product-neutral specialists: Product-neutral specialists support multiple technologies in a few security disciplines. For example, some may focus on cloud security or information protection. The advantage of working with these providers is they will generally have deep expertise in their discipline and the ability to recommend alternatives if you choose to switch providers in their core area of expertise. The disadvantage is if you plan to use service providers for your broad security strategy, you will end up having to coordinate across multiple providers.

      Important Note

      One strategy that helps make this more effective is to work with a generalist for broad capabilities and a specialist for the disciplines that are most important.

    • Vendor-provided services: This category could also be product-specific specialists, but in reality, these types of services are generally provided by the technology vendor. There is a growing number of technology companies that are offering customers a holistic solution inclusive of technology and services in a comprehensive package. Essentially, you can buy the entire outcome as a service.
  • Consulting firms: Consulting firms often offer managed services. However, in many cases, they are offering full-time consultants that are essentially outsourced full-time employees. There is nothing wrong with this strategy, but it is different from a leveraged pool of resources like a traditional managed service, and it is important to know what you are getting as a customer.

All these service offerings can help take the pressure off teams that are struggling to find enough resources. However, embracing services will not solve the skills gap. The only near-term way to take pressure off the global security talent market is to automate as much as possible.

Automation

Automation is an important topic and has an important role to play in helping to relieve pressure on overburdened cybersecurity teams, both as part of security teams and as part of an MSSP. There are many types of automation that are being explored and built into technology products and service operations. When talking about automation, there are many misconceptions. For the purposes of this book, we will explore machine learning, which is being used effectively right now, and artificial intelligence, which offers amazing long-term opportunities, but will require a longer time horizon to mature. I will define categories of techniques and not specific technologies. The idea is to spark your imagination so you can apply what you have learned so far to imagine ways we can solve problems better using technology.

First, let's discuss techniques that are currently in use in many products you likely interact with every day, starting with machine learning.

Machine learning

Machine learning can be thought of as pattern recognition by machines. The idea is that you can feed a machine large datasets and the machine, without knowing much about the subject matter, can recognize similarities, differences, and patterns. There are two major categories of machine learning: supervised and unsupervised. First, we will discuss supervised machine learning.

Supervised machine learning

Supervised machine learning is a technique that uses a human as a teacher to help a machine correctly identify a pattern. As an example, you could upload a labeled set of examples of the pattern you want the machine to recognize, along with similar examples that should not match. You could then allow the machine to learn from the dataset and analyze the results, adjusting the dataset as necessary to change the profile. When you are happy with the profile, you can deploy the resulting algorithm and allow the machine to recognize the patterns you have trained it to identify. Supervised machine learning is the most predictable form of automation. You can think of it as a controlled scientific experiment: there is little opportunity for something wildly unexpected to happen. In some cases, this is preferable, but in others, it may limit the effectiveness of the technique.
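
To make this concrete, the following is a minimal sketch of supervised learning applied to security alerts, assuming the scikit-learn Python library is available. The features (failed logins, data volume, an after-hours flag) and the labels are invented for illustration; a real deployment would be trained on a much larger, curated dataset.

```python
# A minimal supervised learning sketch (assumes scikit-learn is installed).
# Each row is a hypothetical alert: [failed_logins, megabytes_sent, after_hours].
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = [
    [0, 1, 0], [1, 2, 0], [0, 3, 1], [2, 1, 0],            # labeled benign (0)
    [9, 250, 1], [12, 400, 1], [7, 180, 0], [15, 600, 1],  # labeled malicious (1)
]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Hold back some labeled examples so the results can be analyzed, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression()
model.fit(X_train, y_train)  # the "teaching" step: learn from human-labeled examples

print("accuracy on held-out examples:", model.score(X_test, y_test))
print("prediction for a new alert:", model.predict([[10, 300, 1]]))
```

If the model misclassifies the held-out examples, you adjust the dataset or features and retrain; only when you are satisfied with the results would the resulting model be deployed.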

For those who are looking for surprising breakthroughs and that have a higher risk tolerance, there is unsupervised machine learning.

Unsupervised machine learning

Unsupervised machine learning is a more hands-off approach than supervised machine learning. Essentially, with unsupervised machine learning, you feed a machine a large dataset and ask it to draw its own conclusions. A major warning is to remember that correlation does not equal causation. One example is criminal justice. If you were to feed an unsupervised machine learning model criminal statistics from the past 100 years in the United States, the machine may conclude that certain racial groups or genders are more predisposed to crime, ignoring sentencing disparities and unequal enforcement between groups. If that model was then used to make risk-based decisions, unfair prejudice may be built into the algorithm. In general, there are major ethical concerns with unsupervised machine learning algorithms, as many datasets large enough to meaningfully train a system are inherently biased.

That said, unsupervised machine learning has the potential to yield novel insights that escape human perception. As a result, I would say unsupervised methods should not be abandoned, but you should exercise caution when deploying them to make consequential decisions. Like many potential technology innovations and novel applications of existing technology, it is important that people making decisions become well versed in the ethics of artificial intelligence and remember that with great power comes great responsibility.
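
As a concrete illustration, the following sketch (again assuming scikit-learn, with invented login records) shows the hands-off nature of the technique: no labels are supplied, and the algorithm simply groups similar records on its own.

```python
# A minimal unsupervised learning sketch (assumes scikit-learn is installed).
# Each row is a hypothetical login record: [hour_of_day, megabytes_sent, distinct_hosts].
from sklearn.cluster import KMeans

records = [
    [9, 2, 1], [10, 3, 1], [11, 1, 2], [14, 4, 1], [15, 2, 1],
    [3, 450, 12], [2, 380, 9],   # unusual activity the algorithm may isolate on its own
]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print("cluster assignments:", model.labels_)

# The clusters only reflect correlations in the data. Deciding whether a cluster
# represents malicious behavior (or something benign, such as a backup job)
# still requires human judgment; correlation does not equal causation.
```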

Next, we will explore artificial intelligence techniques and try to separate truth from fiction.

Artificial intelligence

Artificial Intelligence (AI) is among the most overused terms in technology. It seems every marketing campaign for every new technology product uses AI as a buzzword. This is a shame because it makes it more difficult for people and companies dedicating resources towards making meaningful advancements in terms of AI to break through the noise. Most commercial technology products are not really using AI, but it is real, and it does exist.

AI is not a single technology, but a range of technologies designed to replace human beings in tasks they have historically been well suited for. For example, before 1997, it was thought that grandmaster-level chess was a uniquely human skill and that a machine could never out-think a skilled chess player at the game. That perception was shattered when IBM's Deep Blue defeated world champion Garry Kasparov.

If you go on YouTube, you can see AI applications in militaries around the world that are reminiscent of the old Terminator movies starring Arnold Schwarzenegger. Many of us interact daily with chatbots and automated phone systems that are designed to replace human interaction; they are a form of AI too. As you can see, there is a major difference between a chatbot and a machine designed to emulate human behavior with the capacity to drive a car or fire a weapon. Therefore, dividing AI into categories can be helpful for understanding the space and finding meaningful applications of the technology to the problems you are facing. We will define four broad categories of AI from the least sophisticated to the most sophisticated, starting with machines that are designed to react to set stimuli.

Reactive machines

Reactive machines are the simplest form of AI. IBM's Deep Blue was a reactive machine. In 1997, this was considered advanced technology; by today's standards, it is rather unsophisticated. Deep Blue was created specifically to play chess. Every potential move was programmed into the computer along with the ideal reaction to that move. The result was a machine that could evaluate the quality of every potential move and mathematically select the one that led to the greatest chance of winning the game. That was enough to defeat even the best chess players in the world.

Reactive machines may seem basic, but they have real applications. For example, autonomous vehicles will be powered at least in part by reactive machines. Road conditions or traffic conditions become the inputs and the car will be programmed to respond in the way that is most advantageous. This brings up an ethical question though. Advantageous to whom? What if the best reaction in terms of the greater good would cause the death of the driver and no one else? What if the best alternative would cause the death of several other people but not the driver? The machine would likely be programmed to kill the driver, but the human driver would never make that choice. When life and death decisions are made by machines, there are significant ethical implications. Even though it is unlikely any of us would face that particular dilemma, handing over control of life and death decisions to a machine that would not value your life the same way you would is a difficult thing to do.

Reactive machines are the most basic form of AI because they have no memory and have little ability to evaluate context. They do not have the ability to learn and are explicitly programmed to respond to defined stimuli. If they encounter an input they have not been programmed to respond to, the program will fail. As a result, these types of techniques work best in environments where there is a defined set of potential stimuli. A chess game is a great example. They could be used in more expansive cases such as autonomous driving, but these applications would require exhaustive programming and thorough testing to ensure all sets of potential stimuli are accounted for.

In a security context, reactive machines can be used for automating first-level alert management, often known as triage. If the task is to confirm or deny that specific elements are present, it is a good application for a reactive machine. A simple example could be applied to vulnerability management. Several types of vulnerability alerts are only applicable to certain operating systems. A reactive model with an inventory of the target systems' operating systems could evaluate each alert, confirm that the vulnerability applies to the operating system in use, and discard false positives before passing the rest to a human for deeper analysis.
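
As a sketch of that idea, a reactive triage step might look like the following. The hostnames, vulnerability names, and applicability mappings are all hypothetical; the point is that the logic is entirely pre-programmed, with no learning involved.

```python
# A reactive-machine sketch: a fixed rule set that discards vulnerability
# alerts which cannot apply to the target host's operating system.
OS_INVENTORY = {
    "web-01": "linux",
    "db-01": "windows",
}

# Which operating systems each (hypothetical) vulnerability class applies to.
APPLICABLE_OS = {
    "smb-signing-disabled": {"windows"},
    "openssh-weak-cipher": {"linux"},
}

def triage(alert: dict) -> str:
    """Escalate only if the vulnerability can apply to the host's operating system."""
    host_os = OS_INVENTORY.get(alert["host"])
    allowed = APPLICABLE_OS.get(alert["vuln"], set())
    if host_os is None or host_os not in allowed:
        return "discard (likely false positive)"
    return "escalate to analyst"

alerts = [
    {"host": "web-01", "vuln": "smb-signing-disabled"},  # Windows-only finding on Linux
    {"host": "db-01", "vuln": "smb-signing-disabled"},
]
for alert in alerts:
    print(alert, "->", triage(alert))
```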

Next, we will examine a form of machine learning that is one step above on the maturity scale, limited memory.

Limited memory

Limited memory AI builds upon the foundation of reactive machines. Limited memory allows the model to take historical information into account and learn. Many limited memory AI algorithms use unsupervised machine learning to draw parallels between input sets. In many cases, they are explicitly programmed for certain stimuli and then run in a simulated environment and allowed to learn. In some cases, they may be used to supervise others and learn from them. These applications have great potential. In some cases, people are very good at what they do but would struggle to teach another person how to do those things because much of what they do feels natural to them. However, through observation and learning, machines have the potential to deconstruct what the most talented people in a field do to master their craft. Deep learning is a form of limited memory AI, and most modern applications of AI are limited memory.

An example of how limited memory could be used in a security context would be an algorithm that can be programmed similarly to the reactive machine example to discard obvious false positives. The machine could then supervise a skilled analyst who is performing tier-two triage, looking for patterns the machine could learn. Theoretically, with a large enough dataset, the machine would eventually surpass the human ability to perform analysis since the human would make an occasional mistake. In some cases, this AI could be used to identify potential mistakes made by analysts, and eventually take over analysis tasks completely. Limited memory has great potential in tasks that are repetitive. In those cases, human error is increased as the task is repeated, and people pay less attention to the task at hand. Limited memory AI capabilities could retain sharp focus. Both reactive machines and limited memory AI are in use in several applications currently. The next AI category, Theory of Mind, is less widely deployed but is a critical capability for expanding the use of AI in society.
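
Before turning to Theory of Mind, here is a minimal sketch of the limited memory idea applied to triage, assuming scikit-learn. The alert features and analyst verdicts are invented, and a real system would need far more data and careful validation before it could be trusted to suggest verdicts.

```python
# A limited-memory sketch: the model keeps updating itself from a stream of
# analyst verdicts (its accumulated "memory") rather than reacting to fixed rules.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = false positive, 1 = true positive

# Each batch simulates a day of tier-two triage: alert features plus the
# analyst's verdict for each alert. Features: [failed_logins, megabytes_sent].
daily_batches = [
    (np.array([[1, 10], [2, 12], [9, 300]]), np.array([0, 0, 1])),
    (np.array([[1, 8], [8, 280], [10, 350]]), np.array([0, 1, 1])),
]

for features, analyst_verdicts in daily_batches:
    # partial_fit updates the existing model instead of retraining from scratch,
    # so earlier batches remain part of what the model has learned.
    model.partial_fit(features, analyst_verdicts, classes=classes)

print("suggested verdict for a new alert:", model.predict(np.array([[9, 320]])))
```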

Theory of Mind

Theory of Mind is an interesting title for an important AI concept. Theory of Mind AI is still in the conceptual phase, but it is an important advancement that would make widely deployed AI technology more feasible. Theory of Mind AI is designed to understand the emotions, thoughts, and beliefs of the human parties it encounters. The idea is that much of an appropriate interaction between people depends on the emotions and beliefs they hold. Understanding the human mind more deeply will yield insights that allow machines to assess a person's state of mind and preconceived notions and communicate with people more effectively. There are differing opinions about how close we are to having Theory of Mind AI deployed.

Personally, I am skeptical. People who have dedicated their lives to studying the human brain are open about how little we understand about how our brains work. If our foremost experts cannot understand large portions of how our minds work, how could we program a machine with that understanding? The counterpoint to this argument is that Theory of Mind AI can be built by equipping AI that understands the part of the brain that is well understood with an unsupervised machine learning model that can learn the rest. I am skeptical that is possible.

However, Theory of Mind does not have to be an all-or-nothing proposition. While I think we are at least decades away from full Theory of Mind AI technology in wide use, I do think AI algorithms can understand parts of human behavior to try to determine intent. This is not full Theory of Mind technology, but still could be a meaningful advancement.

For example, with current behavior analytics techniques and existing AI technology, it is possible to distinguish intentional data theft from accidental exposure in many cases. Routing incidents to the proper response teams based on the intent of the user could be a useful way to reduce the burden on initial triage teams. If you could refer accidental exposure to a retraining algorithm that leverages appropriate learning content, you could retrain an end user. You could then refer true data theft incidents or potential data theft incidents to an incident response team and build an efficient security analytics capability that could cut the Mean Time to Respond (MTTR) to events from hours or days to seconds or minutes.
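
A simplified sketch of that routing logic follows. The scoring function is a stand-in for a real behavior analytics model, and the signals, thresholds, and queue names are all assumptions.

```python
# Intent-based routing sketch: score how likely a data-loss event was intentional,
# then route it to incident response or to an automated retraining workflow.
def intent_score(event: dict) -> float:
    """Stand-in for a behavior analytics model; returns an estimate of intent."""
    score = 0.0
    if event.get("uploaded_to_personal_cloud"):
        score += 0.5
    if event.get("after_resignation_notice"):
        score += 0.4
    if event.get("sent_to_known_partner"):
        score -= 0.3
    return max(0.0, min(1.0, score))

def route(event: dict) -> str:
    if intent_score(event) >= 0.6:
        return "incident-response-queue"   # likely intentional data theft
    return "user-retraining-workflow"      # likely accidental exposure

print(route({"uploaded_to_personal_cloud": True, "after_resignation_notice": True}))
print(route({"sent_to_known_partner": True}))
```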

The next category of AI is the one that scares people most, self-aware AI.

Self-aware AI

Self-aware AI is the type of AI that is popular in science fiction. This is a theoretical concept currently and is likely decades or centuries away from becoming reality. Self-aware AI is AI that so closely mirrors the way the human mind works that it becomes aware of itself and thinks of itself as a sentient being. You can see how self-aware AI could accidentally be invented as we push Theory of Mind AI to its furthest possible extent. It is difficult to understand the implications for humanity if self-aware AI is created, and it would challenge our perception of what it means to be human. It is unlikely that there are any applications for self-aware AI in cybersecurity; limited memory AI is likely enough for most applications. In some cases, you can see how Theory of Mind AI could help in certain scenarios, especially when dealing with very sophisticated actors and complex schemes such as business email compromise.

In a security context, it is likely that Theory of Mind capabilities could be more useful to sophisticated attackers than it would be to cybersecurity teams. Hopefully, by this point, you have a better understanding of the different types of AI. Perhaps you can now view AI-driven claims from security vendors through a skeptical lens and not be swayed by the term. (Joshi, 2019) (Schellen, 2021)

Generally, infatuation with technology is a problem that affects many organizations, especially in security, and this is the topic of our next section.

Too much technology with too little process

I have worked with countless companies in my career, specifically helping them with their security strategy. The majority are overfocused on technology. I have never seen any that were overly focused on process. In Chapter 6, Information Security for a Changing World, we discussed security triumvirates, including the triumvirate of people, process, and technology. A triumvirate by definition should be equal in power. However, based on my experience, the average security program focuses roughly 60% of its effort and budget on technology, 30% on people, and 10% on process.

There are several theories on why this may be, but mine is that it is simply easier to select a technology than it is to define a process. In a world where people in cybersecurity teams are overloaded and stressed out, the easiest solution becomes the preferred solution. This technology proliferation and accompanying neglect of process design lead to a concept known as shelfware. Shelfware occurs when a technology is purchased but never deployed properly. As a result, it sits on the virtual shelf rather than providing any real value. This has become prevalent.

Example Case: Target

The data breach involving Target in late 2013 was one of the most famous and largest data breaches in history. Much has been written about how adversaries compromised a third-party vendor, in Target's case a Heating, Ventilation, and Air Conditioning (HVAC) contractor, to gain access to sensitive systems involved in processing credit card transactions. This intrusion speaks to many of the concepts in this book, including timeless best practices such as the concept of least privilege. While it is likely that the vendor needed some access to Target's systems, it is unlikely that they needed access to Target's payment card systems. However, that is not the focus of this example case.

This example case is focused on the fact that Target was alerted to what was happening and failed to act. Prior to the attack, Target had implemented a type of cyber security alarm system made by FireEye. The software worked as intended and alerted the security team to the activities of the criminals. For some reason, the team failed to act on those warnings, and the breach continued until Target was forced to report the compromise of over 40 million credit card numbers during the 2013 holiday shopping season. Target survived, but they suffered large financial losses and a significant loss of trust with their customers. It took years for Target to repair the company's reputation, and the breach likely cost the company billions of dollars in sales.

Target missed identified warning signs. The most likely reasons for that miss are either that the security team had too many alerts and therefore could not respond to them properly, or they implemented a powerful technology like FireEye without building the necessary process to help their team members respond appropriately. In either case, this massive data breach was caused by people and process failures, even though they had the proper technology in place to sound an alarm while the attack was happening. (Harris, 2014)

To explain why people like to implement technology over process another way, a technology choice is a multiple-choice question, whereas process solutions require free-form thinking. There is a defined market and customers can purchase the technology they think most closely matches their needs. Process design is an essay question. To define an effective process, you must talk to multiple people on disparate teams and think critically about the best way to implement something. While the aversion to process design is understandable, it leads to specific consequences that can be detrimental to a security program. The first is a concept I like to call console whiplash.

Console whiplash

Console whiplash occurs when a company has so many technologies that do not integrate with each other that the people responsible for managing the technology get virtual whiplash as they switch between the consoles. This is detrimental because it increases the likelihood that a person will make mistakes. Because of the skills shortage, most analysts cannot specialize in a single discipline. The result is teams that have a base-level understanding of many different technologies but lack deep expertise in any of them. The continued proliferation of technology makes console whiplash worse.

In many ways, console whiplash is the natural result of the best-of-breed strategy that many security teams have employed for many years. The best-of-breed strategy says that technologies should be selected for their individual merits, and if you have a different vendor for every capability, that's okay as long as you have the best tool. The best-of-breed strategy makes sense conceptually, but in practice requires many more people to effectively operate a program. We have established that cybersecurity talent is in short supply, which means a best-of-breed strategy without accompanying service assistance is nearly impossible to execute well in the current labor market.

The pure platform strategy is likely an overcorrection. A pure platform strategy says that we will select a single platform that can meet all our security needs. While there are broad platforms available, there are none that are best in class for every security discipline. They may have offerings in every category, but it would be hard to find any example of a company that was simultaneously delivering quality products in every security discipline. As a result, a pure platform strategy often results in an unacceptable compromise for critical security capabilities.

In my view, a better approach is the modified Pareto principle. The Pareto principle, often referred to as the 80/20 rule, says that 80% of the consequences come from 20% of the causes. In security, I would say that a pure platform strategy is likely sufficient for 80% of security use cases, but the most important 20% should use a best-of-breed strategy. This allows a security team to focus effort and budget on solving for the most important 20% of use cases while gaining efficiencies from the pure platform strategy for the remaining 80%. Of course, which 20% of the security use cases are most important varies between organizations, but classifying the top 20% is an important exercise to help an organization focus. Applying the modified Pareto principle will help reduce the worst effects of console whiplash while protecting the ability to deploy advanced capabilities to protect the enterprise.

Another problem caused by too many technologies with too little process is siloed programs.

Siloed programs

The term silo is a reference to grain silos in the physical world, which stand separate from one another. In a security program, a silo is a discipline that focuses only on its own view of security and does not effectively collaborate or share information with any other discipline. A security program made up of silos is inefficient and ineffective. Siloed programs are also a sign of poor security leadership.

Great leadership ensures people understand why they are doing what they are doing before focusing on what they are doing. Employees who understand the reasons behind their instructions will naturally collaborate with their peers to help further their mission. Employees who only understand what they are supposed to do are more likely to retreat to their own silos, especially when they become stressed out or overwhelmed.

When technology is disparate for each discipline and does not integrate in a meaningful way, it encourages silos to form. If technologies are tightly integrated and multiple teams are working with the same data or in the same tools, collaboration is often the default mode of operation. There are, of course, instances where companies with a best-of-breed strategy build collaborative security teams through great leadership or teams with a pure platform strategy still develop silos, but in general, there is a high correlation between best-of-breed technology strategies and siloed security programs.

The next issue related to an over-dependence on technology is a lack of business involvement in the security program.

Lack of business involvement

Most companies are not in business to do security; their core business is something else. This is obvious, but worth discussing. If business stakeholders are not involved in the security program, how could the security program be aligned with business objectives? Before working with a publicly traded company in the United States, I read section 1A of its 10-K filing, titled Risk Factors. These are the risk factors for the business, but I read them looking for ways the security program can help reduce those risks. It is rare not to find something in the 10-K report that relates to information security. Often, when I speak to security teams, they do not know what the business risks are or how the security program relates to those business risks.

I recommend that a governance group be formed in every organization to steer the information security strategy. That governance group should include cross-functional leadership. The idea is that the business goals and security goals should remain aligned.

The next section is focused on making sure that everyone involved with the security program can answer a simple question. What are we trying to accomplish?

What are we trying to accomplish?

Many organizations do security for security's sake. There is a legitimate higher purpose for what they should be doing, but if no one on the team knows the higher purpose, does it matter? It is important to ensure security teams have clarity of purpose. If they can connect their day-to-day work to a higher purpose, they are more likely to do a great job in protecting the organization. If they are going through mundane tasks with little understanding of why, they are more likely to make mistakes.

There are some specific pieces of information that the security leadership should be aware of. First is the relationship between cyber risk and business risk.

Cyber risk is business risk

Cyber risk is business risk. The reason cyber security matters is because it is designed to protect the organization from harm. If a system is breached or information is stolen, the impact is a business impact. If a negligent employee discloses regulated information, the resulting fine is a business impact. It is important for everyone to know the connection between security risk and business risk and how their contribution to the security program protects their organization. In my opinion, each security team should spend time understanding why they are protecting what they are protecting and what could happen if they are unsuccessful.

Example Case: FlexMagic Consulting

When studying cybersecurity breaches, we often focus on large-scale breaches with very high costs, often launched against the world's largest and most recognizable companies. These case studies are useful because the target companies often have the resources to investigate the breach, providing us with valuable information about how it occurred that we can use to prevent similar future breaches. Also, those companies often continue to operate, which allows us to understand the long-term impacts of the security breach. This focus on large breaches involving large companies leads some to incorrectly assume that cybersecurity breaches do not affect small companies as much as their larger counterparts. This assumption could not be further from the truth.

Many small companies cannot survive a significant cybersecurity breach. To them, cyber risk is not only business risk, but also an existential threat. Three out of five companies that are hit by a cyber attack go out of business. Most of them are small- to medium-sized enterprises, and many were successful prior to the cybersecurity event. Put another way, a company hit by a cyber attack is more likely to go out of business than to survive the attack, even if the business is otherwise healthy. FlexMagic Consulting is one of these stories.

FlexMagic Consulting was a third-party consulting business with $2 million in revenue and nine employees. They were a well-respected business that had been operating in Colorado for over 30 years. As part of their benefits program, FlexMagic Consulting issued Flexible Spending Account cards, which could be used by employees for medical expenses.

In 2016, Russian attackers gained access to an administrator's password and used it to issue fraudulent Flexible Spending Account cards, with limits up to $5 million. These cards were used for procedures such as cosmetic surgery. When creditors demanded payment from FlexMagic Consulting, they had to file for bankruptcy since they could not pay the claims. The attackers were caught and prosecuted, but the damage was done, and a business that had been successful for three decades was gone forever. With it, nine people lost their jobs.

It is unlikely FlexMagic Consulting counted cybersecurity risks among catastrophic risks to their business. The events of 2016 showed that their cyber risk was an existential risk to their business. They should have defined the cyber risk as a business risk. If they had, they may have been more prepared to defend themselves against the fraudulent scheme that cost them their company. (Insure Trust, 2021) (ID Agent, 2021)

The next element that security leadership teams should understand is the entire cyber risk treatment plan.

Risk treatment planning

A risk treatment plan is a document that identifies risks to an organization in a register and defines how each risk will be treated. There are four categories of risk treatment:

  • Risk acceptance is the default risk treatment. If an organization does nothing about a risk, they are accepting it. Sometimes risk acceptance happens intentionally, and sometimes risks are accepted because they are not known, or the organization fails to treat the risk in another way. It is important to note that risk acceptance is a legitimate treatment, but only if the risk is identified and intentionally accepted by someone with the proper level of authority to do so.
  • Risk avoidance is a risk treatment that is applied when a company decides to stop engaging in a risky activity entirely. An example is if a company chose to avoid the risks associated with PCI compliance by ceasing to accept credit card payments. They no longer need to address the risk, but the impact on their business is in the form of lost revenue. True risk avoidance is rarely economically feasible for companies.
  • Risk transference is a treatment where a third party is paid to accept risk on your behalf. This sounds confusing, but there is a common example that everyone is familiar with: insurance policies. When you buy an insurance policy, you are transferring risk to the insurance company.
  • Risk mitigation is a treatment that seeks to reduce or eliminate the potential impact of a risk. Information security as a discipline is a risk mitigation strategy.

Risk treatment planning should create a risk register and assign a risk treatment to each identified risk along with the name and title of the person approving the risk treatment. This is especially important for risk acceptance. People get themselves in trouble when they accept risk that they do not have the proper level of authority to accept.
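
As an illustration of that structure, a risk register entry might capture the fields below. The field names and the sample risk are hypothetical; the important elements are the treatment and the named approver with an appropriate level of authority.

```python
# A minimal risk register sketch with a treatment and a named approver per risk.
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    AVOID = "avoid"
    TRANSFER = "transfer"
    MITIGATE = "mitigate"

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    likelihood: str        # e.g. low / medium / high
    impact: str            # e.g. low / medium / high
    treatment: Treatment
    approver_name: str     # the person accountable for the treatment decision
    approver_title: str    # must carry enough authority to approve the treatment
    review_cadence: str

register = [
    RiskRegisterEntry(
        risk_id="R-017",
        description="Legacy file server cannot be patched for a known remote-code-execution flaw",
        likelihood="medium",
        impact="high",
        treatment=Treatment.MITIGATE,
        approver_name="J. Doe",
        approver_title="CISO",
        review_cadence="quarterly",
    ),
]

for entry in register:
    print(entry.risk_id, entry.treatment.value, "approved by", entry.approver_title)
```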

Next, we will discuss a method for how you may prioritize some risks over others, known as looking for material risk factors.

Looking for material risk factors

The number of risks related to information technology is nearly infinite. How can you build a risk register in such an environment? How do you keep the risks to a manageable level, and how do you know when your risk register is complete? The answers to these questions may vary between organizations, but often the first exercise is to define what constitutes a material risk.

Risk is often measured on a scale of likelihood and impact. To build a threshold for materiality, it is important to set a threshold of likelihood and impact. Risks that could have a catastrophic impact but are unlikely should probably be on the register. However, there is a limit. I have built the following chart to show how I would judge materiality on a three-level scale:

Figure 7.1 – Risk materiality matrix

Each organization can define impact and likelihood as they see fit. Also, the color-coding and materiality of each box are suggestions. The idea is that decision makers set a framework for determining whether a risk is material and what that means. As an example, an organization could decide that immaterial risks are going to be accepted by default and not recorded on the register. All material risks must be recorded on the register and given a treatment that is reviewed annually. Any immediate threats must be put on the register with a mitigation plan that will be executed within 30 days, and that mitigation plan is to be reviewed quarterly.
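
The following sketch encodes one possible version of that framework on the three-level scale; the mapping of likelihood and impact to materiality is an assumption, and each organization should substitute its own thresholds and handling rules.

```python
# A sketch of a materiality framework on a three-level scale (assumed mapping).
LEVELS = {"low": 0, "medium": 1, "high": 2}

def classify(likelihood: str, impact: str) -> str:
    score = LEVELS[likelihood] + LEVELS[impact]
    if score >= 3:
        return "immediate threat"  # register + 30-day mitigation plan, reviewed quarterly
    if score >= 2:
        return "material"          # register + assigned treatment, reviewed annually
    return "immaterial"            # accepted by default, not recorded on the register

for likelihood, impact in [("low", "low"), ("low", "high"), ("high", "high")]:
    print(f"likelihood={likelihood}, impact={impact} -> {classify(likelihood, impact)}")
```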

The point is not that you must follow my prescription. Rather, the idea is that you have a framework for how your organization identifies and treats risk so a risk register and a risk treatment plan can be developed in a reasonable time frame. Many organizations have a risk office or even a chief risk officer that can help build risk treatment plans. Many organizations treat cyber risk entirely differently than other business risks because they don't understand the risks and the treatments for cyber risk. This approach is flawed. Cyber risk is business risk, and it is the responsibility of the cyber security team to act as subject matter experts and help business stakeholders understand the risks and mitigation capabilities so they can make sound business decisions. It is not the role of the cyber security team to make business decisions on behalf of their stakeholders.

Next, we will look at another common challenge, which is the lack of continuing education.

Lack of continuing education

Cybersecurity awareness training is conducted in nearly 100% of organizations around the world. However, in my opinion, it is conducted well in fewer than 10% of them. Few organizations have a clear plan for what the training is intended to accomplish or how they will measure whether it was effective. If there are no metrics for the training, it should be assumed it was not effective.

In Chapter 5, Protecting against Common Attacks by Partnering with End Users, we discussed the process for creating an effective training program along with best practices for how the training should be delivered and reinforced. We will not repeat that content. Instead, we will address issues that make it difficult to maintain relevant skillsets in the modern security landscape. While these challenges apply to all employees, leaders generally have a longer tenure and are more likely to struggle to keep their understanding of cybersecurity challenges current.

First, we address a common theme in the modern enterprise, the pace of change.

The pace of change

In Chapter 6, Information Security for a Changing World, we discussed the pace of change and some examples of how much the world has changed during the average executive's career. People who have 20 years of experience or more have seen a tremendous amount of change in the workplace. Even people who have a technical background lose touch with changing technology when they move into leadership roles and their daily responsibilities change.

Example Case: "Microsoft" Data Breach

In May of 2021, an analyst with security company UpGuard notified Microsoft of a security flaw that exposed sensitive information housed by 39 of Microsoft's customers. This data breach is often unfairly attributed to Microsoft. While Microsoft created the cloud tool suite known as Power Apps, complete with its Open Data Protocol (OData) API, the public exposure of the data was due to a misconfiguration on the part of the 39 customers, not a flaw in Microsoft's own technology.

The OData API is designed to allow customers to expose information from their Microsoft services to other applications or users. As is always the case, the customers are responsible for controlling identity, access, and data. It is Microsoft's responsibility as a provider to secure its infrastructure and provide access to tools. It is the responsibility of the customer to ensure those tools are used properly and access is not granted to unauthorized people. Microsoft fulfilled its responsibility in this case. Its customers did not.

It is likely these customers did not understand how to use the API, and how to enable the proper permissions to secure it. The rapid pace of change and proliferation of cloud services in many organizations require additional training for security teams so these types of misconfigurations cannot happen. Too often, teams are expected to secure changing environments without getting the necessary skill updates for them to be successful in their mission. (McKeon, 2021)

The result of the pace of change is the need to constantly refresh the skills that are relevant to a person's job. It is important to determine what is relevant. For example, most executives have no need to understand the latest threat actor groups and the specific malware payloads and tactics they use to move laterally in an environment and execute ransomware attacks. However, it is likely relevant for those executives to understand that ransomware is primarily delivered through email and often when users click on links. They also should be aware of business email compromises and how they can work with their staff to ensure only valid requests are acted upon. This training need not be technical but is vital.

Next, we will discuss the need to update certain skills.

Updating certain skills

Updating certain skills returns to the idea that an organization must develop the discipline to define what cybersecurity skills are necessary for a person's job so those skills can be kept up to date. One-size-fits-all cybersecurity awareness training is ineffective and can sometimes be counterproductive since important messages are lost in the noise. It is more effective to develop module-based training and deliver to each role only the modules that are relevant to that role and job function. This will not only make training more efficient, but also easier to keep up to date over time. When something materially changes in a module, the user can re-take only that module to update their understanding.
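
A simple sketch of that module-based approach follows. The module names, versions, and role mappings are invented; the idea is that a person only re-takes a module when its content has materially changed.

```python
# Role-based training sketch: assign only the modules relevant to a role, and
# only those the person has not completed at the current version.
MODULES = {                      # module name -> current version
    "phishing-basics": 3,
    "wire-transfer-verification": 2,
    "secure-cloud-configuration": 5,
    "data-handling": 1,
}

ROLE_MODULES = {
    "finance": ["phishing-basics", "wire-transfer-verification", "data-handling"],
    "cloud-engineer": ["phishing-basics", "secure-cloud-configuration"],
}

def modules_due(role: str, completed_versions: dict) -> list:
    """Return modules never taken, or taken at an older version than current."""
    return [
        module
        for module in ROLE_MODULES.get(role, [])
        if completed_versions.get(module, 0) < MODULES[module]
    ]

# A finance employee who completed phishing-basics v3 but an outdated wire-transfer module:
print(modules_due("finance", {"phishing-basics": 3, "wire-transfer-verification": 1}))
```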

It is important to give our people the necessary tools to succeed. Part of that success is everyone doing their part to protect information and systems in the modern enterprise. It is my belief that most people want to help support the security program. However, security teams have historically made it difficult for non-technical people to do so.

Example Case: Home Depot

In 2014, American home improvement retailer Home Depot suffered a large security breach estimated to have compromised 56 million payment cards across the United States. This remains one of the largest cyber security breaches in history. The fallout from the Home Depot breach included a consequential settlement reached with 46 of the 50 states and the District of Columbia for $17.5 million. The overall breach is estimated to have cost Home Depot $179 million, a figure that is still growing.

Obviously, the business impacts of the Home Depot breach are significant and highlight the fact that cyber security risk is business risk: when those risks manifest, there is often a significant financial impact on the company. The plaintiffs claimed Home Depot failed in its responsibility to protect sensitive consumer information and therefore caused individuals and states unnecessary distress and economic harm. The terms of the agreement Home Depot made do not explicitly identify the failures that led to the breach, but they do mandate actions Home Depot must take in the future, which suggests that appropriate protocols in these areas were not in place and contributed directly to the breach.

The terms of the agreement require Home Depot to hire a qualified Chief Information Security Officer (CISO), provide a robust security training program, and maintain a set of security policies designed to better protect sensitive information. It is unlikely that Home Depot did not have a CISO at all prior to the breach, so the agreement insinuates that the person was either unqualified for the position or, more likely, had not been provided with relevant training to update their skills. It is also likely that the training programs inside Home Depot were found to be inadequate. The strategies defined in this chapter could help the new CISO at Home Depot deliver more relevant and impactful training that would be helpful in preventing future security breaches. Finally, it is unlikely that Home Depot had no data security policies, but it is likely that the policies were found to be inadequate, outdated, or both.

At the heart of all these problems is people. Home Depot's focus should be to put the right people in place, allow security leadership to craft meaningful and effective policies, and to create an effective and ongoing security training program designed to ensure everyone responsible for protecting sensitive information entrusted to Home Depot has the training and skills necessary to be successful. (Starks, 2020)

Next, we will discuss another area that is important for employees to understand, especially leaders. I call it applying timeless concepts.

Applying timeless concepts

In Chapter 4, Protecting People, Information, and Systems with Timeless Best Practices, we defined the best practices in information security that have not changed although technology has changed significantly. It is important to ensure leaders across the organization understand these best practices. For example, if the CFO understands the concept of least privilege, they are more likely to request only the permissions necessary for the new accountant starting next week.

The timeless best practices are timeless because they are not tied to any specific technology. Therefore, everyone should be able to understand them, at least conceptually.

Summary

In this chapter, we discussed several challenges for the modern enterprise. We started by talking about the cybersecurity talent shortage, which is among the most significant challenges for securing the modern enterprise. We can and should be trying to inspire more people to join the cybersecurity profession, but we will be dealing with a talent shortfall for at least the next 10–20 years. As a result, automation will play a key role. You have learned about categories of machine learning and AI so you can apply the right technology solutions to the right problems. You have learned about the imbalance of technology and process in most security programs and the problems that imbalance can cause. You now understand how to identify and treat cyber risk as business risk and how to set up a continuing education program that ensures all team members, especially those in leadership, are equipped with the skills necessary to secure the modern enterprise. With what you have learned, you are ready to lead the cybersecurity function at your organization into the new world, and you have the context necessary to adapt to the next transformational change that will inevitably come. In the next chapter, we will look deeper into automation as part of our future, focusing on how we can identify and act upon automation opportunities.

Check your understanding

  1. In your own words, describe the difference between machine learning and AI.
  2. What are the four AI categories described in the chapter?
  3. Why do you think organizations deploy technical solutions and not process solutions?
  4. What are the four types of risk treatment? Provide a brief description and an example of each.
  5. What is a material risk factor? How would you determine if a risk is material or not?

Further reading
