Chapter 6. Agile Vulnerability Management

New vulnerabilities in software are found every day. Many organizations are not aware of vulnerabilities in their systems until it is too late. What’s worse is that developers and their managers often ignore vulnerabilities that they do know about. This means that attackers can continue to exploit software vulnerabilities months or years after they were first reported, using automated scanners and exploit kits.

One of the most important responsibilities of a security team is vulnerability management: ensuring that people in your organization continuously check for known vulnerabilities, assess and understand the risks that these vulnerabilities pose to the organization, and take appropriate steps to remediate them.

Security teams need to work with developers, operations, compliance, and management in order to get all of this done, making vulnerability management an important touch point.

In this chapter, we’ll look at how to manage vulnerabilities and how to align vulnerability management with Agile approaches to getting work done. We’ll also look at how to fulfill the CYA paperwork aspect of vulnerability management required for compliance, in a lightweight, efficient way.

Vulnerability Scanning and Patching

Vulnerability scanning and patching is a huge operational responsibility for an organization of any size. It means setting up and scheduling vulnerability scans properly; making sure that scanning policies are configured correctly and consistently enforced; reviewing and triaging the results based on risk; packaging up patches and testing to make sure that patches don’t introduce new problems; scheduling and deploying updates; and keeping track of all of this work so that you can prove that it has been done.

Techniques and tools that we look at throughout this book, including automating configuration management in code, and automating builds, testing, and delivery, can be used to help make this work safer and cheaper.

First, Understand What You Need to Scan

Your first step in understanding and managing vulnerabilities is to identify all the systems and applications that you need to secure, both on premises and in the cloud. Getting an up-to-date and accurate inventory of what you need to scan is difficult for most organizations and can be effectively impossible at enterprise scale, where a small security team may be responsible for thousands of applications in multiple data centers.

One of the many advantages of using automated configuration management tools like Ansible, Chef, and Puppet is that they are based on a central repository that describes all of your servers, how they are configured, and what software packages are installed on them.

UpGuard: Continuous Vulnerability Assessment

UpGuard automatically discovers configuration information about Linux and Windows servers, network devices, and cloud services; identifies vulnerabilities and inconsistencies; and tracks changes to this information over time.

It continuously assesses vulnerability risks, automatically scanning systems or using information from other scanners, and assigns a compliance score for all of your systems.

UpGuard also creates tests to enforce configuration policies, and can generate runbook code that you can use with tools like Ansible, Chef, Puppet, Microsoft Windows PowerShell DSC, and Docker to apply updates and configuration changes.

You can use this information to understand the systems that you need to scan for vulnerabilities and to identify systems that need to be patched when you receive a vulnerability alert. Then you can use the same tools to automatically and quickly apply patches.
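For example, if your inventory data is available in code, finding the systems that need a patch can be a simple query. Here is a minimal sketch in Python, using a made-up inventory structure rather than any particular tool’s format:

```python
# Sketch: using a configuration-management inventory to target patching.
# The inventory format here is hypothetical; tools like Ansible or Chef
# expose similar data through their own inventories and APIs.

# Example inventory: hostname -> installed packages and versions
inventory = {
    "web-01": {"openssl": "1.0.1f", "nginx": "1.9.4"},
    "web-02": {"openssl": "1.0.1g", "nginx": "1.9.4"},
    "db-01":  {"openssl": "1.0.1f", "postgresql": "9.4.1"},
}

def hosts_needing_patch(inventory, package, vulnerable_versions):
    """Return the hosts running a known-vulnerable version of a package."""
    return sorted(
        host for host, packages in inventory.items()
        if packages.get(package) in vulnerable_versions
    )

# OpenSSL 1.0.1f is one of the versions affected by Heartbleed
targets = hosts_needing_patch(inventory, "openssl", {"1.0.1f"})
print(targets)  # the hosts to feed into your patching run
```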

Then Decide How to Scan and How Often

Most vulnerability management programs run in a reactive, regularly scheduled scan-and-patch cycle, like Microsoft’s “Patch Tuesday”:

  1. Set up a scan of your servers and network devices using tools like Core Impact, Nessus, Nexpose, or OpenVAS, or an online service like Qualys. These scanners look for known vulnerabilities in common OS distributions, network devices, databases, and other runtime software, including outdated software packages, default credentials, and other dangerous configuration mistakes.

  2. Review the vulnerabilities reported, filter out duplicates and false positives, and prioritize true positive findings based on risk.

  3. Hand the results off to engineering to remediate: downloading and applying patches, correcting the configuration, or adding a signature to your IDS/IPS or WAF/RASP to catch or block attempted exploits.

  4. Re-scan to make sure that the problems were actually fixed.

  5. Record what you did for auditors.

  6. Rinse and repeat next month or next quarter, or however often compliance requires.
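The triage work in step 2 is a natural candidate for automation. The following sketch shows the idea in Python; the finding fields and the CVSS scores attached to them are illustrative, not taken from any particular scanner:

```python
# Sketch of the triage step: de-duplicate raw scanner findings and rank
# the serious ones by risk. Field names and scores are illustrative.

def triage(findings, min_cvss=7.0):
    """Collapse duplicate findings and return the rest sorted by CVSS."""
    unique = {}
    for f in findings:
        key = (f["host"], f["cve"])  # same CVE on same host = duplicate
        unique.setdefault(key, f)
    serious = [f for f in unique.values() if f["cvss"] >= min_cvss]
    return sorted(serious, key=lambda f: f["cvss"], reverse=True)

findings = [
    {"host": "web-01", "cve": "CVE-2014-0160", "cvss": 7.5},
    {"host": "web-01", "cve": "CVE-2014-0160", "cvss": 7.5},  # duplicate
    {"host": "db-01",  "cve": "CVE-2016-0800", "cvss": 9.3},
    {"host": "web-02", "cve": "CVE-2015-4000", "cvss": 3.7},  # below threshold
]
for f in triage(findings):
    print(f["cve"], f["host"], f["cvss"])
```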

How Scanning Needs to Change in Agile and DevOps Environments

It’s not enough to scan systems once a month or once a quarter when you are making changes every week or several times a day. You will never be able to keep up.

You’ll need to automate and streamline scanning so that it can be run much more often, every day if possible, as part of your build pipelines.

To create an efficient feedback loop, you’ll need to find a way to automatically remove duplicates and filter false positives from scanning results, and return the results directly into the tools that teams use to manage their work, whether it’s a ticketing system like Jira, or an Agile backlog management system like Rally or VersionOne, or a Kanban system like Trello.

Tracking Vulnerabilities

As we will see in this book, there are other important ways to get vulnerability information besides scanning your network infrastructure:

  • Scanning applications for common coding mistakes and runtime vulnerabilities using automated security testing (AST) tools that we’ll explore in Chapter 11, Agile Security Testing

  • Penetration testing and other manual and automated security testing

  • Bug bounty programs (discussed in Chapter 12)

  • Threat intelligence and vendor alerts

  • Software component analysis (SCA) scanning tools that check for known vulnerabilities in open source components, something we’ll look at in more detail later in this chapter

  • Scanning runtime containers and container images for known vulnerabilities

  • Scanning cloud instances for common vulnerabilities and unsafe configs using services like AWS Inspector

  • Manual code reviews or code audits of application code and infrastructure recipes and templates

  • Bug reports from partners, users, and other customers (which is not how you want to hear about a vulnerability in your system)

To get an overall picture of security risk, all of this information needs to be consolidated, tracked, and reported for all of your systems. And it needs to be evaluated, prioritized, and fed back to operations and developers in ways that make sense to them and fit how they work so that problems can get fixed quickly.
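A minimal sketch of this kind of consolidation, with made-up field names, might look like this:

```python
# Sketch: consolidating vulnerability reports from multiple sources
# (scanners, pen tests, bug bounty programs, SCA tools) into one record
# per issue, keeping track of where each report came from.

from collections import defaultdict

def consolidate(reports):
    """Merge reports keyed by (system, issue), tracking sources and first sighting."""
    merged = defaultdict(lambda: {"sources": set(), "first_seen": None})
    for r in reports:
        record = merged[(r["system"], r["issue"])]
        record["sources"].add(r["source"])
        if record["first_seen"] is None or r["date"] < record["first_seen"]:
            record["first_seen"] = r["date"]
    return merged

reports = [
    {"system": "payments", "issue": "SQL injection",    "source": "DAST scan", "date": "2016-03-01"},
    {"system": "payments", "issue": "SQL injection",    "source": "pen test",  "date": "2016-04-15"},
    {"system": "portal",   "issue": "outdated OpenSSL", "source": "SCA",       "date": "2016-02-20"},
]
merged = consolidate(reports)
for (system, issue), rec in sorted(merged.items()):
    print(system, issue, sorted(rec["sources"]), rec["first_seen"])
```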

Managing Vulnerabilities

Vulnerabilities in software or infrastructure are defects in requirements, design, coding, or implementation. They should be handled like any other defect in the system: fixed immediately, or added to the team’s backlog and prioritized along with other work, or dismissed because the team (including management) decides that the problem is not worth fixing.

But vulnerabilities introduce risks that engineering teams, product managers, and other people in the organization have a hard time understanding. It’s usually obvious to users and to the team when there is a functional bug or operational problem that needs to be fixed, or if the system’s performance is unacceptable. These are problems that engineering teams know how to deal with. Security vulnerabilities aren’t as cut and dried.

As we’ll see in Chapter 7, Risk for Agile Teams, to decide which security bugs need to be fixed and how quickly this has to be done, you need to know the answers to several questions:

  • What is the overall threat profile for your organization? What kind of threats does your organization face, and what are the threat actors after?

  • What is the risk profile of the system(s) where the vulnerability was found?

  • How widespread is the vulnerability?

  • How easy is it for an attacker to discover and to exploit?

  • How effective are your existing security defenses against these kinds of attacks?

  • What are the potential technical and business impacts to your organization of a successful attack?

  • Can you detect if an attack is in progress?

  • How quickly can you contain and recover from an attack, and block further attacks, once you have detected it?

  • What are the costs of fixing the vulnerability, and what is your confidence that this will be done correctly and safely?

  • What are your compliance obligations?

In order to understand and evaluate these risk factors, you need to look at vulnerabilities separately from the rest of the work that the engineering teams are doing. Tracking and reporting vulnerabilities is a required part of many organizations’ GRC (governance, risk, and compliance) programs and is mandated by regulations such as PCI DSS to demonstrate to management and auditors that due care and attention have been paid in identifying and managing security risks.

Vulnerability management involves a lot of mundane, but important bookkeeping work:

  • Recording each vulnerability, when and where and how it was found or reported

  • Scanning or checking to see where else the vulnerability might exist

  • Determining the priority in which it should be fixed, based on a recognized risk rating

  • Assigning an owner, to ensure that the problem gets fixed

  • Scheduling the fix

  • Verifying that it was fixed properly and tracking how long it took to get fixed

  • Reporting all of this, and highlighting exceptions, to auditors

This is a full-time job in large enterprises. It’s a job that is not done at all in many smaller organizations, especially in startups.
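The bookkeeping fields above map naturally onto a simple record structure. Here is an illustrative Python sketch (not any particular tool’s schema) that also derives time-to-fix and overdue reporting:

```python
# Sketch: the vulnerability bookkeeping fields as a simple record, so
# that time-to-fix and overdue items can be reported automatically.
# Field names are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class VulnerabilityRecord:
    issue: str
    found_on: date
    found_by: str          # scanner, pen test, bug bounty, ...
    affected: list         # systems where the vulnerability exists
    risk: str              # e.g., a CVSS-based rating
    owner: str = ""        # who is responsible for the fix
    due: date = None       # remediation deadline
    fixed_on: date = None  # set when a re-scan confirms the fix

    def days_to_fix(self):
        return (self.fixed_on - self.found_on).days if self.fixed_on else None

    def overdue(self, today):
        return self.fixed_on is None and self.due is not None and today > self.due

rec = VulnerabilityRecord("Heartbleed", date(2014, 4, 8), "CERT alert",
                          ["web-01", "web-02"], "high",
                          owner="ops", due=date(2014, 4, 10))
rec.fixed_on = date(2014, 4, 9)
print(rec.days_to_fix())  # 1
```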

Application vulnerability management tools like bugBlast, Code Dx, and Denim Group’s ThreadFix consolidate vulnerabilities found by different scanners as well as problems found in pen testing or manual reviews, and consolidate this information across multiple systems. This will help you to identify risks inside and across systems by identifying which systems or components have the most vulnerabilities, which types of vulnerabilities are the most common, and how long vulnerabilities have been left open.

You can assess which teams are doing the best job of remediation and whether teams are meeting their compliance requirements, and you can also evaluate the effectiveness of tools by looking at which tools find the most important vulnerabilities.

These tools simplify your job of managing findings by filtering out duplicate findings from multiple tools or different test runs and by giving you a chance to review, qualify, and prioritize vulnerability findings before asking developers to resolve them. Most vulnerability management tools have interfaces into popular development tools, so that you can automatically update the team’s backlog.

Dealing with Critical Vulnerabilities

When a critical vulnerability like Heartbleed or ShellShock is found that must be fixed quickly, you need to be able to rely on your vulnerability management program to ensure that:

  • You are informed about the vulnerability through threat intelligence and vulnerability feeds, such as a CERT alert, or sometimes even from the press. In the case of Heartbleed, this wasn’t that difficult, since, like the Anna Kournikova virus or ShellShock, it received wide coverage in the popular press, mostly because of its catchy name.

  • You understand what the bug is and how serious it is, based on the risk score assigned to the CVE.

  • You are confident that you can identify all the systems that need to be checked for this vulnerability.

  • You can verify whether and where you have the vulnerability through scanning or configuration checks.

  • You can quickly and safely patch or upgrade the vulnerable software package, or disable affected functions or add a signature to your IDS/IPS or firewall to block attacks as a workaround.

  • You can verify that the patch was done successfully, or that whatever other step you took to mitigate the risk of an attack was successful.

Vulnerability management ties security, compliance, development, and operations together in a continuous loop to protect your systems and your organization.

Securing Your Software Supply Chain

An important part of managing vulnerabilities is understanding and securing your software supply chain: the software parts that modern systems are built with. Today’s Agile and DevOps teams take extensive advantage of open source libraries and frameworks to reduce development time and costs. But this comes with a downside: they also inherit bugs and vulnerabilities from other people’s code.

According to Sonatype, which runs the Central Repository, the world’s largest repository of open source software for Java developers, 80 to 90 percent of the code in today’s applications comes from open source libraries and frameworks.

A lot of this code has serious problems in it. The Central Repository holds more than 1.36 million components (as of September 2015), and almost 1,500 components are being added every day. More than 70,000 of the software components in the Central Repository contain known security vulnerabilities. On average, 50 new critical vulnerabilities in open source software are reported every day.

Sonatype looked at 31 billion download requests from 106,000 different organizations in 2015. It found that large financial services organizations and other enterprises are downloading an average of more than 230,000 “software parts” each year. But keep in mind this is only counting Java components. The total number of parts, including RubyGems, NuGets, Docker images, and other goodies, is actually much higher.

Of these downloads, 1 in every 16 requests was for software that had at least 1 known security vulnerability.

In just one example, Sonatype reviewed downloads for The Legion of the Bouncy Castle, a popular crypto library. It was downloaded 17.4 million times in 2015. But one-third of the time, people downloaded known vulnerable versions of the library. This means that almost 14,000 organizations across the world unnecessarily and probably unknowingly exposed themselves to potentially serious security risks while trying to make their applications more secure.

Scared yet? You should be.

It’s clear that teams must ensure that they know what open source components are included in all their applications, make sure that known good versions were downloaded from known good sources, and that these components are kept up to date when vulnerabilities are found and fixed.

Luckily, you can do this automatically by using SCA tools like OWASP’s Dependency Check project, or commercial tools like Black Duck, JFrog Xray, Snyk, Sonatype’s Nexus Lifecycle, or SourceClear.

OWASP Dependency Check

OWASP’s Dependency Check is a free scanner that catalogs all the open source components used in an application and highlights vulnerabilities in these dependencies. It works for Java, .NET, Ruby (gemspec), PHP (composer), Node.js, and Python, as well as some C/C++ projects. Dependency Check integrates with common build tools, including Ant, Maven, and Gradle, and CI servers like Jenkins.

Dependency Check reports on any components with known vulnerabilities reported in NIST’s National Vulnerability Database and gets updates from the NVD data feeds.
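For example, a build pipeline step can parse the tool’s report and fail the build when a serious vulnerability is found. The report structure in this sketch is deliberately simplified; check the format that your version of the tool actually produces:

```python
# Sketch: failing a build when a dependency report contains serious
# vulnerabilities. The report structure below is a simplified stand-in
# for a real dependency-check report.

def build_should_fail(report, cvss_threshold=7.0):
    """Return (dependency, CVE, score) entries at or above the threshold."""
    failures = []
    for dep in report["dependencies"]:
        for vuln in dep.get("vulnerabilities", []):
            if vuln["cvssScore"] >= cvss_threshold:
                failures.append((dep["fileName"], vuln["name"], vuln["cvssScore"]))
    return failures

report = {
    "dependencies": [
        {"fileName": "commons-collections-3.2.1.jar",
         "vulnerabilities": [{"name": "CVE-2015-6420", "cvssScore": 7.5}]},
        {"fileName": "guava-18.0.jar"},
    ],
}
failures = build_should_fail(report)
if failures:
    print("BUILD FAILED:", failures)
```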

A number of other popular open source dependency checking tools are available for specific language ecosystems.

You can wire these tools into your build pipelines to automatically inventory open source dependencies, identify out-of-date libraries and libraries with known security vulnerabilities, and fail the build automatically if serious problems are found. By maintaining an up-to-date bill of materials for every system, you will be prepared for vulnerabilities like Heartbleed or DROWN, because you can quickly determine if you are exposed and what you need to fix.

These tools can also alert you when new dependencies are detected so that you can create a workflow to make sure that they get reviewed.

Vulnerabilities in Containers

If you are using containers like Docker in production (or even in development and test), you will need to enforce similar controls over dependencies in container images. Even though Docker scans images in official repos to catch packages with known vulnerabilities, there is still a good chance that someone will download a “poisoned image” containing out-of-date software or malware, or an image that is not safely configured.

You should scan images on your own, using an open source tool like OpenSCAP or Clair, or commercial scanning services from Twistlock, Tenable, or Black Duck Hub; and then check these images into your own secure repository or private registry, where they can be safely used by developers and operations staff.
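Even a simple pipeline check helps here. This sketch verifies that every base image in a Dockerfile comes from your own private registry; the registry name is hypothetical:

```python
# Sketch: a pipeline check that every base image in a Dockerfile comes
# from your own private registry rather than an arbitrary public source.
# The registry name is hypothetical.

TRUSTED_REGISTRY = "registry.internal.example.com/"

def untrusted_base_images(dockerfile_text):
    """Return the base images in FROM lines that don't point at the trusted registry."""
    bad = []
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if line.upper().startswith("FROM "):
            image = line.split()[1]
            if not image.startswith(TRUSTED_REGISTRY):
                bad.append(image)
    return bad

dockerfile = """
FROM registry.internal.example.com/base/ubuntu:16.04
FROM node:6
"""
print(untrusted_base_images(dockerfile))  # ['node:6']
```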

Fewer, Better Suppliers

There are obvious maintenance costs and security risks to overextending your software supply chain. Following Toyota’s Lean Manufacturing model, your strategic goal should be to move to “fewer, better suppliers” over time, standardizing on libraries and frameworks and templates and images that are proven to work, that solve important problems for developers, and that have been vetted by security. At Netflix, they describe this as building a paved road, because developers—and security and compliance staff—know that if they take advantage of this code, the path ahead will be easier and safer.

Calculating Supply Chain Costs and Risks

Sonatype has developed a free calculator that helps developers and managers understand the costs and risks that you inherit over time from using too many third-party components.

But you need to recognize that although it makes good sense in the long term, getting different engineering teams to standardize on a set of common components won’t be easy. It’s difficult to ask developers supporting legacy apps to invest in making this kind of change. It’s equally difficult in microservices environments where developers expect to be free to use the right tools for the job, selecting technologies based on their specific requirements, or even on their personal interests.

One place to start is by standardizing on the lowest layers of software: the kernel, OS, and VMs, and on general-purpose utility functions like logging and metrics collection, which need to be used consistently across apps and services.

How to Fix Vulnerabilities in an Agile Way

A major problem that almost all organizations face is that even when they know that they have a serious security vulnerability in a system, they can’t get the fix out fast enough to stop attackers from exploiting it. The longer a vulnerability is exposed, the more likely it is that the system will be attacked, or already has been.

WhiteHat Security, which provides a service for scanning websites for security vulnerabilities, regularly analyzes and reports on vulnerability data that it collects. Using data from 2013 and 2014, WhiteHat found that 35 percent of finance and insurance websites are “always vulnerable,” meaning that these sites had at least one serious vulnerability exposed every single day of the year. The stats for other industries and government organizations were even worse. Only 25 percent of finance and insurance sites were vulnerable for fewer than 30 days of the year.

On average, serious vulnerabilities stayed open for 739 days, and only 27 percent of serious vulnerabilities were fixed at all, because of the costs and risks and overhead involved in getting patches out.1

There are many reasons that vulnerabilities take too long to fix, besides teams being too busy with feature delivery:

  • Time is wasted in bureaucracy and paperwork in handing off work between the security team and engineering teams.

  • Engineering teams don’t understand the vulnerability reports, how serious they are, and how to fix them.

  • Teams are scared of making a mistake and breaking the system when putting in a patch because they don’t have confidence in their ability to build and test and deploy updated software.

  • Change management is expensive and slow, including all the steps to build, review, test, and deploy changes and the necessary handoffs and approvals.

As we’ve seen throughout this book, the speed of Agile development creates new security risks and problems. But this speed and efficiency can also offer an important edge against attackers, a way to close vulnerability windows much faster.

Agile teams are built to respond and react to new priorities and feedback, whether this is a new feature or a problem in production that must be fixed. Just-in-time prioritization, incremental design and rapid delivery, automating builds and testing, measuring and optimizing cycle time—all of these practices are about making changes cheaper, faster, and easier.

Prioritizing Vulnerabilities

Several factors need to be considered when prioritizing the work to fix vulnerabilities:

Risk severity

Based on a score like CVSS.

Exploitability

The team’s assessment of how likely it is that this vulnerability can be exploited in your environment, how widespread the vulnerability is, and what compensating controls you have in place.

Cost and risk of making the fix

The amount of work required to fix and test a vulnerability can vary widely, from rolling out a targeted minor patch from a supplier or making a small technical fix to correct an ACL or a default config setting, to major platform upgrades or overhauling application logic.

Compliance mandates

The cost/risk of being out of compliance.
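One way to make these trade-offs explicit is to combine the factors into a single priority score for ranking work on the backlog. The weights and scales in this sketch are illustrative only; tune them to your own risk model:

```python
# Sketch: combining prioritization factors into one score so that
# vulnerabilities can be ranked on the backlog. Weights are illustrative.

def priority(severity, exploitability, fix_cost, compliance):
    """
    severity:       CVSS-like score, 0-10
    exploitability: team's assessment, 0 (unlikely) to 1 (trivial)
    fix_cost:       0 (small patch) to 1 (major rework); discounts priority
    compliance:     True if a mandate requires the fix
    """
    score = severity * (0.5 + 0.5 * exploitability)
    score *= (1.0 - 0.3 * fix_cost)
    if compliance:
        score += 2.0
    return round(score, 1)

# An easily exploited, cheap-to-fix, compliance-relevant flaw can outrank
# a more severe one that is hard to exploit and needs a platform upgrade.
print(priority(7.5, 0.9, 0.1, True))   # 8.9
print(priority(9.3, 0.2, 0.8, False))  # 4.2
```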

DevOps practices and tools like automated configuration management, continuous delivery, and repeatable automated deployment make it even cheaper and safer and faster to get changes and fixes to production. DevOps shops rely on this capability to minimize their MTTR in responding to operations incidents, knowing that they can get patches out quickly to resolve operational problems.

Let’s look at how to take advantage of Agile practices and tools and feedback loops that are optimized for speed and efficiency, to reduce security risks.

Test-Driven Security

One way to ensure that vulnerabilities in your applications get fixed is to write an automated test (e.g., unit test or an acceptance test) which proves that the vulnerability exists, and check the test in with the rest of the code so that it gets run when the code is built. The test will fail until the vulnerability gets fixed. This is similar to the way that test-driven developers handle a bug fix, as we explain in Chapter 11, Agile Security Testing.
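As a small illustration of the pattern, here is a regression test for a stored XSS bug, written against a hypothetical render_comment function. The test fails while the output is unescaped, and passes once the fix is in:

```python
# Sketch: a security regression test in the same style as a functional
# bug-fix test. render_comment is a hypothetical function; before the
# fix, it returned user input unescaped, allowing stored XSS.

import html

def render_comment(comment):
    # The fix: escape user-supplied text before it reaches the page.
    return "<p>" + html.escape(comment) + "</p>"

def test_comment_is_escaped():
    payload = '<script>alert("xss")</script>'
    rendered = render_comment(payload)
    assert "<script>" not in rendered       # the payload must be neutralized
    assert "&lt;script&gt;" in rendered     # and shown as harmless text

test_comment_is_escaped()
print("security regression test passed")
```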

Testing for Heartbleed with Gauntlt

In Chapter 11, we show how to write security tests using the Gauntlt test framework. Gauntlt comes packaged with a set of sample attacks, including an example of a test specifically written to check for the Heartbleed vulnerability, which you can use as a template for writing your own security checks for other vulnerabilities.

Of course for this approach to be accepted by the team, the person writing the test needs to be accepted as a member of the team. This person needs the support of the Product Owner, who is responsible for prioritizing work and won’t be happy to have the team sidetracked with fixing something if she doesn’t understand why it is important or necessary.

This person also needs to understand and respect the team’s conventions and how the code and test suite are structured, and he needs the technical chops to write a good test. The test must clearly show that a real problem exists, and it has to conform with the approach that the rest of the team is following so that the team is willing to own the test going forward. All of this will likely require help from someone on the team, and it will be much easier if the team is already heavily invested in test automation.

Writing tests like this gives you evidence that the vulnerability has been fixed properly. And it provides insurance that the vulnerability won’t come back. As we’ve seen, it’s a big step in the right direction from dropping a vulnerability report on a developer’s desk.

Zero Bug Tolerance

Some Agile teams try to follow the ideal of “zero bug tolerance.” They insist on fixing every bug that is found before they can move forward and call a feature done, or before they can start on a new feature or story. If a problem is serious enough, the team might all stop doing other work and swarm on it until it is fixed.

If you can explain to these teams that vulnerabilities are bugs, real bugs that need to be fixed, then they will be obliged to fix them.

For the team to take this seriously, you need to do a few things:

  1. Be ruthless in eliminating false positives, and focus on vulnerabilities which are important to the organization, problems that are serious and exploitable.

  2. Get these onto the Agile team’s backlog in a form that the team understands.

  3. Spend some time educating the team, including the Product Owner, about what these bugs are and why they are important.

  4. Spend some more time helping the team understand how to test and fix each bug.

This approach is viable if you can start with the team early in development, to deal with vulnerabilities immediately as they come up. It’s not fair to the team or the organization to come back months or years into development with a long list of vulnerabilities that were found from a recent scan and expect the team to stop everything else and fix them right away. But you can start a conversation with the team and come up with a plan that balances security risks with the rest of its work, and agree on a bar that the team can realistically commit to meeting.

Collective Code Ownership

Another common idea in Agile teams is that the code is open to everyone on the team. Anyone can review code that somebody else wrote, refactor it, add tests to it, fix it, or change it.

This means that if a security engineer finds a vulnerability, she should be able to fix it, as long as she is seen as part of the team. At Google, for example, most of the code base is open to everyone in the organization, which means that security engineers can fix vulnerabilities in any part of the code base, provided that they follow the team’s conventions, and take ownership of any problems that they might accidentally introduce.

This takes serious technical skills (not a problem for security engineers at Google of course, but it might be more of a challenge in your organization) and confidence in those skills. But if you know that a bug is serious, and you know where the bug is in the code, and how to fix it properly, and how to check the fix in, then doesn’t it make sense to go and do it, instead of trying to convince somebody else to stop what he is doing and fix it for you?

Even if you lack confidence in your coding skills, a pull request or just marking code for review can be a good way to move an issue closer to being fixed.

Security Sprints, Hardening Sprints, and Hack Days

Another way to get security problems corrected, especially if you have a lot of them to deal with, for example, when you are going through a pen test or an audit, or responding to a breach, is to run a dedicated “security sprint” or “hardening sprint.”

For Agile teams, “hardening” is whatever you need to do to make the system ready for production. It’s when you stop thinking about delivering new features, and focus most or all of your time on packaging, deploying, installing, and configuring the system and making sure that it is ready to run. For teams following continuous delivery or continuous deployment, all of this is something that they prepare for every time that they check in a change.

But for many other teams, this can come as an ugly and expensive surprise, once they understand that what they actually need to do is to take a working functional prototype that runs fine in development and make it into an industrial grade system that is ready for the real world, including making sure that the system is reliable and secure.

In a hardening sprint, the development team stops working on new features and stops building out the architecture and instead spends a dedicated block of time together on getting the system ready to be released.

There is a deep divide between people who recognize that spending some time on hardening is sometimes needed, especially in large programs where teams need to work through integration issues; and other people who are adamant that allocating separate time for hardening is a sign that you are doing things—or everything—wrong, and that the team is failing, or has already failed. This is especially the case if what the team means by “hardening” is actually a separate sprint (or sprints) for testing and bug fixing work that should have been done as the code was written, in what is called an “Agilefall” approach.

Hardening sprints are built into the SAFe (Scaled Agile Framework), an enterprise framework for managing large Agile programs. SAFe makes allowances for work that can only really be done in a final hardening and packaging phase before a big system is rolled out, including security and compliance checks. Disciplined Agile Delivery (DAD), another enterprise Agile method originally created by Scott Ambler at IBM to scale Agile practices to large projects, also includes a hardening phase before each release.

Planning a security sprint as part of hardening may be something that you have to do at some point, or several points, in a project, especially if you are working on a legacy system that has a lot of technical and security debt built up. But hardening sprints are expensive and hard to sell to the customer and management, who naturally will want to know why the team’s velocity dropped to zero, and how it got into such a bad situation in the first place.

Try a Security Hack Day Instead of a Hardening Sprint

Some organizations hold regular hack days, where teams get to spend time off from their scheduled project work to learn new things, build or improve tools, prototype new ideas, and solve problems together.

Instead of hardening sprints, some teams have had success with security-focused hack days. In a hack day, which often ends late in the hack night, the team brings in some expert help and focuses on finding and fixing a specific kind of vulnerability. For example, you could get the team together, teach everyone all about SQL injection, how to find it and how to fix it properly. Then everyone works together, often in pairs, to fix as many of these vulnerabilities in the code as can safely get done in one day.
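The before-and-after of a SQL injection fix is simple enough to show in a few lines. This sketch uses Python and sqlite3 for illustration:

```python
# Sketch: the kind of fix a SQL injection hack day produces, shown with
# sqlite3. The vulnerable version splices user input into the SQL text;
# the fix uses a parameterized query.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # DON'T: attacker-controlled input becomes part of the SQL statement
    return db.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_fixed(name):
    # DO: the driver keeps the input as data, never as SQL
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # [('alice',)] -- returns every row
print(find_user_fixed(payload))       # [] -- the payload matches nothing
```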

Hack days like this are obviously much cheaper and easier to make a case for than a dedicated sprint. They are also safer: developers are less likely to make mistakes or introduce regressions if they are all trained on what to do and focused on working on the same problem together for a short period of time. Hack days shine a light on security risks, help educate the team (after a few hours of fixing a specific vulnerability, everyone should be able to spot it and fix it quickly in the future), and they get important bugs fixed without slowing the team down too much.

Relying on a separate hardening sprint to find and fix vulnerabilities and other bugs before releasing code is risky, and over the long term, it’s fighting a losing battle. Forcing teams to stop working on new development and instead focus on security issues for weeks or months at a time was an early, and desperate, part of Microsoft’s Trustworthy Computing Initiative. But it didn’t take Microsoft long to realize that this was costing too much and wasn’t making a sustainable improvement in the security or reliability of its software. This is when the company switched its emphasis to building security practices directly into its development life cycle instead.

Taking On and Paying Down Security Debt

Agile teams have learned to recognize and find ways to deal with technical debt. Technical debt is the sum of all the things that the team, or the people who came before them, should have done when building the system, but didn’t have the time to do, or didn’t know that they should have done. These include shortcuts and quick-and-dirty hacks, tests that should have been written or that broke and were left broken, bugs that should have been fixed, code that wasn’t refactored but should have been, and patches that should have been applied.

All of these things add up over time, making the system more brittle and harder to change, less reliable, and less secure. Eventually, some of this debt will need to be paid back with interest, and usually by people who weren’t around when the debt was taken on: like children having to pay off their parents’ mortgage.

When teams or their managers prioritize delivering features quickly over making sure that they are writing secure code, don’t invest in training, cut back on reviews and testing, don’t look for security problems, and don’t take time to fix them early, they take on security debt. This debt increases the risk that the system will be compromised, as well as the cost of fixing the problem, as it could involve redesigning and rewriting parts of the system.

Sometimes this may be the right thing to do for the organization. For example, Lean startups and other teams building out a Minimum Viable Product (MVP) need to cut requirements back to the bone and deliver a working system as soon as possible in order to get feedback and see if it works. It’s a waste of limited time and money to write solid, secure code, and to go through all the reviews and testing to make sure that the code is right, if there is a good chance that the team will throw the code out in a few days or weeks and start again—or pack up and move to another project or find another job because the system was a failure or the money ran out.

There are other cases where the people paying for the work, or the people doing the work, need to cut corners in order to hit an urgent deadline—where doing things now is more important than doing things right.

What is key is that everybody involved—the people building the system, the people paying for the work, and the people using the system to do their jobs—must recognize and accept that they are taking on real risks when they make these decisions.

This is how Microsoft and Facebook and Twitter achieved market leadership. It’s a high-risk/high-reward strategy that can pay off, and did pay off in these examples, but eventually all of these organizations were forced to confront the consequences of their choices and invest huge amounts of time and talent and money in trying to pay off their debts. They may never fully succeed: Microsoft has been fighting serious security problems since Bill Gates made “Trustworthy Computing” the company’s highest priority back in 2002.

This is because just like credit card debt, security debt incurs interest. A little debt that you take on for a short period of time is easy to pay off. But lots of debt left over a long time can leave the system, and the organization, bankrupt.

Keep track of the security debt that you are taking on, and try to be deliberate and transparent about the choices you are making. Make security debt and other technical debt, such as the following, visible to the owners of the system when you are taking on risk:

  • Open vulnerabilities and outstanding patches (i.e., how big your window of vulnerability is and how long it has been open)

  • Outstanding pen test or audit findings

  • High-risk code areas that have low automated test coverage

  • Gaps in scanning or reviews

  • Gaps in training

  • Time-to-detect and time-to-repair metrics (i.e., how quickly the team can identify and respond to emergencies)
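Metrics like these can often be derived from issue data you already track. Here is a minimal sketch in Python of computing the window of vulnerability and mean time-to-repair, assuming each finding records when it was found and, if fixed, when it was remediated (the field names are hypothetical, not from any particular tracker):

```python
from datetime import date

def debt_metrics(findings, today):
    """Summarize open vulnerability debt and mean time-to-repair.

    Each finding is a dict with hypothetical fields:
    'found' (date), 'fixed' (date or None), 'severity' (str).
    """
    open_findings = [f for f in findings if f["fixed"] is None]
    fixed = [f for f in findings if f["fixed"] is not None]
    repair_days = [(f["fixed"] - f["found"]).days for f in fixed]
    return {
        "open_count": len(open_findings),
        # Longest window of vulnerability among still-open findings
        "oldest_open_days": max(
            ((today - f["found"]).days for f in open_findings), default=0
        ),
        "mean_time_to_repair_days": (
            sum(repair_days) / len(repair_days) if repair_days else None
        ),
    }
```

Reporting numbers like `oldest_open_days` alongside the backlog makes the size of the debt, and how long it has been accumulating, visible to the owners of the system rather than buried in scanner reports.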

Write stories that explain what should be done to clean up the debt, and add these stories to the backlog so that they can be prioritized with other work. And make sure that everyone understands the risks and costs of not doing things right the first time, and that it will cost more to make things right later.
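Turning debt items into backlog stories can be as mechanical as generating story stubs from the tracked debt list, so nothing stays invisible. A hedged sketch (the story fields and wording are assumptions, not a standard format):

```python
def debt_story(item):
    """Turn a tracked debt item into a backlog story stub.

    'item' uses hypothetical fields: 'title', 'risk', 'remediation'.
    """
    return {
        "title": f"Pay down security debt: {item['title']}",
        "description": (
            f"Risk if left unfixed: {item['risk']}\n"
            f"Proposed remediation: {item['remediation']}"
        ),
        "labels": ["security-debt"],
    }

# Example: one debt item becomes one prioritizable backlog entry
backlog = [
    debt_story(
        {
            "title": "TLS 1.0 still enabled on legacy API",
            "risk": "downgrade attacks against older clients",
            "remediation": "disable TLS 1.0/1.1 and re-test legacy integrations",
        }
    )
]
```

A shared label such as `security-debt` also lets the Product Owner see, in one query, how much of the backlog is debt repayment versus new work.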

Key Takeaways

Here are some of the keys to effectively dealing with vulnerabilities:

  • New vulnerabilities are found in software every day. Vulnerability assessment and management needs to be done on a continuous basis.

  • Leverage tooling and APIs to get vulnerabilities out of reports and into the team’s backlog so that they can be scheduled and fixed as part of other work.

  • Help the team, especially the Product Owner and Scrum Master, understand vulnerabilities and why and how they need to be fixed.

  • Watch out for vulnerabilities in third-party dependencies, including open source frameworks, libraries, runtime stacks, and container images. Scan dependencies at build time, and stop the build if any serious vulnerabilities are found. Cache safe dependencies in artifact repositories or private image registries, and encourage developers to use them.

  • Automated configuration management and continuous delivery pipelines enable you to respond quickly and confidently to serious vulnerabilities. Knowing that you can build and push out a patch quickly is a major step forward in dealing with vulnerabilities.

  • Security Hack Days can be an effective way to get the team focused on understanding and fixing security vulnerabilities.
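The build-time gating mentioned in the takeaways reduces to a simple policy decision: fail the build if the dependency scan reports anything at or above a chosen severity. A minimal sketch, assuming the scanner emits findings with a `severity` field (the threshold and field names are assumptions, and real scanners have their own report formats):

```python
# Severity levels in ascending order of risk
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def should_fail_build(findings, threshold="high"):
    """Return True if any finding meets or exceeds the severity threshold."""
    floor = SEVERITY_ORDER.index(threshold)
    return any(
        SEVERITY_ORDER.index(f["severity"]) >= floor for f in findings
    )
```

A pipeline step would run the dependency scanner, parse its report into this shape, and exit nonzero when `should_fail_build` returns `True`, stopping the build before a vulnerable dependency ships.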
