© Scott Norberg 2020
S. NorbergAdvanced ASP.NET Core 3 Security https://doi.org/10.1007/978-1-4842-6014-2_11

11. Secure Application Life Cycle Management

Scott Norberg1 
(1)
Issaquah, WA, USA
 
I’ve spent pretty much the entire book up to this point talking about specific programming and configuration techniques you can use to help make your applications secure. Now it’s time to talk about how to verify that your applications are, in fact, secure. Let’s start by getting one thing out of the way: adding security after the fact never works well. Starting your security checks right before you go live “just to be sure” ensures that you won’t have enough time to fix more than the most egregious problems, and going live without doing any security checks at all means your customers’ information is at risk. As one recent example, Disney+ was hacked hours after going live.1
Table 11-1

Hours to fix bug based on introduction point (from NIST)

Stage Introduced      Stage Found
                      Requirements   Coding/Unit Testing   Integration   Beta Testing   Post-product Release
Requirements          1.2            8.8                   14.8          15.0           18.7
Coding/unit testing   N/A            3.2                   9.7           12.2           14.8
Integration           N/A            N/A                   6.7           12.0           17.3

As if that weren’t enough, bugs are more expensive to fix once they’ve made it to production. To illustrate this, Table 11-1 shows NIST’s table of hours to fix a bug based on when it was introduced and when it was found.2

Obviously, fixing bugs earlier in the process is easier than fixing them later. Improving security practices as you’re writing code is a necessary step in speeding up development and allowing you to focus on the features that your users will love. Reading this book, and thus knowing about best practices in security, is a great start! But you also need to verify that you’re doing (or not doing) a good job, so let’s explore what security professionals do.

Testing Tools

The vast majority of security assessments start with the security pro running various tools against your website. Sometimes the tools come back with specific findings; other times the tools come back with suspicious responses that the penetration tester uses to dig deeper. You’ve already touched upon how this works with the various tests we’ve done with Burp Suite. But since looking for suspicious results and digging deeper isn’t something you can do on a regular basis, let’s focus on the types of testing that are repeatable and automatable. Here is a list of types of testing tools available today:
  • Dynamic Application Security Testing (DAST): These scanners attack your website, using relatively benign payloads, in order to find common vulnerabilities.

  • Static Application Security Testing (SAST): These scanners analyze the source code of your website, looking for common security vulnerabilities.

  • Source Component Analysis (SCA): These scanners inventory the components of your website (such as specific JavaScript libraries or NuGet packages) and compare that list to known lists of vulnerable components, in order to find software that you should upgrade.

  • Interactive Application Security Testing (IAST): These scanners monitor the execution of code as it is running to look for various vulnerabilities.

There is a large ecosystem of other types of tools that will also detect security issues in your websites, most of which are targeted to the server, hosting, or network around a website. Since this book is targeted mainly to developers, I’ll focus on the tools that are most helpful in finding bugs that are caused by problems in website source code.

DAST Tools

DAST tools attack your website in an automated manner, though less effectively than a manual penetration tester would. A DAST scanner’s first step is usually called a passive scan, in which it opens your website, logs in (if appropriate), clicks all links, submits all forms, etc., in order to determine how big your site is. Then it sends various (and mostly benign) payloads via forms, URLs, and API calls, looking for responses that would indicate a successful attack. This step is called an active scan.

This approach means that the vast majority of DAST scanners are language agnostic – meaning with a few exceptions such as recognizing CSRF tokens or session cookies, they’ll scan sites built with most languages equally effectively. It also means that any language-specific vulnerabilities may not be included in the scan.

Let’s look at a few examples of payloads that a typical DAST scanner might send to your website in an attempt to find vulnerabilities:
  • Sending <script>alert([random number])</script> in a comment form. If an alert pops up later on in the scan with that random number, an XSS vulnerability is likely present in the website.

  • Sending ' WAITFOR DELAY '00:00:15' -- to see if a 15-second delay occurs in page processing. If so, then a SQL injection vulnerability almost certainly exists somewhere in the website.

  • Altering any XML requests to include a known third-party URL. If that URL is hit, then that particular endpoint is almost certainly vulnerable to XXE attacks.
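To see why the delay payload works, consider a hypothetical product search (the class, table, and method names here are illustrative, not from any real codebase). If input is concatenated into the SQL text, the injected WAITFOR DELAY executes; if it is passed as a parameter, it is treated as a harmless search string:

```csharp
using System.Data.SqlClient;

public static class ProductSearch
{
    // VULNERABLE: user input becomes part of the SQL text, so
    // "' WAITFOR DELAY '00:00:15' --" runs as a command and the
    // scanner can measure the 15-second delay in the response.
    public static SqlCommand BuildVulnerableQuery(
        SqlConnection conn, string term)
    {
        return new SqlCommand(
            "SELECT * FROM Product WHERE Name LIKE '%" + term + "%'",
            conn);
    }

    // SAFER: the input travels as a parameter value, never as SQL
    // text, so the injected payload cannot change the query.
    public static SqlCommand BuildParameterizedQuery(
        SqlConnection conn, string term)
    {
        var command = new SqlCommand(
            "SELECT * FROM Product WHERE Name LIKE '%' + @term + '%'",
            conn);
        command.Parameters.AddWithValue("@term", term);
        return command;
    }
}
```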

The scanner will go through dozens or hundreds of variations to attempt to account for the various scenarios that might occur in a website. For instance, if your XSS vulnerability exists within an HTML tag attribute instead of within a tag’s text, " onmouseover="alert([random number]) would be more likely to succeed than the preceding example, since it first breaks out of the attribute. To see why, Listing 11-1 shows the attack, with the user’s input in italics.
<input type="text" value="" onmouseover="alert([number])"/>
Listing 11-1

XSS attack within an HTML element attribute

The better scanners will account for a greater number of variations to find more vulnerabilities.
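You can see the defense against this class of attack for yourself using the encoder that ASP.NET Core’s Razor views apply automatically (this sketch assumes a console project referencing the System.Text.Encodings.Web package; the payload mirrors the hypothetical one from Listing 11-1):

```csharp
using System;
using System.Text.Encodings.Web;

class EncoderDemo
{
    static void Main()
    {
        // The attacker's input from the attribute-injection example.
        var payload = "\" onmouseover=\"alert(12345)";

        // Razor's @ syntax applies this encoding automatically;
        // calling it directly shows why the attack fails.
        var encoded = HtmlEncoder.Default.Encode(payload);
        Console.WriteLine(encoded);

        // The double quotes become &quot; entities, so the payload
        // can no longer break out of the value attribute.
    }
}
```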

Once your scan is complete, any DAST scanner will make note of any vulnerabilities it finds and assign a severity (how bad it is) to each one. Most scanners will also assign a confidence (how likely it is to actually be a problem) to each finding.

In most cases, running an active scan against a website is relatively safe in the sense that scanners don’t intentionally deface your website or delete data. I strongly recommend running DAST scans against test versions of your website instead of production, though, because the following issues are quite common:
  • Because the active scan sends hundreds of variations of these attacks to your websites, it will try to submit forms hundreds of times. If your website sends an email (or performs some other action) on every form submission, you will get hundreds of emails.

  • If you have a page that is vulnerable to XSS attacks and the scanner finds it, you will get hundreds of alerts any time you navigate to that page.

  • Scanners will submit every form, even password change forms. You may find that your test user has a new password (and one you don’t know) after you’ve run a scan.

  • Some scanners, in an attempt to finish the scan as quickly as possible, will hit your website pretty hard, sending dozens of requests every second. This traffic can essentially bring your website down in a DoS attack if your hardware isn’t particularly strong.

  • Unless configured otherwise, these scanners click links indiscriminately. If you have a link that does something drastic, like delete all records of a certain type in the database, then you may find all sorts of data missing after the scan has completed.

  • In extreme cases, a DAST scanner may stumble upon a problem that, when hit, brings your entire website down. I’ve had this issue scanning the ASP.NET WebForms version of WebGoat, the intentionally vulnerable site OWASP built for training purposes.

You can, if you know your website, exclude paths that send emails and delete items from your scans, but it is much safer, and you will get better results, if you run the scan against a test website, free of the restrictions necessary to run a scan safely against production.

One final tip in running DAST scanners: be sure to turn off any Web Application Firewall (WAF) that may be protecting your website. Most DAST scanners don’t try to hide themselves from WAFs, so running a DAST scan against a website with a WAF is basically testing whether your WAF can detect a clumsy attack. You want to test your website’s security, not your WAF’s.

DAST Scanner Strengths

DAST scanners can’t find everything, but they are good at finding errors that can be found with a single request/response. For instance, they are generally pretty good about finding reflected XSS because it’s relatively easy to perform a request and look for the script in the response. They are also generally good at finding most types of SQL injection attacks, because it is relatively easy to ask the database to delay a response and then to compare response times between the delayed request and a non-delayed request. You can also expect any respectable DAST scanner to find
  • Missing and/or misconfigured security headers

  • Misconfigured cookies

  • HTTPS certificate issues

  • Various HTML issues, such as allowing autocomplete on a password text box

  • Improperly protected file paths, such as file read operations that can be hijacked to display operating system files
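Most of the header and cookie findings in this list are one-time configuration fixes. Below is a minimal sketch for an ASP.NET Core 3 app; the header values shown are common hardening defaults, and where these calls belong in your own Startup class may differ:

```csharp
// In Startup.Configure: add common security headers to every response.
app.Use(async (context, next) =>
{
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
    context.Response.Headers.Add("X-Frame-Options", "DENY");
    context.Response.Headers.Add(
        "Content-Security-Policy", "default-src 'self'");
    await next();
});

// In Startup.ConfigureServices: harden the session cookie so
// scanners stop flagging it.
services.AddSession(options =>
{
    options.Cookie.HttpOnly = true;                           // no script access
    options.Cookie.SecurePolicy = CookieSecurePolicy.Always;  // HTTPS only
    options.Cookie.SameSite = SameSiteMode.Strict;            // CSRF mitigation
});
```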

DAST Scanner Weaknesses

The biggest complaint I hear about DAST scanners is that they produce too much “noise.” In other words, most scanners will produce a lot of false positives, duplicates, and unimportant findings that you’ll probably never fix. With this much noise in a report, it can sometimes be difficult to find the items you actually want to fix. (There are a few scanners out there that advertise their low false positive rate, but these generally have a high false negative rate too, meaning they will miss many genuine vulnerabilities that other scanners will catch.)

On top of that, DAST scanners are generally not great at finding vulnerabilities that require multiple steps to uncover. For instance, stored XSS and stored SQL injection vulnerabilities often go unfound even by good scanners. They also can’t easily find flaws in business logic, such as missing or misconfigured authentication and authorization for a page, improper storage of sensitive information in hidden fields, or mishandling of uploaded files. And since DAST scanners don’t have access to your source code, you can’t expect them to find the following:
  • Cryptography issues such as poorly implemented algorithms, use of insecure algorithms, or insecure storage of cryptographic keys

  • Inadequate logging and monitoring

  • Use of code components with known vulnerabilities

Differences Between DAST Scanners

There are a wide variety of DAST scanners for websites out there at a wide variety of prices. Several scanners are free and open source, and several others cost five figures to install and run for one year. It’s easy to look at online comparisons like the one from Sec Tool Market3 and think that most scanners are pretty similar despite the price range. They aren’t. They differ greatly when it comes to scan speed, results quality, reporting quality, integration with other software, etc. Your mileage will vary with the tools available.

If you are just getting started with DAST scanning, I highly recommend starting with Zed Attack Proxy (ZAP) from OWASP.4 ZAP is far from the best scanner out there, but it is free and easy to use, and serves as low-effort entry into running DAST scans.

Once you have gotten used to how ZAP works, I recommend running scans with the Professional version of Burp Suite.5 Burp is a superior scanner to ZAP, has dozens of open source plugins to extend the functionality of the scanner, and is available for a very reasonable price ($400/year at the time of this writing). Unless you have specific reporting needs, it’s extremely difficult to beat the pure scan quality per dollar that you get with Burp Suite.

Once your process matures and you need more robust reporting capabilities, you may consider using one of the more expensive scanners out there. Sales pitches can differ from actual product quality, though. Here are some things to watch out for:
  • Most scanners say they support modern Single-Page Application (SPA) JavaScript frameworks, but implementation quality can vary widely from scanner to scanner. If you have a SPA website, be sure to test the scanner against your websites before buying.

  • Authentication support can vary from scanner to scanner. Some scanners only support username and password for authentication, some scanners are highly configurable, and some scanners say that they're highly configurable but then most configuration options don’t work well. I recommend looking for scanners that allow you to script or record your login, since this is the most reliable means to log in that I’ve found.

  • As mentioned earlier, some scanners explicitly try to minimize false positives with the goal of making sure you're not wasting your time on mistakes by the scanner. But in my experience, scanners that minimize false positives have an unacceptably high number of false negatives. Most scanners have some flexibility here – allowing you to do a fast scan when needed, but also allowing a detailed scan when you have time. Generally, though, stay away from scanners whose main sales pitch is their ability to minimize false positives.

My last piece of advice when it comes to DAST scanners is that you should strongly consider running multiple brands of DAST scanners against your website. Some scanners are generally better than others, but some scanners are also better at finding certain types of issues than others. Pairing a scanner that is good at finding configuration issues with one that is good at finding code injection is a (relatively) easy way to get the best results overall.

SAST Tools

SAST scanners work by looking at your source code rather than attacking a running version of your website. While this means that SAST scanners are generally easier to configure and run, it also means that SAST tools are language specific. Perhaps because of this, there are far fewer SAST scanners available to .NET programmers than DAST scanners. Also, unlike with DAST scanners, there aren’t any really good free options out there; all good SAST scanners are quite expensive.

Since you may be on a budget, I’ll start by talking about free scanners. As I just mentioned, these aren’t the best scanners available, but they are better than nothing. Free scanners for .NET come in two types: those that you run outside of Visual Studio and those that run within it. Those that run outside of Visual Studio give you better reporting capabilities and make it easier to manage remediation of issues (in case you don’t want to fix everything immediately). Scanners that run within Visual Studio give immediate feedback, but don’t have reporting or bug tracking capabilities.

Two scanners I’ve used that analyze your source code outside of Visual Studio include

  • SonarQube

  • VisualCodeGrepper

Quite frankly, SonarQube hardly qualifies as a security scanner. I know many companies use it for security scanning, but they shouldn’t. SonarQube is worth considering for its superior ability to pick up code maintainability issues, but it tends to miss obvious security issues that any scanner should catch.

VisualCodeGrepper is a bit better at finding security issues, but is a less polished product overall. Unlike SonarQube, which has a fairly polished UI, VisualCodeGrepper offers only simple exports. I personally wouldn’t depend on either to find security issues, but it is almost certainly worth using one or both of these occasionally for a sanity check against your app.

As mentioned earlier, scanners that work within Visual Studio are better at giving immediate feedback, but have no reporting capabilities. Here’s a list of the open source ones I’ve used:

  • FxCop analyzers

  • Puma Scan

  • Security Code Scan

Of these, I actually like FxCop, the analyzer that Visual Studio asks you to install, the least of the three. Both Puma Scan and Security Code Scan are better at finding issues than FxCop. None of the three were impressive, though. But given the minimal effort to install and use, you should be using one of these three to help you find security issues.

Using Visual Studio Scanners as a SAST Scanner

Given how poor SonarQube’s security coverage is and how undeveloped a product VisualCodeGrepper is, you may want to use one of the scanners that work within Visual Studio as an external scanner if you need to run scans during your build process. While using these scanners this way is not directly supported, you can do so with a little bit of work. They are built on Roslyn, a framework that allows you to load and analyze code using C#. Unfortunately, right now you have to use a .NET Framework project to do so, and you need to use a third-party library to analyze Core code.

First, you need to install several projects from NuGet:
  • Buildalyzer and Buildalyzer.Workspaces: These allow you to parse .NET Core projects within .NET Framework Roslyn parsers.

  • Microsoft.CodeAnalysis and Microsoft.CodeAnalysis.Workspaces: These allow you to run the Roslyn parsers which load and interpret project code.

Listing 11-2 shows the code to load all of the projects in the solution you want analyzed.
private static List<Project> GetProjects()
{
  var workspace = new AdhocWorkspace();
  var projects = new List<Project>();
  var solutionFilePath = PATH_TO_YOUR_SOLUTION_FILE;
  var manager = new AnalyzerManager(solutionFilePath);
  foreach (var key in manager.Projects.Keys)
  {
    var analyzer = manager.GetProject(manager.Projects[key].ProjectFile.Path);
    projects.Add(analyzer.AddToWorkspace(workspace));
  }
  return projects;
}
Listing 11-2

Loading projects from a solution

The AdhocWorkspace and the Project are objects from the Microsoft.CodeAnalysis namespace. It is the list of Projects that we’ll analyze using our analysis libraries. If you are analyzing a project that uses the older framework, you can use these classes directly. But since we’re analyzing projects using the newer Core, we’ll need to pull projects using the AnalyzerManager from the Buildalyzer package.

The rest of the code is fairly easy to understand, so let’s take a look at the code that pulls the analyzers from your scanner library.
private static void LoadAnalyzersFromAssembly(
  List<DiagnosticAnalyzer> analyzers, Assembly assembly)
{
  foreach (var type in assembly.GetTypes())
  {
    if (type.GetCustomAttributes(
      typeof(DiagnosticAnalyzerAttribute), false).Length > 0)
    {
      var attribute = (DiagnosticAnalyzerAttribute)type.GetCustomAttribute(
        typeof(DiagnosticAnalyzerAttribute));
      if (attribute.Languages.Contains("C#"))
        analyzers.Add(
          (DiagnosticAnalyzer)Activator.CreateInstance(type));
    }
  }
}
Listing 11-3

Pulling analyzers from the code library

If you know reflection, Listing 11-3 is pretty straightforward. We load all classes in the assembly and look to see which ones have a DiagnosticAnalyzer attribute. We create an instance of each of these classes and add it to the List provided by the calling code.
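As a usage sketch, you might feed Listing 11-3 the assembly of whichever analyzer package you’ve chosen; the DLL path below is purely illustrative:

```csharp
var analyzers = new List<DiagnosticAnalyzer>();

// Load the analyzer assembly by path; any Roslyn analyzer DLL
// (e.g., Security Code Scan's) should work here.
var assembly = Assembly.LoadFrom(@"C:\tools\SecurityCodeScan.dll");
LoadAnalyzersFromAssembly(analyzers, assembly);

Console.WriteLine($"Loaded {analyzers.Count} C# analyzers");
```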

Next, we need to get the findings themselves.
protected static List<Diagnostic> GetFindings(
  ImmutableArray<DiagnosticAnalyzer> analyzers,
  List<Project> projects)
{
  var cancellationToken = default(CancellationToken);
  var diagnostics = new List<Diagnostic>();
  foreach (var project in projects)
  {
    var compilation = project.GetCompilationAsync(
      cancellationToken).Result;
    var compilerErrors = compilation.GetDiagnostics().
      Where(i => i.Severity == DiagnosticSeverity.Error);
    if (compilerErrors.Count() == 1 &&
      compilerErrors.Single().Id == "CS5001")
    {
        compilation = compilation.WithOptions(
          new CSharpCompilationOptions(
            OutputKind.DynamicallyLinkedLibrary));
    }
    var compilationWithAnalyzers =
      compilation.WithAnalyzers(analyzers);
    compilationWithAnalyzers.GetAnalyzerDiagnosticsAsync().
      Result;
    var diagnosticResults = compilationWithAnalyzers.
      GetAllDiagnosticsAsync().Result;
    foreach (var diag in diagnosticResults)
    {
      if (diag.Location == Location.None ||
        diag.Location.IsInMetadata)
      {
        diagnostics.Add(diag);
      }
      else
      {
        foreach (var document in project.Documents)
        {
          var tree = document.GetSyntaxTreeAsync(
            cancellationToken).Result;
          if (tree == diag.Location.SourceTree)
          {
            diagnostics.Add(diag);
          }
        }
      }
    }
  }
  return diagnostics;
}
Listing 11-4

Code to pull DiagnosticAnalyzer findings from projects

A full explanation of Listing 11-4 is outside the scope of this book, since a full explanation would require an understanding of Roslyn. There are a few things worth highlighting, though:
  • The compiler will throw an error if we try to compile a project that is not meant to be executed directly (it has no Main method), so we need to check for that specific error (CS5001) and, if present, recompile with the output type set to “DynamicallyLinkedLibrary.”

  • The compilation object already understands how to use our analyzers, so we merely need to let our compilation object know which analyzers we’re using by calling the compilation.WithAnalyzers method.

  • The foreach loop looks for the source of each finding, and if it is actionable, we make sure it’s added to our list of findings to be returned.

Once you have the list of Diagnostic objects, you can parse the results however you need to by creating bugs in your bug tracking system, creating a dashboard, or both.
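As one sketch of that parsing step, the method below (written to slot in next to Listing 11-4, assuming the same Microsoft.CodeAnalysis namespaces) groups findings by rule ID and prints a console summary; swapping the Console calls for your bug tracker’s API would automate bug creation:

```csharp
private static void PrintReport(List<Diagnostic> diagnostics)
{
  // Group findings by rule ID so duplicates roll up together.
  foreach (var group in diagnostics
    .Where(d => d.Severity >= DiagnosticSeverity.Warning)
    .GroupBy(d => d.Id)
    .OrderByDescending(g => g.Count()))
  {
    Console.WriteLine($"{group.Key}: {group.Count()} finding(s)");
    foreach (var diag in group)
    {
      // GetLineSpan resolves the file path and zero-based line number.
      var span = diag.Location.GetLineSpan();
      Console.WriteLine(
        $"  {span.Path}({span.StartLinePosition.Line + 1}): " +
        diag.GetMessage());
    }
  }
}
```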

Final Notes About Free SAST Scanners

While there was some variability in the effectiveness of these various scanners, a few patterns emerged:
  • None of the scanners looked directly at the cshtml pages, and only one of them (VisualCodeGrepper) looked at them indirectly. As a result, most scanners will not be able to find the vast majority of XSS issues.

  • The scanners consistently evaluated one line of code at a time, which means that if user input is added to a SQL query on one line but is sent unprotected to the database in another, the scanners wouldn’t find the vulnerability.

  • The scanners were generally pretty “dumb,” meaning they either flagged all possible instances of a vulnerability (such as flagging each method without an [Authorize] attribute as lacking protection, even though you almost always want some pages to be accessible to non-authenticated users) or ignored them all.
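The [Authorize] point deserves a concrete example. In the hypothetical controller below (IAccountService is an invented dependency), a line-at-a-time scanner flags Index and AccountDetails identically, even though only the second is a real finding:

```csharp
public class HomeController : Controller
{
  private readonly IAccountService _accountService;

  public HomeController(IAccountService accountService) =>
    _accountService = accountService;

  // Intentionally anonymous - flagging this for a missing
  // [Authorize] attribute is a false positive.
  public IActionResult Index() => View();

  // Genuinely dangerous - returns account data with no
  // [Authorize] check, but looks identical to a "dumb" scanner.
  public IActionResult AccountDetails(int accountId) =>
    View(_accountService.GetById(accountId));

  [Authorize]
  public IActionResult Profile() => View();
}
```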

Any help is better than no help, though, so you should consider using one or more of these, especially if you can get feedback directly in Visual Studio.

Commercial SAST Scanner Quality

Commercial SAST scanners, like Checkmarx and Fortify, are much better than the ones mentioned here. Besides the fact that most commercial scanners are more configurable than free scanners, and thus are better able to find problems with your apps, these scanners are smart enough to understand simple context (like the separated SQL query creation and call I mentioned earlier). Unfortunately, they are also significantly more expensive. But if you can afford them, they’re well worth evaluating and then buying the best one for your needs.

SCA Tools

Many DAST and SAST scanners do not check for vulnerable libraries that you’ve included in your website. For instance, if a vulnerability is found in your favorite JavaScript framework, you’re often on your own to find the outdated and insecure component. SCA tools are intended to fill this gap for you. These tools either have their own database of vulnerabilities or go out and check the National Vulnerability Database and other similar databases in real time, and then compare the component names and versions in your website to a list of known-bad components. If anything matches, you are notified.

There are several free and commercial options for you to choose from, though the OWASP Dependency Check6 does a great job and is free.

Caution

A very large number of vulnerabilities in lesser-known components never make it to these vulnerability databases because security researchers just aren’t looking at them. And component managers often fix vulnerabilities without explicitly saying so. While it is a good idea to use SCA tools to check for known-bad components, don’t assume that if a component passed an SCA check it is secure. Keeping your libraries updated, regardless of whether a known security issue exists, is almost always a good idea.

Remember, attackers have access to these databases too. If your component scan does find an insecure component, it is important to update it as soon as possible. This is true even if you don’t use the particular feature that has the vulnerability: once a component is identified as vulnerable, you may miss subsequent updates to the list of vulnerable features in that component, and if a feature you do use shows up on that list later and you miss it, you will have opened a door for attackers to get in.

IAST Tools

As mentioned earlier, IAST tools combine source code analysis with dynamic testing. The way these scanners work is that you install their service on the server and/or in the website, configure the service, and then browse the website (either manually or via a script). You don’t need to attack the website like a DAST tool would – the IAST tool looks at how code is being executed and determines vulnerabilities based on what it sees.

On the one hand, this seems like the worst of both worlds, because an IAST tool requires language-specific monitoring and a running app to test. On the other hand, it can be the best of both worlds: you get specific lines of code to fix, like a SAST tool, but the scanner has to make fewer guesses about what is actually a vulnerability than a DAST tool does.

One limitation of IAST scanners is very much worth mentioning: because they work by looking at how code is processed on the server, they won’t find problems in JavaScript. This is a very large problem because, with the explosion of Single-Page Application (SPA) frameworks, more and more of a website’s logic lives in JavaScript, not server-side code. It will be interesting to see whether any IAST vendors find a solution to this problem.

IAST is still a relatively new concept, which means that
  • These scanners are not as mature as their DAST and SAST counterparts.

  • There are fewer options (both free and commercial) out there.

  • These tools aren’t used nearly as much as other types of scanners.

But as these tools become more well known, and as they become further developed, they will produce better results. I’d recommend getting familiar with them sooner rather than later.

Caution

I cannot emphasize enough that none of these tools – DAST, SAST, SCA, IAST, or any combination of these – will find anything close to all of your vulnerabilities. I encounter far too many people who say “[tool] verified that I have no vulnerabilities.” If you rely on these tools to find everything, you will be breached. These tools will only find your easy-to-find items.

Kali Linux

Kali Linux isn’t a type of testing tool or an individual tool in itself; instead, it’s a distribution of Linux that comes with hundreds of free and open source security tools preinstalled. In addition to tools to scan web applications, Kali includes wireless cracking tools, reporting tools, vulnerability exploitation tools, etc. I actually recommend that you don’t use Kali, for the simple reason that for every tool you’ll actually use, Kali provides several dozen that you won’t. It’s easier to simply install the tools you use, but your mileage may vary.

Integrating Tools into Your CI/CD Process

As more and more developers and development teams look to automate their releases, it’s natural to want to automate security testing. Most security tools are relatively easy to automate, and some even advertise how easy it is to integrate those tools into your Continuous Integration/Continuous Deployment (CI/CD) pipelines. But automating security testing takes some forethought, because despite the hype, they won’t integrate into your processes as well as advertised.

Before I get started, let’s go over what most developers and managers ask for when they want to integrate security tools into a CI/CD process:
  1. Developer checks in code.

  2. Automated build starts running.

  3. Either during the build or immediately after, SAST and SCA scans are run.
     a. If any vulnerabilities are found above a certain severity, then the build stops, a bug is created in your work tracking system, and the developer responsible for creating the vulnerability is notified.

  4. After the build completes, code is automatically deployed to the test environment.

  5. A DAST scan automatically starts running against the test environment.
     a. If a security vulnerability at or above a certain severity is found, then the process stops, a bug is created, and the developer is informed.

  6. The build is blocked until all issues are fixed.
Automating your SCA scanner would be relatively easy and relatively painless. I highly recommend running one after each build as outlined previously. Getting this to work as-is for other types of scanners, though, would take much more work than merely setting up the processes because of limitations inherent in these types of security scanners. Let’s dig into why.

CI/CD with DAST Scanners

There are several challenges with running DAST scanning in an automated fashion.

First, good DAST scans take time. My experience is that you can expect a minimum of an hour to run a scan with a good scanner against a nontrivial site. Scans that take several hours are not at all unusual. Slow scanners can even take days when scanning large sites. Several hours is far too long to wait for most companies’ CI/CD processes.

Second, not all results are worthy of your attention. One of the things I’ve heard said about DAST scanners is that “because they attack your website, they don't have the problem with false positives that SAST scanners have.” This is patently false. Good DAST scanners will find many security issues but will also churn out a lot of false positives. Some findings simply require a human to verify whether a vulnerability actually exists.

On top of this, you can expect your DAST scanner to churn out a large number of duplicates. In particular, DAST scanners tend to report each and every header issue they find, despite the fact that headers are almost always configured at the site level in ASP.NET Core websites. In other words, if you have a vulnerability in shared code, you can expect that vulnerability to show up on each page that uses it.

Finally, many DAST scanners are hard to configure. In particular, authentication and crawling can be difficult for scanners to get right. You can work around these issues by configuring the scanner to authenticate to your site and to crawl pages it missed, but these configurations tend to be fragile.

Instead of running DAST scans automatically during your CI/CD process, you will likely have better luck running them on a regular schedule. I recommend you do the following:
  • Run the scanner periodically, such as every night or every weekend.

  • Make it a part of your process to analyze the results the next day and report findings to the development team as soon as practical.

  • Establish SLAs (Service-Level Agreements) that the development team will fix all High findings within X days, Medium findings within Y days, etc., so vulnerabilities don't linger forever.

To be most effective, you’ll want a DAST tool that can help you manage duplicates, highlight new items since the previous scan, etc. Without that ability, managing the list will become too cumbersome and won’t get done.

Caution

I said earlier that most DAST scanners churn out a lot of false positives, and that some DAST scanners advertise that they don’t. It is worth emphasizing that, in my experience, those low-false-positive scanners achieve this by missing obvious items that most other DAST scanners catch. I’d much rather catch more items and have some scan noise than have a small report that misses serious, easily detectable problems.

CI/CD with SAST Scanners

For CI/CD purposes, SAST scanners have one advantage over DAST scanners: SAST scans take much less time to complete – usually minutes to hours instead of hours to days. Unfortunately, SAST scanners often have a much higher false positive rate than most DAST scanners. If you are going to run a SAST scanner as part of your CI/CD process, you should strongly consider setting it up so it reports only new findings. Otherwise, the same process I recommended for DAST scanners works well for SAST scanners, too.
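One way to report only new findings is to keep a baseline of accepted findings in source control and fail the build only on findings that aren’t in it. The sketch below assumes a simplified JSON report format; real SAST tools each have their own output, so treat the field names and file names as placeholders.

```python
import json
import pathlib

# Hypothetical CI gate: fail the build only when the SAST report contains
# findings that are not already in a baseline file committed to the repo.

def load_ids(path):
    """Reduce each finding to a stable 'rule:file' identifier."""
    return {f"{f['rule']}:{f['file']}"
            for f in json.loads(pathlib.Path(path).read_text())}

def gate(report_path, baseline_path):
    """Return a nonzero exit code (failing the CI step) on new findings."""
    new = load_ids(report_path) - load_ids(baseline_path)
    for finding in sorted(new):
        print(f"NEW FINDING: {finding}")
    return 1 if new else 0

# Sample data standing in for real tool output:
pathlib.Path("report.json").write_text(json.dumps([
    {"rule": "SqlInjection", "file": "OrdersController.cs"},
    {"rule": "WeakHash", "file": "TokenHelper.cs"},
]))
pathlib.Path("baseline.json").write_text(json.dumps(
    [{"rule": "WeakHash", "file": "TokenHelper.cs"}]))
print(gate("report.json", "baseline.json"))  # → prints the new finding, then 1
```

The known `WeakHash` finding stays visible in the baseline file for later cleanup, but it no longer blocks every build, which keeps the team from learning to ignore the gate.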

CI/CD with IAST Scanners

IAST scanners are marketed as much better solutions for CI/CD processes than SAST and DAST scanners, and most have integrations with bug tracking tools built in. However, IAST scanners still aren’t 100% accurate, meaning a given scan can produce a large number of false positives or duplicates. As with SAST and DAST scans, if you automatically create bugs based on the results of an IAST scan, you may end up with a lot of useless bugs in your bug tracking system. On top of that, IAST scanners need a running website in order to function properly. Given those limitations, it may make the most sense to run IAST analysis alongside your QA testing, since QA already exercises a running site.

Catching Problems Manually

As mentioned earlier, scanners can’t catch everything. Most notably, scanners can’t reliably catch problems with implementation of business logic, such as properly protecting secure assets or safely processing calculations (e.g., calculating the total price in a shopping cart). For these types of issues, you need a human to take a look. Fortunately, this isn’t terribly difficult, and it starts with something you may already be doing: code reviews.
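To make the shopping-cart example concrete, here is a sketch, in Python with an invented catalog, of the kind of check a human reviewer looks for and a scanner can’t evaluate: the server recomputes the total from its own data instead of trusting anything price-related from the client.

```python
from decimal import Decimal

# Business-logic flaw a scanner won't catch: trusting a client-supplied price.
# The catalog, product IDs, and prices here are made up for illustration.
CATALOG = {"widget": Decimal("9.99"), "gadget": Decimal("24.50")}

def cart_total(items):
    """items: list of (product_id, quantity) pairs from the client.

    Note the client sends only IDs and quantities, never prices."""
    total = Decimal("0")
    for product_id, quantity in items:
        if quantity < 1:
            # Also blocks the classic negative-quantity discount trick
            raise ValueError("quantity must be positive")
        total += CATALOG[product_id] * quantity
    return total

print(cart_total([("widget", 2), ("gadget", 1)]))  # → 44.48
```

A scanner fuzzing this endpoint sees only valid-looking responses; only a human reading the code (or probing the logic) notices whether a client-posted price field would have been honored.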

Code Reviews and Refactoring

You may already be using code reviews as a way to get a second opinion on code quality, because easy-to-read code is easier to debug, easier to maintain, and so on. Easy-to-read code also makes security issues easier to find. After all, if no one can understand your code, no one will be able to find security issues in it. So, now you have another reason to perform regular code reviews and fix the issues found during them.

That being said, you should consider having separate code reviews to look only for security problems. I’ve been in several situations where I’ve needed to test my own software, and I’ve found that I find many more software bugs if I’m operating purely in bug-hunting mode instead of fixing items as I go. The same is true for finding security issues. If I’m looking for a wide variety of problems, I’m more likely to miss harder-to-find security issues. Security-specific reviews help avoid this problem.

Finally, there are very few security professionals who can find flaws in source code. If you find one, though, you should consider bringing them in periodically to review your code manually. Aside from the straightforward issues that you now know about after reading this book, there are several harder-to-find items that surface only when someone notices something suspicious and takes the time to dig into it more thoroughly. Significant security experience makes that process much faster and easier.

Hiring a Penetration Tester

Another way to catch issues manually is to hire a professional penetration tester. Good penetration testers are expensive and hard to find, but they will find issues that scanners, code reviews, and bad penetration testers never would.

If you do hire a penetration tester, be sure you know what the penetration tester’s process will be. I have heard from multiple sources that there are a few (or maybe more than a few) unethical and/or incompetent “penetration testers” who will simply run a scan of your website with Burp Suite and call it a “penetration test.” To guard against this, you should look for a penetration tester whose process looks similar to the one outlined by the EC-Council7 (provider of the Certified Ethical Hacker exam). I outlined a similar process earlier in the book, but the CEH approach is worth repeating here:
  1. Reconnaissance

  2. Scanning and Enumeration

  3. Gaining Access

  4. Maintaining Access

  5. Covering Tracks

I’ll go over each step in a little more detail.

Reconnaissance

The first step in any well-done hacking effort is to find as much information about the company or site you’re hacking as possible. For a website, the hacker will try to figure out what the website does, what information is stored, what language or framework it is written in, where it is hosted, and any other information to help the hacker determine where to start hacking and help them know what they should expect to find.

For more thorough tests, the hacker may look for your company’s employees via LinkedIn or similar means as possible phishing targets, do some light scanning, or even dive into your dumpsters looking for sensitive information in discarded materials.

Scanning and Enumeration

The next step is to scan your systems looking for vulnerabilities. Depending on the scope of the engagement, you may ask the hacker to scan just production, just test environments, just focus on websites, include networks and servers, etc. You should know what is being scanned and with which tools to avoid the Burp-only “penetration test” mentioned earlier.

After the automated scans, the hacker should look at the results and attempt to find ways into your systems that automated scans can’t find, such as exploiting flaws in your business logic, or look for anomalies in the scan results that point to items the scanner missed.

Gaining Access

After scanning, a normal penetration testing engagement would involve the hacker trying to use the information they gathered from the scans to infiltrate your systems. This is an important step because it is important for you to know what can be exploited by a malicious actor. For instance, as I talked about earlier in the book, a SQL injection vulnerability in a website whose database user permissions are locked down is a much less serious problem than a SQL injection vulnerability in a website whose database user has administrator permissions.

Maintaining Access

Most malicious attackers don’t want to just get in, they want to stay long enough to accomplish their goal of stealing information, destroying information, defacing your website, installing ransomware, or something else entirely. An ethical hacker will attempt to probe your system to know which of these a malicious hacker would be able to do.

Covering Tracks

As already mentioned several times so far in this book, hackers don’t want to be detected. Yes, this means that hackers will try to be stealthy in their attacks. But it also means that good hackers will want to delete any proof of their presence that may exist in your systems. This includes deleting relevant logs, deleting any installed software or malware, etc. Again, this helps you as the website owner know what a hacker can (and can’t) do with your systems.

If your penetration tester doesn’t do all of these steps and/or can’t walk you through how these steps will be performed, then you are probably not getting a full penetration test. That doesn’t mean the service isn’t valuable; it just means you need to be clear about what you’re paying for so you get value for your money.

When to Fix Problems

I’ve encountered a wide range of attitudes when it comes to the speed with which you need to fix problems found by scanners. On one extreme, one of my friends in security puts bugs into two categories: ones you fix immediately and ones that can wait until the next sprint. However, that isn’t practical for most websites. On the other extreme, I’ve encountered development teams that have no problem pushing any and all security fixes off indefinitely so they can focus on shipping features. This is just asking for problems (and to be fired). If neither of these extremes is the right answer, what is?

The answer will depend greatly on the size of your development team, the severity of the defects, the value of the data you’re protecting, the value of the immediate commitments you need to meet, the tolerance your upper management has for risks, etc. There is no one-size-fits-all answer here. There are a few rules of thumb I follow that seem to work in most environments, though:
  • Fix obvious items, like straightforward XSS or SQL injection attacks, immediately.

  • Fix any easy items, such as adding headers, in the next release or two.

  • Partial risk mitigation is often OK for complex problems. If a full fix for a security issue would take a week of development time, but a partial fix that stops most attacks can be added in a few hours, ship the partial fix and put the full fix in your backlog.

  • For complex vulnerabilities that are difficult to exploit, communicate the vulnerability to senior management and ask for guidance. Your company may decide to simply accept the risk here.

  • Get in the habit of finding and fixing vulnerabilities before they reach production. In other words, run frequent scans, and don’t let newly discovered vulnerabilities make it into production. You have a difficult enough time protecting against zero-day attacks; don’t knowingly introduce new vulnerabilities.

  • Have a plan to fix the security vulnerabilities on your backlog. Communicate the plan, and the risk, to upper management. Depending on the risk, budget, and other factors, they may hire programmers to help mitigate the risk sooner rather than later.

I want to emphasize that these are guidelines, and your specific needs may vary. But I find that these guidelines work in more places than not.
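If you do adopt severity-based SLAs like those described earlier, a small helper that turns a finding’s severity into a due date makes the policy easy to enforce in triage scripts. This is a sketch only: the day counts are placeholders, and you should pick values that match your own risk tolerance.

```python
from datetime import date, timedelta

# Placeholder SLA: days allowed to fix a finding, by severity.
SLA_DAYS = {"critical": 1, "high": 7, "medium": 30, "low": 90}

def due_date(severity, found_on):
    """Deadline for fixing a finding under the SLA."""
    return found_on + timedelta(days=SLA_DAYS[severity])

def overdue(severity, found_on, today):
    """True once a finding has lingered past its SLA deadline."""
    return today > due_date(severity, found_on)

print(due_date("high", date(2020, 6, 1)))                     # → 2020-06-08
print(overdue("medium", date(2020, 6, 1), date(2020, 8, 1)))  # → True
```

Run against your findings list each morning, a check like this turns “vulnerabilities shouldn’t linger forever” from a good intention into a report someone has to answer for.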

Learning More

If you want to learn more, I would suggest you start with The Web Application Hacker’s Handbook by Dafydd Stuttard (CEO of Portswigger, maker of Burp Suite) and Marcus Pinto. There’s not a lot of information here specific to the Microsoft web stack, but it’s the best book by far I’ve encountered on penetration testing websites.

For security-related news, I like The Daily Swig,8 another Portswigger product. Troy Hunt (https://troyhunt.com) is a Microsoft MVP/Regional Director who blogs regularly on security and is owner of haveibeenpwned.com, though he tends to focus on which companies got hacked recently more than I particularly care for. Otherwise, reading security websites like SecurityWeek and Dark Reading can keep you up to date with the latest security news.

If you want to learn by studying for a certification, I’d recommend studying for the Certified Ethical Hacker9 (CEH) or the Certified Information Systems Security Professional10 (CISSP). Both of these certifications dive deeply into other areas of security that may not be of interest to you as a web developer, and both require several years’ worth of experience before actually getting the certification, but you can learn quite a bit by studying for these exams. Studying for the GIAC Web Application Penetration Tester11 (GWAPT) exam is also a possibility, but I’ve been unable to find the same variety of study materials for this exam as is available for the CEH or CISSP exams.

Finally, I would encourage you to try breaking into your own websites (in a safe test environment, of course). It’s one thing to read about various techniques that can be used to break into a website, but very few things teach as well as experience. What can you break? What can you steal? How can you prevent others from doing the same?

Summary

Knowing what secure code looks like is a good start to making your websites secure, but if you can’t work those techniques into your daily development, your websites won’t be secure. To help you find vulnerabilities, I covered various types of testing tools and then talked about how to integrate these into your CI/CD processes. Finally, I talked about how to catch issues manually, since tools can’t catch all problems.
