© Scott Norberg 2020
S. Norberg, Advanced ASP.NET Core 3 Security, https://doi.org/10.1007/978-1-4842-6014-2_2

2. General Security Concepts

Scott Norberg, Issaquah, WA, USA

Now that we’ve talked about ASP.NET Core, it’s worth taking the time to cover some security-related topics that are included in most security courses, but unfortunately are ignored in most software development training materials. I will highlight areas where these concepts are most applicable to software developers and not delve too deeply into other areas of security that are still important to websites, such as network or physical (i.e., can anyone access my servers?) security. Unfortunately, there won’t be much flow to this chapter – these concepts aren’t necessarily related other than being concepts in security that we’ll dive into more deeply later in the book.

Please do not skip this chapter. It is high level and may not seem directly applicable to developing software at first, but we’ll be laying foundations for concepts covered later in the book.

What Is Security? (CIA Triad)

At first glance, the question “what is security?” seems to have an obvious answer: stopping criminals from breaking into your software systems to steal or destroy data. But stopping criminals from bringing down your website by flooding your server with requests would, by most definitions, also be covered under security. Stopping rogue employees from stealing or deleting data would also fall under most people’s definition of security. And what about stopping well-meaning employees from accidentally leaking, damaging, or deleting data?

The definition of security that most professionals accept is that the job of security is to protect the Confidentiality, Integrity, and Availability, also known as the “CIA triad,”1 of your systems, regardless of whether any criminal intent is involved. (There is a movement to rename this the “AIC triad” to avoid confusion with the Central Intelligence Agency, but it means the same thing.) Let’s examine each of these components in further detail.

Confidentiality

When most software developers talk about “security,” it is often protecting Confidentiality that they’re most concerned about. We want to keep our private conversations private, and that’s obvious to everyone involved. Here are examples of protecting Confidentiality that you should already be familiar with as a web developer:
  • Setting up roles within your system to make sure that low-privilege users cannot see the sensitive information that high-privilege users like administrators can.

  • Setting up certificates to use HTTPS prevents hackers sitting in between a user’s computer and the server from listening in on conversations.

  • Encrypting data, such as passport numbers or credit card numbers, to prevent hackers from making sense of your data if they were to break into your system.

If this were a book intended for security professionals rather than software developers, I would also cover such topics as protecting your servers from data theft or how to prevent intruders from seeing sensitive information written on whiteboards, but these are out of scope for the majority of software developers.

Integrity

Preventing hackers from changing your data is also a vitally important, yet frequently overlooked, aspect of security. To see why protecting integrity is important, let’s demonstrate an all-too-real problem in a hypothetical e-commerce site where integrity was not protected:
  1. A hacker visits an e-commerce website and adds an item to their cart.

  2. The hacker continues through the checkout process to the page that confirms the order.

  3. The developer, in order to protect users from price fluctuations, stores the price of the item when it was added to the cart in a hidden field.

  4. The hacker, noticing the price stored in the hidden field, changes the price and submits the order.

  5. The hacker is now able to order any product they want at any price (which could include negative prices, meaning the seller would pay the hacker to “buy” products).
Most e-commerce websites have solved this particular problem, but in my experience, most websites could do a much better job of protecting data integrity in general. In addition to protecting prices in e-commerce applications, here are several areas in which most websites could improve their integrity protections:
  • If a user submits information like an order in an e-commerce site or a job application, how can we be sure that no one has tampered with this data?

  • If a user logs into a system to enter text to be displayed on a page, as you would with a Content Management System (CMS), how can we be sure that no one has tampered with this information, preventing website defacement?

  • If we send data from one server to another via an API, how can we make sure that what was sent from server A made it safely to server B?

Fortunately, data integrity is easier to check than one would think at first glance. You’ll see how later in the book.
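To give you a taste of how, one common integrity check is a keyed hash (HMAC): the server signs a value before handing it to the client and verifies the signature when the value comes back. Here is a minimal sketch in Python; the secret key and the price value are illustrative, and the ASP.NET Core equivalents are covered later in the book:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # illustrative; never sent to the client

def sign(value: str) -> str:
    """Return a hex HMAC tag the server can later verify."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def verify(value: str, signature: str) -> bool:
    """True only if value was signed by us and has not been altered."""
    expected = sign(value)
    return hmac.compare_digest(expected, signature)

# The server renders the hidden price field along with its signature:
price = "19.99"
tag = sign(price)

# An attacker who edits the price cannot forge a matching tag:
assert verify(price, tag)
assert not verify("-19.99", tag)
```

Because the key never leaves the server, a tampered hidden field fails verification and the order can be rejected.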

Availability

Most developers would probably agree that protecting your websites against Denial of Service (DoS) attacks (when an attacker sends enough requests to a web server to prevent it from responding to “real” requests) and Distributed Denial of Service (DDoS) attacks (when an attacker sends requests from many different sources so that networks cannot simply block one IP to stop the attack) would fall under the “security” umbrella. But taking proper backups and testing their validity is also very much a responsibility of security, because it directly affects the availability of the website if a problem were to occur.

I will generally focus more on confidentiality and integrity than availability in this book, since defenses against most attacks on website availability fall outside the responsibility of the average software developer. There is one thing worth noting, however. As you will see later on, protecting the confidentiality or integrity of your data can harm availability, since these protections cause extra processing to occur, making a website more susceptible to certain types of attacks against availability. It will be tempting for some developers to skip protections in the name of efficient processing, which improves availability. In many cases, this is simply the wrong approach to take. It is rare (though not unheard of, as you’ll see later) for one feature to cause a serious availability-related vulnerability, so it is usually best to focus on confidentiality and integrity and fix any availability issues as they arise.

In other words, in most cases, focusing on confidentiality and integrity is better than skipping protections in the name of picking up a few milliseconds of performance.

Definition of “Hacker”

While we’re defining what “security” means, let’s take a moment to state explicitly how this book will use the word “hacker,” as well as what we mean by a hacker doing “damage.” When you hear about hackers harming your website, it is easy to picture a hacker breaking into your system to make your website sell unsavory products. But should we call someone a hacker if they accidentally take information they shouldn’t have because of a logic flaw in your website? And should we call stolen credit card numbers “damage”?

For the sake of this book, let’s define “hacker” as anyone looking to compromise the Confidentiality, Integrity, or Availability of the data within your website, whether with malicious intent or not. We’ll also use the word “breach” for any incident in which the Confidentiality, Integrity, or Availability of your website is compromised. Finally, we’ll use “damage” as a shorthand for any negative fallout, even if that is only damage to your reputation because no specific monetary harm has been incurred.

The Anatomy of an Attack

Unless you’ve studied security, you may not know what a cyberattack looks like. It’s easy to imagine a hacker working some kind of magic against a computer system, but in reality most hackers employ similar processes when breaking into systems. Depending on the source, the names of these steps will vary, but the actual content will be similar. Knowing this process will help you create defenses, because as we talked about in the section about layered security, your goal is not just to prevent hackers from getting into your systems, but also to help prevent them from being able to do damage once they get in.

Reconnaissance

If you want to build successful software, you’re probably not going to start by writing code. You’ll research your target audience, their needs, possibly create a budget and project plan, etc. An attack is similar, though admittedly usually on a smaller scale. Successful attackers usually don’t start by attacking your system. Instead, they do as much research as possible, not only about your systems but about the people at your company, your location, and possibly research whether you’ve been a victim of a cyberattack in the past.

Much of this research can occur legally against publicly accessible sources. For instance, LinkedIn is a surprisingly good source of useful information for an attack. By looking at the employees at a company, you can usually get the names of executives, get a sense of the technology stack used by the company by looking at the skills of employees in IT, and even get a sense of employee turnover, which can give potential attackers an idea of the number of disgruntled employees who might want to help out with an attack. Email addresses can often be obtained via LinkedIn as well, even for those that are not published. Enough people publicly post their addresses that the pattern can be deduced; for example, if several people in an organization have addresses that follow the pattern of first initial plus last name at the company domain, you can be reasonably sure that many others do as well.

During this phase, a hacker would likely also do some generic scanning against company networks and websites using freely downloadable tools. These scans are designed to look for potentially vulnerable operating systems, websites, exposed software, networks, open ports, etc. It is not clear whether such actions are illegal, but they are common enough that most scans would not be remarked upon, much less prosecuted.

Penetrate

Research is important to know what attacks to try, but research by itself is not going to get a hacker into your system. At some point, hackers need to try to get in. Hackers will typically try to penetrate the most useful systems first. If a spear-phishing attack is attempted, then attacking the Chief Financial Officer (CFO) would probably be more helpful than attacking a marketing intern. Or if a computer is the target, attacking a computer with a database on it would be a more likely candidate than a server that sends promotional emails. However, that doesn’t mean that hackers would ignore the marketing intern – it’s also likely that the CFO has had more security training than the marketing intern, so the intern may be more likely to let the hacker in.

The system penetration can happen in many ways, from attacking vulnerable software on servers to finding a vulnerability in a website. Two of the most commonly reported successful attack vectors, though, are phishing attacks and rogue employees. As a web developer, it’s your responsibility to make sure that attackers cannot use the website you are building as a gateway into your system. There are also important steps you can take to help limit the damage attackers can do via a phishing attack. We will cover all of these later in the book. For now, let’s focus on the process at a higher level.

Expand

Once an attacker has made it inside your network, they need to expand their privileges. If a low-level employee happens to click a link that gives an attacker access to their desktop, taking pictures of that desktop might be interesting for a voyeur, but not particularly profitable for a hacker. Hacking the desktop of the CFO could be more profitable if you could find information to sell to stock traders, but even that is rather dubious. Instead, a hacker is likely to attempt to escalate their privileges by a number of means. Many of these methods are out of scope for this book because they involve planting malware or exploiting the operating system itself. We will talk about methods to help prevent this type of escalation of privilege in web environments later on.

Hide Evidence

Finally, any good hacker will make an attempt to cover their tracks. The obvious reason is that they don’t want to get caught. While that’s certainly a factor, there is another: the longer a hacker has undetected access to a system, the more information they can glean from it. Any good hacker will go to great lengths to hide their presence from you, including but certainly not limited to disguising their IPs, deleting information from logs, or using previously hacked computers to attack others.

Catching Attackers in the Act

Catching attackers is a large subject, large enough that some people devote their entire careers to it. We obviously can’t cover an entire career’s worth of learning in a single book, especially a book about a different subject. But it is worth talking a little bit about, because not only do most web developers not think about this during development, but it’s also a weakness within the ASP.NET Core framework itself.

Detecting Possible Criminal Activity

Whether you’re directly aware of it or not, you’re almost certainly already taking steps to stop criminals from attacking websites directly. Encoding any user input (which ASP.NET Core does automatically) when displayed on a web page makes it much harder to make the browser run user-supplied JavaScript. Using parameterized queries (or a data access technology like Entity Framework which uses parameterized queries under the hood) helps prevent users from executing arbitrary commands against your database. But one area in which most websites in general and ASP.NET in particular fall short is detecting the activity in the first place.
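To make the parameterized-query point concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the table, data, and injection string are illustrative, and Entity Framework performs the same parameterization for you under the hood:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# UNSAFE: string concatenation lets user input rewrite the query itself.
# query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# SAFE: with a placeholder, the driver treats input strictly as data, not SQL.
user_input = "alice' OR '1'='1"  # a classic injection attempt
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

# The injection attempt matches nothing, because it is just a literal string.
assert rows == []
```

The attacker’s payload never becomes part of the SQL statement; it is only ever compared against the name column as an ordinary value.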

Detecting this activity requires you to know how users behave in the system. As one example, you’re probably familiar with the idea of showing user details based on information in the URL, either in the query string or in the URL path itself. You usually don’t want users to be able to pull information about all other users in the system simply by changing the URL. But if you’re not tracking the number of unauthorized or error-producing requests against that URL, nothing stops a hacker from pulling the entire list of users from your database, and you have no way to figure out who stole the information if you do realize it has been stolen.
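As a rough illustration of the kind of tracking described above, here is a sketch that counts unauthorized lookups per client; the threshold and client identifier are made up for the example, and a real system would log to durable storage and alert rather than just return a flag:

```python
from collections import Counter

# How many failed lookups before a client is flagged; purely illustrative.
FAILURE_THRESHOLD = 10
failed_lookups = Counter()

def record_lookup(client_id: str, authorized: bool) -> bool:
    """Record one user-detail request; return True if the client looks suspicious."""
    if not authorized:
        failed_lookups[client_id] += 1
    return failed_lookups[client_id] >= FAILURE_THRESHOLD

# A client walking through /users/1, /users/2, ... quickly trips the alarm.
suspicious = False
for user_id in range(20):
    suspicious = record_lookup("203.0.113.9", authorized=False)
assert suspicious
```

Even this trivial counter gives you two things the scenario above lacked: a signal that enumeration is happening, and a record of who was doing it.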

Note

The instinct of most people, mine included before I started studying security, is to stop any suspicious activity as soon as it is detected, before it can do more damage. This is not necessarily the best course of action if you want to figure out what the hacker is after or prevent them from attempting another attack. If you have the resources, sometimes the best course of action is to gather as much information about the attack as possible while it is occurring. Only after you have a good idea of what the attacker is trying to do, how they are trying to do it, and the scope of the damage should you stop the attack to prevent even more damage. This approach may seem counterintuitive, but it gives you a great chance to learn from the attackers who are after your system.

Being PCI or HIPAA compliant is increasingly dependent on having a logging system that is sufficient to detect this type of suspicious activity. And unfortunately, despite the improved logging system that comes with ASP.NET Core, there is no good or easy way to implement this within your websites. We will cover this in more detail later in the book.

Detection and Privacy Issues

One note: several governments, including the European Union and the State of California, are cracking down on abuses of user privacy. The type of tracking that Google, Facebook, Amazon, and others have been doing has prompted these governments to pass laws that require companies to limit tracking and to inform users of the tracking that is done. As of this writing, it’s unclear where the right balance lies between logging information for security forensics and not logging information for user privacy, but it’s something the security community is keeping an eye on. If in doubt, it would be best to ask a lawyer.

Honeypots

A honeypot is a fake resource that looks like the real thing but whose sole purpose is to detect attackers. For example, an IT department might create an SMTP server that can’t actually send emails but logs all attempts to use the service. Honeypots are relatively common in the networking world but, oddly, haven’t caught on in software development. This is unfortunate, since it wouldn’t take much effort to set up a fake login page, such as at “/wp-login.php” to make lazy attackers think you’re running a WordPress site, that captures as much information about the attacker as possible. One could then monitor any usage of that resource much more closely than other traffic, and possibly even stop the attacker before they do any real harm.
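To illustrate, here is a minimal sketch of the logging side of such a honeypot; the decoy paths, field names, and log format are all illustrative, not a production design:

```python
import json
from datetime import datetime, timezone

# Paths a legitimate visitor of this (non-WordPress) site would never request.
DECOY_PATHS = {"/wp-login.php", "/wp-admin/", "/phpmyadmin/"}

def honeypot_hit(path: str, source_ip: str, user_agent: str):
    """Return a JSON log record if the request touched a decoy path, else None."""
    if path not in DECOY_PATHS:
        return None
    return json.dumps({
        "event": "honeypot",
        "path": path,
        "source_ip": source_ip,
        "user_agent": user_agent,
        "seen_at": datetime.now(timezone.utc).isoformat(),
    })

# Normal traffic is ignored; anything touching a decoy is recorded in detail.
assert honeypot_hit("/about", "198.51.100.7", "Mozilla/5.0") is None
assert honeypot_hit("/wp-login.php", "198.51.100.7", "sqlmap/1.7") is not None
```

Any IP that ever appears in these records can then be watched, rate limited, or blocked with far more confidence than ordinary traffic analysis would allow.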

Enticement vs. Entrapment

I need to make one very important distinction before going any further, and it’s the difference between enticement and entrapment. Enticement is the term for making resources available and seeing who takes advantage, such as the login example mentioned earlier. Entrapment is purposely telling potential hackers that a vulnerability exists in order to trick people into trying to take advantage of it. In other words, enticement occurs when you try to catch criminals performing activities that they would perform with or without your resource. Entrapment occurs when you encourage someone to commit a crime when they may or may not have done so without you.

This distinction is important because enticement is legal. Entrapment is not. When creating honeypots, you must make sure you do not cross the line into entrapment. If you do, you will certainly make it impossible to prosecute any crimes committed against you, and you may be subject to criminal prosecution yourself. If you have any questions about any gray area in between the two, please consult a lawyer.

When Are You Secure Enough?

Most people, when asked whether they have enough security, answer that “you can never have too much security.” This is simply wrong. Security is expensive, both to implement and to maintain. On top of that, you could spend one trillion dollars securing your systems and still be vulnerable to some zero-day attack in a system you do not control. This is not a hypothetical situation. In one well-publicized example, two CPU-related security vulnerabilities were announced to the world in January 2018 – Spectre and Meltdown.2 Both had to do with how CPUs and operating systems preprocessed certain tasks in order to speed performance without locking down permissions on the preprocessed data. Unless you made operating systems, there was very little you could do to prevent these vulnerabilities from being used against you. Your only choice was to wait for your operating system vendor to come out with a patch and for new hardware resistant to these attacks to be developed. In the meantime, all computers were (and unpatched computers are) vulnerable. No amount of money would have saved you from these vulnerabilities, so you couldn’t have been completely secure.

If you can’t make your software 100% secure, what is the goal? We should learn to manage our risks.

Unfortunately for us, risk management is another field which we cannot dive too deeply into in this book because it could be the subject of an entire shelf full of books itself. We can make a few important points in this area, though. First, it’s important to understand the value of the system we’re protecting. Is it a mission-critical system for your business? Or does it store personal information about any of your customers? Do you need to make sure it’s compliant with external frameworks or regulations like PCI or HIPAA? If so, you may want to err on the side of working harder to make sure your systems are secure. If not, you almost certainly can spend less time and money securing the system.

Second, it is important to know how systems interact with each other. For instance, you may decide not to secure a relatively unimportant system. But if its presence on your network creates an opportunity for hackers to escalate their privileges and access systems they otherwise couldn’t find (such as if the unimportant system shares a database with a more important one, or if a stolen password from one system could be used on another), then you should pay more attention to the security of the lower system than you might otherwise.

Third, knowing how much work should go into securing a system is a business decision, not a technical one. You’re not going to spend $100 to protect a $20 bill, because then your $20 bill is worth a negative $80 – you’re better off just giving the $20 away. But how much is too much? Would you spend $1 to protect it? $5? $10? There is no right answer, of course – it depends on the individual business and how important protecting that money is. Making sure your management knows and accepts the risks remaining after you’ve secured your system is key to having mature security.

Finally, try to have a plan in place to make sure you know what to do when an attack occurs. Will you try to detect the hacker, or just stop them? What will you tell customers? How will you get information out of your logs? Knowing these things ahead of time will make it easier during and after an attack should one occur.

Often, the easiest place to start in wrapping your head around your security maturity is figuring out what you need to protect.

Finding Sensitive Information

When deciding what in your website to protect, some information is certainly more important to protect than the rest. For instance, knowing how many times a particular user has logged into your website is certainly not as important as protecting any credit card numbers they may have given to you. What should you focus your time on?

When prioritizing information to protect, you should focus on protecting the information that is most sensitive and would cause the most damage if made public. To help you get started, here are some categories commonly used in healthcare and finance that will be useful for you to know:
  • PAI, or Personal Account Information: This is a term used in finance to refer to information specific to financial accounts, such as bank account numbers or credit card numbers.

  • PHI, or Personal Health Information: This is a term used in healthcare for information specific to someone’s health or treatment, such as diagnoses or medications.

  • PII, or Personally Identifiable Information: This is a term used in all industries for information specific to users, such as names, birthdates, or zip codes.

If your data falls under one of these categories, chances are you should take extra steps to protect it. Don’t let these categories be a limitation, though. As one example, if your system stores your company’s trade secrets, that information would not fall under any of these categories but should absolutely be secured.

Knowing what you should protect is important, but knowing when can be equally important. If you’re storing your data securely but it can easily be seen by anyone watching the network as it is sent to another system, the data is not secure. There are two terms that are useful in helping to ensure that your data is secure at all times:
  • Data in Transit: Data moving from one point to another. In most cases in this book, this will refer to data moving from one server to another, such as information sent from a user’s browser to your website or a database backup sent to its storage location, but it generally applies to any data that’s moving from one place to another.

  • Data at Rest: Data being stored in one place, such as data within a database or the database backups themselves within their storage location.

It is necessary to secure both Data in Transit and Data at Rest in order to secure your data, and each requires different techniques to implement, which we’ll explore later in the book.
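As one small illustration on the integrity side of Data at Rest, a backup can be fingerprinted when it is written so that later corruption or tampering is detectable. A Python sketch follows; note that a plain hash only helps against deliberate tampering if the recorded digest is stored somewhere the attacker cannot reach:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest recorded at the time the backup is written."""
    return hashlib.sha256(data).hexdigest()

backup = b"...database backup bytes..."  # illustrative placeholder content
stored_digest = fingerprint(backup)      # keep this in a separate, trusted location

# Later, before restoring, confirm the file is exactly what was written:
assert fingerprint(backup) == stored_digest
assert fingerprint(backup + b"tampered") != stored_digest
```

Securing data at rest also involves encryption, which requires different tools; this sketch covers only the integrity check.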

User Experience and Security

When talking about how far is too far to go with security, I would be remiss if I didn’t talk about what security does to user experience. First, though, I should define what I mean by this term. User experience, or UX, is the term for making a user interface as intuitive and easy as possible. The line between UX and user interface (UI) design is blurry, but the way I usually think of it is that UI is about making the site beautiful, and UX is about making the site easy to use.

It’s not too hard to notice that security and UX often have competing goals. As we’ll see throughout this book, many safeguards that we put in place to make our websites more secure make the websites harder to use. I’d like to say again that our goal is NOT to make websites as secure as possible. No company in the world has the money to do the testing necessary to make this happen, nor does anyone want to drive away users who don’t want to jump through unreasonable hoops in order to get their work done. Instead, we need to find a balance between security and UX. Just like with costs, our balance will vary depending on what we want to accomplish. We should feel more comfortable asking our users to jump through hoops to log into their retirement account than to log into a site that lets them play games. Context is everything here.

Third-Party Components

Most websites built now contain third-party libraries. Many websites use third-party JavaScript frameworks such as jQuery, Angular, React, or Vue. Many websites use third-party server components for particular processing and/or a particular feature. But are these components secure? At one time, conventional wisdom said that open source components had many people looking at them and so wouldn’t likely have serious bugs. Then Heartbleed, a serious vulnerability in the very common OpenSSL library that was disclosed in 2014, pretty much destroyed that argument.

While it’s true that most third-party components are relatively secure, ultimately it is you, the website developer, who will be held responsible if your website is hacked, regardless of whether the attack was successful because of a third-party library. Therefore, it is your responsibility to ensure that these libraries are safe to use, both now when the libraries are installed and later when they are updated.

There are several online databases of known-vulnerable components that you can check regularly to ensure your software isn’t known to be vulnerable.

Later in the book, we’ll point you in the direction of tools that will help you manage this more easily, without needing to check each database manually.

It’s important to note that not all vulnerabilities make it to one of these lists; the lists depend on security researchers reporting the vulnerability. If a vendor finds its own vulnerability, it may decide to fix the issue and roll out a fix without much fanfare. Starting with the latest versions of these libraries, and then updating them when new versions become available, will go a long way toward minimizing any threats that exist because of vulnerable components.

Zero-Day Attacks

Vulnerabilities that exist, but haven’t been discovered yet, are called zero-day vulnerabilities. Attacks that exploit these are called zero-day attacks. While these types of vulnerabilities get quite a bit of time and attention from security researchers, you probably don’t need to worry too much about these. Most attacks occur using well-known vulnerabilities. For most websites, keeping your libraries updated will be sufficient protection against the attacks you will face that target your third-party libraries.

Threat Modeling

While not central to this book, it’s worth taking a moment to dive a little bit into threat modeling. At a very high level, “threat modeling” is really just a fancy way of saying “think about how a hacker can attack my website.” Formal threat modeling, though, is a discipline of its own, with its own tools and techniques, most of which are outside the scope of this book. However, since you will need to do some level of threat modeling to ensure you’re writing secure code, let’s talk a little bit about the STRIDE framework. STRIDE is an acronym for six categories of threats you should watch out for in a threat modeling exercise: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.

Spoofing

Spoofing refers to someone appearing as someone else in your system. Two common examples are a hacker stealing the session token of a user to act on behalf of the victim, or a hacker using another computer to launch an attack against a website to hide the true source of the attack.

Tampering

Has the hacker changed my data in some way? What I said in the “Integrity” section of the CIA triad applies here, and we will get into ways to check for tampering later in the book.

Repudiation

In addition to checking to see if the data itself has been tampered with, it would be useful to know if the source has been tampered with. In other words, if I get an email from you, it would be useful for both of us if we could prove that the contents of the email were what you intended and that you, and no one else, sent it.

The ability to verify both the source and integrity of a message is called non-repudiation. Non-repudiation doesn’t get the attention it deserves in the development world, but I’ll talk about it later in the book because these checks are something you should consider adding to API calls.

Information Disclosure

Hackers often don’t have direct access to the information they need, so they need to get creative in pulling information out of the systems they’re attacking. Very often, information can be gleaned using indirect methods. As one example, imagine that you have created a website that allows potential customers to search arrest records for people with felonies, and sells access to any publicly available data for a fee. In order to entice customers to purchase the service, you allow anyone to search for names. If a record is found, the user is prompted to pay the fee to get the information.

However, if I’m a user who only needs to know whether any of the individuals I’m searching for have a felony, then I don’t need to pay a penny for your service. All I need to do is run a search for the name I’m looking for. If your service says “no records found” or some equivalent, I know that my individual has no felonies in your system. If I’m prompted to pay, then I know that they do.

A more common example of information disclosure (or, as it is often called by penetration testers, information leakage) can be found during the login process of a typical website. To help users remember their usernames and passwords, some websites will tell you “Username is invalid” if you cannot log in because the username doesn’t exist in the system and “Password is invalid” if the username exists but the password is incorrect. Of course, in this scenario, a hacker can try all sorts of usernames and build a list of valid ones just by looking at the error message.

Unfortunately, while the default ASP.NET login page didn’t make this particular error – it shows a generic error message whether the username is not found or the password is invalid – the ASP.NET team made one that is almost as bad. If you want to pull the usernames from an ASP.NET website that uses the default login page, you can submit a username and password and measure how long it takes for the page to come back. The ASP.NET team decided to stop processing if the username wasn’t found, and that allows hackers to use page processing time to find valid usernames. Here is the proof: I sent 2,000 requests to a default login page, half of them with valid usernames and half without, and there was a clear difference between the times it took to process valid vs. invalid usernames.
Figure 2-1 Time to process logins in ASP.NET

As you can see in Figure 2-1, processing a login for a username that didn’t exist in the system typically took 5 to 11 milliseconds, while processing a login for a username that did exist took at least 15 milliseconds. Hackers can determine which usernames are valid from this timing information alone. (This is even worse if users use their email addresses as usernames, since it means that users’ email addresses are exposed to hackers.) There are several lessons to be learned here:
  1. If the .NET team can publish functionality with information leakage, then you probably will too. Don’t ignore this.

  2. As mentioned earlier, sometimes there are trade-offs between different aspects of the CIA triad. In this case, by maximizing Availability (by reducing processing), we have harmed Confidentiality.

  3. Contrary to popular belief, writing the most efficient code is not always the best thing you can do. In this case, protecting customer usernames is more important than removing a few extra milliseconds of processing.

We’ll discuss this example, and how to fix it, in greater detail in Chapter 7.
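In the meantime, the general shape of the mitigation is worth seeing. The sketch below is my own illustration, not the book’s fix or ASP.NET Identity’s implementation: it uses a simple in-memory user store and bare SHA-256 as a stand-in for a real password hash (production code would use a salted, slow KDF such as PBKDF2 or bcrypt). The key idea is that the code does the same expensive work whether or not the username exists, so timing no longer reveals which usernames are valid.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

// Hypothetical login checker illustrating the timing-leak fix:
// always run the (expensive) hash comparison, even when the
// username does not exist, so both code paths cost about the same.
public static class TimingSafeLogin
{
    // Stand-in user store: username -> password hash.
    private static readonly Dictionary<string, byte[]> Users = new Dictionary<string, byte[]>
    {
        { "alice", Hash("correct horse battery staple") }
    };

    // Compared against when the username is unknown, so we still do the work.
    private static readonly byte[] DummyHash = Hash("dummy-password-to-equalize-timing");

    private static byte[] Hash(string password) =>
        SHA256.HashData(Encoding.UTF8.GetBytes(password));

    public static bool Validate(string username, string password)
    {
        bool userExists = Users.TryGetValue(username, out var stored);
        var expected = userExists ? stored : DummyHash;

        // Constant-time comparison avoids leaking how many bytes matched.
        bool hashesMatch = CryptographicOperations.FixedTimeEquals(
            Hash(password), expected);

        return userExists && hashesMatch;
    }
}
```

Note that both the hash computation and the comparison run on every call; only the final boolean differs between the “unknown user” and “wrong password” paths.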

There is one final point worth making about Information Leakage. The vast majority of books and blogs that I’ve read on security (quite frankly including this one) don’t give this topic the attention it deserves, largely because it is so dependent upon specific business functionality. The login example given earlier is common on most websites, but writing about (or creating a test for, which we’ll talk about later) the felony search leakage example would be difficult to do in a generalized fashion. I’ll refer to Information Leakage occasionally throughout this book, but the lack of mentions is not indicative of its importance. Information Leakage is a critical vulnerability for you to be aware of when securing your websites.

Denial of Service

I touched upon this earlier in the chapter, but a Denial of Service (DoS) attack is one in which an attacker overwhelms a website (or other software), causing it to be unresponsive to other requests. The most common type of DoS attack occurs when an attacker simply sends thousands of requests a second to a website. Your website can also be vulnerable if a particular page requires a large amount of processing; one example is a ReDoS (Regular Expression Denial of Service) attack, in which a particularly difficult-to-process regular expression is evaluated a large number of times in short succession.
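To make the ReDoS case concrete, here is a small illustration of my own (the pattern and timeout value are not from the book): the nested quantifier in `^(a+)+$` backtracks catastrophically on a near-match, so a single short input can burn seconds of CPU. Passing a `MatchTimeout` to .NET’s `Regex` caps the damage.

```csharp
using System;
using System.Text.RegularExpressions;

// Demonstrates defending against catastrophic backtracking with a
// regex match timeout, treating untestable input as invalid.
public static class RedosDemo
{
    public static bool IsMatchSafely(string input)
    {
        // ^(a+)+$ is a classic "evil" pattern: on a string of a's followed
        // by a non-matching character, the engine retries exponentially
        // many ways to split the a's between the inner and outer groups.
        var evilPattern = new Regex(@"^(a+)+$",
            RegexOptions.None,
            TimeSpan.FromMilliseconds(200)); // give up instead of hanging

        try
        {
            return evilPattern.IsMatch(input);
        }
        catch (RegexMatchTimeoutException)
        {
            // Fail closed: input that cannot be checked in time is rejected.
            return false;
        }
    }
}
```

A plain match like `"aaaa"` returns quickly, while a near-match such as 40 a’s followed by an `X` would otherwise take an absurd amount of time; the timeout turns that into a fast rejection.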

Another example of a DoS vulnerability happened in WordPress a couple of years ago. A publicly accessible page would take an array of JavaScript component names and combine the component source into a single file. However, a researcher found that if someone made a request to that page with ALL components requested, it took only a relatively small number of requests to slow the site down to the point where it was unusable.

Tip

Despite the attention they’re receiving here, Denial of Service vulnerabilities are relatively rare. If you are already following best practices when writing code, you are probably already preventing most code-caused DoS vulnerabilities from making it into your website.

Just a reminder: a Distributed Denial of Service (DDoS) attack is something subtly different. DDoS attacks work similarly to DoS attacks in that both try to overwhelm your server by sending thousands of requests a second. With DDoS, though, instead of getting numerous requests from one server, you might receive requests from hundreds or thousands of sources, making it hard to block any one source to stop the attack.

Elevation of Privilege

Elevation of Privilege, Layered Security, and the Principle of Least Privilege are all different components of a single concept: to minimize the damage a hacker can do in your system, you should make sure that a breach in one part of your system does not result in a compromise of your entire system. Here’s a quick informal definition of each of the terms:
  • Layered Security: Components of your system have different access levels. Accessing more important systems requires higher levels of access.

  • Principle of Least Privilege: A user should receive only the minimum set of permissions needed to do their job.

  • Elevation of Privilege: Once in your system, a hacker will try to increase their level of permissions in order to do more damage.

One example: in many companies, especially smaller ones, web developers have access to many systems that a hacker would want to reach. If a hacker successfully compromised a web developer’s work account via a phishing attack, that hacker would have broad access to a large number of systems. If the company instead uses layered security, the developer’s regular account would not have access to these systems; the developer would need a separate account to access more sensitive parts of the system. In cases where the web developer needs to access servers only to read logs, that account would follow the principle of least privilege and have only the ability to read the logs on that particular server. If a hacker were to compromise the account, they would need to attempt an elevation of privilege in order to access specific files on the server.
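The permission model in that example can be sketched in a few lines. This is my own illustration, not a real authorization framework: an account carries only the specific permissions it needs, and every sensitive action checks for the exact permission required rather than a blanket “admin” flag.

```csharp
using System;
using System.Collections.Generic;

// Least-privilege sketch: an account holds an explicit set of
// fine-grained permissions, and access checks name the exact
// permission needed for the action at hand.
public class ServiceAccount
{
    private readonly HashSet<string> _permissions;

    public ServiceAccount(IEnumerable<string> permissions) =>
        _permissions = new HashSet<string>(permissions);

    public bool Can(string permission) => _permissions.Contains(permission);
}
```

The log-reading account from the example above would be created with only a hypothetical `"logs:read"` permission, so a hacker who compromises it still has to elevate privileges before deleting logs or touching any other files.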

It’s important to note that, from the company’s perspective, there’s more to fear than external bad actors. Statistics vary, but a significant percentage of breaches (possibly as much as a third, and that number may be rising)3 are at least aided by a disgruntled employee, so these concepts apply to apps written for internal company use as well.

Defining Security Terms

In the last section of this chapter, let’s go over some security concepts that will become important later in the book.

Brute Force Attacks

Some attacks occur after a hacker has researched your website, looking for specific vulnerabilities. Others occur by the attacker trying a lot of different things and hoping something works. This approach is called a brute force attack. One type of brute force attack is attempting to guess valid usernames and passwords by entering as many common username/password combinations as possible. Another example was given earlier in the chapter: attackers attempting to take down your website by sending thousands of requests a second would also be considered a brute force attack.

Unfortunately, ASP.NET does very little to help protect you against brute force attacks, so we will explore ways of preventing these later in the book.
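One standard defense against credential brute-forcing is account lockout: after some number of failed attempts within a time window, further attempts for that account are rejected outright. The sketch below is my own minimal illustration of the idea (ASP.NET Core Identity ships a configurable version of this via its lockout options; the constants here are arbitrary).

```csharp
using System;
using System.Collections.Generic;

// Minimal account-lockout sketch: track failed login timestamps per
// username and refuse attempts once failures in a sliding window
// exceed a threshold.
public class LoginThrottle
{
    private const int MaxFailures = 5;
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(15);

    private readonly Dictionary<string, List<DateTime>> _failures = new();

    public bool IsLockedOut(string username, DateTime now)
    {
        if (!_failures.TryGetValue(username, out var times)) return false;
        // Only failures inside the sliding window count toward lockout.
        times.RemoveAll(t => now - t > Window);
        return times.Count >= MaxFailures;
    }

    public void RecordFailure(string username, DateTime now)
    {
        if (!_failures.TryGetValue(username, out var times))
            _failures[username] = times = new List<DateTime>();
        times.Add(now);
    }
}
```

A real implementation would persist the counters (so an attacker can’t reset them by forcing an app restart) and would also consider throttling by source IP, since per-account lockout alone doesn’t stop password spraying across many accounts.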

Attack Surface

In security, “attack surface” refers to all areas that a hacker can reach that could be attacked. This term is rather loosely defined. For websites, these could all be considered a part of your attack surface depending on the context:
  • The web server itself

  • HTTP processing on the web server, since turning this functionality on opens up the server to HTTP-based attacks

  • Functionality within the website, since a vulnerable component may allow an attacker to escalate their privileges to attack another component

  • Other websites on the same server, since those websites may be compromised, allowing the hacker to access yours

  • Additional APIs that a browser may need to access for the website to function well

One of your goals should be to reduce the attack surface as much as reasonable to reduce the places that an attacker can get a foothold into your systems – and reduce the number of places you need to keep secure.

Caution

Reducing attack surfaces by combining endpoints does not necessarily increase the overall quality of your security. As one example, separating some of your sensitive data into its own API would increase your attack surface but decrease the damage that could be caused if an attacker were to escalate their privileges. Many factors will come into play as you design your systems with security in mind.

Security by Obscurity

It’s fairly common for technology teams to hide sensitive data or systems in hard-to-find places with the idea that hackers can’t attack what they can’t find. This approach is called security by obscurity in the security world. Unfortunately for us as web developers, it’s not very effective. Here are a couple of reasons why:
  • Someone might simply stumble upon your “hidden” systems and unintentionally cause a breach.

  • It’s easy to believe that a hacker can’t find odd systems, but there are plenty of freely downloadable tools that will scan ports, URLs, etc. with little effort on the hacker’s part.

  • Even if the sensitive data is genuinely hard to find, your company is still vulnerable to attacks instigated by (or at least informed by) rogue employees.

Long story short, if you want something protected, actively take steps to protect it.

Man-in-the-Middle (MITM) Attacks

Man-in-the-Middle (MITM) attacks are what they sound like – if two computers are communicating, a third party can intercept the messages and either change the messages or simply listen in to steal data. Many readers will be surprised to know that MITM attacks can be pulled off using a very wide variety of techniques:
  • Using a proxy server between the user and web server, which listens in on all web traffic

  • Fooling the sending computer into thinking that the attacker’s computer is the intended recipient of a given message

  • Listening for electrical impulses that leak from wires when data is going through

  • Listening for electric emanations from the CPU itself while it is operating

Stopping many MITM attacks falls under the responsibility of network and system administrators, since they are generally the ones responsible for preventing the type of access outlined in the last two bullet points. But it is vitally important that you as a developer think about MITM attacks so you can protect both the Confidentiality (can anyone steal my private data?) and the Integrity (has anyone changed my private data?) of your data in transit.
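The developer-side defense you control most directly is enforcing encrypted transport. As a configuration sketch (this is the standard ASP.NET Core 3.x middleware, shown here out of context rather than as a complete `Startup` class), you can redirect all HTTP traffic to HTTPS and tell browsers to insist on HTTPS in the future:

```csharp
// In Startup.Configure (ASP.NET Core 3.x): enforce encrypted transport,
// which defeats the simplest network-level MITM eavesdropping.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (!env.IsDevelopment())
    {
        app.UseHsts();              // browsers remember to require HTTPS
    }
    app.UseHttpsRedirection();      // redirect any HTTP request to HTTPS

    // ... routing, authentication, endpoints ...
}
```

HTTPS doesn’t address every technique in the list above (it does nothing for electrical emanations, for instance), but it is the baseline protection for both the Confidentiality and the Integrity of data in transit.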

Replay Attacks

One particular type of man-in-the-middle attack worth highlighting is a replay attack. In a replay attack, an attacker listens to traffic and then replays that traffic at a different time that is more to the hacker’s advantage. One example would be replaying a login sequence: if an attacker is able to find and replay a login sequence – regardless of whether or not the hacker knows the particulars of the login sequence, including the actual password used – then the attacker would be able to log in to a website using that user’s credentials.
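One common defense, sketched below as my own illustration rather than a complete protocol, is to require each request to carry a unique nonce and a timestamp: the server rejects timestamps that are too old and nonces it has already seen, so a captured request cannot be replayed later.

```csharp
using System;
using System.Collections.Generic;

// Replay-defense sketch: accept a request only if it is fresh and its
// nonce has never been seen before. (A real system would sign the
// nonce and timestamp so an attacker can't simply substitute new ones,
// and would expire old nonces from the set.)
public class ReplayGuard
{
    private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(5);
    private readonly HashSet<string> _seenNonces = new();

    public bool Accept(string nonce, DateTime sentAt, DateTime now)
    {
        if (now - sentAt > MaxAge) return false;   // too old: possible replay
        if (!_seenNonces.Add(nonce)) return false; // already used: replay
        return true;
    }
}
```

With this in place, a captured login request works at most once; resending the identical bytes fails the nonce check even though the attacker never learned the password inside.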

Fail Open vs. Fail Closed

One question that software developers need to answer when creating a website is: how will my website handle errors? There are a lot of facets to this, and we’ll cover many of them in the book, but one important question that we’ll address here is this: are we going to fail open, i.e., generally allow users to continue about their business, or fail closed, i.e., block the user from performing any action at all?

As one (somewhat contrived) example, let’s say that you use a third-party API to check password strength when a user sets their password. If this service is down, you could fail open and allow the user to set their password to whatever they submitted. While less than ideal, users would be able to continue to change their password. On the other hand, if you chose to fail closed, you would prevent the user from changing their password at all and ask them to do so later. While this also is less than ideal, allowing users to change their password to something easily guessable puts both you as the webmaster and them at risk of data theft and worse.

In this particular case, it’s not clear whether failing open or failing closed is the right thing to do. In many cases, though, failing open is clearly the wrong thing to do. Here is an example of a poorly implemented try/catch block that allows any user to access the administrator home page.
public class AdminController : Controller
{
  private UserManager<IdentityUser> _userManager;
  public AdminController(UserManager<IdentityUser> manager)
  {
    _userManager = manager;
  }
  public IActionResult Index()
  {
    try
    {
      var user = _userManager.GetUserAsync(User).Result;
      //This will throw an ArgumentNullException
      //if the user is null
      if (!_userManager.IsInRoleAsync(user, "Admin").Result)
        return Redirect("/identity/account/login");
    }
    catch
    {
      //If an exception is thrown, the user still has access
      ViewBag.ErrorMessage = "An unknown error occurred.";
    }
    return View();
  }
}
Listing 2-1

Hypothetical admin controller with a bad try/catch block

In Listing 2-1, the programmer put in a manual role check, intending to redirect users to the login page if they are not in the “Admin” role. (As many of you already know, there are easier ways of doing this, but we’ll get to that later.) If the user is not logged in, however, an ArgumentNullException is thrown, and the code happily renders the view because the exception is swallowed. This is not the intended behavior, but since the code fails open by default, we’ve created a security bug by accidentally leaving open a means for anyone to get to the admin page.
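The fail-closed version of that check is easy to see when reduced to a plain method, as in my sketch below (in the real controller you would return the redirect from the catch block, or better yet let `[Authorize(Roles = "Admin")]` handle it for you):

```csharp
using System;

// Fail-closed rewrite of the access decision from Listing 2-1, reduced
// to a plain method so the behavior is easy to test: if the role check
// cannot be completed for any reason, access is denied.
public static class AdminGate
{
    // roleLookup stands in for the user/role query, which may throw
    // (user not logged in, identity store unavailable, etc.).
    public static bool CanViewAdminPage(Func<bool> roleLookup)
    {
        try
        {
            return roleLookup();
        }
        catch
        {
            // Fail closed: if we cannot prove the user is an admin, deny.
            return false;
        }
    }
}
```

The only change from the listing’s logic is what happens in the catch block, but that change flips the default from “render the admin page” to “deny access,” which is what you almost always want for an authorization check.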

Caution

I won’t go so far as to say that you will never want to fail open, but erring on the side of failing open causes all sorts of problems, and not all of them are related to security. Several years ago, I worked on a complex web application that erred on the side of failing open. The original development team threw try/catch blocks around basically everything and ignored most errors (doing even less than the previous example). The combination of having several bugs in the system coupled with the lack of meaningful error messages meant that users never knew what actions actually succeeded vs. not, and so they felt like they had to constantly double-check to make sure their actions went through. Needless to say, they hated the system, and a competing consulting firm lost a big-name client because of it.

Separation of Duties

Separation of Duties can be most easily explained by thinking of a small business accounting for the cash coming out of the cash register at the end of the day. You probably wouldn’t want the same person adding the totals of all the receipts as the person counting all the money at the end of the day. Why? With one person, it would be easy to pocket a couple of receipts and steal that money. Separating the money counting from the receipt calculations makes it harder to steal from the company.

In software development, this most obviously applies to access to production systems and production data. I’m sure most of you have struggled to debug a production issue because you had to work through others to get the information you needed from the production server. But without that protection, a software developer could fairly easily copy data to some location on the production server, retrieve it periodically, and erase the evidence afterward. There are other instances like this too, and I’m sure you can come up with a few if you think about it for a bit.

We will talk about separation of duties further in Chapter 11.

Fuzzing

A term that you’ll hear from multiple people in multiple contexts is fuzzing. We won’t talk much about fuzzing in this book, but it is worth taking a little bit of time talking about what it is in case it comes up in conversations about security.

Generally, fuzzing is the term for altering input to look for security bugs. For instance, if your system expects a single-digit integer in a particular field, sending double-digit integers, negative integers, floats, letters, symbols, and/or integers above four billion could all be considered fuzzing. Fuzzing, especially targeted fuzzing (i.e., changing input based on context rather than randomly sending any content that doesn’t match the original), can be a great way to find some types of bugs in web applications. With this definition, you can fuzz anything in a website that takes user input, including the URL, form fields, file uploads, etc.
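A toy targeted fuzzer for the single-digit field just described might look like the sketch below (my own illustration, not a real fuzzing tool): generate the kinds of out-of-contract inputs listed above and report which ones the validator under test wrongly accepts.

```csharp
using System;
using System.Collections.Generic;

// Toy targeted fuzzer: for a field expected to hold a single-digit
// integer, generate out-of-contract inputs and report which ones a
// given validator accepts when it should reject them.
public static class DigitFieldFuzzer
{
    public static IEnumerable<string> Inputs()
    {
        yield return "42";          // double-digit
        yield return "-7";          // negative
        yield return "3.14";        // float
        yield return "abc";         // letters
        yield return "!@#";         // symbols
        yield return "5000000000";  // above int range (~4.3 billion unsigned)
        yield return "";            // empty
    }

    public static List<string> FindBadAccepts(Func<string, bool> isValidDigit)
    {
        var bad = new List<string>();
        foreach (var input in Inputs())
            if (isValidDigit(input))
                bad.Add(input);   // accepted something it should have rejected
        return bad;
    }
}
```

Running this against a naive validator like `s => int.TryParse(s, out var n) && n >= 0` immediately flags `"42"`, since that check never enforces the single-digit constraint; real fuzzers apply the same idea at much larger scale.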

The reason I mention fuzzing here is that a subset of people within the security community use the word “fuzzing” to mean something subtly different. For these people, fuzzing is the term for testing binary files by changing the inputs, looking for application crashes. These types of fuzzers, like AFL4 or ClusterFuzz,5 generally don’t work well against websites and so aren’t tools that a typical web developer would use regularly. But if you go to security conferences and talk to other developers, be aware that not everyone uses the term “fuzzing” the same way.

Phishing and Spear Phishing

You may already be familiar with the term “phishing,” which describes hackers trying to trick users into divulging information by appearing to be a legitimate service. One common attack in this category is an attacker sending out emails saying that your latest order from Amazon cannot be shipped because of credit card issues, and you need to reenter that information. The link in the email, instead of going to amazon.com, goes to the hacker’s site that merely looks like amazon.com. When the user enters their username, password, and credit card information, the attacker steals it. Spear phishing is similar, except that a spear-phishing attack is targeted at a specific user. For example, if an attacker sees on LinkedIn that you’re a programmer at your company and that you’re connected to Bill, a software development manager, the attacker can craft an email built specifically to fool you into thinking that Bill is requesting you to do something, like provide new credentials for the system you’re building.

At first glance, it may seem like preventing phishing and spear phishing is outside the scope of a typical web developer’s job. But, as we’ll discuss later on, it’s very likely that phishers are performing attacks to gain access to systems that you as a developer are building, and therefore you need to be thinking about how to thwart phishing attacks against your systems.

Caution

For many years, it seemed like attackers would attack larger companies because there was more to gain from attacking them. As larger companies get better about security, though, it seems like attackers are increasingly targeting small companies. In one of the more alarming examples I heard about recently, a company with only eight office workers was targeted by a spear-phishing attack. A criminal created a Gmail account using the name of the company’s president, and then sent messages to all office workers asking for gift cards to be purchased for particular employees as a reward for hard work. The catch was that the gift card numbers should be sent via email so they could be handed out while everyone was offsite. Luckily, in this case, a quick confirmation with the president directly thwarted this attempt, but if a company with eight office workers is a target, then yours probably is too.

Summary

This chapter primarily gave you basic security information that we will build on later as we discuss how these concepts apply to ASP.NET Core. The CIA triad helped define what security is so you don’t neglect aspects of your responsibility (such as protecting data integrity), and then we discussed the typical structure of an attack against your system and talked about what you can and can’t do to try to catch attackers trying to get into your system. We also talked about the fact that you can’t create a completely secure site and then finished with defining some terms that we’ll use later in the book.
