9

Security Testing

“Si vis pacem, para bellum,” adapted from Publius Flavius Vegetius Renatus

(“If you want peace, prepare for war”)

In the previous chapter, you learned about techniques you can use when performing user experience testing. Although it may not appear as important as the actual functionality of your application, it is a prominent area with a massive impact on your customers’ experience of your product. Next, we come to another area of testing whose importance may not be immediately obvious but which can harbor some of the most severe bugs your application can suffer from – security testing.

Security testing is an extension of functional testing with a specific focus on security issues. There is an overlap with the tests described in previous chapters, such as text field inputs, but here, we will consider specific examples and tests for security-related topics. Security testing aims to ensure the CIA triad of the confidentiality, integrity, and availability of information. Data should be confidential, only available to the correct owners, and always accurate and available.

In a way, security testing is the opposite of user experience testing. UX testing is unique to your application and subjective. Security testing, in contrast, involves so many shared technologies and techniques that many bugs are well known. The main advice here is to not perform security testing on your own. Use some of the widely available tools that search for known issues for you. Your role is to guide and augment that testing with considerations specific to your application. As with other areas, this is a vast and growing area of software testing, of which this chapter only provides an overview.

In this chapter, you will learn about the following topics:

  • Defining your attack area
  • Security scans and code analysis
  • Tests for logging in
  • Tests for string and file inputs
  • Common web attacks
  • Handling personally identifiable information
  • Bug bounty programs
  • Security beyond the application

Security testing is fun because you get to be an attacker, attempting all manner of invalid access and incorrect behavior to gain more permissions and control than you are allowed, to ensure your application blocks them all. As the Latin adage at the beginning of this chapter states, only by preparing for attacks can you ensure your application will survive peacefully and securely.

Advantages and disadvantages of security testing

Like user experience and maintainability testing, the key advantage of dedicated security testing is that it is the best way to discover this critical class of issue for your business. Its main advantages and disadvantages are summarized in this table:

| Advantages | Disadvantages |
| --- | --- |
| The best way to discover security vulnerabilities | Requires a wide breadth of testing |
| Many tools available | Requires dedicated knowledge and skills |
| Easy to automate | Constantly evolving security vulnerabilities and threats |
| | Only part of a broader security story |

Table 9.1 – Advantages and disadvantages of security testing

Unlike exploratory testing or user experience testing, where a lack of familiarity can initially be an advantage, security testing requires domain knowledge from the start. This chapter will provide an introduction to that, and there are many other books and tools to automate and make this task easier. Security testing also requires a breadth of knowledge, from user interfaces and APIs, through web technologies and databases, down to networking and even hardware issues. This chapter will give you an overview of all those areas and their initial approaches.

On the plus side, focusing on this area is the best way to find security-related issues as they require dedicated tests and tools. There are some overlaps with other areas – such as the inputs used when testing error cases in black-box testing – but generally, these are tests you would not otherwise run. Since security is a shared issue for so many companies, many tools are available to help you with this form of testing, and it’s easy to automate. In this chapter, I will describe some of the techniques they use so that you understand what kinds of testing they are doing and why.

Security threats are constantly evolving as new systems introduce new weaknesses and new vulnerabilities are discovered. In this area, more than any other aspect of testing, it’s vital to stay up to date since you can’t rely on past test results to indicate that you’re currently secure.

The security of your product is a much broader problem than the tests you run. It starts with the design, includes coding and reviews, and extends into your organization’s policies on everything from network design to door access controls, from email policies to password requirements. The ISO27001 specification provides a comprehensive list of security considerations, most of which are beyond this book’s scope. Here, we will only cover part of the story – the technical security of the product you ship. We will start by listing the different types of attacks.

Attack types

The security threats to your application fall into two classes – acquiring access to restricted information and gaining control of private systems. The first class is easier and more common. Data leaks involve anything from accidentally allowing public access to data stores to using outdated cryptographic hashes, giving sufficiently resourced attackers the chance to break the encryption. It’s harder to control remote systems, but anywhere there is an input there is a chance to enter invalid data that will trick your application into obeying an attacker.

As a simple example, a 404 content injection attack involves creating a link that makes a trusted third party display a message of your choice. For example, you can enter www.example.com/visit_my_company in your browser. If example.com is vulnerable to this attack (which it isn’t, in reality), it would display an error such as The URL /visit_my_company was not found on this server.

You have now made example.com display text that you chose. While it is in an error message, you can craft the text to appear realistic, and guide users from that trusted site to a malicious one you have prepared.

This chapter describes how to test for common attacks that seek to reveal your information or control your system. This can be a thankless area of testing; if you do your job well, no customer will ever notice. A failure, however, could leave your company at risk of paying millions of dollars to ransomware attackers or in regulatory fines, not to mention the reputational damage, so pay close attention.

Next, we will consider the first stage of securing your system: discovering its attack area.

Discovering the attack area

What is an outsider’s view of your company from a technical viewpoint? The first stage in security testing is working out your public presence, which is your attack area. You most likely run many public machines both for your company and the product you provide. Even if you only have a website, that is your attack area.

Are you sure about which machines are public? Search for all the records under your primary domain. DNS records are easy to add but difficult to remove – it is hard to be sure they’re not used by some rare but essential service. They tend to accrue over time, so if you are in a mature company, there may be many. Scan them all to see whether a machine is running on that address. Anything you find in your scan is part of your attack area.

Similar logic applies to any public IP ranges your company owns and runs. Some of these may be directly related to running your product, while others may host internal machines for your company’s use. Again, there will be machines you know about, but you need to scan the whole range for any others you weren’t aware of.

And finally, check for all machines hosted out of cloud providers. These should have DNS records and be found by the first scan, but they may not. You need to include all the machines on all the cloud providers your company uses for the scan to be complete.
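To illustrate the first pass over that list, here is a minimal Python sketch that resolves a set of candidate hostnames and records which ones still resolve. The hostnames are hypothetical stand-ins for a DNS zone export; dedicated scanners go much further, but this shows the shape of the check:

```python
import socket

def resolve_hosts(hostnames):
    """Map each hostname to its resolved IP addresses (empty list if none)."""
    results = {}
    for name in hostnames:
        try:
            infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
            # Deduplicate: getaddrinfo returns one entry per family/protocol
            results[name] = sorted({info[4][0] for info in infos})
        except socket.gaierror:
            results[name] = []  # stale record or typo -- still worth recording
    return results

# Hypothetical names taken from a DNS zone export
attack_area = resolve_hosts(["www.example.com", "old-test-server.example.com"])
```

Names that no longer resolve point at stale DNS records to clean up; names that do resolve belong on your attack area list for the port scans described later.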

This book is about testing new features of products, so many of these services are beyond the scope of what you strictly need to test. If your VPN server is insecure, that is not a job for a software tester unless you are in a tiny company where everyone chips in with everything. However, products often have a presence on the web, and the only way to discover any old testing servers that everyone had forgotten about is to do a complete scan. Once you have the full list, you can divide it into those owned by the IT or operations teams, as opposed to the ones you are testing as part of product development. Shut down any servers you no longer need to reduce your attack area.

With that list of machines, you next need to identify the different server types. If you have 10 edge servers with shared configuration, then you only need to test one of them. To be sure, you need a configuration management system that enforces the same settings everywhere. If you are manually configuring them in any way, you will need to run a scan against all of them to check that there are no mistakes. Repeat that categorization for all your other servers. How many different types do you have?

As well as public machines, compile a list of internal servers and server types. If a device inside your network has been compromised, what connections could it make, and what other access could it gain? You will need an inventory tracking system to log the machines on your network, as well as their types and owners. You can also run network scans internally to identify any addresses in use that aren’t in your inventory.

Armed with the lists of addresses for internal and external machines, you can move on to running security scans.

Running security scans

While this chapter describes some core security testing requirements, it is unusual because this area has so much shared code and common vulnerabilities that third-party companies have extensively automated it. Don’t start security testing from scratch; you will never achieve the depth and breadth of knowledge compiled by third-party tools.

Security scanners can quickly find common security issues such as these:

  • Unnecessarily open ports: Accepting inputs to services you don’t need unnecessarily increases your attack area
  • Out-of-date software and libraries: Libraries are kept up to date with the latest security fixes, so running old software leaves you vulnerable
  • Out-of-date security hash functions: Older, less secure hash functions can be compromised, meaning attackers could break encrypted communications
  • Connections that don’t require encryption: Accidentally sending messages in clear text allows eavesdropping
  • Common web security vulnerabilities (such as CSRF, CORS, or content injection): See the Testing web application security section for more details

These are just examples among many possible findings, and each will be prioritized by the security scan. Findings can be placed on a grid according to how likely they are to be exploited and the impact they would have if they were:

| Likelihood \ Impact | Negligible | Minor | Moderate | Major | Severe |
| --- | --- | --- | --- | --- | --- |
| Highly likely | Low | Medium | High | High | High |
| Likely | Low | Medium | Medium | High | High |
| Possible | Low | Low | Medium | Medium | High |
| Unlikely | Low | Low | Medium | Medium | Medium |
| Highly unlikely | Low | Low | Low | Medium | Medium |

Table 9.2 – Determining the risk of a vulnerability given its impact and likelihood

So, if the likelihood of a vulnerability being exploited was Unlikely, but its impact was Major, the overall risk would be Medium. To help you choose a security scanner to use for your testing, OWASP maintains an extensive list of different security scanners here: https://owasp.org/www-community/Vulnerability_Scanning_Tools.
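If you want to prioritize scan findings programmatically, Table 9.2 can be encoded directly as a lookup. This sketch simply transcribes the grid:

```python
# Table 9.2 as a lookup: RISK[likelihood][impact index]
IMPACTS = ["negligible", "minor", "moderate", "major", "severe"]

RISK = {
    "highly likely":   ["Low", "Medium", "High",   "High",   "High"],
    "likely":          ["Low", "Medium", "Medium", "High",   "High"],
    "possible":        ["Low", "Low",    "Medium", "Medium", "High"],
    "unlikely":        ["Low", "Low",    "Medium", "Medium", "Medium"],
    "highly unlikely": ["Low", "Low",    "Low",    "Medium", "Medium"],
}

def risk(likelihood, impact):
    """Look up the overall risk for a finding's likelihood and impact."""
    return RISK[likelihood.lower()][IMPACTS.index(impact.lower())]
```

For instance, the example from the text, `risk("Unlikely", "Major")`, returns `"Medium"`.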

Security scan results

Your aim with this testing should be to use the available tools against your attack area, then augment that testing with checks customized to your application. Security scanners can check for all the issues and vulnerabilities in old versions of operating systems and web servers and any problems with their configuration. Your task is to understand what these tools are doing and to extend them.

First, security scans run Nmap or equivalent port scanning software to discover all the open ports. Some ports are used to provide your product’s service, but are there any surprises? Are any ports open but don’t need to be? The next simple step for security is to disable all the unnecessary ports on your machines, which quickly and simply blocks access to potentially vulnerable services. For services you require, ensure they are set to secure versions. Port 80 HTTP interfaces should redirect to 443 HTTPS, and similarly for FTP and others.
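Nmap is the right tool for this job, but the underlying check is simple to sketch. The following Python fragment (the host and port list are placeholders) reports which TCP ports accept a connection, which you can then compare against an allowlist of ports you expect to be open:

```python
import socket

def open_ports(host, ports, timeout=1.0):
    """Return the subset of TCP ports on host that accept a connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Compare what is actually open against what you expect to be open
expected = {443}
surprises = set(open_ports("127.0.0.1", [21, 22, 80, 443, 8080])) - expected
```

Anything in `surprises` is either a service to shut down or an omission in your documentation of the attack area.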

Real-world example – The over-enthusiastic security scan

In one company where I worked, we ran regular scans of our external addresses, then set up the same company to run a scan from inside our network.

The scan began, and machines immediately started to lose connection with one another. It was run out of hours, so it took a few minutes to spot the issue and identify the cause, but we rapidly shut the scan down, and our systems recovered.

The security scan had found open ports and had tried to connect. The connections had failed, but even the attempt was enough to knock out the live connections between machines and cause the communications failures we’d seen. We updated our systems before attempting that scan again.

For each open port, you need to list all the supported protocols. Most of the time, there will be a one-to-one mapping, such as one port for SNMP or telnet. But one port could support multiple protocols – for instance, both an API and a web interface could be available on a web port. You must list all the available protocols, not just the open ports.

Scanners are limited in how much they will use your application. Some only scan addresses and ports, while others load web pages and check for common vulnerabilities. However, to completely test your particular application or web page, you will need to design a custom set of tests to exercise your specific features.

Security scan reports can propose many low-risk recommendations, such as disabling ping responses. Even the critical errors need careful triage; you might be running an application with a known vulnerability but never exposing it due to how you use it. Security scans can’t tell that. They simply look at the version number, so you need to consider each report individually for what it means for your application.

It’s best to use scanning tools in conjunction with code analysis to detect issues, as described next.

Running code analysis

Part of your security approach should be source code analysis, which can identify security issues before the application is even run. This is a form of static testing, as described in Chapter 6, White-Box Functional Testing. Like linting, this automatic check can detect potential security issues such as being vulnerable to SQL injection attacks or buffer overflows.

Many tools are available for such analysis, and the development team should ensure they run one before the code reaches the test team.

Such tools are easy to run and can be built into deployment pipelines to check each code change. However, they can flag false positive results, and it can be challenging to uncover some classes of vulnerability, such as authentication or access control. They also can’t find configuration issues, as they only examine the code rather than how it is deployed.

Despite its weaknesses, code analysis can quickly find important classes of bugs and is a necessary step before proceeding with security testing. Before running the code and starting dynamic tests, there is one other static check to perform.

Upgrading everything

Upgrade everything now. The first finding from any code analysis or external security scan will be a report of any versions that are out of date. Hosted services might take care of some of this for you, but you must regularly update any versions you control. Operating systems, web servers, programming languages, and all their related packages and extensions must be periodically upgraded. Schedule a repeating task to stay up to date with the latest patches and releases. Dependency management systems such as Poetry can help keep packages up to date during development.

While it is easy to say you should keep everything up to date, performing those upgrades and resolving any dependencies can be very difficult. Compared to adding new features, performing an upgrade that gives no tangible benefit can be a lower priority since, in the best case, your application works exactly as it did before.

In addition, making low-level changes like these risks introducing serious errors that are hard to predict, so you will need to run an in-depth set of regression tests to verify there are no breakages. The longer you have left it and the more out of date you are, the more painful it will be. However, upgrades are necessary, and all your other security measures rely on applying the latest updates and patches. So, bite the bullet, get it done, and stay up to date in the future to make it an easier task.

Given that code analysis and the security scan have passed, and you are running the latest versions, you can begin security testing in earnest, starting with the most basic security function: logging in.

Logging in

Logging in is vital to many applications, so much so that many standard frameworks provide this functionality. Here, you should use white-box knowledge: how much does your application use a standard framework, and how much have you implemented for yourselves? If you rely entirely on a third-party framework, you can keep your testing brief and focus elsewhere because others have tested and used that code. Even then, you need to check that it has been used correctly, such as requiring a login for all restricted screens. If your application implements most or all logging-in functionality itself, you need a far more comprehensive test plan, as described here.

Logging in comprises two functions: authentication and authorization. Authenticating involves verifying the identity of a user and proving they are who they say they are. Authorization grants access to some parts of your application based on that identity. At a basic level, there may be administrator and user privileges, where some pages are only accessible to administrators. The following sections consider testing those two functions.

Authentication

Your application can authenticate users in many ways, from traditional usernames and passwords to sending verification codes, time-based one-time passwords, biometrics, and multi-factor authentication (MFA). Since passwords are still ubiquitous, we will begin by discussing them, then move on to general issues. When using usernames and passwords, consider these requirements as standard:

  • The application should not indicate whether a username has already been signed up
  • Input fields should be resilient to injection attacks (see the Testing injection attacks section)
  • All passwords should be suitably complex
  • The application should handle multiple logins by the same user
  • Users should be logged out after some time
  • There should be a mechanism to reset a user’s password, which requires the user’s current password
  • If the user has forgotten their password, they should be able to reset it using a pre-verified mechanism, such as an email
  • When resetting a user’s password, all current sessions should be logged out
  • There should be rate limits on login attempts
  • There should be a logout button that prevents any future requests from succeeding until the user logs in again

Each of these deserves tests in your test plan. The following sections describe these in more detail while considering tests for usernames, passwords, and the login session overall.

Tests for usernames

The first operation users perform in many systems is picking a username, so make that process as painless as possible.

From a security point of view, it’s best not to indicate that an email address or username is already in use. That data leakage may be innocuous for most services, but if, for instance, you are providing a web page to help people cheat on their spouses, any information leakage is a critical bug.

When attempting to reset your password, you can give a conditional message: If an account exists for that address, we have emailed instructions on how to reset your password. That explains what is happening without admitting whether the email address is in use.

Login attempts also shouldn’t indicate whether a username is already using your application. Don’t have two different error messages, one for an incorrect username and another for an incorrect password; just say that the combination doesn’t work. Look out for more subtle indications too, such as the speed of the response. If the username is incorrect, your app could respond quickly that the login will fail, whereas performing the necessary hashes to check the password might take longer. The same processing should occur for both failures – check the password hashes even if the username is incorrect so that processing time doesn’t indicate whether the email address is in use.

However, it’s harder to avoid leaking information when initially signing up for a service. Silently failing because a username is in use is unhelpful to a user, compared to telling them they should try an alternative. You’ll have to weigh the security benefits versus the user experience costs for your service to decide what error to display.

For hardware servers, the login screen should give as little information as possible about the system to avoid helping attackers. This may be a page available on the public internet, so displaying the manufacturer and version of a server allows attackers to look up potential exploits. If only authorized users should be accessing that server, keep the login page as simple as possible. On web services where you hope the world will connect, you can make your login page more inviting.

Consider different email addresses that route to the same account. The email specification states that name@example.com and name+tag@example.com are equivalent, known as sub-addressing. Anything between the + and @ symbols in the username is ignored for the purpose of routing the email. Addresses with dots added to the username are also equivalent. Not all email providers support those functions, but Gmail supports both. Since the strings are different, most applications treat them as entirely different users, even though they are routed to the same person. How does your application handle that case? So long as it works, it’s an invaluable tool for generating many test accounts.
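As a sketch, the following function collapses those equivalences to a single canonical mailbox, on the assumption that you only apply the dot rule to domains you have verified behave that way:

```python
def canonical_address(address):
    """Collapse sub-addressing (and, for Gmail, dots) to one canonical mailbox.

    The dot rule is Gmail-specific; only apply per-domain rules you have
    verified, since other providers may treat these addresses as distinct."""
    local, _, domain = address.lower().partition("@")
    local = local.split("+", 1)[0]          # drop the +tag sub-address
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # Gmail ignores dots in the local part
    return f"{local}@{domain}"
```

Read in reverse, the same behavior lets you mint many distinct-looking test accounts – name+1@gmail.com, name+2@gmail.com, and so on – that all land in one inbox.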

Tests for passwords

Passwords should enforce complexity requirements to prevent users from entering overly simple passwords. When lists of stolen passwords are published, the most common choices are depressingly obvious, showing how tempting it is for users to pick easily guessable options. However, the fault also lies with the services that allow those passwords. Make sure your application requires better security from your users.

Stricter password rules can be as simple as requiring different classes of symbols, such as numbers and special characters. It can be more effective to use an algorithm to detect the overall complexity of a password by taking many factors into account, including length and common character combinations. However, those requirements can be harder to describe to users.

There should be a limit to how many guesses a user is allowed within a given period. This blocks attempts to guess users’ passwords by firing many automated login attempts. If anyone exceeds the limit, they need to wait a configured period to try again. You’ll need to determine how many guesses should be allowed and how long the backoff period is, and all those values should be recorded in the feature specification.
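As an illustration, here is a minimal in-memory rate limiter; the defaults of 5 attempts per 5 minutes are arbitrary, and your feature specification should record the real values:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Allow at most `limit` login attempts per `window` seconds per username."""

    def __init__(self, limit=5, window=300.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)

    def allow(self, username, now=None):
        """Record an attempt; return False once the user must back off."""
        now = time.monotonic() if now is None else now
        queue = self.attempts[username]
        while queue and now - queue[0] > self.window:  # expire old attempts
            queue.popleft()
        if len(queue) >= self.limit:
            return False
        queue.append(now)
        return True
```

The `now` parameter makes the backoff window testable without waiting in real time, a trick worth applying to any time-based behavior you have to verify.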

There needs to be a mechanism to reset the password, requiring the user to reenter their current one as a precaution. Once they’ve reset their password, that should invalidate all older sessions.

Real-world example – How not to reset a password

On one service I used a few years ago, I forgot my password. They didn’t have an automatic reset mechanism (the first red flag), so I had to email their customer support to regain access. They helpfully emailed my password to me so I could log in again, in plain text, in the email body. Needless to say, I canceled that service as soon as possible and warned them about their security policy.

All user passwords should be hashed rather than stored in plain text; it should not be possible for you to email a user’s password back to them or read it at all. That is the absolute basic requirement for password security, but it has to be stated once. If the user forgets their password, it is gone forever.

However, it should always be possible for users to reset their passwords, usually by emailing a reset link to create a new one. For automated tests, you’ll need a system that can receive and parse emails to check that, and many services offer that functionality. Your email needs to be worded carefully to avoid being blocked by spam filters; emails encouraging users to click on links can rightly be treated with suspicion. You’ll need to regularly check that a defined list of common email providers has accepted them.

As well as the user's primary password, remember to also hash any other passwords or PINs they have to enter as part of your service. If they can password-protect documents or recordings, for instance, that field needs to have hashing, just like their login. If any of those sensitive fields are visible in the configuration or the logs, raise that as a bug.

Real-world example – Too much marketing spam

In one company where I worked, we started to get complaints that our emails were blocked as spam. This was disastrous – our service used emails to invite users, tell them about meetings, reset their passwords, and other critical features. They had worked before and hadn’t changed but were now being rejected.

It turned out an enthusiastic marketing team member had run an email campaign that had generated so many complaints that some mail providers had blocked our domain. That prevented other marketing emails but also emails crucial to using our service. We rapidly got in contact to get our domain off those lists and gave the marketing team a dedicated domain for future communications.

Does your application work with password managers? Many users don’t remember all their passwords but instead use one master login to a password manager, which can then store very secure, random strings to log in to different sites. Those managers work hard to be compatible with websites, but there can be issues, especially around the interface. For a complete check, you’ll need to try the popular password manager services against common web browsers.

API authentication

As well as users, consider all programmatic access to your application. APIs typically use either challenge-response authentication or authentication keys to grant access. Both types require a standard set of tests:

  • Valid authentication details should be accepted
  • Invalid authentication details should be rejected
  • Authentication details can be updated, after which the new details are accepted and the old details are rejected
  • Authentication details can be revoked, after which they are rejected

These can be used at each layer of your system, from external APIs, between internal modules, down to databases and data stores, so check the behavior at each level.
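The four checks above can be exercised against a toy in-memory key store; the store itself is hypothetical, but the assertions mirror the test cases you would run against a real API:

```python
import secrets

class ApiKeyStore:
    """Toy in-memory key store: issue, rotate, revoke, authenticate."""

    def __init__(self):
        self._keys = {}  # key -> client identifier

    def issue(self, client):
        key = secrets.token_urlsafe(32)
        self._keys[key] = client
        return key

    def rotate(self, client, old_key):
        if self._keys.get(old_key) != client:
            raise PermissionError("unknown or mismatched key")
        del self._keys[old_key]
        return self.issue(client)

    def revoke(self, key):
        self._keys.pop(key, None)

    def authenticate(self, key):
        return key in self._keys

store = ApiKeyStore()
key = store.issue("billing-service")
assert store.authenticate(key)                   # valid details accepted
assert not store.authenticate("not-a-real-key")  # invalid details rejected
new_key = store.rotate("billing-service", key)
assert store.authenticate(new_key) and not store.authenticate(key)  # updated
store.revoke(new_key)
assert not store.authenticate(new_key)           # revoked details rejected
```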

Tests for login sessions

Test that all pages/screens requiring a login can only be accessed by authenticated users.

Your application should handle multiple logins. The simplest approach is to only allow one session at a time, logging out any older sessions when a new one starts, although that limits the user experience. If you allow numerous live sessions, then you have to handle users submitting out-of-date information. Consider the following case:

  • A user loads page A in browser tab 1
  • The user loads page A in browser tab 2 and makes a change
  • The user submits page A in browser tab 1 with out-of-date data

The application should store user state per session rather than per user so that multiple logins on the same account can use the application simultaneously. For sites with simple, atomic actions such as creating users, that is less of a problem. However, in applications that include short-term states, such as being halfway through a learning exercise or test, things could go wrong if the same user is using them twice.
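One common way to catch the stale submission in step 3 is optimistic concurrency control: each record carries a version number, and a write based on an out-of-date version is rejected. A minimal sketch:

```python
class Record:
    """Optimistic concurrency: writes based on a stale version are rejected."""

    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        return self.value, self.version

    def write(self, new_value, seen_version):
        if seen_version != self.version:
            raise ValueError("stale data: reload the record before saving")
        self.value = new_value
        self.version += 1
```

In the browser-tab scenario, tab 1 submits with the version it read before tab 2’s change, so its write fails and the user is prompted to reload rather than silently overwriting newer data.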

Users should be logged out after a certain period, which is a simple, albeit time-consuming, test. If that value is configurable, you can reduce it to shorten your wait.

Finally, the Logout button should instantly ensure that no further requests are accepted for that session. In a simple application, that is easy to implement and test. However, as applications grow, if a single token is used for logging in to different parts of the application, you need to check that it is invalidated everywhere without waiting for the timeout.

Other login methods

Ideally, there should also be MFA using a separate secure mechanism such as a text message or email. The importance of that will depend on the sensitivity of your product; it would be a higher priority for banking or medical applications, for instance. You can also use SSO from various providers to avoid many of those issues and leave these security considerations to them. Whichever additional methods your application supports also need a section in the test plan.

Another way to avoid these issues is to require time-limited sign-in codes. This means your application doesn’t need passwords and leaves security issues with the email provider. On the downside, temporary codes make signing in slower, since you must request a code, check your email, and then enter it, instead of simply entering your username and password. Still, they can be helpful in applications where you stay signed in for a long time, such as chat apps.

One issue with such codes is when cloning a PC. Here, two applications can end up with the same unique token, which causes bugs. That’s difficult to avoid in practice, other than advising users to log out before cloning.

Logging in is crucial to so many applications that it needs significant testing in even the most commonly run test plans. Remember to test the negative cases – verify users who should have access and check that unauthorized access is rejected. Every method you support and every screen involved (such as signing up, logging in, password reset, and logging out) all need to be thoroughly covered.

Once logged in, the next question is how much access you should have. That is determined by authorization, which is considered next.

Authorization

Once a user has logged in, you must check that they have the proper privileges. Authorization schemes can vary from a distinction between admins and users to an entire system of group memberships and multiple permission levels. The critical point here is the negative testing – attempting invalid access. With a lower-level login, can that user reach unauthorized pages? The links might be hidden, but can you access the pages by entering the URL? In addition to higher privileged pages, can users see information from other users? Those are standard checks to run every release.
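That negative testing can be sketched with a hypothetical route table: enumerate every restricted path and assert that a role can reach nothing beyond its explicit allowlist:

```python
# Hypothetical route table: each path lists the roles allowed to view it
ROUTES = {
    "/dashboard":   {"user", "admin"},
    "/admin/users": {"admin"},
    "/admin/audit": {"admin"},
}

def can_access(role, path):
    """Deny by default: unknown paths and unlisted roles are rejected."""
    allowed = ROUTES.get(path)
    return allowed is not None and role in allowed

def assert_no_privilege_leak(role, allowed_paths):
    """Negative test: the role must be denied every path not explicitly allowed."""
    leaks = [p for p in ROUTES if can_access(role, p) and p not in allowed_paths]
    assert not leaks, f"{role} can unexpectedly reach {leaks}"
```

The same pattern works end to end against a real deployment: iterate over every URL in the site map as a low-privilege user and assert that each restricted one returns an authorization failure, not just a hidden link.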

Real-world example – Signed in or not?

A new starter had trouble accessing an internal system in one place where I worked. He could sign in successfully but couldn’t access any of the tools.

Different user groups had access to different tools on that system, so I checked his username, group memberships, and group permissions, and everything was set up correctly. Everything worked fine for everyone else.

The problem turned out to be that he logged in with a capital letter in his username. The login process was not case sensitive, so he could log in successfully, but the check for group membership was case sensitive, so the system didn’t think he belonged to any groups and gave him no access. Using capital letters was the key.

Privilege changes are also important. If a user has had access revoked, does that take immediate effect? Does it apply across the system? If the user is logged in at the time, do the changes affect their current session, or are they logged out? They shouldn’t be able to continue with their old permissions just because they were logged in at the time.

Privilege escalation attacks aren’t easy, and some simple steps limit the possibility of falling victim to one. As mentioned previously, keep up to date with your system’s latest patches and releases to avoid falling prey to any known bugs. Limit the number of accounts with admin, root, and high privilege levels. Regularly review which logins need increased permissions. The fewer accounts, the less chance one will be compromised. Within those accounts, check for default or weak passwords, and set up MFA where possible.

Once a user has logged out, you need more negative tests to ensure they no longer have access to any pages, and that you can then log in as a new user. Can you only see the new users’ pages and none of the previous users’? Are notifications sent for the old user even though you’re logged into a new account?

How quickly can you log a user out? If you discover a compromised account or a rogue employee, you need to be able to revoke their access to all your systems rapidly. If you have to wait until user lists are synched overnight, they could do significant damage in the meantime. Check how fast users lose their permissions, and check every system that login applies to, not just your core applications. Are they removed from your test systems too? Check anywhere that maintains user lists and doesn’t synchronize them with the main one.

When you have logged in with the correct credentials, you can begin testing in earnest.

Testing injection attacks

Any inputs into your system are a possible way for a hacker to gain access or inject malicious data. Everything entered into your system should be checked, and as a tester, you get to play the role of the hacker, probing your application’s defenses. We met some of these attacks in Chapter 5, Black-Box Functional Testing, and the different input types users can enter. In text fields, the primary attacks are SQL injection, HTML injection, code injection, and Cross-Site Scripting (XSS) attacks.

SQL injection

SQL injection involves entering a string that, if naively copied into a line of code, will perform unauthorized database changes instigated by an attacker. Consider this snippet of Python that uses a string without validating it first:

SQLCommand = f'INSERT INTO users VALUES ("{username}");'

This works fine if username is "Simon Amey":

SQLCommand = 'INSERT INTO users VALUES ("Simon Amey");'

However, it leaves it open to attack if the username string contains control characters:

username = 'Simon Amey"); DROP TABLE users;'

This now makes the string look like this:

SQLCommand = 'INSERT INTO users VALUES ("Simon Amey"); DROP TABLE users;");'

Adding semicolons to the string has split the command into three. The first command inserts the new name into the users table as expected, but that doesn’t matter because the second command drops the whole table. The third command, with the closing bracket, is the remnant of the initial string and is invalid.

To prevent this, all control characters should be escaped so that they are read as characters rather than acted on. There are simple functions in the different programming languages to achieve that. However, developers must remember to run that function on every text input field, so it’s your job to see whether they’ve missed one.
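
As an illustration of the fix, this sketch uses Python's built-in sqlite3 module, whose ? placeholder escapes values for you. The malicious string from above is stored as harmless data, and the table survives:

```python
import sqlite3

# Parameterized queries pass user input as data, never as part of the SQL
# text, so control characters cannot change the command
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

malicious = 'Simon Amey"); DROP TABLE users;'

# The ? placeholder lets the driver escape the value for us
conn.execute("INSERT INTO users VALUES (?)", (malicious,))

# The table still exists, and the payload was stored as a plain string
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)
```

Every mainstream database driver offers the same mechanism; the test is to confirm your developers used it on every field.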

HTML injection

A similar vulnerability leads to HTML injection. Strings entered by the user are often presented back to them within the user interface and in emails. If text inputs aren’t validated when entered, they may be written into HTML and change the page’s look. At a simple level, you can test for this by adding HTML tags to your inputs:

username = "<b>Simon Amey</b>"

If your username appears in bold, then the inputs aren’t being validated, and malicious users could disrupt your pages far more than just changing how their name is displayed. Remember to check all inputs displayed to the user anywhere in your application. In one system I worked on, our web pages were protected against this attack, but the HTML characters took effect in emails sent out to users.
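
A minimal sketch of the escaping fix, using Python's standard html module: the markup characters become entities, so the browser displays them literally rather than rendering them:

```python
import html

username = "<b>Simon Amey</b>"

# Escaping turns markup characters into entities so the browser shows
# them as text instead of interpreting them
safe = html.escape(username)
print(safe)  # &lt;b&gt;Simon Amey&lt;/b&gt;
```

Equivalent escaping functions exist for whichever templating system you use; most modern template engines apply them by default, but emails and legacy pages often bypass them, as in the example above.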

Real-world example – Testing the test tracking system

We recorded security tests in a test tracking system at one company where I worked. We entered the different cases, such as the input being too long and containing HTML or JavaScript tags.

When we read that test later, the page failed to display correctly – the test tracking system suffered from the bugs we were looking for in our product. It rendered HTML tags and executed JavaScript, showing popup boxes whenever we returned to that test case. It made the system unusable. Although it only affected test cases we wrote and viewed ourselves, it didn’t look like we could affect anyone else. We reported the bug to their support team, but it still hadn’t been fixed months later. Our product didn’t suffer from any of those issues.

Code injection

Code injection attacks attempt to exploit weaknesses in coding implementation to execute arbitrary, malicious code on the server. Much like SQL and HTML injection attacks, these rely on unsanitized strings from the user being trusted and used in internal functions.

One type of code injection attack relies on functions such as eval(), which exist in PHP and Python. These take a string as an argument but then execute it as code. This is a powerful way to solve some difficult programming challenges, but its use is heavily discouraged because of the associated security risks. While exceedingly difficult to exploit (the likelihood is very low; see Table 9.2), its impact is severe because it can let an attacker run code and potentially take remote control of your server, giving it a high severity and a medium risk overall.

To exploit this issue, an attacker would need to find a point in the code that used an eval() function on a string that could be altered by user input. That is a thankfully rare occurrence but watch out for any possibility of it in your application’s code.
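
This sketch demonstrates the difference in Python. The payload here only reads the working directory, but it stands in for arbitrary attacker code; ast.literal_eval() is the safe alternative when you only need to parse data:

```python
import ast

# eval() executes arbitrary code, so a crafted string can do anything.
# This payload just reads the working directory, but it could equally
# delete files or open a network connection:
payload = "__import__('os').getcwd()"
print(eval(payload))  # a user-supplied string runs as real code

# ast.literal_eval() accepts only literals (numbers, strings, lists, ...),
# making it a safe substitute when you only need to parse data
parsed = ast.literal_eval("[1, 2, 3]")

try:
    ast.literal_eval(payload)
    rejected = False
except ValueError:
    rejected = True  # executable code is refused
print(rejected)
```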

Another method of code injection is the famous buffer overflow attack, in which an attacker writes too much data into a fixed-length buffer. This fills the buffer, then writes over other sections of memory. That memory can include the return pointer for the function, potentially letting the attacker redirect program execution to their own malicious code. This class of vulnerability is again difficult to exploit in practice but has severe consequences.

Higher-level languages with bounds checking, such as Java and Python, are immune to this type of attack, but if your application uses lower-level languages such as C or C++, you will need to check for it.

Cross-site scripting attacks

The final example we’ll consider here is in the same family, where unvalidated inputs allow malicious users to alter a website’s behavior. This time, rather than HTML or SQL being added, scripts are run on whoever reads that page. This is an XSS attack. This time, the malicious input includes HTML tags to add in commands from a scripting language, typically JavaScript:

username = "Simon Amey<script>alert('Test')</script>"

If this is naively stored and presented on a web page, an alert box will pop up, providing an obvious and safe way to check that this vulnerability is present. If it is, malicious users could add far more damaging code. Again, this needs to be checked for all inputs.

The following section considers vulnerabilities associated with uploading files to your application.

Validating file inputs

Any files that users can upload to your system also need to be scanned for malicious content. For the filename, check all the variables listed in Chapter 5, Black-Box Functional Testing. These tests are standard across many applications, and this section draws heavily from the OWASP website, which I highly recommend you visit for further reading and details.

Testing file uploads

For the file uploads, consider testing the following requirements:

  • Only authorized users should be allowed to upload files
  • Only accept specific file extensions
  • Check the file type rather than relying on the Content-Type header
  • Check the minimum and maximum file sizes
  • Virus-check all files
  • Protect the file against Cross-Site Request Forgery (CSRF) attacks (see the CSRF attacks section for more details)

Acting as an attacker, you should attempt all those attacks to see whether your system is vulnerable.
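
The first few checks in that list can be sketched as a validation function. The allowed extensions, size limits, and magic-byte table here are illustrative assumptions, not a complete policy:

```python
# Illustrative upload policy - adjust to your application's requirements
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}
MIN_SIZE, MAX_SIZE = 1, 5 * 1024 * 1024  # 1 byte to 5 MB

# First bytes of each genuine file type, so we check the real content
# rather than trusting the Content-Type header the client sent
MAGIC_BYTES = {
    ".png": b"\x89PNG",
    ".jpg": b"\xff\xd8\xff",
    ".pdf": b"%PDF",
}

def validate_upload(filename, data):
    """Return a list of validation failures (an empty list means accepted)."""
    errors = []
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        errors.append("extension not allowed")
    if not MIN_SIZE <= len(data) <= MAX_SIZE:
        errors.append("size out of range")
    magic = MAGIC_BYTES.get(ext)
    if magic and not data.startswith(magic):
        errors.append("content does not match extension")
    return errors

print(validate_upload("pic.png", b"\x89PNG rest of file"))  # accepted
print(validate_upload("run.exe", b"MZ binary"))             # rejected
```

Authorization, virus scanning, and CSRF protection sit outside this function, but each deserves the same style of positive and negative tests.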

Within the file, does your application scan and protect itself against common attacks? There are several common enough to deserve dedicated tests:

  • Billion laughs (.zip or .xml bombs)
  • Exploiting vulnerabilities in file processing
  • Overwriting system files
  • XSS or CSRF

The billion laughs attack uses a .xml file with one entity containing 10 sub-entities, each of which has 10 sub-entities of its own, and so on until the top-level entity expands to a billion entities. In a famous example, those were simply strings saying lol, hence a billion laughs. If an application naively attempts to parse the file, that can drain resources to the point of causing processes to be killed for requiring too much memory. Other attacks of that form are also possible with nested .zip files, which expand to enormous sizes.
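
The arithmetic behind the attack is easy to sketch: nine levels of entities, each expanding tenfold, turn a few kilobytes of XML into gigabytes of output:

```python
# Each of 9 entity levels expands to 10 copies of the level below, so the
# top-level entity expands to 10**9 copies of the 3-byte string "lol"
levels = 9
fanout = 10

copies = fanout ** levels            # a billion "lol" strings
expanded_bytes = copies * len("lol")

print(copies)          # 1000000000
print(expanded_bytes)  # ~3 GB from a file of only a few kilobytes
```

Safe parsers defend against this by capping entity expansion; the test is to feed your application such a file and confirm it rejects it rather than exhausting memory.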

Vulnerabilities in file processing can be exploited when commonly used tools suffer from security issues. Then, carefully crafted files can give unwarranted access, up to remote code execution in some cases. You can avoid these by always staying up to date and checking for the tell-tale signs of files attempting to use those exploits, such as your monitoring flagging up multiple upload attempts and scanning for known vulnerabilities. Look for whichever exploits are doing the rounds as you read this.

Testing file storage

Once your files have been uploaded, you need to test their storage. Consider the following tests:

  • Are uploaded files encrypted at rest in your chosen storage location?
  • What processes can access the files?
    • Check that both the necessary access is available and that other processes can’t access them
  • If the files contain personally identifiable information (PII), are they stored in the correct geographical location?
    • See the Handling personally identifiable information section for more details
  • What are your storage limits, and when will you hit them at your current rate of usage?
  • What are your storage costs?
  • What are the maximum upload and download rates your application can support?

Any of those questions can uncover a critical issue with your storage model.

File uploads can also attempt to overwrite system files by including the same name. Again, check your system and add blocks against them. XSS and CSRF attacks will be considered in more detail later.

File uploads are common to both applications and websites, but there are classes of security concerns that only apply to web applications. These will be described in the next section.

Testing web application security

There are many common web security issues, of varying degrees of severity, that you should protect yourself against. If you run a bug bounty program while these are present, they are likely to be the first reports you get. You will receive many duplicates of the same basic issues, so these are the faults to fix first to encourage researchers to explore more deeply.

Some tools step through these kinds of attacks, but here, we will describe how these attacks work and why.

Information leakage

An attacker looking for vulnerabilities to exploit needs to know what system they are attacking: what kind of web server is this, and what version is running? Web servers generally present this information in headers because it may help client web browsers with compatibility, but that information is not usually needed. Instead, it lets attackers know what exploits are likely to work, so it’s best to disable it. There are settings to implement that, which vary between different servers; whichever you use, make sure that’s in place.
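
A simple check along these lines can be automated. This sketch inspects a response's headers (shown here as a plain dictionary rather than a live request) for values that reveal version numbers; the header list is illustrative, not exhaustive:

```python
# Common headers that advertise server software and versions
REVEALING_HEADERS = ("Server", "X-Powered-By", "X-AspNet-Version")

def leaking_headers(headers):
    """Return headers that reveal version numbers an attacker could use."""
    leaks = []
    for name in REVEALING_HEADERS:
        value = headers.get(name, "")
        # Any version digits (e.g. "Apache/2.4.41") count as a leak
        if any(ch.isdigit() for ch in value):
            leaks.append(f"{name}: {value}")
    return leaks

print(leaking_headers({"Server": "Apache/2.4.41 (Ubuntu)"}))  # flagged
print(leaking_headers({"Server": "Apache"}))  # name only: far less useful
```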

Avoiding information leakage can go as far as security scans, which recommend disabling ping responses to obscure even the presence of servers. However, these are usually discoverable using Nmap – if they are on the network, they should respond to some requests at least – so unless you only allow specific IP connections, you may as well leave pings active for use by the operations team.

Balanced with common sense, give away as little information about your system as possible.

404 content injection

As we saw previously, if a user types an invalid URL, you shouldn’t echo their text back to them, or if you do, it should be checked to ensure it isn’t vulnerable to HTML content injection, as described previously.

While it doesn’t give access to any data, the problem with this attack is that an attacker can put their words on a site you own.

For instance, going to www.example.com/wrong might display an error message of the form:

The URL /wrong was not found on this server.

This is a genuine page served by www.example.com, but now, it has the text you entered in the URL as part of the page. Despite the words surrounding that, it might trick a user into reading your message. Here, we have simply used the word wrong, but an attacker could direct users to an alternative, malicious website:

%2f%20This%20site%20is%20on%20maintenance%20Please%20visit%20www.devil.com

If the web browser is also vulnerable to HTML content injection, you might be able to make the malicious URL a link to encourage victims to click on it. It is simple enough to alter the configuration to set up a customized 404 page that is displayed to all invalid URLs and doesn’t echo back the user input. This makes your brand look more professional by letting you choose how that page looks, and where to redirect people.

Even when browsers are protected against simple content injection attacks, as described previously, encoding control characters in the URL can trigger other issues. The ..%2f sequence decodes to ../, since %2f is the hexadecimal encoding of the / character. It causes different errors compared to simply adding an invalid URL, so also check those patterns.

Clickjacking

You should make sure your application’s web pages have their Content Security Policy correctly set or the older X-Frame-Options HTTP header to prevent your site from being loaded as a frame on another, possibly malicious location.

Clickjacking works by overlaying an invisible frame over a valid frame that the user wants to use. The user attempts to click on a button on the visible frame but actually clicks a hidden button that grants the attacker extra privileges or access.

To prevent that, you can specify that your page cannot be loaded as an iframe within a page on another domain. These are options you can enable for your site and then leave in place; if you don’t, then bug bounty researchers will remind you about them.
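
A sketch of a test for this protection, again treating the response headers as a plain dictionary:

```python
def clickjacking_protected(headers):
    """True if the response forbids framing by other origins."""
    xfo = headers.get("X-Frame-Options", "").upper()
    csp = headers.get("Content-Security-Policy", "").lower()
    # Either the legacy header or a CSP frame-ancestors directive will do
    return xfo in ("DENY", "SAMEORIGIN") or "frame-ancestors" in csp

print(clickjacking_protected({"X-Frame-Options": "DENY"}))              # True
print(clickjacking_protected({"Content-Security-Policy":
                              "frame-ancestors 'self'"}))               # True
print(clickjacking_protected({"Server": "example"}))                    # False
```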

Long password attack

When logging into a website, the username is simply compared against the list of users configured on a system. The password, however, isn’t stored in plain text but is hashed, so any password a user enters has to go through a computationally intensive transformation before it can be checked.

Usually, people log in rarely enough, and with passwords short enough, that even though password hashing takes some CPU time, it is well within a server’s capabilities. However, if the password field isn’t limited, an attacker could enter a colossal password, say a million characters long. Running the hashing algorithm on that can require so much CPU time that other login attempts fail, causing a Denial-of-Service (DoS) attack on the server.

All text fields should have length limits applied to them, especially the Password field, because of that vulnerability.
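
A sketch of the defense: enforce the length limit before any expensive hashing takes place. The limit and the PBKDF2 parameters here are illustrative:

```python
import hashlib

MAX_PASSWORD_LENGTH = 128  # an illustrative limit; pick one for your app

def hash_password(password, salt):
    """Reject oversized passwords before doing any expensive hashing."""
    if len(password) > MAX_PASSWORD_LENGTH:
        raise ValueError("password too long")
    # Deliberately slow hash - this is the CPU cost an attacker exploits
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

digest = hash_password("correct horse battery staple", b"per-user-salt")
print(len(digest))  # a fixed-length digest, regardless of input length
```

The corresponding test is to submit a very long password and confirm the server rejects it immediately instead of hashing it.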

Host header attacks

Every web request contains a host header indicating the website they are trying to reach so that a server hosted on a single IP address can host several different sites. DNS resolution on the domain will direct requests to that server; then, the host header tells the server which site the request is for.

A host header attack involves sending an invalid domain, one not hosted on that server. Your application should reject that, but a poorly configured server might redirect the request to the specified domain. This implies the server trusts the host header field, even though it can be set by clients, possibly maliciously.

This doesn’t cause a problem on its own, but it can be used in conjunction with other attacks to reroute requests from a trusted domain to malicious ones. The configuration is simple enough: web servers should only accept requests for domains they host and reject anything else.
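
That configuration can be sketched as code, with an illustrative allowlist of hosted domains:

```python
# Only accept requests for domains this server actually hosts
HOSTED_DOMAINS = {"www.example.com", "example.com"}

def valid_host(host_header):
    """Reject any request whose Host header we do not serve."""
    # Strip an optional port before comparing
    host = host_header.split(":", 1)[0].lower()
    return host in HOSTED_DOMAINS

print(valid_host("www.example.com"))   # True
print(valid_host("Example.com:443"))   # True - case and port are normalized
print(valid_host("evil.example.net"))  # False - reject, don't redirect
```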

CSRF attacks

CSRF attacks can occur when a user is logged into vulnerable site A but then visits malicious site B. Without protection in place, site B can include a link to site A to perform some action, and because the user is logged in, the browser will automatically have the session cookie, which gives access. That malicious request could retrieve information or make changes, such as altering the email address for the account to provide the attacker with more privileges.

You can check this on your system by seeing whether the credentials are stored in cookies on the user side. If those are all you need for access, then the site is vulnerable. The protection here is to add a CSRF token included in each request. Every form on your website needs to add that, so check whether any were missed.
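
One common implementation ties the token to the session with an HMAC. This sketch uses Python's standard hmac and secrets modules, with illustrative names throughout:

```python
import hashlib
import hmac
import secrets

# Kept server-side and never sent to the client in raw form
SERVER_SECRET = secrets.token_bytes(32)

def csrf_token(session_id):
    """Derive a token from the session ID; embed it in every form."""
    return hmac.new(SERVER_SECRET, session_id.encode(),
                    hashlib.sha256).hexdigest()

def verify_csrf(session_id, token):
    """Check the token on every state-changing request."""
    # compare_digest avoids timing side channels
    return hmac.compare_digest(csrf_token(session_id), token)

token = csrf_token("session-123")
print(verify_csrf("session-123", token))  # True - same session
print(verify_csrf("session-456", token))  # False - forged request
```

A malicious site can make the browser send the session cookie, but it cannot read or predict this token, so the forged request fails.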

CORS attacks

Cross-Origin Resource Sharing (CORS) allows websites to load adverts or other information from other sites rather than hosting everything locally. A strict policy should ensure resources are only loaded from necessary sites; however, policies require manual updates, so there is a temptation to make them too lax and accept requests from too many places.

As with CSRF attacks, CORS attacks can occur when a user is logged into Vulnerable site A but then visits Malicious site B. The victim executes a malicious script on site B, which issues a request to site A:

Figure 9.1 – CORS attack stage 1

For the attack to work, site A must have an open Access-Control-Allow-Origin (ACAO) policy and an open Access-Control-Allow-Credentials (ACAC) policy. If that is the case, the request is validated, and credential information is sent from the browser to Malicious site B:

Figure 9.2 – CORS attack stage 2

Look out for wildcard checks – if your site accepts data from example.com, does it accept requests from malicious-example.com or example.com.malicious.com? The pattern matching should be exact and exclude prefixes and suffixes.
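
A sketch of the exact-match check, with an illustrative allowlist; note that both the prefixed and suffixed malicious domains are rejected:

```python
# Exact origins only - no substring or wildcard matching
ALLOWED_ORIGINS = {"https://example.com", "https://app.example.com"}

def origin_allowed(origin):
    """Exact comparison against the allowlist."""
    return origin in ALLOWED_ORIGINS

print(origin_allowed("https://example.com"))                # True
print(origin_allowed("https://malicious-example.com"))      # False
print(origin_allowed("https://example.com.malicious.com"))  # False
```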

All these exploits are well known and have mitigations available. Test them all, and keep up to date as new ones arise. The following section deals with a security concern that is always present: managing sensitive personal information.

Handling personally identifiable information

You will almost certainly need to store PII somewhere in your application, so you need to be sure it is saved securely. PII includes the following:

  • Names
  • Dates of birth
  • Birthplaces
  • Names of family members
  • Access keys
  • Usernames or aliases
  • Credit card numbers
  • Email addresses
  • Telephone numbers
  • Physical addresses
  • Social security or national insurance numbers
  • IP addresses
  • Passwords
  • Personal photographs
  • Passport information
  • User gender

Some jurisdictions, such as the EU under its General Data Protection Regulation (GDPR), define a further category of sensitive personal information that requires separate handling. This includes the following:

  • Medical information
  • Financial information
  • Biometric information
  • Education information
  • Employment information
  • Sexual orientation
  • Political opinions
  • Trade union membership
  • Genetic data
  • User race or ethnic origin
  • User religion

Sensitive PII could put users at risk if it was disclosed and warrants further security concerns such as requirements to be encrypted at rest and in transit. You’ll need to check whether your application deals with that data and the requirements for it.

The first step to securing PII is not to collect it if you don’t need it. Unless you have a good reason, don’t ask users for their data, which avoids all subsequent headaches about securing and managing it. Watch out for inadvertently gathering data. It’s unlikely you’ll accidentally ask users for their religion, but do your logs record your visitors’ IP addresses or the URLs they visit?

What information do your logs record? This can be vital for debugging but should be as limited as possible. Check where the logs are stored and who has access to them; again, limit that. There will be logs throughout the system, from the communication layer down to the database, so check them all.

Real-world example – Touch tones in logs

One product I worked on used Dual-Tone Multi-Frequency (DTMF) digits – touch tones – where telephones sent numbers to our service. These performed various actions, and we recorded them in our logs for debugging purposes. Sometimes, these were used to choose from a menu, but our system could call other companies such as banks, and people could use DTMF to enter their credit card numbers, which also ended up in our logs. A security review discovered that issue, and we removed that logging.

Having reduced the amount of PII you collect, record exactly where it is copied and saved. Some jurisdictions require data to be stored within its geographic region, so ensure you have a list of those requirements and understand where you hold information. What copies do you make of the data, and are they stored in the correct region? If you have separate systems for statistics, logging, backup, or testing, check that they also conform to any geographic restrictions on storage.

The next step is to use PII as little as possible. User identities should be converted into universally unique identifiers (UUIDs), and those should be used in any messages or URLs that need to be unique to a user. PII should be stored in a single, secure location and copied as little as possible. Again, look out for any values recorded in the logs where data is entered or used.

An important test is to look for PII copied to other locations. Items to check include the following:

  • Source code
  • Cookies
  • Databases
  • Files
  • Configuration

You can use regular expressions to search for common patterns such as credit card numbers and email addresses in those locations.
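
Such a search can be sketched with two deliberately simple illustrative patterns; real scanners use more thorough rules (and checks such as the Luhn algorithm for card numbers):

```python
import re

# Illustrative patterns - production scanners use far more thorough rules
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(text):
    """Return any matches for each PII pattern in the given text."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

sample = "Contact alice@example.com, card 4111 1111 1111 1111"
found = find_pii(sample)
print(found)
```

Run this over logs, source code, cookies, configuration, and database dumps; any match in a location that shouldn't hold PII is a finding to investigate.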

The next part of protecting PII is to delete it when it is no longer needed. There should be a policy stating how long you will keep personal information, and systems in place to delete it once that time has passed. Some data is naturally time-limited, such as browsing history or call records, and that should be aged out after some period. Any information still being used, such as usernames, can be kept as long as needed, but you should also have a policy to identify and remove stale accounts.

Despite that, you might want your application to maintain statistics for far longer than your other data, at which point it needs to be anonymized. For instance, you can keep the number of unique users per day without storing exactly which users they were. That can let you graph usage over time without the need for PII. That anonymization also needs to be tested – is it running correctly and removing all the necessary information? Again, you can search the anonymized data for any of the data types listed previously. This time, the search results are simpler to filter because there should be no PII of any kind.

PII is massively helpful to companies but requires strict requirements for handling it. Track its complete life cycle: where is it gathered, stored, used, anonymized, and deleted? You need to check each step.

In the next section, we will look at a simple way of outsourcing some of your security testing by running a bug bounty program.

Running a bug bounty program

Security testing is one area that is particularly easy to outsource. You should keep up to date with the latest security warnings that affect your application, but you can also invite ethical hackers to try to find weaknesses in it. Running a bug bounty program requires an investment of your time to answer the reports and a budget to pay for valid discoveries. However, it is a quick way to get feedback and alternative points of view on your application’s security. You can advertise your program on common forums and your site, and part of being a researcher is finding those adverts.

Security researchers should be familiar with the latest tools and know how to check for the latest vulnerabilities. This can save you time to concentrate on other aspects of product testing without having to recruit someone and make the long-term commitment of paying their salary. Researchers are particularly good at finding common problems on standard interfaces, such as missing security configuration on Apache or Nginx web servers. They can quickly make sure you don’t have any glaring errors.

More in-depth researchers may request a test login, which can give greater access to the system if there isn’t a free signup option. In this case, you can provide them access, ideally, to a test system that is separate from your live installation. They can then perform potentially destructive tests and attempt to extract data from the database without gaining access to genuine customer data. Requesting a login is a good sign that a researcher wants to spend more time investigating your application, so do encourage that and help if you can.

There is a law of diminishing returns to a bug bounty program. Researchers will probably find some simple, common problems, and you will receive duplicate reports of the same vulnerabilities until you resolve them. Often, these are minor issues, so they aren’t a high priority to fix compared to other changes you want to make in your application. However, you need to fix those issues, or remove the affected interfaces, to encourage researchers to look for more exciting problems.

The downside of a bug bounty program, besides the resources required to run it, is that most researchers will use the same simple tools to attack the same interfaces, particularly web interfaces. If your application uses other protocols, researchers may not attack those interfaces at all. When I worked for a video conferencing company, almost all the reports were issues with our web portal and the machines we hosted. None found weaknesses in the protocols we used for video calls – H.323 and SIP. That probably matches the skills of malicious hackers – they are also more likely to attack standard interfaces. However, to ensure that those other protocols are secure, you will probably have to test them yourself. Don’t assume that because you have no bug bounty reports about an interface it is secure; more likely, it means it is not being tested.

To get ahead of any bug bounty researchers, look for any old machines and interfaces away from your main application that also need to be secured.

Avoiding security through obscurity

Does your application have any backdoors? For debugging or administrative purposes, does it listen on any ports? Have any superuser accounts been added for emergency access? In larger companies, this is the responsibility of the operations team, but if you can, also check them from a test perspective.

When you mapped out your attack area, were any admin interfaces left open, such as telnet or SNMP, that needed to be secured? If possible, close these down; otherwise, you must ensure they are secured through the necessary passwords, access restrictions, and keys. Security measures can often be combined to greater effect, so apply as many restrictions as possible.

Never rely on security through obscurity. If an interface is publicly accessible – whether it’s an open port or a particular URL – assume it will be found. The question is, what could an attacker learn from that interface, or what access do they gain? Restrict logins and apply all these security recommendations to those pages too.

There can be a great temptation in security testing to concentrate on the main interfaces. They are big, public, and obvious. So, consider the non-obvious cases – what about the disaster recovery access? What old systems are still running, and what deprecated interfaces are still available? What DNS records are still active, pointing to old servers? Those obscure, low-priority, little-used interfaces are where your real security risk lies, so this is the time to concentrate on them.

Considering security beyond the application

This chapter has focused on testing your product and the technical weaknesses it may have. This is only one aspect of system security and not the most important one. If you want administrator access to a rival’s system, the easiest way isn’t to discover a privilege escalation bug – it’s to trick an administrator into telling you their password. Social engineering with phishing emails is a huge problem that requires training, policies, and technical solutions such as email filters.

Internal policies are vital to security, such as requiring laptop hard drives to be encrypted and using a password manager to secure logins, along with 2FA. Wherever possible, these shouldn’t be company policies advising users what to do but should be enforced on all users’ devices.

Security is an area where the smallest gap can undo vast amounts of hard work, and it’s easy to be lulled into a false sense of confidence. Just because you have excellent security in one area doesn’t mean your security is excellent overall. Strong protection against social engineering means nothing if your database is left exposed on a public site; great security testing of your product is pointless if your colleagues might leave an unlocked, unencrypted laptop on a train. Security requires a holistic approach, of which this chapter only considers one part. This part is your responsibility as a tester, but it is not the whole story.

Summary

In this chapter, you have seen many examples of security processes and tests you should run on your application. The first step in security testing is identifying the attack area – what different kinds of servers do you have, which are public and private, what ports do they have open, and what protocols do they support? Armed with that information, you can perform security scans and design test plans on the relevant machines.

We described running security scans and code analyses as the first steps for testing security, and considered the main areas of security vulnerability, including logging in, privilege levels, and user and file inputs. We looked at web server misconfigurations that can lead to security problems and considered PII, which is particularly sensitive and needs to be identified throughout your system, along with a process to ensure its deletion.

Finally, we looked at systems around security testing of the application, including running a bug bounty program and security systems for your company as a whole. These are significant topics, and this chapter was designed to suggest further research rather than being a complete guide. Still, a security discussion does not end with just the security of your application.

In the next chapter, we will consider an area of functionality just for internal users – maintainability. While it doesn’t affect the customer experience of your product, checking that it is easy to maintain will make life easier for everyone in development so that you can make all the other improvements faster.
