Chapter 5. Security and Requirements

All systems start with requirements. And so does security.

In this chapter we’ll look at community-built tools such as the OWASP Application Security Verification Standard (ASVS), which lists standard security mechanisms and controls for designing and reviewing applications; and the SAFECode group’s list of security stories, which you can use to help make sure that security is taken into account when the team is thinking about or fleshing out requirements.

We’ll also look at some simple techniques for defining security requirements in an Agile way, and at how and where the security team needs to be engaged in building and managing requirements.

Dealing with Security in Requirements

Traditional Waterfall or V-model software development assumes that all the requirements for a system can be captured, analyzed, and exhaustively defined up front, then handed off to the development team to design, build, and test. Any changes would be handled as exceptions.

Agile software development assumes that requirements can only really be understood through direct, in-person collaboration, because many functional requirements are like art: “I’ll know it when I see it.”

Specifically, Agile software practitioners believe that requirements are difficult for users or customers to accurately specify, because language is a lossy communication mechanism, and because often what the users say they want is not what they actually want.

Agile requirements are therefore done iteratively and concretely, relying heavily on personas and prototypes, then delivered in small, frequent steps for demos and feedback.

Regardless of how you are specifying requirements, it is often hard to define the security attributes of the software that you are designing and building.

Users are able to explain their needs for software to act in certain ways, but no user is ever going to know that she needs secure tokens at the session layer for CSRF protection—nor should the user be expected to know this kind of thing.

Software development methodologies group requirements like this into sets of cross-functional or nonfunctional requirements, taking into account security, maintainability, performance, stability, and other aspects of a system that need to be accounted for as teams design, code, and test software.

But Agile methods have a difficult time dealing with nonfunctional and cross-functional requirements: they are difficult to associate with concrete user needs, and difficult for a customer or customer representative to prioritize against delivering customer-facing features.

The security and reliability of a system often depend on fundamental, early decisions made in architecture and design, because security and reliability usually can’t be added in later without throwing away code and starting over, which nobody wants to do.

People who object to the Agile way of working point to this as where Agile falls down. A lack of forward planning, up front requirements definition and design, and an emphasis on delivering features quickly can leave teams with important nonfunctional gaps in the system that might not be found until it’s too late.

In our experience, Agile doesn’t mean unplanned or unsafe. Agile means open to change and improvement, and as such we believe that it is possible to build software with intrinsic security requirements in an Agile manner.

Let’s start with explaining a bit about how requirements are done in Agile development—and why.

Agile Requirements: Telling Stories

Most requirements in Agile projects are captured as user stories: informal statements that describe what a user needs and why. Stories are concrete descriptions of a need, or a specific solution to a problem, stating clearly what the user needs to do and the goal the user wants to achieve, usually from the point of view of a user or type of user in the system. They are written in simple language that the team and users can all understand and share.

Most stories start off as an “epic”: a large, vague statement of a need for the system, which will be progressively elaborated into concrete stories, until the team members clearly understand what they actually need to build, closer to when they need to build it.

Stories are short and simple, providing just enough information for the team to start working, and encouraging the team to ask questions and engage the users of the system for details. This forces team members to work to understand what the user wants and why, and allows them to fill in blanks and make adjustments as they work on implementing the solution.

Unlike Waterfall projects, where the project manager tries to get the scope defined completely and exhaustively up front and deal with changes as exceptions, Agile teams recognize that change is inevitable and expect requirements to change in response to new information. They want to deliver working software quickly and often so that they can get useful feedback and respond to it.

This is critical, since it means that, unlike on planned Waterfall projects, Agile teams tend not to create interconnected requirements. Each user story or piece of functionality should stand on its own if the team decides to stop delivering at any point. This fixed-time-and-budget, variable-scope approach is common in Agile projects.

What Do Stories Look Like?

Most Agile teams follow a simple user story template popularized by Mike Cohn and others:

As a {type of user}

I want to {do something}

so that {I can achieve a goal}

Each story is written on a story card, an index card, or sticky note, or as an electronic representation of this.
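
If you track stories electronically, this template maps naturally onto a simple record. Here is a minimal sketch in Python; the StoryCard name and its fields are our own illustration, not a standard:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class StoryCard:
        """A user story in the Cohn template, with room for its acceptance criteria."""
        user_type: str  # As a {type of user}
        action: str     # I want to {do something}
        goal: str       # so that {I can achieve a goal}
        conditions_of_satisfaction: List[str] = field(default_factory=list)

        def __str__(self) -> str:
            return (f"As a {self.user_type}\n"
                    f"I want to {self.action}\n"
                    f"so that {self.goal}")

    # Example, using the logon story that appears later in this chapter:
    logon = StoryCard(
        user_type="registered user",
        action="log on to the system",
        goal="I can see and do only the things that I am authorized to see and do",
    )
    print(logon)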

Conditions of Satisfaction

For each story, the team works with the Product Owner to fill in details about the feature or change, and writes up conditions of satisfaction, or acceptance criteria. If you are using written story cards, these details would be recorded on the back of the card. The conditions of satisfaction are a list of specific functionality that the team needs to demonstrate to prove that the story is done.

Conditions of satisfaction guide the team on designing a feature and make up the list of test cases that must pass for a specific story. These criteria are statements of what the system must do under different circumstances: what the user’s choices will be, how the system should respond to the user, and any constraints on the user’s actions.

Most of these statements will be positive, focused on the main success scenarios of a feature or interaction. This means that most of the tests that the team writes will be positive tests, intended to prove that these scenarios pass.

When writing conditions of satisfaction, there is usually little attention paid to what should happen if an action fails, or to exceptions and other negative scenarios. As we’ll see in Chapter 11, Agile Security Testing, this is a serious problem when it comes to security, because attackers don’t stay on the main success paths through the system. They don’t behave like normal users. They try to abuse the capabilities of the system, looking for weaknesses and oversights that will give them access to features and information that they shouldn’t have.

Tracking and Managing Stories: The Backlog

As stories are written, they are added to a product or project backlog. The backlog is a list of stories in prioritized order that defines all the features that need to be delivered, and changes or fixes that need to be made to the system at that point in time. Teams pull stories from the backlog based on priority, and schedule them to be worked on.

In Kanban or other continuous flow models, individual team members pull the highest priority story from the top of the backlog queue. In Scrum and XP, stories are selected from the overall product backlog based on priority and broken down into more detailed tasks for the team in its sprint backlog, which defines the set of work that the team will deliver in its next time box.

In some Agile environments, each story is written up on an index card or a sticky note. The backlog of stories is put up on a wall so that the work to be done is visible to everyone on the team.

Other teams, especially in larger shops, track stories electronically, using a system like Jira, Pivotal Tracker, Rally, or VersionOne. Using an electronic story tracking system offers a few advantages, especially from a compliance perspective:

  1. An electronic backlog automatically records history on changes to requirements and design, providing an audit trail of when changes were made, who approved them, and when they were done.

  2. Other workflows can automatically tie back into stories. For example, code check-ins can be tagged with the story ID, allowing you to easily trace all the work done on a story, including coding changes, reviews, automated testing, and even deployment of the feature through build pipelines.

  3. You can easily search for security stories, compliance requirements, stories that deal with private information, or critical bug fixes, and tag them for review (see the search sketch following this list).

  4. You can also tag security and compliance issues for bigger-picture analysis to understand what kinds of issues come up and how often across projects, and use this information to target security education or other proactive investments by your security team.

  5. Information in online systems can be more easily shared across teams, especially in distributed work environments.
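
As an example of the third point, here is a sketch of pulling security-tagged stories out of Jira using its REST search API and the Python requests library. The instance URL, the credentials, and the “security” label are assumptions about your setup:

    import requests

    JIRA_URL = "https://jira.example.com"  # hypothetical instance
    JQL = 'labels = "security" ORDER BY priority DESC'  # assumes stories carry a "security" label

    response = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": JQL, "fields": "summary,status"},
        auth=("reviewer", "api-token"),  # placeholder credentials
        timeout=30,
    )
    response.raise_for_status()

    # Print each security story with its current workflow status.
    for issue in response.json()["issues"]:
        fields = issue["fields"]
        print(issue["key"], "|", fields["status"]["name"], "|", fields["summary"])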

Stories in the product backlog are continuously reviewed, updated, elaborated on, re-prioritized, or sometimes deleted by the Product Owner or other members of the team, as part of what is called “grooming the backlog.”

Dealing with Bugs

Are bugs stories? How are bugs tracked? Some teams don’t track bugs at all. They fix them right away, or they don’t fix them at all. Some teams only track the bugs that they weren’t able to fix right away, adding them to the backlog as technical debt.

But what about security vulnerabilities? Are they tracked by the team as bugs? Or are they tracked by the security team separately, as part of its vulnerability management program? We’ll look at this in more detail in Chapter 6, Agile Vulnerability Management.

Getting Security into Requirements

For security teams, the speed at which decisions are made in Agile development environments and the emphasis on “working software over documentation” mean that they need to stay close to development teams to understand what each team is working on and recognize when requirements and priorities are changing.

As we discussed earlier in how to scale security across teams, you will need to work out how and when you can afford to get the security team involved—and when you can’t afford not to.

The security team should participate in release planning, sprint planning, and other planning meetings to help review and fill in security-related and compliance-related stories, and other high-risk stories, as they come up. Being part of the planning team gives security a better understanding of what is important to the organization, and a chance to help the Product Owner and the rest of the team understand and correctly prioritize security and compliance issues.

If possible, they also should participate in the development team’s daily stand-ups to help with blockers and to watch out for sudden changes in direction.

Security should also be involved in backlog reviews and updates (backlog grooming), and stay on the lookout for stories that have security, privacy, or compliance risks.

Security doesn’t always have to wait for the development team. They can write stories for security, privacy, and compliance requirements on their own and submit them to the backlog.

But the best way to scale your security capability is to train the team members on the ideas and techniques in this chapter and help them to create security personas and attack trees so that they can understand and deal with security risks and requirements on their own.

Security Stories

How do security requirements fit into stories?

Stories for security features (user/account setup, password change/forgot password, etc.) are mostly straightforward:

As a {registered user}

I want to {log on to the system}

so that {I can see and do only the things that I am authorized to see and do}

Stories for security features work like any other story. But because of the risks associated with making a mistake in implementing these features, you need to pay extra attention to the acceptance criteria, such as the following example scenarios (a test sketch follows the list):

User logs on successfully

What should the user be able to see and do? What information should be recorded and where?

User fails to log on because of invalid credentials

What error(s) should the user see? How many times can the user try to log on before access is disabled, and for how long? What information should be recorded and where?

User forgets credentials

This should lead to another story to help the user in resetting a password.

User is not registered

This should lead to another story to help the user get signed up and issued with credentials.
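
Criteria like these translate directly into automated tests. Here is a minimal pytest sketch for the first two scenarios, written against a hypothetical auth module; the function names, exception types, and the three-attempt lockout threshold are all assumptions for illustration:

    import pytest
    from auth import login, AccountLockedError, InvalidCredentialsError  # hypothetical module

    MAX_ATTEMPTS = 3  # assumed lockout threshold from the conditions of satisfaction

    def test_successful_logon_returns_session():
        session = login("alice", "correct-password")
        assert session.user == "alice"

    def test_invalid_credentials_show_a_generic_error():
        # The error must not reveal whether the username or the password was wrong.
        with pytest.raises(InvalidCredentialsError) as err:
            login("alice", "wrong-password")
        assert "invalid username or password" in str(err.value).lower()

    def test_account_locks_after_repeated_failures():
        for _ in range(MAX_ATTEMPTS):
            with pytest.raises(InvalidCredentialsError):
                login("alice", "wrong-password")
        # Even the correct password must now be rejected until the lockout expires.
        with pytest.raises(AccountLockedError):
            login("alice", "correct-password")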

Privacy, Fraud, Compliance, and Encryption

Besides security features, the following are other security requirements that may need to be considered:

Privacy

Identifying information that is private or sensitive, and that needs to be protected through encryption or tokenization, access control, and auditing.

Fraud protection

Identity management, enforcing separation of duties, verification and approval steps in key workflows, auditing and logging, identifying patterns of behavior, and thresholds and alerting on exceptions.

Regulatory compliance

What do you need to include in implementing controls (authentication, access control, auditing, encryption), and what do you need to prove in development and operations for assurance purposes?

Compliance requirements will constrain how the team works, what reviews or testing it needs to do, and what approvals or oversight it requires, as well as what evidence the team needs to keep of all these steps in developing and delivering the system. We will look more into how compliance is handled in Agile and DevOps environments in a separate chapter.

Encryption

There are two parts to encryption requirements:

  1. Understanding what information needs to be encrypted

  2. How encryption must be done: permitted algorithms and key management techniques

Crypto Requirements: Here Be Dragons

Encryption is an area where you need to be especially careful with requirements and implementation. Some of this guidance may come from regulators. For example, the Payment Card Industry Data Security Standard (PCI DSS) for systems that handle credit card data lays out explicit cryptographic requirements:

  1. In Section 3, PCI DSS lists the information that needs to be tokenized, one-way hashed, or encrypted; and requirements for strong cryptography and key management (generating and storing keys, distributing keys, rotating and expiring them).

  2. In the glossary, of all places, PCI DSS defines “strong cryptography” and lists examples of standards and algorithms that are acceptable. It then points to “the current version of NIST Special Publication 800-57 Part 1 for more guidance on cryptographic key strengths and algorithms.” In the glossary under “Cryptographic Key Generation,” it refers to other guides that lay out how key management should be done.

This isn’t clear or simple—but crypto is not clear or simple. Crypto is one area where if you don’t know what you are doing, you need to get help from an expert. And if you do know what you are doing, then you should probably still get help from an expert.

Whatever you do when it comes to crypto: do not try to invent your own crypto algorithm or try to modify somebody else’s published algorithm, ever.
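
In practice, following that advice means using a vetted, high-level library and letting it choose the primitives. Here is a minimal sketch using the Fernet recipe from the widely used Python cryptography package; note that generating, storing, and rotating the key safely is the hard part, and is not shown here:

    from cryptography.fernet import Fernet

    # Generate a key once and keep it in your secrets management system,
    # never in source control. It is shown inline here only for illustration.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"account number: 4111 1111 1111 1111")
    print(token)             # opaque, authenticated ciphertext
    print(f.decrypt(token))  # original plaintext; raises InvalidToken if tampered with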

We’ll look at compliance and privacy requirements (and at encryption again) in Chapter 14, Compliance.

The team needs to find some way to track these requirements and constraints, whether as part of the team’s guidelines for development, as checklists used in story writing, or in its Definition of Done: the team’s contract with one another and with the rest of the organization on what is needed before a story is complete and ready to be delivered, and before the team can move on to other work.

Tracking and dealing with nonfunctional requirements like security and reliability is an unsolved problem in Agile development. Experts disagree on “the right way” to do this. But they do agree that it needs to be done. The important thing is to make sure that the team comes up with a way to recognize and track these requirements, and that the team sticks with it.

SAFECode Security Stories

SAFECode, the Software Assurance Forum for Excellence in Code, is an industry group made up of vendors such as Adobe, Oracle, and Microsoft that provides guidance on software security and assurance. In 2012, it published a free list of “Practical Security Stories and Security Tasks for Agile Development Environments”, sharing its ideas on how to include security in Agile requirements planning and implementation.

There are SAFECode stories to prevent common security vulnerabilities in applications: XSS, path traversal, remote execution, CSRF, OS command injection, SQL injection, and password brute forcing. Other stories cover checks for information exposure through error messages, proper use of encryption, authentication and session management, transport layer security, restricted uploads, and URL redirection to untrusted sites.

There are also stories that go into detailed coding issues, such as NULL pointer checking, boundary checking, numeric conversion, initialization, thread/process synchronization, exception handling, and use of unsafe/restricted functions. And there are stories which describe secure development practices and operational tasks for the team: making sure that you’re using the latest compiler; patching the runtime and libraries; using static analysis, vulnerability scanning, and code reviews of high-risk code; tracking and fixing security bugs; and more advanced practices that require help from security experts, like fuzzing, threat modeling, pen tests, and environmental hardening.

Altogether this is a comprehensive list of security risks that need to be managed, and secure development practices that should be followed on most projects. While the content is good, there are problems with the format.

To understand why, let’s take a look at a couple of SAFECode security stories:

As a(an) architect/developer, I want to ensure,
AND as QA I want to verify that
the same steps are followed in the same order to
perform an action, without possible deviation on purpose or not

As a(an) architect/developer, I want to ensure,
AND as QA I want to verify that
the damage incurred to the system and its data
is limited if an unauthorized actor is able to take control of a process or
otherwise influence its behavior in unpredicted ways

As you can see, SAFECode’s security stories are a well-intentioned, but awkward, attempt to reframe nonfunctional requirements in Agile user story format. Many teams will be put off by how artificial and forced this approach is, and how alien it is to how they actually think and work.

Although SAFECode’s stories look like stories, they can’t be pulled from the backlog and delivered like other stories, and they can’t be removed from the backlog when they are done, because they are never “done.” The team has to keep worrying about these issues throughout the project and the life of the system.

Each SAFECode security story has a list of detailed backlog tasks that need to be considered by the team as it moves into sprint planning or as it works on individual user stories. But most of these tasks amount to reminding developers to follow guidelines for secure coding and to do scanning or other security testing.

Teams may decide that it is not practical or even necessary to track all these recurring tasks in the backlog. Some of the checks should be made part of the team’s Definition of Done for stories or for the sprint. Others should be part of the team’s coding guidelines and review checklists, or added into the automated build pipeline; or they could be taken care of by training the team in secure coding so that the team members know how to do things properly from the start.
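
For example, the recurring “run static analysis” task can live in the build pipeline as a gate rather than in the backlog. Here is a sketch using Bandit, an open source static analysis scanner for Python code; the source path and the high-severity threshold are assumptions about your project:

    import json
    import subprocess
    import sys

    # Run Bandit over the source tree and capture machine-readable JSON results.
    result = subprocess.run(
        ["bandit", "-r", "src/", "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout)["results"]

    # Fail the build if any high-severity issue is reported.
    high = [f for f in findings if f["issue_severity"] == "HIGH"]
    for finding in high:
        print(f"{finding['filename']}:{finding['line_number']} {finding['issue_text']}")
    sys.exit(1 if high else 0)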

Free Security Guide and Training from SAFECode

SAFECode also provides free training and secure coding guidelines that teams can follow to build secure systems.

This includes a free guide for secure development which is especially valuable for C/C++ developers, covering common security problems and providing extensive links to tools, articles, and other guidance.

If you can’t afford more comprehensive secure development training for your team, SAFECode offers a set of introductory online training courses in secure coding for C/C++ and Java, crypto, threat modeling, security in cloud computing, penetration testing, and how to defend against specific attacks like SQL injection.

SAFECode’s Security Stories are not a tool that you should try to force onto an Agile team. But they are a way to get security requirements on the table. Reviewing and discussing these stories will create a conversation about security practices and controls with the team and encourage the team members to respond with ideas of their own.

Security Personas and Anti-Personas

Personas are another tool that many Agile teams use when defining requirements and designing features. Personas are fictional descriptions of different types of people who will use the system. For each persona, the team writes a fictional biography, describing their background and experience, technical ability, goals, and preferences. These profiles could be built up from interviews, market research, or in brainstorming sessions.

Personas help the team to get into the mindset of users in concrete ways, to understand how someone would want to use a feature and why. They can be helpful in working out user experience models. Personas are also used in testing to come up with different kinds of test scenarios.

For teams that are already using personas, it can be useful to ask the team to also consider anti-personas: users of the system that won’t follow the normal rules.

Designers and teams look for ways to make system features as simple and intuitive to use as possible. However, security can require that a system put deliberate speed bumps or other design anti-patterns in place, because we recognize that adversaries are also going to use our system, and we need to make it difficult for them to achieve their goals.

When defining personas, the recommendation is to create a single persona for each “category” or “class” of user. There is very little to gain from creating too many or overly complex personas when a few simple ones will do.

For example, one system that an author worked on had 11 user personas, and only 5 anti-personas: Hacking Group Member, Fraudulent User, Organized Criminal Gang, Malware Author, and Compromised Sysadmin.

When detailing an anti-persona, the key things to consider are the motivations of the adversary, their capability, and their cutoff points. It’s important to remember that adversaries can include legitimate users who have an incentive to break the system. For example, an online insurance claim system might have to consider users who are encouraged to lie to claim more money.

Personas are used by the entire team to help design the entire system. They shouldn’t be constrained to the application, so understanding how those users might attack business processes, third parties, and physical premises can be important. It’s possible that the team is building a computer solution that is simply one component of a large business process, and the personas represent people who want to attack the process through the application.

Here are some examples of simple anti-personas:

  • Brian is a semiprofessional fraudster

    • He looks for a return on investment of attacks of at least £10k

    • Brian doesn’t want to get caught, and won’t do anything that he believes will leave a trail

    • Brian has access to simple hacking tools but has little computer experience and cannot write code on his own

  • Laura is a low-income claimant

    • Laura doesn’t consider lying to the welfare system immoral and wants to claim the maximum she can get away with

    • Laura has friends who are experts in the benefits system

    • Laura has no technical competence

  • Greg is an amateur hacker in an online hacking group

    • Greg wants to deface the site or otherwise leave a calling card

    • Greg is after defacing as many sites as possible and seeks the easiest challenges

    • Greg has no financial acumen and is unaware of how to exploit security holes for profit

    • Greg is a reasonable programmer and is able to script and modify off-the-shelf tools
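
If your stories live in an electronic tracker, anti-personas can be captured in the same structured way, keeping the motivation, capability, and cutoff points explicit. A minimal sketch; the field names are our own illustration:

    from dataclasses import dataclass

    @dataclass
    class AntiPersona:
        name: str
        description: str
        motivation: str  # what the adversary is trying to achieve
        capability: str  # the tools and skills available to them
        cutoff: str      # what makes them give up or walk away

    brian = AntiPersona(
        name="Brian",
        description="Semiprofessional fraudster",
        motivation="Return on investment of at least £10k per attack",
        capability="Simple hacking tools; little computer experience, cannot write code",
        cutoff="Won't do anything he believes will leave a trail",
    )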

For more examples of anti-personas, and more on how to use attacker personas or anti-personas in security requirements and threat modeling, check out Appendix C of Adam Shostack’s book Threat Modeling: Designing for Security (Wiley).

Attacker Stories: Put Your Black Hat On

Another way to include security in requirements is through attacker stories or misuse cases (instead of use cases). In these stories the team spends some time thinking through how a feature could be misused by an attacker or by another malicious, or even careless, user. This forces the team to think about what specific actions it needs to defend against, as the following example shows:

As {some kind of adversary}

I want to {try to do something bad}

so that {I can steal or damage sensitive information
or get something without paying for it
or disable a key function of the system
or some other bad thing…}

These stories are more concrete, and more testable, than SAFECode’s security stories. Instead of acceptance criteria which prove out success scenarios, each attacker story has a list of specific “negation criteria” or “refutation criteria”: conditions or scenarios that you need to disprove for the story to be considered done.

Take a user story, and as part of elaborating the story and listing the scenarios, step back and look at the story through a security lens. Don’t just think of what the user wants to do and can do. Think about what you don’t want them to do. Get the same people who are working on the story to “put their black hats on” and think evil for a little while, to brainstorm and come up with negative cases.

Thinking like an attacker isn’t easy or natural for most developers, as we discuss in Chapter 8, Threat Assessments and Understanding Attacks. But it will get easier with practice. A good tester on your team should be able to come up with ideas and test cases, especially if he has experience in exploratory testing; or you could bring in a security expert to help the team to develop scenarios, especially for security features. You can also look at common attacks and requirements checklists like SAFECode’s security stories or the OWASP ASVS, which we will look at later in this chapter.

Anti-personas can come in very useful for misuse cases. The “As a” clause can name the anti-persona in question, and this can also help developers to fill in the “so that.”

Attacker Stories Versus Threat Modeling

Writing attacker stories or misuse cases overlaps in some ways with threat modeling. Both of these techniques are about looking at the system from the point of view of an attacker or other threat actor. Both of these techniques help you to plug security holes up front, but they are done at different levels:

  • Attacker stories are done from the point of view of the user, as you define feature workflows and user interactions, treating the system as a black box.

  • Threat modeling is a white-box approach, done from the point of view of the developer or designer, reviewing controls and trust assumptions from an insider’s perspective.

Attacker stories can be tested in an automated fashion. This is particularly useful to teams that follow test-driven development (TDD) or behavior-driven development (BDD) practices, where developers write automated tests for each story before they write the code, and use these tests to drive their design thinking. By including tests for attacker stories, they can ensure that security features and controls cannot be disabled or bypassed.
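
For example, a refutation criterion such as “a user must not be able to read another user’s account” becomes a test that must fail to retrieve the data. Here is a sketch against a hypothetical HTTP API; the endpoints and credentials are illustrative:

    import requests

    BASE = "https://app.example.com/api"  # hypothetical service

    def session_for(username: str, password: str) -> requests.Session:
        """Log in and return an authenticated session (hypothetical login endpoint)."""
        s = requests.Session()
        s.post(f"{BASE}/login", json={"username": username, "password": password}).raise_for_status()
        return s

    def test_user_cannot_read_another_users_account():
        alice = session_for("alice", "alice-password")
        # Alice requests Bob's account record directly by ID: a classic IDOR probe.
        response = alice.get(f"{BASE}/accounts/bob")
        # Refutation criterion: access must be denied, not merely hidden from the UI.
        assert response.status_code in (403, 404)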

Writing Attacker Stories

Attacker stories act as a mirror to user stories. You don’t need to write attacker stories for every user story in the system. But you should at least write them in the following scenarios:

  • You write stories for security features, like the logon story.

  • You write or change stories that deal with money, private data, or important admin functions in the system.

  • You find that a story calls into other services that deal with money, private data, or important admin functions, and you need to make sure that your feature doesn’t become a back door.

These are the kinds of user stories that are most interesting to attackers or fraudsters. This is when you need to take on the persona of an attacker and look at features in the system from an adversarial point of view.

As we’ve seen, the adversary doesn’t have to be a hacker or a cyber criminal. The adversary could be an insider with a grudge, a selfish user who is willing to take advantage of others, or a competitor trying to steal information about your customers or your intellectual property. Or the adversary could just be an admin user who needs to be protected from making expensive mistakes, or an external system that may not always behave correctly.

Challenge the scenarios in the user story, and ask some basic questions:

  1. What could go wrong? What would happen if the user doesn’t follow the main success scenarios through a feature? What checks do you need to add, and what could happen if a check fails? Look carefully at limits, data edits, error handling, and what kind of testing you need to do.

  2. Ask questions about the user’s identity and the data that is provided in the scenario. Can you trust them? How can you be sure?

  3. What information could an adversary be looking for? What information can they already see, and what could they do with this information?

  4. Are you logging or auditing everything that you need to? When should you create an alert or other notification?

Use this exercise to come up with refutation criteria (the user can do this, but can’t do that; the user can see this, but can’t see that), instead of, or as part of, the conditions of satisfaction for the story. Prioritize these cases based on risk, and add the cases that you agree need to be taken care of as scenarios to the current story or as new stories to the backlog if they are big enough.1

As an Attacker, I MUST NOT Be Able To…

Another way of writing attacker stories is to describe in the story what you don’t want the attacker to be able to do:

As {some kind of adversary}

I MUST NOT be able to {do something bad}

so that….

This can be easier than trying to write a story from the attacker’s point of view, because you don’t have to understand or describe the specific attack steps that the adversary might try. You can simply focus on what you don’t want him to be able to do: you don’t want him to see or change another user’s information, enter a high-value transaction without authorization, bypass credit limit checks, and so on.

The team will have to fill in acceptance criteria later, listing specific actions to check for and test, but this makes the requirements visible, something that needs to be prioritized and scheduled.

Attacker stories or misuse cases are good for identifying business logic vulnerabilities, reviewing security features (e.g., authentication, access control, auditing, password management, and licensing) and anti-fraud controls, tightening up error handling and basic validation, and staying on the right side of privacy regulations. And they can help the team come up with more and better test cases.

Writing these stories fits well into how Agile teams think and work. They are done at the same level as user stories, using the same language and the same approach. It’s a more concrete way of thinking about and dealing with threats than a threat modeling exercise, and it’s more useful than trying to track a long list of things to do or not to do.

You end up with specific, actionable test cases that are easy for the team, including the Product Owner, to understand and appreciate. This is critically important in Scrum, because the Product Owner decides what work gets done and in what priority. And because attacker stories are done in-phase, by the people who are working on the stories as they are working on the stories (rather than as a separate review activity that needs to be coordinated and scheduled), they are more likely to get done.

Spending a half hour or so together thinking through a piece of the system from this perspective should help the team find and prevent weaknesses up front. As new threats and risks come up, and as you learn about new attacks or exploits, it’s important to go back and revisit your existing stories and write new attacker stories to fill in gaps.

Attack Trees

Another methodology for understanding the ways that systems can be attacked is to use an attack tree.

This approach was described by Bruce Schneier in 1999 as a structured method for mapping out escalating chains of attack.

To build an attack tree, you start by outlining the goals of an attacker. This might be to decrypt secrets, gain root access, or make a fraudulent claim.

We then map all the possible ways that someone can achieve that goal. The canonical example from Schneier says that to open a safe, you might pick the lock, learn the combination, cut open the safe, or exploit the fact that it was installed improperly.

We then iterate on the attack tree, expanding any node that involves further steps; for example, to learn the combination, one could find the combination written down, or one could get the combination from the target.

In modern usage, these attack trees can become very broad and, at times, very deep. Once we have the tree, we can start looking at each node and determining properties such as likelihood, cost, ease of attack, repeatability, chance of being caught, and total reward to the attacker. The properties you use will depend on your understanding of your adversaries and the amount of time and effort you want to invest.

You can then identify which nodes in the tree are highest risk by calculating the cost-benefit ratio for the attacker at each node.
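
To make this concrete, represent the tree with OR nodes (any child achieves the goal) and AND nodes (all children are required), attach a cost to each leaf, and let the cheapest attack propagate up; this is essentially Schneier’s original worked example. A sketch, with invented costs, for the safe-cracking tree above:

    # OR nodes take the minimum cost over their children; AND nodes sum them.
    def cheapest(node):
        if "cost" in node:
            return node["cost"]
        child_costs = [cheapest(child) for child in node["children"]]
        return min(child_costs) if node["type"] == "OR" else sum(child_costs)

    open_safe = {
        "type": "OR",
        "children": [
            {"name": "pick the lock", "cost": 30},
            {"name": "cut open the safe", "cost": 10},
            {"type": "AND", "children": [  # learn the combination by eavesdropping
                {"name": "listen to the conversation", "cost": 60},
                {"name": "get target to state the combination", "cost": 20},
            ]},
        ],
    }

    print(cheapest(open_safe))  # 10: cutting open the safe is the cheapest attack here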

Once we have identified the highest risk areas, we can consider countermeasures, such as staff training, patrolling guards, and alarms.

If done well, this can enable the security team to trace controls back to the attacks they mitigate, identify why each control was put in place, and justify its value. If you get extra investment, would you be better off with a more expensive firewall or with installing a secrets management system? It can also help you identify replacement controls: if a security control gets in the way of using the system, you can understand the impact of removing it and what risks that opens up.

Advanced users of this attack tree method are able to build per-system attack trees as well as departmental, unit, and even organization-wide attack trees. One of us used this method to completely change the security spending of the organization toward real risks rather than the traditional security views of where to spend money (hint: it meant spending a lot less on fancy firewalls than before).

Building an Attack Tree

Building an attack tree is an interesting experience. This technique is very powerful, but also very subjective.

We’ve found that it is best to build the trees over a series of workshops, each workshop containing a mix of security specialists, technical specialists, and business specialists.

An initial workshop with a security specialist should be used to outline the primary goals of the trees that you want to consider. For example, in one recent session looking at a federated login system, we determined that there were only three goals that we cared about: logging in, stealing credentials, and denial of service.

Use the STRIDE acronym to review common threats:

  • Spoofing user identity

  • Tampering

  • Repudiation

  • Information disclosure

  • Denial of service

  • Elevation of privilege

Consider your system in the face of these threats, and come up with a set of goals.

Then call together workshops with a mix of security professionals, technical specialists, and business specialists. We’ve found that the business specialists are the most valuable here. While the security and technical professionals are good at creatively coming up with how to achieve a goal, the business professionals tend to be much better at coming up with the goals themselves, and they tend to know the limits of the system (somehow security and technical professionals seem to view computer systems as perfect and inviolate). Business professionals tend to be far more realistic about the flaws and the manual workarounds in place that make the systems work.

Once the trees are complete, security professionals can do a lot of the work gauging the properties, such as cost and so forth.

Maintaining and Using Attack Trees

Once the trees are written and available to the teams, they will of course slowly become out of date and incorrect as the context and situation change.

Attack trees can be stored in digital form, either in spreadsheets or as mind maps; we’ve found the mind map format works particularly well.

They should be reviewed on a regular basis; and in particular, changes in security controls should be checked with the attack trees to ensure that the risk profile hasn’t changed significantly.

We’ve had experience in at least one company where the attack trees are stored electronically in a wiki, and all the controls are linked to the digital story cards, so the status of each story is recorded in a live view. This shows the security team the current state of the attack tree and any planned work that might affect it, and allows compliance officers to trace back from a work order to find out why it was requested and when it was completed.

You should see what works for your teams and your security and compliance officers, but this kind of interlinking is very valuable for high-performing and fast-moving teams to give them situational awareness to help in making decisions.

Infrastructure and Operations Requirements

Because of the speed at which today’s Agile—and especially DevOps—teams deliver systems to production, and the rate at which they make changes to systems once they are in use, developers have to be much closer to operations and the infrastructure. There are no Waterfall-style handoffs to a separate operations and maintenance group as developers move on to the next project. Instead, these teams work in a service-based model, where they share or sometimes own the responsibility of running and supporting the system. They are in it for the life of the system.

This means that they need to think not just about the people who will use the system, but also the people who will run it and support it: the infrastructure and network engineers, operations, and customer service. All of them become customers and partners in deciding how the system will be designed and implemented.

For example, while working on auditing and logging in the system, the team must meet the needs of the following teams and requirements:

Business analytics

Tracking details on users and their activity to understand which features users find valuable, which features they don’t, how they are using the system, and where and how they spend their time. This information is used in A/B testing to drive future product design decisions and mined using big data systems to find trends and patterns in the business itself.

Compliance

Requirements for activity auditing to meet regulatory requirements, what information needs to be recorded, for how long, and who needs to see this information.

Infosec

What information is needed for attack monitoring and forensic analysis.

Ops

System monitoring and operational metrics needed for service-level management and planning.

Development

The development team’s own information needs for troubleshooting and debugging.

Teams need to understand and deal with operational requirements for confidentiality, integrity, and availability, whether these requirements come from the engineering teams, operations, or compliance. They need to understand the existing infrastructure and operations constraints, especially in enterprise environments, where the system needs to work as part of a much larger whole. They need answers to a lot of important questions:

  • What will the runtime be: cloud or on-premises or hybrid, VMs or containers or bare metal servers?

  • What OS?

  • What database or backend data storage?

  • How much storage and CPU and memory capacity?

  • How will the infrastructure be secured? What compliance constraints do we have to deal with?

Packaging and deployment

How will the application and its dependencies be packaged and built? How will you manage the build and deployment artifacts? What tools and procedures will be used to deploy the system? What operational windows do you have for making updates: when can you take the system down, and for how long?

Monitoring

What information (alerts, errors, metrics) does ops need for monitoring and troubleshooting? What are the logging requirements (format, time synchronization, rotation)? How will errors and failures be escalated?

Secrets management

What keys, passwords, and other credentials are needed for the system? Where will they be stored? Who needs to have access to them? How often do they need to be rotated, and how is this done?

Data archival

What information needs to be backed up and kept and for how long to meet compliance, business continuity or other requirements? Where do logs need to be stored, and for how long?

Availability

How is clustering handled at the network, OS, database, and application levels? What are the Recovery Time and Recovery Point Objectives (RTO/RPO) for serious failures? How will DDoS attacks be defended against?

Separation of duties

Are developers permitted access to production for support purposes? Is testing in production allowed? What can developers see or do, what can’t they see or do? What changes (if any) are they allowed to make without explicit approval? What auditing needs to be done to prove all of this?

Logging

As we’ve already seen, logging needs to be done for many different purposes: for support and troubleshooting, for monitoring, for forensics, for analytics. What header information needs to be recorded to serve all of these purposes: date and timestamp (to what precision?), user ID, source IP, node, service identifier, what else? What type of information should be recorded at each level (DEBUG, INFO, WARN, ERROR)?

Should log messages be written for human readers, or be parsed by tools? How do you protect against tampering and poisoning of logs? How do you detect gaps in logging? What sensitive information needs to be masked or omitted in logs?

What system events and security events need to be logged? PCI DSS provides some good guidance for logging and auditing. To comply with PCI DSS, you need to record the following information:

  • All access to critical or sensitive data

  • All access by root/admin users

  • All access to audit trails (audit the auditing system)

  • Access control violations

  • Authentication events and changes (logons, new users, password changes, etc.)

  • Auditing and logging system events: start, shutdown, suspend, restart, and errors
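
Emitting these events in a structured, machine-parseable form makes the same records usable by all five of the audiences listed earlier. Here is a minimal sketch using Python’s standard logging module with JSON records; the exact field set is an assumption, so align it with your own logging requirements:

    import json
    import logging
    from datetime import datetime, timezone

    audit = logging.getLogger("audit")
    audit.setLevel(logging.INFO)
    audit.addHandler(logging.StreamHandler())  # ship to your log pipeline in production

    def audit_event(event_type: str, user_id: str, source_ip: str, **details):
        """Record a security event with the header fields auditors expect."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            "user_id": user_id,
            "source_ip": source_ip,
            **details,
        }
        audit.info(json.dumps(record))

    # Examples drawn from the PCI DSS list above:
    audit_event("logon.failure", user_id="alice", source_ip="203.0.113.7", reason="bad_password")
    audit_event("admin.access", user_id="root", source_ip="203.0.113.9", resource="/etc/shadow")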

Security and the CIA

All operational requirements for security map to one or more elements of the CIA triad (Confidentiality, Integrity, Availability):

Confidentiality

Ensuring that the information can be read, consumed, or used only by the appropriate people.

Integrity

Ensuring that the information can be modified only by users who are supposed to change it, that it is changed only in appropriate ways, and that the changes are correctly logged.

Availability

Ensuring that the information is accessible to users who need it at the time when they need it.

Key Takeaways

Here are the key things that you need to think about to get security onto the backlog:

  • Security happens in the thought processes and design of stories, not just during coding. Be involved early and educate everyone to think about security.

  • Pay attention to user stories as they are being written and elaborated. Look for security risks and concerns.

  • Consider building attack trees to help your team understand the ways that adversaries could attack your system, and what protections you need to put in place.

  • Write attacker stories for high-risk user stories: a mirror of the story, written from an adversary’s point of view.

  • Use OWASP’s ASVS and SAFECode’s Security Stories as resources to help in writing stories, and for writing conditions of satisfaction for security stories.

  • If the team is modeling user personas, help them to create anti-personas to keep adversaries in mind when filling out requirements and test conditions.

  • Think about operations and infrastructure, not just functional needs, when writing requirements, including security requirements.

1 This description of attacker stories is based on work done by Judy Neher, an independent Agile security consultant. Watch “Abuser Stories - Think Like the Bad Guy with Judy Neher - at Agile 2015”.
