SD3+C Strategy and Practices for Secure Applications

The SD3+C strategy and its associated practices were born out of the previously mentioned TWC initiative and the subsequent creation of the SDL. As noted earlier, the primary goal of the SDL was to augment a software development organization’s processes by integrating principles and practices that lead to improved software security. SD3+C represents a framework that organizes key security best practices into a set of simple principles. In the next several pages, we will review each of these principles in greater detail and enumerate a number of their associated best practices.

Secure by Design

The strategy of securing by design encourages application developers to focus on applying security-focused principles during the application design phase. This includes analyzing the security risks posed by the application’s features, modeling the various threats to the application, and applying security best practices to the application design and code. Security practices applied during application design are perhaps the most effective contributors to the overall quality and security of the application. Decisions and actions taken during design will ultimately have the greatest impact downstream during application testing. These decisions, as you will see, begin with threat modeling and risk analysis.

Implement Threat Modeling and Risk Mitigation Tactics

Threat modeling is based on the idea that the application possesses assets that are desirable to protect. Therefore, it is important that threat analysis be conducted to identify potential vulnerabilities or attack vectors. The goal of threat modeling includes not only identifying the potential threats but also surfacing mitigation tactics that can be applied to the application. When threat modeling, application development teams wear the hat of the attacker and attempt to identify weaknesses in their design or implementation. Threat modeling is one of the most important security practices a team can apply. The process requires an investment of time, but it is not complicated, and the results yield fewer security bugs in the long run. As we have been discussing throughout this book, issues found early in the development cycle are significantly less costly and risky to address. Threat modeling is a great methodology for finding security design flaws early. The overall process involves several steps, including the following:

  • Identifying the assets of the application and the entry points.

  • Analyzing the data flow of the application.

  • Analyzing or brainstorming the known threats to the application.

  • Evaluating and ranking the threats by decreasing risk level and identifying threats that are exploitable vulnerabilities in the application.

  • Identifying mitigation strategies and tactics for reducing or eliminating the threats.

  • Choosing the appropriate technologies for applying the mitigation strategies.

Developing threat models and analyzing vulnerabilities involves building data flow diagrams of the application features and determining how specific threats map to particular application components. The Microsoft methodology for completing this exercise is to apply what is known as the STRIDE threat model. STRIDE is an acronym for a set of categorized security threats. The letters represent the following principles:

  • Spoofing. Defined as an attempt to obtain access to an application by using a false identity.

  • Tampering. Involves the unauthorized usage or modification of application data.

  • Repudiation. Represents the ability of either known or unknown users to deny that they performed a particular action within the application.

  • Information disclosure. The unauthorized or unwanted disclosure of sensitive data within the application.

  • Denial of service. The act of rendering all or parts of an application unavailable to its users.

  • Elevation of privilege. Occurs when a user with limited privileges assumes the identity of a user with elevated privileges and gains access to data or functionality that is not intended for them.

Note

For additional, more detailed information about threat modeling and the practices associated with threat modeling, I recommend reading Writing Secure Code, Second Edition, by Michael Howard and David LeBlanc (Microsoft Press, 2002). This book is the definitive security reference at Microsoft. In addition to this title, Threat Modeling, by Frank Swiderski and Window Snyder (Microsoft Press, 2004), presents an insightful deep dive on the specific practice of threat modeling.

As mentioned, developing a threat model can require an investment of time on the part of the development team. Fortunately, Microsoft has released the SDL Threat Modeling tool, which can help guide application development teams through the threat modeling process. It can be found at http://msdn.microsoft.com/en-us/security/dd206731.aspx.

Once the team has identified and categorized the threats, which are likely to be similar to the threats defined in 6-1, the next steps involve stack ranking the threats in order of risk level and developing mitigation strategies and plans for addressing the threats. Let’s review the design best practices that .NET developers can apply to mitigate application security threats discovered during the threat modeling process.

Apply Best Practices to Application Design

Thus far in this chapter, we have discussed the importance of applications being designed to withstand potentially hostile conditions. Additionally, we reviewed the importance of analyzing the potential threats to your application and applying a layered approach to defending your application against predators. Incorporating security considerations into your application from the early stages of the design will not only ensure that your application and its users are protected from attackers, but it will also decrease the risk of finding high-impact security bugs late in the development cycle. The following represent specific design tactics that will help you avoid introducing security vulnerabilities into your application.

Apply .NET authentication and authorization mechanisms

Many applications require users to be authenticated to the application and subsequently authorized to access specific features within the application. Authentication mechanisms like Windows Security or ASP.NET forms-based security in the .NET Framework support the process of identifying a user, while authorization mechanisms like role-based security provide access control.

Application developers who wish to leverage Windows security for authentication should use the WindowsIdentity and WindowsPrincipal classes to interrogate the current user’s Windows authentication credentials. If Windows security is not the desired implementation choice, application developers should create custom authentication infrastructure by leveraging the GenericIdentity and GenericPrincipal classes. Each of these approaches offers ample functionality for developers who need to implement user authentication.
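
As a minimal illustration, the following C# sketch interrogates the current Windows user through the WindowsIdentity and WindowsPrincipal classes and then constructs a custom principal with GenericIdentity and GenericPrincipal. The user name, authentication type, and role names are hypothetical placeholders rather than values from any real user store.

    using System;
    using System.Security.Principal;

    class AuthenticationSketch
    {
        static void Main()
        {
            // Interrogate the current user's Windows credentials.
            WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent();
            WindowsPrincipal windowsPrincipal = new WindowsPrincipal(windowsIdentity);
            Console.WriteLine("Windows user: {0}, authenticated: {1}",
                windowsIdentity.Name, windowsIdentity.IsAuthenticated);
            Console.WriteLine("Is administrator: {0}",
                windowsPrincipal.IsInRole(WindowsBuiltInRole.Administrator));

            // Custom authentication: GenericIdentity/GenericPrincipal populated
            // from a hypothetical application user store.
            GenericIdentity customIdentity = new GenericIdentity("jane.doe", "CustomForms");
            GenericPrincipal customPrincipal =
                new GenericPrincipal(customIdentity, new[] { "User", "Approver" });
            Console.WriteLine("Custom user in Approver role: {0}",
                customPrincipal.IsInRole("Approver"));
        }
    }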

Applications requiring user authentication are very likely to require user authorization as well. Role-based security demands are an effective technique for restricting access to specific methods or functionality within your application code. They allow specific user roles, defined through either Windows authentication or custom authentication mechanisms, to be verified or demanded prior to execution of a particular method or block of code. Unauthorized access results in a security exception, which can be caught, logged, and messaged appropriately to the user.
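
The sketch below shows one way a declarative role-based demand might be expressed, assuming a hypothetical "Administrators" role and .NET Framework behavior in which a failed demand surfaces as a SecurityException that the caller can catch and log.

    using System;
    using System.Security;
    using System.Security.Permissions;
    using System.Security.Principal;
    using System.Threading;

    class RoleDemandSketch
    {
        // Declarative demand: only callers in the "Administrators" role
        // (a role name assumed for this example) may execute this method.
        [PrincipalPermission(SecurityAction.Demand, Role = "Administrators")]
        static void DeleteAllOrders()
        {
            Console.WriteLine("Orders deleted.");
        }

        static void Main()
        {
            // Attach a principal with a non-administrative role to the thread.
            Thread.CurrentPrincipal = new GenericPrincipal(
                new GenericIdentity("jane.doe"), new[] { "User" });

            try
            {
                DeleteAllOrders();
            }
            catch (SecurityException)
            {
                // Unauthorized access surfaces as a SecurityException that can
                // be caught, logged, and reported to the user appropriately.
                Console.WriteLine("Access denied.");
            }
        }
    }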

Encrypt sensitive data

Protecting user data is perhaps one of the most important responsibilities of an application development team. Many applications today, especially those found on the Web, request personal information from users to facilitate use of the software. This data is incredibly risky to transmit and store. Personally identifiable information, or PII as it is commonly known, is any piece of information that can be used to uniquely identify, contact, or locate an individual person. It is critically important to take the necessary precautions to ensure that data of this classification does not get exposed to unauthorized users or, worse, a malicious hacker. Application designs should ensure that sensitive data is transmitted and stored using strong encryption and that access to the machines and data stores is limited to properly authorized personnel. In some cases, certain types of data may require specific handling as defined by certain governing bodies. We will explore this later when we discuss the secure in deployment and communication strategy.
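
As one possible illustration (not a prescription for any particular key-management design), the sketch below protects a hypothetical piece of PII using the Windows Data Protection API through the ProtectedData class; it assumes a .NET Framework project with a reference to System.Security.dll.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class DataProtectionSketch
    {
        static void Main()
        {
            // A hypothetical piece of PII that must never be stored in plain text.
            byte[] plaintext = Encoding.UTF8.GetBytes("4111-1111-1111-1111");

            // Encrypt with the Windows Data Protection API (DPAPI); the key is
            // managed by the operating system for the current user account.
            byte[] ciphertext = ProtectedData.Protect(
                plaintext, null, DataProtectionScope.CurrentUser);

            // Only code running under the same user account can decrypt it.
            byte[] roundTrip = ProtectedData.Unprotect(
                ciphertext, null, DataProtectionScope.CurrentUser);

            Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
        }
    }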

Assume external applications and code are insecure

If your application relies on an external application or API, it is best to assume that the external dependency is not secure. Because the external system or API is out of your immediate control, it cannot be assumed that the person or persons responsible for that system have implemented a robust security model. Application developers who find themselves in this situation should take the necessary precautions, especially when accepting externally supplied data, to defend against possible attack. This could involve any number of mitigation tactics, such as sanitizing data, using encrypted communications, or enabling IP range filtering.
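
For example, the sketch below validates an identifier returned by a hypothetical external service against a whitelist pattern before the application uses it; the "ORD-" format is an assumption made purely for illustration.

    using System;
    using System.Text.RegularExpressions;

    class ExternalDataSketch
    {
        // Validate an order identifier returned by a hypothetical external
        // service before it is used anywhere else in the application.
        static bool IsValidOrderId(string candidate)
        {
            // Whitelist validation: accept only the exact shape we expect
            // ("ORD-" followed by 6 to 10 digits) and reject everything else,
            // rather than trying to strip "bad" characters out.
            return candidate != null &&
                   Regex.IsMatch(candidate, @"^ORD-\d{6,10}$");
        }

        static void Main()
        {
            Console.WriteLine(IsValidOrderId("ORD-0012345"));           // True
            Console.WriteLine(IsValidOrderId("ORD-1'; DROP TABLE --")); // False
        }
    }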

Design to fail, and fail securely

Application failures will happen, whether they are the result of attacks or normal operating calamities like hardware failures or system bugs. The practical approach to this problem is for application developers to plan for failure and implement designs that ensure failures happen both gracefully and securely. As discussed in Chapter 5, redundancy in design can help overcome unanticipated failures. However, ensuring that your application code is prepared for failure and does not accidentally offer elevated permissions or disclose sensitive data during a failure scenario requires deliberate implementation by application developers.
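
A simple way to express "fail securely" in code is to make denial the default answer and keep that default in place when something goes wrong. The sketch below illustrates the pattern with a hypothetical AuthorizationStore type that simulates an outage.

    using System;

    // A hypothetical authorization store used only to illustrate the pattern;
    // here it simply simulates an outage.
    static class AuthorizationStore
    {
        public static bool CheckAccess(string userName, string resource)
        {
            throw new InvalidOperationException("Authorization store unavailable.");
        }
    }

    class FailClosedSketch
    {
        // Deny access unless the authorization check explicitly succeeds; a
        // failure during the check must never fall through to "allowed".
        static bool IsAccessAllowed(string userName, string resource)
        {
            bool allowed = false; // fail closed: the default answer is "no"
            try
            {
                allowed = AuthorizationStore.CheckAccess(userName, resource);
            }
            catch (Exception)
            {
                allowed = false; // log the failure elsewhere; keep the secure default
            }
            return allowed;
        }

        static void Main()
        {
            Console.WriteLine(IsAccessAllowed("jane.doe", "PayrollReport")); // False
        }
    }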

Handle errors and exceptions securely

Imagine making an online purchase and, upon entering your credit card information and proceeding to the next step, encountering an application error. In the error message, you note a suitably cryptic error code accompanied by your credit card number in plain text. How confident would you feel about the data-handling practices of the online merchant? If all confidential information were handled effectively, there would be a lot less fraud in the world. Application developers need to handle all transaction failures with the same security and privacy practices that are used in handling successful application transactions. Therefore, developers should make sure that precautions are taken to protect sensitive data in exception messages, event logs, and debug sessions.
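
The sketch below is one hedged interpretation of this guidance: a hypothetical payment routine logs only a masked card number and a correlation reference, and surfaces a generic, non-revealing message to the user.

    using System;

    class SecureErrorHandlingSketch
    {
        static void ChargeCard(string cardNumber, decimal amount)
        {
            try
            {
                // Hypothetical payment gateway call that fails.
                throw new InvalidOperationException("Gateway timeout");
            }
            catch (InvalidOperationException ex)
            {
                // Log only a masked card number and a correlation reference,
                // never the full card number or other sensitive data.
                string reference = Guid.NewGuid().ToString("N");
                string maskedCard = "****" + cardNumber.Substring(cardNumber.Length - 4);
                Console.Error.WriteLine("Payment failure {0} for card {1}: {2}",
                    reference, maskedCard, ex.Message);

                // Surface a generic, non-revealing message to the user.
                throw new ApplicationException(
                    "Your payment could not be processed. Reference: " + reference);
            }
        }

        static void Main()
        {
            try
            {
                ChargeCard("4111111111111111", 25.00m);
            }
            catch (ApplicationException ex)
            {
                Console.WriteLine(ex.Message); // safe to display to the user
            }
        }
    }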

Implement least privilege

If you have ever run Microsoft SQL Server using the "sa" account, raise your hand. Many of us are guilty of not adhering to the principle of least privilege. Arguably, it is the simplest way to get our applications to work properly. However, it is also the most insecure. Applications should be designed to execute with the fewest privileges required to meet their requirements. This ensures that, if the application identity were to be exploited, the potential scope of damage would be minimized. It is best to incorporate this early in the development process so that developers can build and test their code using the permission set that will be applied in a production setting.
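
As a small, hedged example, the connection string below uses a hypothetical dedicated login ("OrderAppUser") that would be granted only the rights the application actually needs, rather than connecting as "sa"; the server, database, and login names are placeholders.

    using System.Data.SqlClient;

    class LeastPrivilegeSketch
    {
        static void Main()
        {
            // Hypothetical connection string: the application connects as a
            // dedicated, low-privileged login ("OrderAppUser") that has been
            // granted only the rights it needs (for example, EXECUTE on its
            // stored procedures), never as "sa".
            string connectionString =
                "Data Source=(local);Initial Catalog=OrderDb;" +
                "User ID=OrderAppUser;Password=<placeholder>;";

            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                // Developers build and test against this same restricted login
                // so that production behaves exactly like development.
                // connection.Open();
            }
        }
    }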

Implement privilege separation

Similar to the principle of least privilege, the principle of separation of privilege recommends separating application functionality that requires only minimal privileges, such as normal-use features, from functionality that requires elevated privileges, such as administrative or management features. This ensures that, if the primary application identity were to be compromised, minimal damage could be inflicted by the attacker.

Sanitize input

With cross-site scripting and SQL injection being two of the most widely perpetrated exploits of applications, it is critically important for application developers to focus a great deal of energy on sanitizing user input in their applications. User-created input should be considered evil until proven otherwise. Application developers should incorporate the appropriate countermeasures to ensure that all input data is interrogated and sanitized as a means to prevent script or SQL injection. The simplest way to accomplish this is to remove potentially offending markup from the input. In the case of cross-site scripting, Microsoft has released an encoding library called the Anti-Cross Site Scripting Library, which helps developers protect their Web-based applications from cross-site scripting attacks. The library can be downloaded from http://www.microsoft.com/downloads/details.aspx?FamilyId=EFB9C819-53FF-4F82-BFAF-E11625130C25&displaylang=en.
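
To make the point concrete, the following sketch queries a hypothetical Customers table with a parameterized SqlCommand so that user input is treated strictly as data, and HTML-encodes values before display; it assumes an already-open SqlConnection and references to System.Data and System.Web.

    using System;
    using System.Data.SqlClient;
    using System.Web;

    class InputSanitizationSketch
    {
        // Query a hypothetical Customers table. The parameterized command
        // ensures user input is never interpreted as executable SQL.
        static void FindCustomer(SqlConnection openConnection, string userSuppliedName)
        {
            using (SqlCommand command = new SqlCommand(
                "SELECT CustomerId, Name FROM Customers WHERE Name = @name",
                openConnection))
            {
                command.Parameters.AddWithValue("@name", userSuppliedName);
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // HTML-encode before rendering to guard against
                        // cross-site scripting in Web output.
                        Console.WriteLine(HttpUtility.HtmlEncode(reader.GetString(1)));
                    }
                }
            }
        }
    }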

Validate security coding best practices with FxCop

In addition to applying security design best practices to your application, it is also important to ensure that your code adheres to the .NET Framework design guidelines for security. Validating your application code against these best practices is quite simple using FxCop or the code analysis features within Visual Studio 2008. For example, FxCop has approximately 25 code analysis rules available out of the box that inspect application code for potential vulnerabilities. These rules include recommendations for sealing methods that satisfy private interfaces as well as making static constructors private. As we will review in Chapter 10, code analysis rules can be extremely useful in raising the quality (or security, in this case) of your code during the feature development period.
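
As a brief illustration of the kind of issue these rules catch, the hypothetical types below show the pattern flagged by the "seal methods that satisfy private interfaces" rule: a public, overridable method that implements an internal interface.

    using System;

    // Internal interface intended to be implemented only within this assembly.
    internal interface IWorkItem
    {
        void Execute();
    }

    public class WorkItem : IWorkItem
    {
        // A public, overridable implementation of an internal interface is the
        // pattern the rule flags: a derived type in another assembly could
        // override Execute and alter behavior that internal callers depend on.
        public virtual void Execute()
        {
            Console.WriteLine("Doing trusted work.");
        }

        // A typical fix is to make the implementation non-overridable, for
        // example by removing the virtual modifier or implementing the
        // interface explicitly:
        // void IWorkItem.Execute() { /* ... */ }
    }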

Incorporate security-focused code reviews

In the spirit of applying security best practices to application code, it is also helpful to incorporate security-focused code reviews. Typically, organizations assign an individual to be the Chief Security Officer or Security Architect. This person is responsible for security overall and will likely conduct security code reviews in addition to the threat modeling and threat assessments that have already been done. This is an important additional step in ensuring that vulnerabilities are not introduced into the application code.

Secure by Default

The second tenet of the SD3+C strategy is secure by default. The strategy of securing by default encourages developers to implement security-focused default settings for their applications. It is a widely held belief that most application users accept the application default settings during installation. By applying that rationale, we can assume that the default settings will be the configurations most commonly used in a production setting. Therefore, your users are likely to realize the most benefit when the default configurations provide the most security. The challenge to providing secure default configurations, you might imagine, is to ensure that the default configuration settings are both secure and user friendly. It is not appropriate to place the burden of securing the application on the user by asking him or her to alter configurations after installation. Therefore, application developers should ensure that default application configurations provide the most security for their users. Let’s consider a few specific approaches to ensuring secure defaults.

Install only necessary components by default

As we have discussed earlier in this chapter, it is important to reduce the overall attack surface of the application. Therefore, installing more components of the application than are required only increases the potential set of vulnerabilities. It is important that application developers be vigilant about reducing the number of components that are installed by default while also ensuring that the customer’s expectations for the desired feature set are met. To strike the right balance with the application user community, it is recommended that application development teams solicit feedback from average and power users and use that feedback to achieve the appropriate set of defaults.

Configure restrictive permissions by default

Securing by default is a principle that puts authorization in the control of the application user. Many commonly used applications, such as Windows Defender, employ this principle to ensure that the application does not take any liberties not afforded to it by the actual user. For example, Windows Defender builds a white list, or "allow list," of applications that are allowed to access the Internet by simply asking the user to grant access initially. This same practice is also visible in Windows Vista, where actions that require elevated permissions or affect system-level resources require an additional affirmation from the user. Although some may see this tactic as a suboptimal user experience, there are certainly more clever ways to implement it and achieve the same result. Regardless of implementation specifics, this practice does, in fact, create a much more secure runtime environment for users.

Secure in Deployment and Communication

The strategy of securing in deployment and communication primarily focuses on applying security processes and practices to the management of your application during run time, or rather, after it has been deployed. Despite any best effort made during the application development process to avert security bugs, inevitably there will be vulnerabilities discovered after the software releases. Therefore, the secure in deployment and communication tenet reminds application developers to ensure that processes and practices are in place to find and mitigate security issues after the application is in the hands of our users. Further, it recommends that development teams provide timely communication and remediation for any issues discovered post release. These practices not only provide ongoing support and security improvements for users of the software, but they also directly engage customers in the dialog about the importance of security. Let’s review the recommended best practices for implementing the secure in deployment and communication strategy for your applications.

Establish a support and bug remediation process

Every application development team requires a plan for supporting the application once it has been released to a production environment. This also includes establishing a process for addressing issues with the application as the issues get discovered or reported. Certain classes of application bugs, such as functional issues, may afford developers the luxury of patching the application on a predefined schedule. Security bugs often are the exception, however. Security vulnerabilities discovered after release require a rapid response from application development teams, but they also require a response that is effective. To accomplish this, teams should consider defining and publishing the processes and procedures they will follow to expedite security bug fixes as they are discovered.

Provide setup and configuration guidance to users

When discussing the secure by default tenet, we highlighted the importance of providing users with the most secure set of default configurations in the first-run experience of your application. This obviously benefits the larger percentage of application users, since most are likely to stick with the defaults. However, for the remaining, and arguably savviest, set of users, it is important to provide guidance about the security implications of enabling additional features. This can be accomplished by simply providing the appropriate level of documentation through readme files, white papers, or integrated help applications.

Adhere to compliance requirements

Security compliance requirements are becoming more and more prevalent in certain industries. For example, in the health-care industry, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 established specific security requirements for health insurance data handling and storage practices. While HIPAA standards and practices are defined at the governmental level, other private organizations have also established data security and handling guidelines. For instance, the Payment Card Industry Data Security Standard (PCI DSS) provides recommended security guidelines for processing card payments online. In each of these scenarios, there are clear guidelines and either requirements or recommendations for security compliance. Application developers should be aware of these requirements, work with their corporate attorneys to understand the implications from a business perspective, and ensure that the application is designed from the outset to handle specific needs. These requirements can and often do vary by industry, but it is imperative to understand the implications they can have on the security of the application or the application data.

Involve users in the security dialog

Although it may seem trivial, it is nonetheless important to have an open dialog about security with the users of your application. Proactively engaging users, discussing vulnerabilities, and educating them on the importance and relevance of security are critical to ensuring the best experience possible for your users. This type of engagement can also improve the perception of your application’s security. For example, despite the large investment in security in Windows Vista, it could be argued that it has been at times perceived by the user community as not being very secure. We know that all software has security flaws, but perhaps the perception of Windows Vista may have been different had users been more knowledgeable about the value that the software provides them. By contrast, though, if we all knew how much crime there is in the world, we might appreciate the value of law enforcement but never go outside. Therefore, it is important to engage users in the security dialog but not to create panic.

Establish a security response and communication plan

As we just mentioned, communication with the user community is important in establishing a relationship of trust with your users. If security vulnerabilities are discovered, it is best to respond to those vulnerabilities with communication and a remediation plan. For example, Microsoft has established the Microsoft Security Response Center (MSRC) as a means to identify, monitor, resolve, and respond to security vulnerabilities. One of its key responsibilities is to communicate in a timely manner with the user community, which in this case includes enterprise customers and highly skilled technical users. It enables this communication through its blog and other Web-centric communication mechanisms like e-mail and Really Simple Syndication (RSS) feeds. Additionally, Microsoft combines this communication with updates to its software via the Windows Update service. This allows Microsoft to provide software updates to the user community quickly, which ensures that vulnerabilities are mitigated as soon as possible. While this scenario may not apply to all application developers, it nevertheless illustrates the importance of response and communication around security vulnerabilities, especially in an environment as fluid as the Internet, where new security issues are being discovered every day.

As you have read, the SD3+C provides a great framework of security principles that helps to distill a broad set of best practices into a simple, organized model. Although a number of the principles and practices we discussed earlier in this chapter are applicable to a broad range of software development technologies, this is a book focused predominantly on managed code. Therefore, it is important that we review the key security principles of the .NET Framework and the flexibility it provides to application developers to secure their applications.
