Chapter 1

Secure Software Concepts

1.1 Introduction

Ask any architect and they are likely to agree with renowned author Thomas Hemerken (better known as Thomas à Kempis) on his famous quote, “the loftier the building, the deeper the foundation must be laid.” For superstructures to withstand the adversarial onslaught of natural forces, they must stand on a very solid and strong foundation. The same holds for software: hack-resilient software reduces the likelihood of a successful attack and mitigates the extent of damage when an attack occurs. For software to be secure and hack resilient, it must factor in secure software concepts. These concepts are foundational and should be incorporated into the design, development, and deployment of secure software.

1.2 Objectives

As a Certified Secure Software Lifecycle Professional (CSSLP), you are expected to:

  • Understand the concepts and elements of what constitutes secure software.
  • Be familiar with the principles of risk management as they pertain to software development.
  • Know how to apply information security concepts to software development.
  • Know the various design aspects that need to be taken into consideration to architect hack-resilient software.
  • Understand how policies, standards, methodologies, frameworks, and best practices interplay in the development of secure software.
  • Be familiar with regulatory, privacy, and compliance requirements for software and the potential repercussions of noncompliance.
  • Understand security models and how they can be used to architect hack-resilient software.
  • Know what trusted computing is and be familiar with mechanisms and related concepts of trusted computing.
  • Understand security issues that need to be considered when purchasing or acquiring software.

This chapter will cover each of these objectives in detail. It is imperative that you fully understand not just what these secure software concepts are but also how to apply them in the software that your organization builds or buys.

1.3 Holistic Security

A few years ago, security was about keeping the bad guys out of your network. Network security relied extensively on perimeter defenses such as firewalls, demilitarized zones (DMZ), and bastion hosts to protect applications and data within the organization’s network. These perimeter defenses remain necessary and critical, but with globalization and the changing way we do business today, where external parties need access to our internal systems and applications, the boundaries that demarcated internal systems and applications from external ones are thinning and vanishing. This warrants that the hosts (systems) on which our software runs be even more closely guarded and secured. Opening our networks to allow access securely requires that our applications (software) be hardened, in addition to the network or perimeter security controls. The need is for secure applications running on secure hosts (systems) in secure networks. The need is for holistic security, which is the first and foremost software security concept one must be familiar with.

It is pivotal to recognize that software is only as secure as its weakest link. Today, software is rarely deployed as a stand-alone business application. It is often complex, running on host systems that are interconnected with several other systems on a network. A weakness (vulnerability) in any one of the layers may render all controls (safeguards and countermeasures) futile, so the application, host, and network must all be secured adequately and appropriately. For example, a Structured Query Language (SQL) injection vulnerability in the application can allow an attacker to compromise the database server (host) and, from the host, launch exploits that impact the entire network. Similarly, an open port on the network can lead to the discovery and exploitation of unpatched host systems and of vulnerabilities in applications. Secure software is characterized by securing applications, hosts, and networks holistically, so there is no weak link, i.e., no Achilles’ heel (Figure 1.1).

Figure 1.1

Image of Securing the network, hosts, and application layer

Securing the network, hosts, and application layer.

1.4 Implementation Challenges

Despite the recognition that the security of networks, systems, and software is critical to the operations and sustainability of an organization or business, the computing ecosystem today is plagued with a plethora of insecure networks and systems and, more particularly, insecure software. In today’s environment, where software is rife with vulnerabilities, as is evident in full disclosure lists, bug tracking databases, and hacking incident reports, software security cannot be overlooked, yet it is. Some of the primary reasons for the prevalence of insecure software are the following:

  • Iron triangle constraints
  • Security as an afterthought
  • Security versus usability

1.4.1 Iron Triangle Constraints

From the time an idea to solve a business problem with a software solution is born to the time that solution is designed, developed, and deployed, there is a need for time (schedule), resources (scope), and cost (budget). Resources (people) with appropriate skills and technical knowledge are not always readily available, and they are costly. The defender is expected to remain vigilant 24/7, guarding against all attacks while being constrained to play by the rules of engagement, whereas the attacker has the upper hand: the attacker needs to exploit just one weakness and can strike at any time without having to play by the rules. Additionally, depending on your business model or type of organization, software development can involve many stakeholders. To say the least, software development in and of itself is a resource-, schedule- (time-), and budget-intensive process. Incorporating security into the software is then seen as doing “more” with what is already deemed “less” or “insufficient.” Constraints in schedule, scope, and budget (the components of the iron triangle, as shown in Figure 1.2) are often the reasons why security requirements are left out of software. If the software development project’s scope, schedule (time), and budget are rigidly defined (as is often the case), there is little to no room to incorporate even basic, let alone additional, security requirements, and what is typically overlooked are the elements of software security.

Figure 1.2

Diagram of Iron triangle

Iron triangle.

1.4.2 Security as an Afterthought

Developers and management tend to think that security does not add any business value because it is not easy to show a one-to-one return on security investment (ROSI). Iron triangle constraints often lead to add-on security, wherein secure features are bolted on rather than built into the software. It is important that secure features be built into the software instead of added on at a later stage, because it has been shown that the cost to fix insecure software early in the software development life cycle (SDLC) is significantly lower than the cost of addressing the same issue later, as illustrated in Figure 1.3. Addressing vulnerabilities just before a product is released is very expensive.

Figure 1.3

Graph of Relative cost of fixing code issues at different stages of the SDLC

Relative cost of fixing code issues at different stages of the SDLC.

1.4.3 Security versus Usability

Another reason it is a challenge to incorporate secure features into software is that doing so is viewed as rendering the software complex, restrictive, and unusable. For example, suppose the human resources organization needs to view employee payroll data, and the software development team is asked to develop an intranet Web application that human resources personnel can access. When the development team consults the security consultant, the consultant recommends that such access be granted only to those who are authenticated and authorized and that all access requests be logged for review purposes. The consultant also advises the team to ensure that the authentication mechanism uses passwords that are at least 15 characters long, require upper- and lowercase characters, include a mix of alphanumeric and special characters, and are reset every 30 days.

Once designed and developed, the software is deployed for use by the human resources organization. It quickly becomes apparent that the human resources personnel are writing their complex passwords on sticky notes and leaving them in insecure locations such as desk drawers or, in some cases, on their system monitors. They also complain that the software is not usable because each access request takes a long time to process, since every request is not only checked for authorization but also audited (logged).

Incorporating security does come at some cost to performance and usability, and this is especially true when the software design does not factor in the concept known as psychological acceptability. Software security must be balanced with usability and performance. We will cover psychological acceptability in detail, along with many other design concepts, in Chapter 3.
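
The password policy in this example can be expressed as a short validation routine. The following is a minimal Python sketch, assuming the hypothetical rules from the example (15-character minimum, mixed case, digits, and special characters); it illustrates the policy, not a recommended one.

```python
import string

# Minimal validator for the hypothetical policy above: 15+ characters,
# upper- and lowercase letters, digits, and at least one special character.
def meets_policy(password: str) -> bool:
    if len(password) < 15:
        return False
    has_upper = any(c.isupper() for c in password)
    has_lower = any(c.islower() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    return has_upper and has_lower and has_digit and has_special

print(meets_policy("Tr0ub4dor&3"))             # False: too short
print(meets_policy("Correct-Horse-B4ttery!"))  # True
```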

1.5 Quality and Security

In a world driven by the need for and assurance of quality products, it is important to recognize the distinction between quality and security, particularly as they apply to software products. Almost all software products go through a quality assurance (or testing) phase before being released or deployed, wherein the functionality of the software, as required by the business client or customer, is validated and verified. Quality assurance checks indicate that the software is reliable (functioning as designed) and functional (meeting the requirements specified by the business owner). Following Total Quality Management (TQM) processes such as Six Sigma (6σ) or certifying software against International Organization for Standardization (ISO) quality standards is important for creating good quality software and achieving a competitive edge in the marketplace, but such quality validations and certifications do not necessarily mean that the software product is secure. A software product that is secure adds to the quality of that software, but the converse is not necessarily true.

It is also important to recognize that the presence of security functionality in software may allow it to support quality certification standards, but it does not necessarily imply that the software is secure. Vendors often tout the presence of security functionality in their products in order to differentiate themselves from their competitors, and while this may be true, it must be understood that the mere presence of security functionality in the vendor’s software does not make it secure. This is because security functionality may not be configured to work in your operating environment, or when it is, it may be implemented incorrectly. For example, software that has the functionality to turn on logging of all critical and administrative transactions may be certified as a quality secure product, but unless the option to log these transactions is turned on within your computing environment, it has added nothing to your security posture. It is therefore extremely important that you verify the claims of the vendors within your computing environment and address any concerns you may come across before purchase. In other words, trust, but always verify. This is vital when evaluating software whether you are purchasing it or building it in-house.

1.6 Security Profile: What Makes a Software Secure?

As mentioned, in order to develop hack-resilient software, it is important to incorporate security concepts in the requirements, design, code, release, and disposal phases of the SDLC.

The makeup of your software from a security perspective is the security profile of your software, and it includes the incorporation of these concepts in the SDLC. As Figure 1.4 illustrates, some of these concepts can be classified as core security concepts, whereas others are general or design security concepts. However, these security concepts are essential building blocks for secure software development. In other words, they are the bare necessities that need to be addressed and cannot be ignored.

Figure 1.4

Chart of Security profile

Security profile.

This section will cover these security concepts at an introductory level. They will be expanded in subsequent sections within the scope of each domain.

1.6.1 Core Security Concepts

1.6.1.1 Confidentiality

Prevalent in the industry today are serious incidents of identity theft and data breaches that can be directly tied to the lack or insufficiency of information disclosure protection mechanisms. When you log into your personal bank account, you expect to see only your information and not anyone else’s. Similarly, you expect your personal information not to be available to anyone who requests it. Confidentiality is the security concept that deals with protection against unauthorized information disclosure; it governs the viewing of data. Not only does confidentiality assure the secrecy of data, it also helps maintain data privacy.
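
As one illustration of disclosure protection, the sketch below encrypts data at rest using the Fernet recipe from the third-party Python cryptography package; the data are hypothetical, and a real deployment would manage the key in a secure key store rather than generate it inline.

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# Protect account data against unauthorized disclosure while at rest.
key = Fernet.generate_key()   # in practice, obtain this from a secure key store
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"account: 1234, balance: 5000.00")
plaintext = cipher.decrypt(ciphertext)  # recoverable only with the key
```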

1.6.1.2 Integrity

Software that is reliable, i.e., performs as intended, and that protects against improper data alteration is known as resilient software. Integrity is the measure of software resiliency, and it pertains to both the alteration or modification of data and the reliable functioning of the software.

When you use an online bill payment system to pay your utility bill, you expect that upon initiating a transfer of payment from your bank to the utility service provider, the amount you have authorized is exactly the amount debited from your account and credited to the service provider’s account. Not only do you expect the software that handles this transaction to work as intended, but you also expect that the amount you specified is not altered by anyone or anything else. The software must debit the account you specify (and no other account) and credit a valid account owned by the service provider (and no one else). If you have authorized a payment of $129.00, the amount debited from your account must be exactly $129.00, and the amount credited to the service provider’s account must not be $12.90 or $1,290.00, but $129.00 as well. From the time a data transaction commences until the time the data come to rest or are destroyed, they must not be altered by anyone or any process that is not authorized.

So integrity of software has two aspects to it. First, it must ensure that the data that are transmitted, processed, and stored are as accurate as the originator intended and second, it must ensure that the software performs reliably.
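
The data-accuracy aspect can be illustrated with a keyed hash: if any byte of the payment instruction changes in transit, verification fails. This is a minimal Python sketch with a hypothetical shared key, not a complete transaction protocol.

```python
import hashlib
import hmac

# Detect unauthorized alteration of a payment instruction in transit.
key = b"shared-secret-key"  # hypothetical key shared by sender and receiver
message = b"debit=129.00;from=ACCT-A;to=ACCT-B"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag; any change to the amount or accounts
# (e.g., 129.00 altered to 1290.00) produces a mismatch.
def is_unaltered(key: bytes, message: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(is_unaltered(key, message, tag))                                   # True
print(is_unaltered(key, b"debit=1290.00;from=ACCT-A;to=ACCT-B", tag))    # False
```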

1.6.1.3 Availability

Availability is the security concept that is related to the access of the software or the data or information it handles. Although the overall purpose of a business continuity program (BCP) may be to ensure that downtime is minimized and that the impact upon business disruption is minimal, availability as a concept is not merely a business continuity concept but a software security concept as well. Access must take into account the “who” and “when” aspects of availability. First, the software or the data it processes must be accessible by only those who are authorized (who) and, second, it must be accessible only at the time (when) that it is required. Data must not be available to the wrong people or at the wrong time.

A service level agreement (SLA) is an example of an instrument that can be used to explicitly state and govern availability requirements for business partners and clients. Load balancing and replication are mechanisms that can be used to ensure availability. Software can also be developed with monitoring and alerting functionality that can detect disruptions and notify appropriate personnel to minimize downtime, once again ensuring availability.
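
As a rough illustration of such monitoring and alerting functionality, the Python sketch below polls a hypothetical health endpoint and raises an alert on disruption; the URL, polling interval, and notification mechanism are all assumptions.

```python
import time
import urllib.request

HEALTH_URL = "http://example.internal/health"  # hypothetical endpoint

def is_available(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor(url: str = HEALTH_URL, interval: int = 60) -> None:
    # Detect disruptions and notify appropriate personnel to minimize downtime.
    while True:
        if not is_available(url):
            # Placeholder for notifying on-call staff (e.g., paging, email).
            print("ALERT: service unreachable")
        time.sleep(interval)
```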

1.6.2 General Security Concepts

In this section, we will cover general security concepts that aim at mitigating disclosure, alteration, and destruction threats, thereby ensuring the core security concepts of confidentiality, integrity, and availability.

1.6.2.1 Authentication

Software is a conduit to an organization’s internal databases, systems, and network, so it is critically important that access to internal sensitive information be granted only to valid entities. Authentication is the security concept that answers the question, “Are you who you claim to be?” It not only ensures that the identity of an entity (person or resource) is specified in the format the software expects, but it also validates or verifies the identity information that has been supplied. In other words, it assures the claim of an entity by verifying identity information.

Authentication follows identification, in the sense that a person or process must be identified before it can be validated or verified. The identifying information that is supplied is also known as credentials or claims. The most common form of credential is a combination of username (or user ID) and password, but authentication can be achieved using any one, or a combination, of the following three factors:

  1. Knowledge: The identifying information provided in this mechanism for validation is something that one knows. Examples of this type of authentication include username and password, pass phrases, or a personal identification number (PIN).
  2. Ownership: The identifying information provided in this mechanism for validation is something that you own or have. Examples of this type of authentication include tokens and smart cards.
  3. Characteristic: The identifying information provided in this mechanism for validation is something you are. The best-known example of this type of authentication is biometrics. The identifying information supplied in characteristic-based authentication such as biometrics is a digitized representation of physical traits or features. Blood vessel patterns of the retina, fingerprints, and iris patterns are common physical features that are used, but biometrics has limitations, because physical characteristics can change with medical maladies (Schneier, 2000). Physical actions that can be digitized, such as signatures (pressure and slant), can also be used in characteristic-based authentication.

Multifactor authentication, the use of more than one factor to authenticate, is considered more secure than single-factor authentication, in which only one of the three factors (knowledge, ownership, or characteristic) is used to validate credentials. Multifactor authentication is recommended for validating access to systems containing sensitive or critical information. The Federal Financial Institutions Examination Council (FFIEC) guidance on authentication in an Internet banking environment highlights that the use of single-factor authentication as the only control mechanism in such an environment is inadequate and that additional compensating controls and layered security, including multifactor authentication, are warranted.
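
The sketch below illustrates two-factor validation in Python: a knowledge factor (a salted password hash) combined with an ownership factor (a time-based one-time password per RFC 6238, the algorithm behind common authenticator tokens). The stored hash, salt, and token secret are hypothetical inputs.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # RFC 6238 time-based one-time password; secret is a base32 string
    # as provisioned to the user's token or authenticator app.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def authenticate(password: str, code: str, stored_hash: bytes,
                 salt: bytes, secret_b32: str) -> bool:
    # Knowledge factor: compare a salted hash, never the clear-text password.
    supplied = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    knows = hmac.compare_digest(supplied, stored_hash)
    # Ownership factor: the code from the user's token must match.
    has = hmac.compare_digest(code, totp(secret_b32))
    return knows and has
```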

1.6.2.2 Authorization

Just because an entity’s credentials can be validated does not mean that the entity should be given access to all of the resources it requests. For example, you may be able to log into the accounting software within your company but still be unable to access the human resources payroll data, because you do not have the rights or privileges to do so. Authorization is the security concept in which access to objects is controlled based on the rights and privileges granted to the requestor by the owner of the data or system, or according to a policy. Authorization decisions are layered on top of authentication and must never precede it; i.e., you do not authorize before you authenticate. The exception is when business requirements call for giving access to anonymous users (those who are not authenticated), in which case the authorization decision may be uniformly restrictive for all anonymous users.

The requestor is referred to as the subject, and the requested resource is referred to as the object. The subject may be human or nonhuman, such as a process or another object. The subject may also be categorized by privilege level, such as administrative user, manager, or anonymous user. Examples of an object include a table in the database, a file, or a view. A subject’s actions on an object, such as creation, reading, updating, or deletion (CRUD), depend on the privilege level of the subject. For example, an administrative user may be able to create, read, update, and delete (CRUD) data; an anonymous user may be allowed only to read (R) the data; and a manager may be allowed to create, read, and update (CRU) data.
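
That CRUD-by-privilege-level example can be expressed as a simple permission matrix. The following is a minimal Python sketch; the role names and action strings are assumptions for illustration.

```python
# Hypothetical subject/object permission matrix mirroring the example above:
# administrators get CRUD, managers CRU, anonymous users R only.
PERMISSIONS = {
    "administrator": {"create", "read", "update", "delete"},
    "manager": {"create", "read", "update"},
    "anonymous": {"read"},
}

def is_authorized(subject_role: str, action: str) -> bool:
    # Called only after authentication has established the subject's role.
    return action in PERMISSIONS.get(subject_role, set())

assert is_authorized("manager", "update")
assert not is_authorized("anonymous", "delete")
```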

1.6.2.3 Auditing/Logging

Consider the following scenario. You find that the price of a product in the online store differs from the price in your brick-and-mortar store, and you are unsure how this discrepancy came about. Preliminary research determines that the screen used to update product prices for the online store is not tightly controlled, and any authenticated user within your company can change a price. Unfortunately, there is no way to tell who made the price changes, because no information is logged when pricing information is updated. Auditing is the security concept in which privileged and critical business transactions are logged. This logging can be used to build a history of events, which in turn can be used for troubleshooting and forensic evidence. In the scenario above, if the authenticated credentials of the logged-on user who made the price changes were logged along with a timestamp and the before and after prices, a history of the changes could be built to track down the user who made them. Auditing is a passive detective control mechanism.

At a bare minimum, audit fields that capture who (user or process) did what (CRUD), where (file or table), and when (created or modified timestamp), along with a before and after snapshot of the information that was changed, must be logged for all administrative (privileged) or critical transactions as defined by the business. Additionally, newer audit logs must always be appended and must never overwrite older logs. Depending on the retention period of these logs, this can result in a capacity or space issue, which needs to be planned for. The retention period of audit logs must be based on regulatory requirements or organizational policy, and where the organizational retention policy conflicts with regulatory requirements, the regulatory requirements must be followed and the organizational policy appropriately amended to prevent future conflicts.
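
A minimal Python sketch of such an append-only audit record follows; the field names and file-based store are assumptions, and a production system would write to a protected, centralized log.

```python
import json
import time

# Append-only audit record capturing who, what, where, and when,
# plus before/after snapshots of the changed information.
def audit(user: str, action: str, target: str, before, after,
          log_path: str = "audit.log") -> None:
    record = {
        "who": user,
        "what": action,   # e.g., "UPDATE"
        "where": target,  # e.g., the "product_price" table
        "when": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "before": before,
        "after": after,
    }
    # Open in append mode so newer entries never overwrite older logs.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

audit("jdoe", "UPDATE", "product_price",
      {"price": 19.99}, {"price": 24.99})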

Nonrepudiation addresses the deniability of actions taken by a user or by the software on behalf of the user. Accountability to ensure nonrepudiation can be accomplished by auditing when it is used in conjunction with identification. In the price change scenario, if the software had logged the price change action and the identity of the user who made the change, that individual could be held accountable and would have little room to repudiate or deny the action, thereby assuring nonrepudiation.

Auditing is a detective control, and it can be a deterrent control as well. Because audit logs can be used to determine the history of actions taken by a user or by the software itself, auditing or logging is a passive detective control. The foreknowledge of being audited could deter a user from taking unauthorized actions, but it does not necessarily prevent them from doing so.

It is understood that auditing is a very important security concept that is often not given the attention it deserves when building software. However, there are certain challenges with auditing as well that warrant attention and addressing. They are:

  1. Performance impact
  2. Information overload
  3. Capacity limitation
  4. Configuration interfaces protection
  5. Audit log protection

Auditing can have an impact on the performance of the software. As noted earlier, a trade-off decision is usually necessary when it comes to security versus usability. If your software is configured to log every administrative and critical business transaction, then each time those operations are performed, the time taken to log the actions can have a bearing on the performance of the software.

Additionally, the amount of data logged may result in information overload; without proper correlation and pattern-discerning abilities, administrative and critical operations may be overlooked, reducing the security that auditing provides. It is therefore imperative to log only the needed information at the right frequency. A best practice is to classify logs as they are recorded using a bucketing scheme, so that you can easily sort through large volumes of logs when determining historical actions. An example of a bucketing scheme is “Informational Only,” “Administrative,” “Business Critical,” “Error,” “Security,” “Miscellaneous,” etc. The frequency for reviewing the logs needs to be defined by the business and is usually dependent on the value to the business of the software or the data it transmits, processes, and stores.
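
One simple way to realize such a bucketing scheme is a named logger per classification, as in the Python sketch below; the bucket names follow the example above, and the log format is an assumption.

```python
import logging

# One logger per classification so large volumes of logs can be
# sorted by category during review.
BUCKETS = ["informational", "administrative", "business_critical",
           "error", "security", "miscellaneous"]

logging.basicConfig(format="%(name)s|%(asctime)s|%(message)s",
                    level=logging.INFO)
loggers = {bucket: logging.getLogger(bucket) for bucket in BUCKETS}

loggers["security"].info("3 failed login attempts for user jdoe")
loggers["business_critical"].info("price updated: SKU-42 19.99 -> 24.99")
```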

In addition to information overload, logging all information can result in capacity and space issues for the systems that hold the logs. Proper capacity planning and archival requirements need to be predetermined to address this.

Furthermore, the configuration interfaces used to turn audit logs on or off, and to select the types of logs to audit, must also be designed, developed, and protected. Failure to protect the audit log configuration interfaces can result in an attack going undetected. For example, if the interface to turn auditing on or off is left unprotected, an attacker may be able to turn logging off, perform the attack, and turn logging back on after completing the malicious activity. In this case, nonrepudiation is not ensured. It must therefore be understood that the configuration interfaces for auditing can potentially increase the attack surface area and seriously hamper nonrepudiation abilities.

Finally, the audit logs themselves are to be deemed a business asset and can be susceptible to information disclosure attacks. One must be diligent about what to log and about the format of the log itself. For example, if the business requirement for your software is to log authentication failure attempts, it is recommended that you not log the password value that was supplied, as the failure may have resulted from an inadvertent and innocuous user error. Should you need to log the password value for troubleshooting reasons, it is advisable to hash the password before recording it, so that even if someone gains unauthorized access to the logs, sensitive information is still protected.
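
The following Python sketch illustrates that practice: the supplied password is hashed before it is written to the log. A plain digest is shown for simplicity; a salted or keyed hash would better resist guessing attacks.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)

# Record a digest of the supplied value, never the clear-text password,
# so a leaked log discloses nothing directly useful.
def log_failed_login(username: str, supplied_password: str) -> None:
    digest = hashlib.sha256(supplied_password.encode()).hexdigest()
    logging.info("login failed for %s; supplied-password-sha256=%s",
                 username, digest)
```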

1.6.2.4 Session Management

Just because someone is authenticated and authorized to access system resources does not mean that security controls can be lax after an authenticated session is established, because a session can be hijacked. Session hijacking happens when an attacker impersonates the identity of a valid user and inserts themselves into the middle of an existing session, routing information from the user to the system and from the system to the user through themselves; this is also known as a man-in-the-middle (MITM) attack. It can lead to information disclosure (a confidentiality threat), alteration (an integrity threat), or denial of service (an availability threat). Session management is the security concept that aims at mitigating session hijacking or MITM attacks. It requires that each session be unique, through the issuance of unique session tokens, and that user activity be tracked so that someone attempting to hijack a valid session is prevented from doing so.
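
A minimal Python sketch of session management follows: tokens are generated from a cryptographically strong source, are unique per session, and expire after inactivity. The timeout value and the in-memory store are assumptions.

```python
import secrets
import time

SESSION_TTL_SECONDS = 15 * 60  # assumed idle timeout
_sessions: dict[str, dict] = {}

def create_session(user: str) -> str:
    token = secrets.token_urlsafe(32)  # cryptographically strong and unique
    _sessions[token] = {"user": user, "last_seen": time.time()}
    return token

def validate_session(token: str) -> bool:
    session = _sessions.get(token)
    if session is None:
        return False
    if time.time() - session["last_seen"] > SESSION_TTL_SECONDS:
        del _sessions[token]             # expired sessions cannot be replayed
        return False
    session["last_seen"] = time.time()   # track user activity
    return True
```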

1.6.2.5 Errors and Exception Management

Errors and exceptions are inevitable when dealing with software. Whereas errors may result from user ignorance or software breakdown, exceptions arise when the software behaves in an unintended or unreliable manner and the condition is not handled explicitly. An example of a user error is a user mistyping his user ID when trying to log in. If the software expects the user ID in a numeric format and the user types alpha characters into that field, the software operation results in a data type conversion exception. If this exception is not explicitly handled, the user is informed of the exception, and in many cases the entire exception stack is disclosed. This can result in information disclosure, potentially revealing the software’s internal architectural details and, in some cases, even data values. It is a secure software best practice to ensure that error and exception messages are nonverbose and explicitly specified in the software. An example of a verbose error message is “User ID did not match” or “Password is incorrect,” instead of the nonverbose or laconic equivalent, “Login invalid.”

Additionally, upon errors or exceptions, the software is to fail to a more secure state. Organizations are tolerant of user errors, which are inevitable, permitting a predetermined number of user errors before recording a security violation. This predetermined number is established as a baseline and is referred to in operations as the clipping level. An example: after three failed incorrect PIN entries, your account is locked out until an out-of-band process unlocks it or a certain period has elapsed. The software should never fail insecure, which would be characterized by the software allowing access after the three failed PIN entries. Errors and exception management is the security concept that ensures that unintended and unreliable behavior of the software is explicitly handled, maintaining a secure state and protecting against confidentiality, integrity, and availability threats.
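
The clipping level behavior described above can be sketched as follows in Python; the threshold of three attempts and the generic “Login invalid” message come from the example, while the injected check_pin callable stands in for the real credential check.

```python
# Generic ("laconic") failure message plus a lockout after three failed
# attempts: the software fails secure rather than insecure.
CLIPPING_LEVEL = 3
failed_attempts: dict[str, int] = {}
locked_accounts: set[str] = set()

def login(username: str, pin: str, check_pin) -> str:
    if username in locked_accounts:
        return "Login invalid"          # reveal nothing about lockout state
    if check_pin(username, pin):
        failed_attempts.pop(username, None)
        return "Welcome"
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    if failed_attempts[username] >= CLIPPING_LEVEL:
        locked_accounts.add(username)   # record as a security violation
    return "Login invalid"              # never "Password is incorrect"
```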

1.6.2.6 Configuration Parameters Management

Software is made up of code and parameters that need to be established for it to run. These parameters may include variables that must be initialized in memory for the software to start, connection strings to backend databases, or cryptographic keys for secrecy, to name just a few. These configuration parameters are part of the software’s makeup and need to be not only configured but also protected, because they are an asset valuable to the business. What good is it to lock the doors and windows in an attempt to secure the valuables within the house when you leave the key under the mat on the front porch? Configuration management, in the context of software security, is the security concept that ensures that appropriate levels of protection are provided to the configurable parameters the software needs to run. Note that we will also cover configuration management as it pertains to IT services in Chapters 6 and 7.
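
As one common protection, configuration parameters can be supplied through the environment (or a secrets store) rather than hard-coded in clear text with the code. A minimal Python sketch, with assumed variable names, follows.

```python
import os

# Read the connection string and keys from the environment instead of
# hard-coding them in clear text inline with the code.
DB_CONNECTION = os.environ.get("APP_DB_CONNECTION")  # assumed variable name
API_KEY = os.environ.get("APP_API_KEY")

if DB_CONNECTION is None or API_KEY is None:
    # Fail secure: refuse to start rather than fall back to insecure defaults.
    raise RuntimeError("required configuration parameters are not set")
```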

1.6.3 Design Security Concepts

In this section we will discuss security concepts that need to be considered when designing and architecting software. These concepts are defined in the following. We will expand on each of these concepts in more concrete detail in Chapter 3.

  • Least Privilege: A security principle in which a person or process is given only the minimum level of access rights (privileges) necessary to complete an assigned operation. These rights must be granted only for the minimum amount of time necessary to complete the operation.
  • Separation of Duties (or) Compartmentalization Principle: Also known as the compartmentalization principle or separation of privilege, separation of duties is a security principle stating that the successful completion of a single task depends on two or more conditions being met; any one condition by itself is insufficient to complete the task.
  • Defense in Depth (or) Layered Defense: Also known as layered defense, defense in depth is a security principle in which single points of complete compromise are eliminated or mitigated by incorporating a series of multiple layers of security safeguards and risk-mitigation countermeasures.
  • Fail Secure: A security principle that aims to maintain confidentiality, integrity, and availability by defaulting to a secure state and rapidly recovering software resiliency upon design or implementation failure. In the context of software security, fail secure can be used interchangeably with fail safe.
  • Economy of Mechanisms: In layman’s terms, this is the keep-it-simple principle: the likelihood of vulnerabilities increases with the complexity of the software’s architectural design and code. By keeping the software design and implementation details simple, the attackability, or attack surface, of the software is reduced.
  • Complete Mediation: A security principle that ensures that authority is not circumvented on subsequent requests of an object by a subject, by checking for authorization (rights and privileges) upon every request for the object. In other words, access requests by a subject for an object are completely mediated each time, every time (a minimal sketch follows this list).
  • Open Design: The open design security principle states that the implementation details of the design should be independent of the design itself, which can remain open, unlike security by obscurity, wherein the security of the software depends on the obscurity of the design itself. When software is architected using the open design concept, a review of the design itself will not compromise the safeguards in the software.
  • Least Common Mechanisms: The principle of least common mechanisms disallows the sharing of mechanisms that are common to more than one user or process if the users or processes are at different privilege levels. For example, using the same function to retrieve the bonus amount of an exempt employee and a nonexempt employee would not be allowed; in this case, the calculation of the bonus is the common mechanism.
  • Psychological Acceptability: This security principle aims at maximizing the usage and adoption of the security functionality in the software by ensuring that the functionality is easy to use and, at the same time, transparent to the user. Ease of use and transparency are essential requirements for this principle to be effective.
  • Leveraging Existing Components: This security principle focuses on ensuring that the attack surface is not increased and that no new vulnerabilities are introduced, by promoting the reuse of existing software components, code, and functionality.
  • Weakest Link: You have heard the saying, “a chain is only as strong as its weakest link.” This security principle states that the hack resiliency of your software depends heavily on the protection of its weakest components, be they code, services, or interfaces. A breakdown at the weakest link results in a security breach.
  • Single Point of Failure: This security principle ensures that your software is designed to eliminate any single source of complete compromise. Although similar to the weakest link principle, the distinguishing difference is that the weakest link need not be a single point of failure but could result from various weak sources. Usually, in software security, the weakest link is a superset of several single points of failure.
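
Here is the complete mediation sketch promised above: a minimal Python illustration in which authorization is checked on every request for the object, never cached from an earlier decision. The privilege names and the dictionary-based subject are assumptions.

```python
import functools

# Re-check authorization on every access to the decorated object.
def mediate(required_privilege: str):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(subject, *args, **kwargs):
            # Mediated each time, every time: no cached decisions.
            if required_privilege not in subject.get("privileges", set()):
                raise PermissionError("access denied")
            return func(subject, *args, **kwargs)
        return wrapper
    return decorator

@mediate("read:payroll")
def read_payroll(subject, employee_id):
    return {"employee": employee_id, "salary": "..."}  # placeholder data

alice = {"name": "alice", "privileges": {"read:payroll"}}
read_payroll(alice, 42)  # every call passes through the authorization check
```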

1.7 Security Concepts in the SDLC

Security concepts span the entire life cycle and need to be addressed in each phase. Software security requirements, design, development, and deployment must take all of these security concepts into account. A lack or insufficiency of attention in any one phase may render the efforts taken in other phases completely futile. For example, capturing requirements to handle disclosure protection (confidentiality) in the requirements gathering phase of your SDLC, but not designing confidentiality controls in the design phase, can result in information disclosure breaches.

Often these concepts are used in conjunction with one another, but they can also be used independently. It is important that none of these concepts be ignored, even when one is deemed not applicable or, in some cases, contradictory to another. For example, applying the economy of mechanisms concept by implementing single sign-on for simplified user authentication may directly conflict with the complete mediation design concept; the necessary architectural decisions must be made to address this without compromising the security of the software. In no situation can these concepts be ignored.

1.8 Risk Management

One of the key aspects of managing security is risk management. It must be recognized that the goal of risk management spans more than the mere protection of information technology (IT) assets; it is intended to protect the entire organization so that there is minimal to no disruption in the organization’s ability to accomplish its mission. Risk management processes include the preliminary assessment of the need for security controls and the identification, development, testing, implementation, and verification (evaluation) of those controls, so that the impact of any disruptive event is at an acceptable or risk-appropriate level. Risk management, in the context of software security, is the balancing act between the protection of IT assets and the cost of implementing software security controls so that risk is handled appropriately. The second revision of National Institute of Standards and Technology (NIST) Special Publication 800-64, entitled “Security Considerations in the System Development Life Cycle,” highlights that a prerequisite to a comprehensive strategy for managing risk to IT assets is to consider security during the SDLC. By addressing risk throughout the SDLC, one can avoid a lot of headaches upon release or deployment of the software.

1.8.1 Terminology and Definitions

Before we delve into the challenges with risk management as it pertains to software and software development, it is imperative that there is a strong fundamental understanding of terms and risk computation formulae used in the context of traditional risk management.

Some of the most common terms and formulae that a CSSLP must be familiar with are covered in this section. Some of the definitions used in this section are from NIST Risk Management Guide to Information Technology Systems Special Publication 800-30.

1.8.1.1 Asset

Assets are items that are valuable to the organization, the loss of which can potentially disrupt the organization’s ability to accomplish its mission. Assets may be tangible or intangible in nature. Tangible assets, as opposed to intangible assets, are those that can be perceived by the physical senses, and they can be more easily valued than intangible assets. Examples of tangible IT assets include networking equipment, servers, software code, and the data that are transmitted and stored by your applications. In the realm of software security, data are the most important tangible asset, second only to people. Examples of intangible assets include intellectual property rights such as copyrights, patents, and trademarks, as well as brand reputation. The loss of brand reputation can be disastrous for an organization, and recovery from such a loss may be nearly impossible. Arguably, company brand reputation is the most valuable intangible asset, and the loss of intangible assets may have more dire consequences than the loss of tangible ones; however, regardless of whether an asset is tangible or intangible, the risk of its loss must be assessed and appropriately managed. In threat modeling terminology, an asset is also referred to as an “object.” We will cover the subject/object matrix in the context of threat modeling in Section 3.3.3.2.

1.8.1.2 Vulnerability

A weakness or flaw that could be accidentally triggered or intentionally exploited by an attacker, resulting in a breach or breakdown of the security policy, is known as a vulnerability. Vulnerabilities can be evident in the process, design, or implementation of a system or software. Examples of process vulnerabilities include improper check-in and check-out procedures for software code, backup of production data to nonproduction systems, and incomplete revocation of access upon termination. The use of obsolete cryptographic algorithms such as the Data Encryption Standard (DES), failure to design for handling resource deadlocks, unhandled exceptions, and hard-coding database connection information in clear text (humanly readable form) inline with code are examples of design vulnerabilities. In addition to process and design vulnerabilities, weaknesses in software arise from the way the software is implemented. Examples of implementation vulnerabilities include accepting any user-supplied data and processing it without first validating it, revealing too much information in the event of an error, and not explicitly closing open connections to backend databases.

Some well-known and useful vulnerability tracking systems and vulnerability repositories that can be leveraged include the following:

  • U.S. Computer Emergency Readiness Team (US-CERT) Vulnerability Notes Database: The CERT vulnerability analysis project aims at reducing security risks due to software vulnerabilities in both developed and deployed software. In software that is being developed, they focus on vulnerability discovery and in software that is already deployed, they focus on vulnerability remediation. Newly discovered vulnerabilities are added to the Vulnerability Notes Database. Existing ones are updated as needed.
  • Common Vulnerability Scoring System (CVSS): As the name suggests, the CVSS is a system designed to rate IT vulnerabilities and help organizations prioritize security vulnerabilities.
  • Open Source Vulnerability Database: An independent, open-source database created by and for the security community, with the goal of providing accurate, detailed, current, and unbiased technical information on security vulnerabilities.
  • Common Vulnerabilities and Exposures (CVE): CVE is a dictionary of publicly known information security vulnerabilities and exposures. It is free for use and international in scope.
  • Common Weakness Enumeration (CWE™): This specification provides a common language for describing architectural, design, or coding software security weaknesses. It is international in scope, freely available for public use, and intended to provide a standardized and definitive “formal” list of software weaknesses. Categorizations of software security weaknesses are derived from software security taxonomies.

1.8.1.3 Threat

Vulnerabilities pose threats to assets. A threat is merely the possibility of an unwanted, unintended, or harmful event occurring. When the event occurs upon manifestation of the threat, the result is an incident. Threats can be classified as threats of disclosure, alteration, or destruction. Without proper change control processes in place, the possibility of disclosure exists if unauthorized individuals can check out sensitive code. The same threat of disclosure is possible when production data with real significance are backed up to a developer or test machine, when sensitive database connection information is hard-coded inline with code in clear text, or when error and exception messages are not handled properly. Lack of or insufficient input validation poses the threat of data alteration, resulting in a violation of software integrity. Insufficient load testing, stress testing, and code-level testing pose the threat of destruction or unavailability.

1.8.1.4 Threat Source/Agent

Anyone or anything that has the potential to make a threat materialize is known as the threat source or threat agent. Threat agents may be human or nonhuman. Examples of nonhuman threat agents in addition to nature that are prevalent in this day and age are malicious software (malware), such as adware, spyware, viruses, and worms. We will cover the different types of threat agents when we discuss threat modeling in Chapter 3.

1.8.1.5 Attack

Threat agents may intentionally cause a threat to materialize, or threats can occur as a result of plain user error or accidental discovery. When the threat agent actively and intentionally causes a threat to happen, it is referred to as an “attack,” and the threat agents are commonly referred to as “attackers.” In other words, the simplest definition of an attack is an intentional action attempting to cause harm. When an attack takes advantage of a known vulnerability, it is known as an “exploit”: the attacker exploits a vulnerability, causing harm and materializing the threat.

1.8.1.6 Probability

Also known as “likelihood,” probability is the chance that a particular threat can materialize. Because the goal of risk management is to reduce risk to an acceptable level, measuring the probability of an unintended, unwanted, or harmful event being triggered is important. Probability is usually expressed as a percentile, but because quantifying the likelihood of a threat is mostly done using best guesstimates or, sometimes, mere heuristic techniques, some organizations use qualitative categorizations, or buckets, such as high, medium, or low to express the likelihood of a threat occurring. Whether the expression is quantitative or qualitative, the chance of harm caused by a threat must be determined, or at the bare minimum understood.

1.8.1.7 Impact

The outcome of a materialized threat can vary widely: from very minor disruptions and inconveniences, to fines levied for lack of due diligence, to a breakdown in organizational leadership as a result of incarceration, to bankruptcy and the complete cessation of the organization. The extent or seriousness of the disruption to the organization’s ability to achieve its goals is referred to as the impact.

1.8.1.8 Exposure Factor

Exposure factor is defined as the opportunity for a threat to cause loss. Exposure factor plays an important role in the computation of risk. Although the probability of an attack may be high, and the corresponding impact severe, if the software is designed, developed, and deployed with security in mind, the exposure factor for attack may be low, thereby reducing the overall risk of exploitation.

1.8.1.9 Controls

Security controls are mechanisms by which threats to software and systems can be mitigated. These mechanisms may be technical, administrative, or physical in nature. Examples of software security controls include input validation, clipping levels for failed password attempts, source control, a software librarian, and restricted and supervised access to data centers and filing cabinets that house sensitive information. Security controls can be broadly categorized into countermeasures and safeguards. As the name implies, countermeasures are security controls that are applied after a threat has materialized, implying the reactive nature of these controls. Safeguards, on the other hand, are security controls that are proactive in nature. Security controls do not remove the threat itself; they are built into the software or system to reduce the likelihood of a threat materializing. Vulnerabilities are reduced by security controls.

However, it must be recognized that improper implementation of security controls themselves may pose a threat. For example, say that upon the failure of a login attempt, the software handles this exception and displays the message “Username is valid but the password did not match” to the end user. Although in the interest of user experience, this may be acceptable, an attacker can read that verbose error message and know that the username exists in the system that performs the validation of user accounts. The exception handling countermeasure in this case potentially becomes the vulnerability for disclosure, owing to improper implementation of the countermeasure. A more secure way to handle login failure would have been to use generic and nonverbose exception handling in which case the message displayed to the end user may just be “Login invalid.”

1.8.1.10 Total Risk

Total risk is the likelihood of the occurrence of an unwanted, unintended, or harmful event. This is traditionally computed using factors such as the asset value, threat, and vulnerability. This is the overall risk of the system, before any security controls are applied. This may be expressed qualitatively (e.g., high, medium, or low) or quantitatively (using numbers or percentiles).

1.8.1.11 Residual Risk

Residual risk is the risk that remains after the implementation of mitigating security controls (countermeasures or safeguards).

1.8.2 Calculation of Risk

Risk is conventionally expressed as the product of the probability of a threat source/agent taking advantage of a vulnerability and the corresponding impact. However, estimation of both probability and impact are usually subjective and so quantitative measurement of risk is not always accurate. Anyone who has been involved with risk management will be the first to acknowledge that the calculation of risk is not a black or white exercise, especially in the context of software security.

However, as a CSSLP, you are expected to be familiar with classical risk management terms such as single loss expectancy (SLE), annual rate of occurrence (ARO), and annual loss expectancy (ALE) and the formulae used to quantitatively compute risk.

  • Single Loss Expectancy: SLE is used to estimate potential loss. It is calculated as the product of the value of the asset (usually expressed monetarily) and the exposure factor, which is expressed as a percentage of asset loss when a threat is materialized. See Figure 1.5 for a calculation of SLE.
  • Annual Rate of Occurrence: The ARO is an expression of the number of incidents from a particular threat that can be expected in a year. This is often just a guesstimate in the field of software security and thus should be carefully considered. Looking at historical incident data within your industry is a good start for determining what the ARO should be.
  • Annual Loss Expectancy: ALE is an indicator of the magnitude of risk in a year. ALE is the product of SLE and ARO, i.e., ALE = SLE × ARO (see Figure 1.6 and the worked example following this list).
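
A worked example of these formulae follows, using hypothetical figures rather than data from any real assessment.

```python
# Hypothetical figures: an asset valued at $100,000 with an exposure factor
# of 30%, and a threat expected to materialize twice a year.
asset_value = 100_000        # asset value in $
exposure_factor = 0.30       # 30% of the asset lost per incident

sle = asset_value * exposure_factor  # SLE = $30,000
aro = 2                              # incidents expected per year
ale = sle * aro                      # ALE = SLE x ARO = $60,000

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```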

The primary goal of risk management is the identification and reduction of total risk using controls, so that the residual risk falls within an acceptable range or threshold wherein business operations are not disrupted. To reduce total risk to acceptable levels, risk mitigation strategies must be considered in total, instead of merely selecting a single control (safeguard). For example, to address the risk of disclosure of sensitive information such as credit card numbers or personal health information, mitigation strategies that include a layered defense approach using access control, encryption or hashing, and auditing of access requests may have to be considered, instead of merely selecting and implementing the Advanced Encryption Standard (AES). It is also important to understand that although the implementation of controls may be a decision made by the technical team, the acceptance of specific levels of residual risk is a management decision that factors in the recommendations of the technical team. The most effective way to ensure that software has taken security threats into account and addressed vulnerabilities, thereby reducing the overall risk of that software, is to incorporate risk management processes into the SDLC itself. From requirements definition to release, software should be developed with insight into the risk of its being compromised, and the necessary risk management decisions and steps must be taken to address that risk.

Figure 1.5

SLE = ASSET VALUE ($) × EXPOSURE FACTOR (%)

Calculation of SLE.

Figure 1.6

ALE = SLE ($) × ARO

Calculation of ALE.

1.8.3 Risk Management for Software

As mentioned earlier, risk management as it relates to software and software development has its challenges. Some of the reasons for these challenges are:

  • Software risk management is still maturing.
  • Determination of software asset values is often subjective.
  • Data on the exposure factor, impact, and probability of software security breaches are lacking or limited.
  • Technical security risk is only a portion of the overall state of secure software.

Risk management is not yet an exact science when it comes to software development; the field is still emerging, and it is difficult to quantify software assets accurately. Asset value is often determined as the value of the systems the software runs on instead of the value of the software itself, which is very subjective as well. The value of the data the software processes is usually just an estimate of potential loss. Additionally, owing to the closed nature of the industry, wherein the exact terms of software security breaches are not necessarily fully disclosed, one is left to speculate on what a similar breach would cost one’s own organization. Although historical data, such as the chronology of data breaches published by the Privacy Rights Clearinghouse, are of some use in learning about the potential impact on an organization, they date back only a few years (since 2005), and there is really no way to determine the exposure factor or the probability of similar security breaches within your organization.

Software security is also more than merely writing secure code. Current-day methodologies that compute risk using the number of threats and vulnerabilities found through source and object code scanning capture only a small portion of the overall risk of the software. Process- and people-related risks must be factored in as well. For example, the lack of proper change control processes and inadequately trained and educated personnel can lead to the insecure installation and operation of software that was deemed technically secure and had all of its code vulnerabilities addressed. A plethora of information breaches and data losses has been attributed to privileged third parties and employees who have access to internal systems and software. The risk of disclosure, alteration, and destruction of sensitive data posed by internal employees and vendors who are allowed access within your organization is another very important aspect of software risk management that cannot be ignored.

Unless your organization has a legally valid document that transfers the liability to another party, your organization assumes all of the liability when it comes to software risk management. Your clients and customers will look for someone to be held accountable for a software security breach that affects them, and it will not be the perpetrators that they would go after but you, whom they have entrusted to keep them secure and serviced. The “real” risk belongs to your organization.

1.8.4 Handling Risk

Suppose your organization operates an e-commerce store selling products on the Internet. Today, it has to comply with data protection regulations such as the Payment Card Industry Data Security Standard (PCI DSS) to protect card holder data. Before the PCI DSS regulatory requirement was in effect, your organization had been transmitting and storing the credit card primary account number (PAN), card holder name, service code, and expiration date of the card, along with sensitive authentication data such as the full magnetic track data, the card verification code, and the PIN, all in clear text (humanly readable form). As depicted in Figure 1.7, PCI DSS version 1.2 disallows the storage of any sensitive authentication data even if it is encrypted, as well as the storage of the PAN, card holder name, service code, and expiration date in clear text. Over open, public networks such as the Internet, wireless networks, Global System for Mobile Communications (GSM), or General Packet Radio Service (GPRS), card holder data and sensitive authentication data cannot be transmitted in clear text.

Figure 1.7

Payment Card Industry Data Security Standard applicability information.

Note that although the standard does not disallow transmission of these data in the clear over closed, private networks, it is still a best practice to protect this information in order to avoid any potential disclosure, even to internal employees or privileged-access users.
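
One simple control in this spirit is masking the PAN wherever it is displayed or logged. PCI DSS permits at most the first six and last four digits of the PAN to be shown; the helper below is a hypothetical sketch of that rule, not code taken from the standard.

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN) for display or logging.

    PCI DSS allows at most the first six and last four digits to be
    shown, so every digit in between is replaced. Hypothetical helper
    for illustration only.
    """
    digits = pan.replace(" ", "").replace("-", "")
    if not (13 <= len(digits) <= 19 and digits.isdigit()):
        raise ValueError("not a plausible card number")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]


print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```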

As a CSSLP, you advise the development team that the risk of disclosure is high and it needs to be addressed as soon as possible. The management team now has to decide on how to handle this risk, and they have five possible ways to address it.

  1. Ignore the risk: They can choose not to handle the risk and do nothing, leaving the software as is. The risk is left unhandled. This is highly ill advised because the organization can find itself on the receiving end of a class action lawsuit and regulatory oversight for not protecting the data that its customers have entrusted to it.
  2. Avoid the risk: They can choose to discontinue the e-commerce store, which is not practical from a business perspective because the e-commerce store is the primary source of sales for your organization. In certain situations, discontinuing use of the existing software may be a viable option, especially when the software is being replaced by a newer product. Risk may be avoided, but it must never be ignored.
  3. Mitigate the risk: The development team chooses to implement security controls (safeguards and countermeasures) to reduce the risk. They plan to use security protocols such as Secure Sockets Layer (SSL)/Transport Layer Security (TLS) or IPSec to safeguard sensitive card holder data over open, public networks. Although the risk of disclosure during transmission is reduced, the residual risk that remains is the risk of disclosure in storage. You advise the development team of this risk. They choose to encrypt the information before storing it. Although it may seem that the risk is now completely mitigated, there still remains the risk of someone deciphering the original clear text from the encrypted text if the encryption solution is weakly implemented. Moreover, according to the PCI DSS standard, sensitive authentication data cannot be stored even if encrypted, so the risk of noncompliance still remains. It is therefore important that the decision makers who are responsible for addressing the risk are made aware of the compliance, regulatory, and other aspects of the risk and do not merely default to a technical solution to mitigate it.
  4. Accept the risk: At this juncture, management can choose to accept the residual risk that remains and continue business operations, or they can choose to mitigate it further by not storing the disallowed card holder information. When the cost of implementing security controls outweighs the potential impact of the risk itself, one can accept the risk (a simple cost-comparison sketch follows this list). However, it is imperative to realize that risk acceptance must be a formal, well-documented process, preferably with a contingency plan to address the residual risk in subsequent releases of the software.
  5. Transfer the risk: One additional method by which management can choose to address the risk is to simply transfer it. This is usually done by buying insurance and works best for the organization when the cost of implementing the security controls exceeds the cost of potential impact of the risk itself. It must be understood, however, that it is the liability that is transferred and not necessarily the risk itself. This is because your customers are still going to hold you accountable for security breaches in your organization and the brand or reputational damage that can be realized may far outweigh the liability protection that your organization receives by way of transference of risk. Another way of transferring risk is to transfer the risk to independent third-party assessors who attest by way of vulnerability assessments and penetration testing that the software is secure for public release. However, when this is done, it must be contractually enforceable.
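
The mitigate-accept-transfer trade-off in items 3 through 5 often begins with a cost comparison. The sketch below reduces that first pass to annualized figures; it deliberately ignores compliance obligations and reputational damage, which, as noted above, can override pure cost arithmetic. The function name and all numbers are hypothetical.

```python
def recommend_handling(ale: float, control_cost: float, insurance_cost: float) -> str:
    """Toy first-pass recommendation based only on annualized costs."""
    if control_cost <= ale:
        # The control costs less per year than the expected loss it averts.
        return "mitigate"
    if insurance_cost < min(ale, control_cost):
        # Transferring liability is cheaper than both the loss and the control.
        return "transfer"
    return "accept (formally documented, with a contingency plan)"


print(recommend_handling(ale=50_000, control_cost=20_000, insurance_cost=30_000))
# -> mitigate
```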

1.8.5 Risk Management Concepts: Summary

As you may know, a picture is worth a thousand words. The risk management concepts we have discussed so far are illustrated for easier understanding in Figure 1.8.

Figure 1.8

Risk management concept flow.

Owners value assets (software) and wish to minimize risk to those assets. Threat agents wish to abuse and/or damage assets, and they may give rise to threats that increase the risk to assets. These threats may exploit vulnerabilities (weaknesses), increasing the risk to assets. Owners may or may not be aware of these vulnerabilities; when known, the vulnerabilities may be reduced by implementing controls that reduce the risk to assets. It is also noteworthy that the controls themselves may introduce vulnerabilities that lead to risk to assets. For example, implementing fingerprint reader authentication in your software as a biometric control to mitigate access control issues may itself pose a threat of denial of service to valid users if the crossover error rate, the point at which the false rejection rate equals the false acceptance rate, for that biometric control is high.
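
The crossover error rate is easy to picture with a toy model. In the sketch below, tightening the match threshold drives the false acceptance rate (FAR) down and the false rejection rate (FRR) up; the CER is the error rate at the threshold where the two curves cross. The linear curves are invented solely to illustrate the concept.

```python
def far(threshold: float) -> float:
    """Hypothetical false acceptance rate: falls as the threshold tightens."""
    return 0.40 * (1.0 - threshold)


def frr(threshold: float) -> float:
    """Hypothetical false rejection rate: rises as the threshold tightens."""
    return 0.40 * threshold


thresholds = [i / 100 for i in range(101)]
crossover = min(thresholds, key=lambda t: abs(far(t) - frr(t)))
print(f"CER ~ {far(crossover):.0%} at threshold {crossover}")  # CER ~ 20% at threshold 0.5
```

A biometric product with a high CER, 20% in this invented example, would lock out one valid user in five, which is exactly the denial-of-service concern described above.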

1.9 Security Policies: The “What” and “Why” for Security

Contrary to what one may think, a security policy is more than merely a written document. It is the instrument by which digital assets that require protection can be identified. It specifies at a high level “what” needs to be protected and the possible repercussions of noncompliance.

In addition to defining the assets that the organization deems valuable, security policies communicate management's goals and objectives for the organization.

In recent years, legal and regulatory compliance has emerged as an important driver of information security spending and initiatives. Security policies help ensure an organization's compliance with legal and regulatory requirements, provided they complement rather than contradict those laws and regulations. With a clear-cut understanding of management's expectations, the likelihood of personal interpretation and of claims of ignorance is curtailed, especially when auditors find gaps between organizational processes and compliance requirements. A security policy protects the organization from surprises by providing a consistent basis for interpreting and resolving issues that arise. It also provides the framework and point of reference that can be used to measure an organization's security posture. The gaps identified when measuring against the security policy, a consistent point of reference, can be used to shape effective executive strategy and decisions.

Additionally, security policies support nonrepudiation, because those who do not follow the security policy can be held personally accountable for their behavior or actions.

Security policies can also be used to provide guidance to architect secure software by addressing the confidentiality, integrity, and availability aspects of software.

Security policies can also define the functions and scope of the security team, document incident response and enforcement mechanisms, and provide for exception handling, rewards, and discipline.

1.9.1 Scope of the Security Policies

The scope of the information security policy may be organizational or functional. Organizational policy is universally applicable, and all who are part of the organization must comply with it, unlike a functional policy, which is limited to a specific functional unit or a specific issue. An example of organizational policy is the remote access policy that is applicable to all employees and nonemployees who require remote access into the organizational network. An example of a functional security policy is the data confidentiality policy, which specifies the functional units that are allowed to view sensitive or personal information. In some cases, these can even define the rights personnel have within these functional units. For example, not all members of the human resources team are allowed to view the payroll data of executives.

The security policy may be a single comprehensive document, or it may be composed of many specific information security policy documents.

1.9.2 Prerequisites for Security Policy Development

It cannot be overstressed that security policies provide a framework for a comprehensive and effective information security program.

The success of an information security program, and more specifically of the software security initiatives within that program, is directly related to the enforceability of the security controls that need to be determined and incorporated into the SDLC. A security policy is the instrument that provides this needed enforceability. Without security policies, one can reasonably argue that there are no teeth in the secure software initiatives that a passionate CSSLP or security professional would like to have in place. Those who are or have been responsible for incorporating security controls and activities within the SDLC know that a security program often faces resistance at first. You can probably empathize with being challenged by those who are resistant and who ask questions such as, “Why must I now take security more seriously when we have never done this before?” or “Can you show me where it mandates that I must do what you are asking me to do?” Security policies give authority to the security professional and to security activities.

It is therefore imperative that security policies providing the authority to enforce security controls in software are developed and implemented if your organization does not already have them. However, developing security policies is more than a mere act of jotting a few “Thou shall” or “Thou shall not” rules on paper. For security policies to be effectively developed and enforceable, they require the support of executive management (top-level support). Without the support of executive management, even if security policies are successfully developed, their implementation will probably fail. Top-level support must include signature authorities from various teams, not just the security team. Including ancillary and related teams (such as legal, privacy, networking, and development) in the development of the security policies has the added benefit of buy-in and ease of adoption from the teams that must comply with the security policy once it is implemented.

In addition to top-level support and the inclusion of various teams in the development of a security policy, successful implementation also requires marketing efforts that communicate management's goals through the policy to end users. End users must be educated so that the security requirements (controls) the security policy mandates can be determined and factored into the software that is being designed and developed.

1.9.3 Security Policy Development Process

Security policy development is not a onetime activity. It must be an evergreen activity, i.e., security policies must be periodically evaluated so that they remain contextually correct and relevant to current-day threats. An example of a security policy that is not contextually correct is a policy, imposed or adopted for regulatory reasons, that mandates multifactor authentication in your software for all financial transactions when your organization does not yet have the infrastructure, such as token readers or biometric devices, to support multifactor authentication. An example of a security policy that is not relevant is one that requires you to use obsolete and insecure cryptographic technology such as the Data Encryption Standard (DES) for data protection. DES has been proven to be easily broken with modern technology, although it may have been the de facto standard when the policy was developed. With the standardization of the AES, DES is now deemed an obsolete technology. Policies that explicitly mandate DES are no longer relevant, and so they must be reviewed and revised. Contextually incorrect, obsolete, and insecure requirements in policies are often flagged as noncompliance issues during an audit. This problem can be avoided by periodic review and revision of the security policies in effect. Keeping security policies high level and independent of technology alleviates the need for frequent revisions.

It is also important to monitor the effectiveness of security policies and address issues that are identified as part of the lessons learned.

1.10 Security Standards

High-level security policies are supported by more detailed security standards. Standards support policies in that the adoption of security policies is made possible by more granular and specific standards. Like security policies, organizational standards are considered mandatory elements of a security program and must be followed throughout the enterprise unless a waiver is specifically granted for a particular function.

1.10.1 Types of Security Standards

As Figure 1.9 depicts, security standards can be broadly categorized into Internal or External standards.

Figure 1.9

Categorization of security standards.

Internal standards are usually specific. The coding standard is an example of an internal software security standard. External standards can be further classified based on the issuer and on recognition. Depending on who has issued the standard, external security standards can be classified into industry standards or government standards. An example of an industry-issued standard is the PCI DSS. Examples of government-issued standards include those generated by the National Institute of Standards and Technology (NIST). Not all standards are recognized and enforceable uniformly in all regions. Depending on the extent of recognition, external security standards (Weise, 2009) can be classified into national and international security standards. Although national security standards are often more focused and inclusive of local customs and practices, international standards are usually more comprehensive and generic, spanning various standards with the goal of interoperability. The most prevalent examples of internationally recognized standards are those of the ISO, whereas examples of nationally recognized standards, in the United States, are the Federal Information Processing Standards (FIPS) and those of the American National Standards Institute (ANSI). It is also noteworthy that, with globalization reshaping how organizations operate in the global landscape, most organizations lean toward the adoption of international standards over national ones.

It is important to recognize that unlike standards, which are mandatory, guidelines are not. External standards generally serve as guidelines to organizations, but organizations tend to designate them as the organization's standard, which makes them mandatory.

It must be understood that within the scope of this book, a complete and thorough exposition of each standard related to software security is not possible. As a CSSLP, it is important that you are familiar not only with the standards covered here but also with other standards that apply to your organization. In the next section, we will cover the following internal and external standards pertinent to security professionals as they apply to software:

  • Coding standards
  • PCI DSS
  • NIST standards
  • ISO standards
  • Federal Information Processing Standards (FIPS)

1.10.1.1 Coding Standards

One of the most important internal standards, and one that has a tremendous impact on the security of software, is the coding standard. The coding standard specifies what is allowed and what must be adopted by the development organization or team while writing code (building software). Coding standards need not be developed separately for each programming language or syntax; a single standard can cover various languages. Organizations that do not have a coding standard must plan to have one created and adopted.

The coding standard not only brings with it many security advantages but provides nonsecurity-related benefits as well. Consistency in style, improved code readability, and maintainability are some of the nonsecurity-related benefits of following a coding standard. Consistency in style can be achieved by ensuring that all development team members follow the naming conventions, overloaded-operation syntax, instrumentation, and other conventions explicitly specified in the coding standard. Instrumentation is the inline commenting of code that describes the operations undertaken by a code section; it also considerably increases code readability. One of the biggest benefits of following a coding standard is maintainability of code, especially in a situation where there is a high rate of employee turnover. When a developer who has been working on your critical software products leaves the organization, the inheriting team or team member will have a reduced learning curve if the departing developer followed the prescribed coding standard.

Following the coding standard has security advantages as well. Software designed and developed to the coding standard is less prone to error and exposure to threats, especially if the coding standard incorporates secure coding requirements. For example, if the coding standard specifies that all exceptions must be explicitly handled with a laconic error message, then the likelihood of information disclosure is considerably reduced. Likewise, if the coding standard specifies that each try-catch block must include a finally block in which instantiated objects are disposed of, then following this requirement reduces the chances of dangling pointers and orphaned objects in memory, thereby addressing not only security concerns but performance as well.
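
The sketch below illustrates those two hypothetical coding-standard rules in Python: exceptions are caught and logged internally while the caller receives only a laconic message, and the resource opened in the try block is disposed of in a finally block. The function and table names are invented, and db stands for any DB-API-style database connection.

```python
import logging

logger = logging.getLogger(__name__)


def fetch_order(db, order_id: str):
    """Look up an order, following the coding-standard rules sketched above."""
    cursor = None
    try:
        cursor = db.cursor()
        # Parameterized query, as a secure coding standard would also require.
        cursor.execute("SELECT * FROM orders WHERE id = %s", (order_id,))
        return cursor.fetchone()
    except Exception:
        # Full details go to the internal log only; the caller sees a terse
        # message that discloses nothing about the database or the failure.
        logger.exception("order lookup failed")
        raise RuntimeError("Unable to process the request.") from None
    finally:
        if cursor is not None:
            cursor.close()  # dispose of the resource even when an error occurs
```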

1.10.1.2 Payment Card Industry Data Security Standards

With the prevalence of e-commerce and Web computing in this day and age, it is highly unlikely that those engaged in business that transmits and processes payment card information have not already been inundated with the PCI requirements, particularly the PCI DSS. Originally developed by American Express, Discover Financial Services, JCB International, MasterCard Worldwide, and Visa, Inc. International, the PCI DSS is a set of comprehensive requirements aimed at increasing payment account data security. It is regarded as a multifaceted security standard because it includes requirements not only for the technological elements of computing, such as network architecture and software design, but also for security management, policies, procedures, and other critical protective measures.

The goal of the PCI DSS is to facilitate organizations' efforts to proactively protect card holder payment account data. It comprises 12 foundational requirements that are mapped into six sections or control objectives, as Figure 1.10 illustrates.

Figure 1.10

PCI DSS control objectives to requirements mapping.

If your organization needs to transmit, process, or store the PAN, then the PCI DSS requirements are applicable. Certain card holder data elements, namely the sensitive authentication data comprising the full magnetic stripe data, the security code, and the PIN block, are disallowed from being stored after authorization, even if cryptographically protected. Although all of the requirements have a bearing on software security, the requirement that is directly and explicitly related to software security is Requirement 6, the requirement to develop and maintain secure systems and applications. Each of these requirements is further broken down into subrequirements, and it is recommended that you become familiar with each of the 12 foundational PCI DSS requirements if your organization is involved in the processing of credit card transactions. Requirement 6 and its subrequirements (6.1 to 6.6) merit highlighting because they are directly related to software development. Table 1.1 tabulates PCI DSS Requirement 6 and its subrequirements one level deep.

Table 1.1

PCI DSS Requirement 6 and Its Subrequirements

No.   Requirement
6     Develop and maintain secure systems and applications.
6.1   Ensure that all system components and software have the latest vendor-supplied security patches installed. Install critical security patches within 1 month of release.
6.2   Establish a process to identify newly discovered security vulnerabilities (e.g., alert subscriptions) and update configuration standards to address new vulnerability issues.
6.3   Develop software applications in accordance with industry best practices (e.g., input validation, secure error handling, secure authentication, secure cryptography, secure communications, logging, etc.), and incorporate information security throughout the software development life cycle.
6.4   Follow change control procedures for all changes to system components.
6.5   Develop all Web applications based on secure coding guidelines (such as OWASP) to cover common coding vulnerabilities in software development.
6.6   For public-facing Web applications, address new threats and vulnerabilities on an ongoing basis and ensure these applications are protected against known attacks by either reviewing these applications annually or upon change, using manual or automated security assessment tools or methods, or by installing a Web application firewall in front of the public-facing Web application.

1.10.1.3 NIST Standards

Founded by Congress in 1901, during the era of industrialization, with the goal of preventing trade disputes and encouraging standardization, the NIST develops technologies, measurement methods, and standards to aid U.S. companies in the global marketscape. Although NIST is specific to the United States, in outsourced situations the company to which software development is outsourced may be required to comply with these standards. This is often contractually enforced.

NIST programs assist in improving the quality and capabilities of software used by business, research institutions, and consumers. They help secure electronic data and maintain availability of critical electronic services by identifying vulnerabilities and cost-effective security measures.

One of the core competencies of NIST is the development and use of standards. NIST has the statutory responsibility to set security standards and guidelines for sensitive federal systems, but these standards are also selectively adopted and used by the private sector on a voluntary basis. The Computer Security Division of the Information Technology Laboratory (ITL) periodically publishes bulletins and the Special Publications (SP) 500 and 800 series. While the SP 500 series covers more generic IT-related publications, the SP 800 series was established to organize information technology security publications separately. NIST also publishes computer security-related FIPS. Many of these publications are of interest to a security professional within the context of software security. One noteworthy SP is SP 800-64, which discusses security considerations in the information systems development life cycle.

This section will introduce the various SP 800 series publications that have considerable implications for software security.

1.10.1.3.1 SP 800-12: An Introduction to Computer Security: The NIST Handbook

This handbook provides a broad overview of computer security, providing guidance to secure hardware, software, and information resources. It explains computer security-related concepts, cost considerations, and interrelationships of security controls. Security controls are categorized into management controls, operational controls, and technology controls. A section within the handbook is dedicated to security and planning in the computer systems life cycle. Figure 1.11 illustrates the breadth of security concepts and controls covered in the NIST Special Publication 800-12 handbook. The handbook does not specify requirements explicitly but rather discusses the benefits of different security controls and the scenarios in which they would be appropriately applicable. It provides advice and guidance without stipulating any penalties for noncompliance.

Figure 1.11

NIST SP 800-12 security concepts and controls.

1.10.1.3.2 SP 800-14: Generally Accepted Principles and Practices for Securing IT Systems

Similar to the SP 800-12 handbook in its organization, the SP 800-14 document provides a baseline that organizations can use to establish and review their IT security programs. Unlike SP 800-12, this document gives various stakeholders, including management, internal auditors, users, system developers, and security practitioners, insight into the basic security requirements that most IT systems should contain. It provides a foundation that can be used as a point of reference. The foundation starts with generally accepted system security principles and moves on to identify common practices that are used for securing IT systems.

1.10.1.3.3 SP 800-30: Risk Management Guide for Information Technology Systems

As mentioned earlier, one of the key aspects of security management is risk management, which plays a critical role in protecting an organization’s information assets and its mission from IT-related risks. The SP 800-30 guide starts with an overview of risk management and covers items that are deemed critical success factors for an effective risk management program. The guide also covers how risk management can be integrated into the systems development life cycle along with the roles of individuals and their responsibilities in the process. It describes a comprehensive risk assessment methodology that includes nine primary steps for conducting a risk assessment of an IT system. It also covers control categories, cost–benefit analysis, residual risk evaluation, and the mitigation options and steps that need to be taken upon the completion of a risk assessment process. As an example, Figure 1.12 illustrates the risk mitigation action points that are part of the NIST Special Publication 800-30 guide.

Figure 1.12

Risk management action points.

1.10.1.3.4 SP 800-64: Security Considerations in the Information Systems Development Life Cycle

Currently in its second revision, SP 800-64 is NIST's most directly relevant publication for a CSSLP because it provides guidance for building security into the IT systems (or software) development life cycle (SDLC) from the inception of the system or software. It serves a wide range of information systems and information security professionals, including system owners, information owners, developers, and program managers. Building security in, as opposed to bolting it on at a later stage, enables organizations to maximize their return on security investment (ROSI) by:

  • Identifying and mitigating security vulnerabilities and misconfigurations early in the SDLC where the cost to implement security controls is considerably lower.
  • Bringing to light any engineering or design issues that may require redesign at a later stage of the SDLC, if security has not been considered early but is now required.
  • Identifying shared security services that can be leveraged, reducing development cost and time.
  • Comprehensively managing risk and facilitating executives to make informed risk related go/no-go and risk handling (accept, transfer, mitigate, or avoid) decisions.

In addition to describing security integration into a linear, sequential, and structured development methodology, such as the waterfall software development methodology, this document also provides insight into IT projects that are not as clearly defined. These include development efforts that are not strictly SDLC based, such as supply chain, cross-IT-platform (or, in some cases, cross-organization), virtualization, and IT facility-oriented (data center, hot site) developments, and the burgeoning service-oriented architectures (SOA). The core elements of integrating security into the SDLC remain the same for such projects, but it must be recognized that their key success factors are communications and the documentation of stakeholder relationships with respect to securing the solution.

1.10.1.3.5 SP 800-100: Information Security Handbook: A Guide for Managers

While SP 800-100 is a must-read for management professionals who are responsible for establishing and implementing an information security program, it can also benefit nonmanagement personnel, as it provides guidance from a management perspective for developers, architects, HR, operational, and acquisition personnel as well. It covers a wide range of information security program elements, providing guidance on information security governance, risk management, capital planning and investment control, security planning, IT contingency planning, interconnecting systems, performance measures, incident response, configuration management, certification and accreditation, acquisitions, awareness and training, and even security in the SDLC. It is recommended that, as a CSSLP, you be familiar with the contents of this guide.

1.10.1.4 ISO Standards

The International Organization for Standardization (ISO) is the primary body that develops International Standards for all industry sectors except electrotechnology and telecommunications. Electrotechnology standards are developed by the International Electrotechnical Commission (IEC), and telecommunication standards are developed by the International Telecommunication Union (ITU), the same organization that establishes the X.509 digital certificate versions. ISO, in conjunction with the IEC (prefixed as ISO/IEC), has developed several international standards that are directly related to information security. Unlike many other standards that are broad in their guidance, most ISO standards are highly specific. To ensure that the standards stay aligned with changes in technology, periodic review of each standard after its publication (at least every 5 years) is part of the ISO standards development process.

The ISO standards related to information security and software engineering are covered in this section at a definitional and introductory level. It is highly recommended that, as a CSSLP, you be familiar not only with these standards but also with how they apply within your organization.

1.10.1.4.1 ISO/IEC 27000:2009—Information Security Management System (ISMS) Overview and Vocabulary

This standard aims to provide a common glossary of terms and definitions. It also provides an overview and introduction to the ISMS family of standards covering

  • Requirements definition for an ISMS
  • Detailed guidance to interpret the plan–do–check–act (PDCA) processes
  • Sector-specific guidelines and conformity assessments for ISMS
1.10.1.4.2 ISO/IEC 27001:2005—Information Security Management Systems

What the ISO 9001:2000 standards do for quality, the ISO 27001:2005 standard will do for information security. This standard is appropriate for all types of organizations ranging from commercial companies to not-for-profit organizations and the government.

ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving a documented ISMS. It can be used to aid in

  • Formulating security requirements
  • Ensuring compliance with external legal, regulatory, and compliance requirements and with internal policies, directives, and standards
  • Managing security risks cost effectively
  • Generating and selecting security controls requirements that will adequately address security risks
  • Identifying existing ISMS processes and defining new ones
  • Determining the status of the information security management program
  • Communicating organizational information security policies, standards, and procedures to other partner organizations and relevant security information to their customers
  • Enabling the business instead of impeding it
1.10.1.4.3 ISO/IEC 27002:2005/Cor1:2007—Code of Practice for Information Security Management

ISO/IEC 27002 is the replacement for the ISO 17799 standard, which was formerly known as BS 7799. Arguably the most well-known security standard, it is intended to provide a common basis and practical guidelines for developing organizational security standards and effective security management practices. This standard establishes guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization. It outlines best-practice control objectives and controls in diverse areas of information security management, including security policy, information security organization, asset management, HR, physical and environmental security, access control, communications and operations management, business continuity management, incident management, compliance, and even information systems acquisition, development, and maintenance.

The control objectives and controls in this standard are intended to address the findings of a risk assessment. “Cor.” denotes a technical corrigendum, a document issued to correct a technical error or ambiguity in a normative document or to correct information that has become outdated, provided the modification has no effect on the technical normative elements of the standard it corrects.

1.10.1.4.4 ISO/IEC FCD 27003—Information Security Management Systems Implementation Guidance

This standard is still under development; it aims to provide guidance on implementing an ISMS, focusing on the PDCA method with respect to establishing, implementing, reviewing, and improving the ISMS itself.

1.10.1.4.5 ISO/IEC 27005:2008—Information Security Risk Management

It should be no surprise that a CSSLP must be familiar with the ISO/IEC 27005 standard, as it is the International Standard for information security risk management. The basic principle of risk management is to ensure that organizational risk is reduced to, or preferably below, an acceptable threshold. This standard provides the necessary guidance for information security risk management and is designed to assist the implementation of security controls to a satisfactory level, based on establishing the scope or context for risk assessment, assessing the risks, making risk-based decisions to treat the identified risks, and communicating and monitoring risk. The ISO 31000 risk management standard, currently under development, is expected to be the likely replacement for or enhancement to the ISO/IEC 27005:2008 international information security risk management standard.

1.10.1.4.6 ISO/IEC 27006:2007—Requirements for Bodies Providing Audit and Certification of Information Security Management Systems

The primary goal of this standard is to support accreditation and certification bodies that audit and certify information security management systems. It includes the competency and reliability requirements that an auditing and certifying body must demonstrate and also provides guidance on how to interpret the requirements it contains, to ensure reliable and consistent certification of information security management systems.

In addition to the several 27000 series ISO/IEC standards that provide a blueprint for an ISMS, there are other ISO/IEC standards with a noteworthy relationship to information security and software security that are extremely important for a CSSLP to be familiar with. These standards, ISO/IEC 15408, ISO/IEC 21827, and ISO/IEC 9126, are covered in the remainder of this section.

1.10.1.4.7 ISO/IEC 15408—Evaluation Criteria for IT Security (Common Criteria)

ISO/IEC 15408, more commonly known as the Common Criteria, is an internationally recognized set of guidelines that defines a common framework for evaluating the security features and capabilities of IT security products. The Common Criteria allow vendors to have their products evaluated by an independent third party against the evaluation assurance levels (EALs) clearly defined in the standard. They provide confidence to owners that the security products they are developing or procuring meet and implement the minimum security functionality and assurance specifications and that the evaluation of the product itself has been conducted in a rigorous, neutral, objective, and standard manner. The Common Criteria can also be used by auditors to evaluate security functionality and assurance levels and to ensure that all organizational security policies are enforced, all threats are countered to acceptable levels, and the security objectives are achieved.

It is a standard with multiple parts as listed here.

  • ISO/IEC 15408-1:2005, or Part 1, introduces the Common Criteria, providing the evaluation criteria for IT security as they pertain to security functional requirements (SFRs) and security assurance requirements (SARs). It introduces the general model, which covers the Protection Profile (PP), the Security Target (ST), and the Target of Evaluation (TOE), and the relationships between these elements of the Common Criteria evaluation process, as depicted in Figure 1.13. The PP is used to create a reusable set of generalized security requirements. The ST expresses the security requirements and specifies the security functions for the particular product or system being evaluated; evaluators use the ST as the basis of their evaluations, in conformance with the guidelines specified in the ISO/IEC 15408 standard. The product or system being evaluated is known as the TOE.
  • ISO/IEC 15408-2:2008, or Part 2, contains the comprehensive catalog of predefined SFRs that need to be part of the security evaluation of the TOE. These requirements are hierarchically organized into classes, families, and components.
  • ISO/IEC 15408-3:2008, or Part 3, defines the SARs and includes the EALs for measuring the assurance of a TOE. There are seven EAL ratings predefined in Part 3 of the ISO/IEC 15408 standards, and a security product with a higher EAL rating provides a greater degree of security assurance than comparable products with a lower EAL rating. Table 1.2 tabulates the seven EAL ratings and what each rating means.

The predefined SFRs and SARs defined in the ISO/IEC 15408 standard can be used to address vulnerabilities that arise from failures in requirements, development, and/or in operations. Software that does not include security functional or assurance requirements can be rendered ineffective and insecure even if it meets all business functionality. Without security functional and assurance validation, poor development methodologies and incorrect design can also lead to vulnerabilities that can easily compromise not just the assurance of confidentiality, integrity, and availability of the software or the information it handles but also the business value it provides. Additionally, without an active evaluation of the security functionality and assurance, any software designed and developed to correct specifications may still be installed and deployed in a vulnerable state (e.g., admin privileges, unprotected audit logs, etc.) and thereby render operations insecure.

Table 1.2

ISO/IEC 15408 Evaluation Assurance Levels

Evaluation Assurance Level   TOE Assurance
EAL1   Functionally tested
EAL2   Structurally tested
EAL3   Methodically tested and checked
EAL4   Methodically designed, tested, and reviewed
EAL5   Semiformally designed and tested
EAL6   Semiformally verified design and tested
EAL7   Formally verified design and tested

Figure 1.13

Relationships between common criteria elements.

1.10.1.4.8 ISO/IEC 21827:2008—System Security Engineering Capability Maturity Model® (SSE-CMM)

The internationally recognized System Security Engineering Capability Maturity Model® (SSE-CMM) standard provides guidelines to ensure the secure engineering of systems (and software) by augmenting existing project and organizational process areas; its scope encompasses all phases of the SDLC, from concept definition, requirements analysis, design, development, testing, and deployment through operations, maintenance, and disposal. It also includes guidance on best practices for interactions with other organizations, acquisitions, and certification and accreditation (C&A). This model is now the de facto standard metric for evaluating the security engineering practices of an organization or a customer and for establishing confidence in organizational processes to assure security. It has a close affinity to other CMMs that focus on other engineering disciplines and is often used in conjunction with them.

1.10.1.4.9 ISO/IEC 9126—Software Engineering Product Quality

In addition to the ISO 9000 series of standards for quality, the ISO/IEC also publishes the ISO/IEC 9126 standard, which provides guidelines for the quality of software products. Like ISO/IEC 15408, this is a multipart standard. It currently has four parts, covering the quality model, external metrics, internal metrics, and the quality-in-use metrics used to measure the quality of the software product that is engineered. Internal metrics measure the quality of the software itself, whereas external metrics measure software quality as part of the overall behavior of the computer-based system that includes the software, and quality-in-use metrics measure the quality of software when it is used in a specific context. This standard provides the definitions and the associated quality evaluation process to be used when specifying the requirements for software products throughout the SDLC. It provides guidance on six external quality characteristics that can be used to measure the quality of software: functionality, reliability, usability, efficiency, maintainability, and portability. Uniquely, this standard also takes into consideration the measurement of software quality from the perspectives of managers, developers, end users, and evaluators. The guidance that ISO/IEC 9126 provides for evaluating software product quality includes how to define software quality requirements and how to prepare for and conduct the evaluation.

As a security professional, it is important to understand that not all quality software is secure. By leveraging the guidance established in this standard, which prescribes measuring software quality from different perspectives, one of which is that of an evaluator, a security professional can evaluate software quality from a security perspective.

1.10.1.5 Federal Information Processing Standards (FIPS)

In addition to the various Special Publications, NIST also develops the FIPS. FIPS publications are developed to address federal requirements for

  • Interoperability of disparate systems
  • Portability of data and software
  • Computer security

Some of the well-known FIPS publications closely related to software security are

  • FIPS 140-2: Security Requirements for Cryptographic Modules
  • FIPS 197: Advanced Encryption Standard
  • FIPS 201: Personal Identity Verification (PIV) of Federal Employees and Contractors

This section covers these FIPS publications at an introductory level.

1.10.1.5.1 FIPS 140-2: Security Requirements for Cryptographic Modules

FIPS 140-2 is the standard that specifies the requirements that must be satisfied by a cryptographic module. It provides four increasing qualitative levels (Level 1 through Level 4) intended to cover a wide range of potential applications and environments. The security requirements cover areas related to the secure design and implementation of a cryptographic module, including cryptographic module specification; ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; and design assurance. Additionally, this standard specifies that cryptographic module developers and vendors are required to document the controls they implement to mitigate other (noncryptographic) attacks (e.g., differential power analysis and TEMPEST).

1.10.1.5.2 FIPS 197: Advanced Encryption Standard (AES)

FIPS 197 specifies an approved cryptographic algorithm to ensure the confidentiality of electronic data. The AES algorithm is a symmetric block cipher that can be used to encrypt (convert humanly intelligible plaintext to an unintelligible form called ciphertext) and decrypt (convert ciphertext back to plaintext). This standard replaced the withdrawn FIPS 46-3 Data Encryption Standard, which prescribed the use of one of two algorithms, DES or the Triple Data Encryption Algorithm (TDEA), for data protection; AES is both faster than DES and stronger in its protection of data.
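
A minimal sketch of AES in practice appears below, using the widely used third-party Python package cryptography (installed with pip install cryptography). Note that FIPS 197 specifies the AES block cipher itself; the GCM mode used here for authenticated encryption is defined in a separate NIST recommendation, and key management is out of scope for this sketch.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # a unique nonce for each encryption

ciphertext = aesgcm.encrypt(nonce, b"card holder data to protect", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"card holder data to protect"
```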

1.10.1.5.3 FIPS 201: Personal Identity Verification (PIV) of Federal Employees and Contractors

The FIPS 201 standard was developed in response to the need to ensure that the claimed identity of personnel (employees and contractors) who require physical or electronic access to secure and sensitive facilities and data is appropriately verified. This standard specifies the architecture and technical requirements for a common identification standard for federal employees and contractors.

1.10.2 Benefits of Security Standards

Security standards provide a common and consistent basis for building and maintaining secure software, and they enable operational efficiency and organizational agility. For example, assume that all the software developed in your organization used the then-standard DES for cryptographic functionality and that your organization now requires all of its software to use AES. In such a scenario, the switch-over effort can be addressed consistently and efficiently across the various software teams in your organization, because there is no proprietary or nonstandard software requiring specialized attention. Security standards lower the total cost of ownership (TCO) by facilitating ease of adoption and maintenance and by increasing operational efficiency and organizational agility when changes to standards are needed.

Security standards are useful for providing interoperability as well. Today, we live in a world that is highly interconnected, despite the fact that not all players in the global marketscape use the same technology and communication protocols. Interoperability provides vendor independence and allows heterogeneous and disparate systems to communicate with each other using a common protocol. Such communication needs to be secure as well, and standards such as WS-Security, the secure communication protocol for Web services, and Secure Electronic Transaction (SET) are good examples of standards that allow for both interoperability and security.

Security standards can also be leveraged to provide your company with a competitive advantage, in addition to providing some degree of liability protection. It is not uncommon to observe that customers are more comfortable purchasing products and services from Web sites that publicize their compliance with the PCI DSS requirements than from those that do not. Organizations that choose to knowingly ignore such security standards can be held liable and accountable in a court of law.

Security standards provide a common baseline for assessments. Most standards complement best practices, and adopting such a standard can facilitate the formal evaluation and certification of the software product itself. The ISO 15408 standard provides common criteria (hence this standard is also known as the Common Criteria) that can be used to evaluate a vendor product from not only a functionality perspective but also an assurance perspective. When evaluating software from external third-party vendors, it is therefore important to request the Common Criteria rating of their products, which gives an indication of the assurance (security) and reliability (functionality) of each product.

Security standards can be used to demonstrate indirect governance as well, because they contain security control objectives that, when satisfied, often address compliance and regulatory requirements. An ISO/IEC 27001-certified ISMS demonstrates that your system is compliant with many of the information security requirements mandated by state, national, and international regulations such as California SB 1386, the Federal Information Security Management Act (FISMA), GLBA, HIPAA, the EU Safe Harbor framework, and PIPEDA.

1.11 Best Practices

In addition to standards, there are several best practices for information security that are important for a security professional to be aware of. Some of these best practices have become de facto standards, and for all practical purposes, one can consider them to be standard in their implementation. Some of the popular best practices that have a direct bearing on software security are the Open Web Application Security Project (OWASP) and the Information Technology Infrastructure Library (ITIL).

1.11.1 Open Web Application Security Project (OWASP)

The OWASP is a worldwide free and open community focused on application security, predominantly Web application security. It can be considered the leading best practice for Web application security. All of OWASP's undertakings are community focused and vendor neutral.

The projects undertaken aim at improving the current state of Web application security and the work and results are openly and freely available to anyone. OWASP projects can be broadly categorized as development or documentation projects. The development projects aim at providing the security community with free tools and the documentation projects help in generating practical guidance on various aspects of application security in the form of publications and guides.

One of the most popular publications within OWASP is the OWASP Top 10, which periodically publishes the top 10 Web application security vulnerabilities and their appropriate protection mechanisms, as depicted in Figure 1.14. There have been two OWASP Top 10 publications so far: the first, published in 2004, was superseded by the second, published in 2007. The current version of the PCI DSS (version 1.2.1) requires Web applications to be developed using secure coding guidelines to prevent common coding vulnerabilities in the SDLC and refers to the OWASP as a Web application secure coding guideline. The vulnerabilities that make up the Top 10 and their remediation measures will be covered in depth in Chapters 4 and 5.

Figure 1.14

OWASP Top 10.

Some of the most popular guides developed in the OWASP are the

  • Development Guide
  • Code Review Guide
  • Testing Guide

1.11.1.1 OWASP Development Guide

This is a comprehensive manual for designing, developing, and deploying secure Web applications and Web services. The target audiences for this guide are architects, developers, consultants, and auditors. This guide covers the various security controls that software developers should build into the software they design and develop.

1.11.1.2 OWASP Code Review Guide

This is a comprehensive manual for understanding how to detect Web application vulnerabilities in the code and what safeguards can be taken to address them. The guide calls out that for a successful code review process, the reviewer must be familiar with the following:

  • Programming language (code)
  • Working knowledge of the software (context)
  • End users (audience)
  • Impact of the availability of the software to the business or its lack thereof (importance)

Conducting code reviews to verify application security is much more cost-effective than having to test the software for security vulnerabilities.

1.11.1.3 OWASP Testing Guide

The Testing Guide is a comprehensive manual that covers the necessary procedures and tools to validate software assurance. This Testing Guide can also be used as part of a comprehensive application security verification. The target audiences for this guide are software developers, software testers, and security specialists.

1.11.1.4 Other OWASP Projects

OWASP is currently actively working on several other useful Web application security projects, some of which are worth mentioning here: the Application Security Desk Reference (ASDR), the Enterprise Security Application Programming Interface (ESAPI), and the Software Assurance Maturity Model (SAMM). More information about each of these projects can be obtained from the OWASP Web site.

It is highly recommended that you are familiar with these guides to be an effective secure software professional.

1.12 Information Technology Infrastructure Library (ITIL)

Although the ITIL has been around for nearly two decades, it is only now gaining acceptance and popularity, and it is considered the de facto standard for service management. It was developed by the Central Computer and Telecommunications Agency in the United Kingdom. For an IT organization to be effective, it must be able to deliver the expected level of service to the business, even when operating within the constraints of scope, schedule, and budget. Delivering business value by meeting business service level agreements (SLAs) is enhanced when the IT organization adopts a framework of best practices and standards for service management. The ITIL is a cohesive best-practice framework that was originally developed in alignment with the then UK standard for IT Service Management (BS 15000), now ISO/IEC 20000, the first international standard for IT Service Management. ITIL today is in its third version (commonly known as ITIL V3) and considers the life cycle of a service from initial planning and alignment to business need through final retirement, unlike its previous versions, which were process focused. ITIL V3 was revised to align with industry best practices and standards, and it aptly covers existing information security standards, such as those in the ISO 27000 series. Although security management is no longer a separate publication in the current version, the security framework guidance in ITIL aligns very closely with information security standards, and this can be leveraged to provide information security services to the business. As a CSSLP, it is recommended that you be familiar with ITIL and its relationship to security, especially security in the SDLC.

1.13 Security Methodologies

There are several security methodologies that aid in the design, development, testing, and deployment of secure software. These range from simple methodologies to more robust and comprehensive ones that can be used at different stages of the SDLC. In this section, we will discuss the most popular security methodologies and how they can be leveraged to build secure software.

1.13.1 Socratic Methodology

The Socratic methodology is a useful technique for addressing issues that arise from individuals who have opposing views on the need for security in the software they build. It is a form of cross-examination, also known as the Method of Elenchus (Elenchus is ancient Greek for cross-examination), whose goal is to instigate ideas and stimulate rational thought. The way it works is that the person with the opposing viewpoint is questioned on the rationale for their position, often with a negative form of their own question. In layman’s terms, the Socratic methodology can be referred to as the “Questioning the Questioner” methodology, wherein questioners are questioned on their viewpoint using their own question. For example, suppose someone challenges the need for encryption as a disclosure protection mechanism and asks you, “Why is it that I must ensure that data are protected against disclosure threats?” Instead of giving reasons such as “the security policy mandates it,” “the consequence of disclosure can be disastrous,” or “it is the right thing to do for our customers,” the Socratic method suggests that you revert the question back to the questioner in its negative form: “Why is it that you must not ensure that data are protected against disclosure threats?” In addition to curtailing opposition to the incorporation of security in software, the Socratic methodology can also be used to analyze complex concepts and determine security requirements by asking questions that instigate ideas and stimulate rational thought.

1.13.2 Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE®)

The Carnegie Mellon Software Engineering Institute (SEI) in conjunction with the US-CERT codeveloped OCTAVE, which is a risk-based information security strategic assessment methodology. OCTAVE is an acronym for Operationally Critical Threat, Asset, and Vulnerability Evaluation, and it includes a suite of tools, techniques, and methods.

OCTAVE provides insight into organizational risk and the state of security and resiliency within the organization. It can be self-directed, supports cross-functional teams assessing organizational and technical risk, and is available in three flavors: the original OCTAVE for any organization, OCTAVE-S for smaller organizations, and OCTAVE-Allegro, which is a streamlined approach for information security assessment and assurance.

OCTAVE is performed in three phases as depicted in Figure 1.15 and described in the following:

Figure 1.15 Operationally critical threats, assets, and vulnerability evaluation phases.

  • Phase 1: Build asset-based threat profiles — In this phase, the risk analysis team determines information-related items that are of value (assets) and important to the organization for continued business operations. The team then prioritizes those assets into critical assets and describes security requirements for each critical asset. In the next step, the team identifies potential threats that can be orchestrated against each critical asset, creating a threat profile for each asset. This evaluation is conducted to determine the risk at the organizational level.
  • Phase 2: Identify infrastructure vulnerabilities — In this phase, the risk analysis team examines infrastructural components (such as network paths, ports, protocols) and their level of resistance against attacks with the intent to identify weaknesses (vulnerabilities). This evaluation is conducted to determine the technical risks.
  • Phase 3: Develop security strategy and plans — In this phase, the risk analysis team makes plans to address threats to and mitigate vulnerabilities in critical assets that were identified in the first two phases.

A complete and in-depth description of OCTAVE is beyond the scope of this book. As a CSSLP, it is advisable to be familiar with this robust and comprehensive risk analysis and management methodology.

1.13.3 STRIDE and DREAD

STRIDE is a threat modeling methodology (Howard & LeBlanc, 2003) that is performed in the design phase of software development, in which threats are grouped into the following six broad categories (a mapping of each category to the security property it threatens follows the list):

  1. Spoofing: impersonating another user or process
  2. Tampering: unauthorized alterations that impact integrity
  3. Repudiation: cannot prove the action; deniability of claim
  4. Information disclosure: exposure of information to unauthorized user or process that impacts confidentiality
  5. Denial of service: service interruption that impacts availability
  6. Elevation of privilege: unauthorized increase of user or process rights
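Each STRIDE category is the inverse of a security property. As a study aid (this mapping is a common summary, not part of the formal methodology definition), it can be captured in a few lines of Python:

    # STRIDE categories mapped to the security property each one threatens.
    STRIDE = {
        "Spoofing": "authentication",
        "Tampering": "integrity",
        "Repudiation": "non-repudiation (accountability)",
        "Information disclosure": "confidentiality",
        "Denial of service": "availability",
        "Elevation of privilege": "authorization",
    }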

DREAD is a risk calculation or rating methodology (Howard & LeBlanc, 2003) that is often used in conjunction with STRIDE but does not need to be. To overcome the inconsistencies of qualitative risk ratings (such as high, medium, and low), the DREAD methodology aims to rate the identified (and categorized) threats by applying the following five dimensions (a scoring sketch follows the list):

  1. Damage potential: What will be the impact if the vulnerability is exploited?
  2. Reproducibility: What is the ease of recreating the attack/exploit?
  3. Exploitability: What minimum skill level is necessary to launch the attack/exploit?
  4. Affected users: How many users will be potentially impacted upon a successful attack/exploit?
  5. Discoverability: What is the ease of finding the vulnerability that yields the threat?
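To make the scoring arithmetic concrete, the following is a minimal Python sketch of a common way to derive a DREAD score: each dimension is rated on an agreed scale and the average becomes the overall risk rating for the threat. The 1 to 10 scale, the example threat, and the rating values are illustrative assumptions; organizations define their own scales and thresholds.

    def dread_score(damage, reproducibility, exploitability,
                    affected_users, discoverability):
        """Average the five DREAD dimension ratings into one risk rating."""
        ratings = [damage, reproducibility, exploitability,
                   affected_users, discoverability]
        if not all(1 <= r <= 10 for r in ratings):
            raise ValueError("each rating must be on the 1-10 scale")
        return sum(ratings) / len(ratings)

    # Hypothetical SQL injection threat categorized under tampering and
    # information disclosure:
    score = dread_score(damage=8, reproducibility=9, exploitability=7,
                        affected_users=9, discoverability=6)
    print(f"DREAD risk rating: {score:.1f} out of 10")  # 7.8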

STRIDE and DREAD are covered in depth in Chapter 3.

1.13.4 Open Source Security Testing Methodology Manual (OSSTMM)

The Institute for Security and Open Methodologies (ISECOM) developed the Open Source Security Testing Methodology Manual (OSSTMM), a peer-reviewed methodology for conducting security tests and measuring the results using applicable metrics. It is technically focused and broad in its evaluation, covering three channels and five major sections, as tabulated in Table 1.3.

Table 1.3

OSSTMM Channels and Sections

  • Communications channel
    • Data networks section; tests cover information and data controls
    • Telecommunications section; tests cover computers and telecommunications networks
  • Physical channel
    • Human section; tests cover personnel security awareness levels, fraud and social engineering control levels, and security processes
    • Physical section; tests cover access controls and building and perimeter locations
  • Spectrum channel
    • Wireless section; tests cover wireless devices and mobile devices

The primary purpose of this manual is to provide a scientific methodology for the accurate characterization of security through examination and correlation of test results in a consistent and reliable way. Secondarily, it provides guidelines for auditors to perform an assurance audit showing that the tests themselves were thorough, complete, and compliant, and that the results of the tests are quantifiable, reliable, consistent, and accurately representative of the tests. The output from an OSSTMM security audit is a report known as the Security Test Audit Report (STAR), which includes the specific actions conducted in tests, the corresponding metrics, and the state of the strength of controls.

1.13.5 Flaw Hypothesis Method (FHM)

The Flaw Hypothesis Method (FHM) is, as the name suggests, a vulnerability prediction and analysis method that uses comprehensive penetration testing to test the strength of the security of the software. FHM is very useful in the area of software certification. By simulating attacks (penetration testing), weaknesses in design (flaws) and coding (bugs) can be uncovered in the current version of the software, and the findings can also be used to determine security requirements for future versions. There are four primary phases (stages) in the FHM, as described in the following:

  • Phase 1: Hypothesizing potential flaws in the software from documentation. This documentation can be internal documentation that describes the software context and working knowledge (behavior) of the software, or it can be externally published vulnerability reports or lists. One major technique used in this phase of the FHM is the deviational method, in which deviations from known software behavior (misuse cases) are used to generate or hypothesize flaws.
  • Phase 2: Confirmation of flaws by conducting actual simulated penetration tests and desk checking tests. Desk checking verifies program logic by executing program statements using sample data. The flaws that are exploitable are marked as “confirmed” and those that are not are marked as “refuted.”
  • Phase 3: Generalization of confirmed flaws to uncover other possibilities of weaknesses in the software.
  • Phase 4: Addressing the discovered flaws in the software to mitigate risk by either adding countermeasures in the current version or designing in safeguards for future versions.

One of the major drawbacks of the FHM is that it can help identify only known threats; nonetheless, this is a very powerful methodology to attest to the security strength of software that has already been deployed or is being developed.

1.13.6 Six Sigma (6σ)

Sigma (σ) in statistics represents deviation from the norm. Although Six Sigma is a business management strategy for quality, it relates closely to security because it is used for process improvement, measuring whether a product (software) or service is near perfect in quality and eliminating defects. Defects are defined as deviations from specifications (requirements). Near perfect implies that the process is as close as possible to having zero defects.

For a process to be certified as having Six Sigma quality, it must have at most 3.4 defects per million opportunities (DPMO), where an opportunity is defined as a chance for deviation from (or nonconformance to) specifications; a worked DPMO calculation follows the list below. The key submethodologies by which Six Sigma quality can be achieved are:

  • DMAIC (define, measure, analyze, improve, and control), which is used for incremental improvement of existing processes that are below Six Sigma quality.
  • DMADV (define, measure, analyze, design, and verify), which is used to develop new processes for Six Sigma products and services. It can also be used for new versions of the product or service when the extent of changes is substantially greater than what incremental improvements can address.
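Returning to the DPMO threshold mentioned above, the calculation itself is simple. The following Python sketch uses illustrative numbers; the defect and opportunity counts are hypothetical, not from this text:

    # DPMO = (defects / (units * opportunities per unit)) * 1,000,000
    def dpmo(defects, units, opportunities_per_unit):
        return defects / (units * opportunities_per_unit) * 1_000_000

    # Suppose 5 defects were found across 10,000 shipped builds, each with
    # 150 opportunities for nonconformance to specifications:
    rate = dpmo(defects=5, units=10_000, opportunities_per_unit=150)
    print(f"{rate:.2f} DPMO")  # 3.33
    print(rate <= 3.4)         # True: within the Six Sigma threshold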

The Six Sigma processes are usually executed by trained professionals who are certified as Six Sigma green belts or black belts.

It is important to note that although a software product may be of Six Sigma quality, it may still be insecure if the specifications do not include security requirements. This further accentuates the importance of ensuring that security requirements are determined and included in addition to functional specifications.

1.13.7 Capability Maturity Model Integration (CMMI)

Developed by the SEI and, like Six Sigma, based on TQM, the Capability Maturity Model Integration (CMMI) is also a process improvement methodology; it provides guidance for quality improvement and a point of reference for appraising existing processes. Simply put, CMMI is a 1 to 5 rating scale that can be used to rate the maturity of the software development processes within one’s organization.

Three areas in which CMMI can be used are development (products), delivery (services), and acquisition (products and services).

CMMI includes a collection of best practices against which one can compare their organizational processes. When this is done formally, it is referred to as an appraisal, and the Standard CMMI Appraisal Method for Process Improvement (SCAMPI) incorporates some of the industry best practices for process improvement. Formal appraisals yield one of five CMMI maturity levels, indicative of processes ranging from chaotic and ad hoc to highly optimized within your organization. The five CMMI maturity levels are:

  • Initial (Level 1): Processes are ad hoc, poorly controlled, reactive, and highly unpredictable.
  • Repeatable (Level 2): Also reactive in nature, the processes are grouped at the project level and are characterized as being repeatable and managed by basic project management tracking of cost and schedule.
  • Defined (Level 3): Whereas Level 2 deals with processes at the project level, at this level the maturity of processes is established at the organizational level and improved continuously. Processes are characterized, well understood, and proactive in nature.
  • Managed Quantitatively (Level 4): In this level, the premise for maturity is that what cannot be measured cannot be managed and so the processes are measured against appropriate metrics and controlled.
  • Optimizing (Level 5): In this level, the focus is on continuous process improvements through innovative technologies and incremental improvements. Organizations with this level of software development process maturity have the ability to quickly and effectively adapt to changing business objectives, thereby allowing the organization to scale.

Incorporation of security into the SDLC is easier and more efficient if the organizations already have a higher level of process maturity.

1.14 Security Frameworks

Some of the most prominent security frameworks that are related with software security or associated areas are described in this section.

1.14.1 Zachman Framework

Although it has been nearly three decades since the Zachman Framework was formulated by John Zachman, it is still regarded as a robust enterprise architecture framework. The goal of the framework is to align IT to the business. It is often depicted as a 6 × 6 matrix that factors in six reification transformations (strategist, owner, designer, builder, implementer, and workers) along the rows and six communication interrogatives (what, how, where, who, when, and why) as columns. The intersections of the six transformations and the six interrogatives yield the architectural elements. Using the same interrogative technique against the reification transformations from a security standpoint can be useful in determining the security architecture that needs to be designed.

1.14.2 Control Objectives for Information and Related Technology (COBIT®)

Published by the IT Governance Institute (ITGI), the Control Objectives for Information and related Technology (COBIT®) is an IT governance framework with supporting tools that can be used to close gaps between control requirements, technical issues, and business risks. It defines the reasons for IT governance, the stakeholders, and what it needs to accomplish. It enables policy development and adds emphasis on regulatory compliance. The complete COBIT package includes the following six publications:

  1. Executive Summary
  2. Framework
  3. Control Objectives
  4. Audit Guidelines
  5. Implementation Toolset
  6. Management Guidelines

1.14.3 Committee of Sponsoring Organizations (COSO)

The Committee of Sponsoring Organizations (COSO) is a conglomeration of globally recognized frameworks that provides guidance on organizational governance, business ethics, internal controls, enterprise risk management, fraud, and financial reporting. COSO describes a unified approach for the evaluation of internal control systems that have been designed to provide reasonable assurance. The COSO Enterprise Risk Management (ERM) framework, which emphasizes the importance of identifying and managing risks across the enterprise, is widely adopted and used.

1.14.4 Sherwood Applied Business Security Architecture (SABSA)

The Sherwood Applied Business Security Architecture (SABSA) is a framework for developing risk-based enterprise security architectures and for delivering security solutions that support business initiatives. It is based on the premise that security requirements are determined from the analysis of the business requirements. It is a layered model that covers the different phases of the IT life cycle, from strategy and design through implementation and operations. Each layer represents the view of a role played in the SDLC and the associated security architecture level that can be derived from it, as tabulated in Table 1.4. It aligns with other acclaimed frameworks, standards, and methodologies such as COBIT, the ISO 27000 series, and ITIL.

Table 1.4

SABSA Layers (View: Security Architecture Level)

  • Business: Contextual
  • Architect: Conceptual
  • Designer: Logical
  • Builder: Physical
  • Tradesman: Component
  • Facilities manager: Operational

1.15 Regulations, Privacy, and Compliance

Until a few years ago, organizations that came under regulatory oversight for software security (more particularly, data) breaches were an exception. This is no longer the case, as is evident from the chronology of data breaches published by the Privacy Rights Clearinghouse, which lists to date 260 million or more records that have been breached as a result of software insecurity. Financial levies and recovery costs have in many cases been so exorbitant that they caused disruptions up to and including the total bankruptcy of organizations, not to mention the loss in stakeholder trust. This has led to the plethora of regulations and privacy mandates that organizations need to comply with. The cost of noncompliance, combined with the need to regain (where it is lost) or retain (where it is not yet lost) stakeholder trust, has become a driving factor for organizations to include regulatory and privacy requirements as part of their governance programs, which in turn include incorporating security into the SDLC as an integral part of the process.

Regulations and privacy mandates exist primarily to provide a check-and-balance mechanism to earn stakeholder trust and prevent the disclosure of personally identifiable, health, or financial information (PII, PHI, and PFI). Regulatory and privacy requirements need to be determined during the requirements phase of the SDLC, and control mechanisms to ensure that they are complied with must be factored into the software design, architecture, development, and deployment. It is imperative that software development team members work closely with the legal and/or privacy teams in your organization to obtain the list of applicable regulations for your organization.

Covering in detail each and every regulation and privacy requirement that is necessary to comply with is beyond the scope of this book. In this section, some of the significant regulations, acts, and privacy mandates are introduced. This is followed by the challenges they invoke and a brief description of how to ensure that privacy requirements are not ignored and privacy-related guidelines and concerns are addressed when dealing with building secure software. It is highly advisable that as a CSSLP, you are familiar with each of these significant regulations and acts as well as any other regulatory, privacy, and compliance requirements that your organization needs to be compliant with.

1.15.1 Significant Regulations and Acts

1.15.1.1 Sarbanes–Oxley (SOX) Act

The Sarbanes–Oxley Act, commonly referred to as SOX, is arguably the most significant regulation that has a direct impact on security. Also known as the Public Company Accounting Reform and Investor Protection Act, SOX was enacted in 2002 to improve quality and transparency in financial reporting and independent audits and accounting services for public companies. It came on the heels of major corporate and accounting frauds perpetrated by companies like Enron, Tyco International, and WorldCom, and was intended to increase corporate responsibility to investors.

The SOX Act has 11 titles that mandate specific requirements for financial reporting and address:

  1. Public Company Accounting Oversight Board
  2. Auditor Independence
  3. Corporate Responsibility
  4. Enhanced Financial Disclosures
  5. Analyst Conflicts of Interest
  6. Commission Resources and Authority
  7. Studies and Reports
  8. Corporate and Criminal Fraud Accountability
  9. White-Collar Crime Penalty Enhancements
  10. Corporate Tax Returns
  11. Corporate Fraud and Accountability

Two sections under the SOX Act became prominent, and in some cases contentious, in the context of security controls and the Securities and Exchange Commission (SEC) rules adopted to conform with the SOX Act: Section 302, which covers corporate responsibility for financial controls, and Section 404, which deals with management’s assessment of internal controls. The strength of the controls is assessed, and an internal control report is generated that describes the adequacy and effectiveness of the disclosed controls.

1.15.1.2 BASEL II

BASEL II is an international banking regulatory accord that was originally developed to protect against financial operational risks and fraud. It was developed to serve as an international standard for banking regulators and to provide recommendations on banking regulations and laws.

1.15.1.3 Gramm–Leach–Bliley Act (GLBA)

The Gramm–Leach–Bliley Act (GLB Act) is a financial privacy act that aims to protect consumers’ personal financial information (PFI) held by financial institutions. Also known as the Financial Modernization Act of 1999, the GLB Act has the following three main parts to its privacy requirements:

  1. The Financial Privacy Rule governs the collection and disclosure of PFI. Its scope also includes companies that are nonfinancial in nature.
  2. The Safeguards Rule applies only to financial institutions (banks, credit unions, securities firms, insurance companies, etc.) and mandates that these institutions design, implement, and maintain safeguards to protect customer information.
  3. The Pretexting Provisions of this Act protect consumers from individuals and companies who falsely pretend (pretext) a need in order to obtain PFI.

All three rules are related to software that deals with the collection, processing, retention, and disposal of PFI.

1.15.1.4 Health Insurance Portability and Accountability Act (HIPAA)

This is another privacy rule, but unlike the GLB Act, which deals with PFI, the Health Insurance Portability and Accountability Act (HIPAA) deals with personal health information (PHI). Enacted in 1996 and enforced by the Office for Civil Rights (OCR), HIPAA protects the privacy of individually identifiable health information. It was developed to assure the confidentiality and safety of patient information.

1.15.1.5 Data Protection Act

The Data Protection Act of 1998 was enacted to regulate the collection, processing, holding, use, and disclosure of an individual’s private or personal information. The European Union Data Protection Directive (EUDPD), in fact, declares that personal data protection is a fundamental human right and requires that personal data that are no longer necessary for the purposes for which they were originally collected must either be deleted or modified so that they can no longer identify the individual from whom the data were collected. Software that collects, processes, stores, and archives personal data must therefore be designed and developed with deletion or de-identification mechanisms. The Personal Information Protection and Electronic Documents Act (PIPEDA) is to Canada what the EUDPD is to the European Union.

1.15.1.6 Computer Misuse Act

This act makes provisions for securing computer material against unauthorized access and/or modification. Computer misuse such as hacking, unauthorized access, unauthorized modification of contents, and disruptive activities like the introduction of viruses are designated as criminal offenses.

1.15.1.7 State Security Breach Laws

The majority of states in the United States now have some form of regulation or bill to deal with security breaches involving the compromise of personal information. The one that needs special mention is California State Bill 1386 (SB 1386), which was the harbinger of its kind. SB 1386 requires that personal information be destroyed when it is no longer needed by the collecting entity. It also requires that entities doing business in the state of California notify the owners of personal information when the protection of their information has been breached, or when it is reasonably believed to have been accessed or acquired by someone unauthorized.

1.15.2 Challenges with Regulations and Privacy Mandates

While it is necessary for organizations to comply with regulatory and privacy requirements, it has been observed that such compliance does come with some challenges. Some of the challenges that organizations face when they need to comply with regulations and privacy mandates are open interpretations, auditor’s subjectivity, localized jurisdiction, regional variations, and inconsistent enforcement.

Most regulations are not very specific but are general and broad in their description. They do not call out specific security requirements that need to be incorporated into the software. This leaves room for different organizations to interpret the requirements as they see fit for their organization.

Additionally, an auditor’s experience and knowledge has a lot to do with the interpretation of the regulatory and/or privacy requirements, because the requirements are usually generic and broad in nature.

Augmenting the open interpretation issue is the fact that when these regulations need to be enforced because of noncompliance, their applicability is not universal, internationally or domestically; jurisdiction is localized. For example, European data protection law is much more stringent than, and different from, that of the United States or Asia. Such regional variations can hamper the flow of business operations and the application of security in software development, because one region may have to comply with a regulation while another region may not find it needful.

Open interpretation, auditor’s subjectivity, localized jurisdiction, and regional variations make it difficult to enforce these regulations uniformly and consistently.

1.15.3 Privacy and Software Development

Privacy requirements must be taken into account and deemed as important as security or reliability requirements when developing secure and compliant software. Some standards and best practices, such as the PCI DSS, disallow the storage of certain private and sensitive information.

Privacy initiatives must consider data privacy as well as support from the business. Data classification can help identify data that will need privacy protection requirements applied. Categorizing the data into tiers based on privacy impact (e.g., high, medium, or low) assists in ensuring that appropriate levels of privacy controls are in place. Proven strategies for making the privacy program effective include gaining the support of executive and top-level management as sponsors or champions and enforcing the program using a policy or standard.

Best practice guidelines for data privacy that need to be included in software requirements analysis, design, and architecture can be addressed if one complies with the following rules (a sketch of these rules as decision functions follows the list):

  • If you do not need it, do not collect it.
  • If you need to collect it for processing only, collect it only after you have informed the user that you are collecting their information and they have consented, but do not store it.
  • If you have the need to collect it for processing and storage, then collect it, with user consent, and store it only for an explicit retention period that is compliant with organizational policy and/or regulatory requirements.
  • If you have the need to collect it and store it, then do not archive it if the data have outlived their usefulness and there is no retention requirement.
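As a rough illustration of how these rules might be encoded in a data-handling layer (the function and parameter names are hypothetical assumptions, not a prescribed implementation):

    # Sketch of the data privacy rules above as simple decision functions.
    def may_collect(needed, consent_given):
        # Rules 1 and 2: collect only what is needed, and only with consent.
        return needed and consent_given

    def may_store(needed_for_storage, consent_given, retention_days):
        # Rule 3: store only with consent and an explicit retention period.
        return needed_for_storage and consent_given and retention_days > 0

    def may_archive(still_useful, retention_required):
        # Rule 4: do not archive data that have outlived their usefulness
        # and carry no retention requirement.
        return still_useful or retention_required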

The Acceptable Use Policy (AUP) and log-in banners are two mechanisms that are commonly used to solicit user consent by informing users that their personal information is being collected and possibly retained, or that they are being monitored when using company resources. The AUP protects the employer against violators of policy and is a deterrent to individuals who may be engaged in malicious or nefarious activities that put their employment at risk.

Additionally, AUPs must be complementary and not contradictory to information security policies, explicitly stating what users are allowed to do and what they are not allowed to do. Examples of acceptable user behavior include diligent use of company resources, limiting software to execute within an IP range, and restricting trial version software components to development server instances only. Examples of unacceptable user behavior include reverse engineering the software, prohibited resale of Original Equipment Manufacturer (OEM) individual licenses, surfing porn or hate sites, and sharing illegal software.

1.16 Security Models

Just as an architectural model is an abstraction of the real building, security models are a formal abstraction of the security policy, which comprises the set of security requirements that needs to be part of the system or software so that it is resistant to attack, can tolerate the attacks that cannot be resisted, and can recover quickly from an undesirable state if compromised. In other words, a security model is a formal presentation of the security policy. Security models include the sequence of steps that are required to develop secure software or systems and provide the “blueprint” for the implementation of security policies.

Security models can be broadly categorized into confidentiality models, integrity models, and access control models.

In this section we will be covering the popular security models, with special attention given to how they apply to software security.

  • Confidentiality Models
    • Bell–LaPadula (BLP)
  • Integrity Models
    • Biba
    • Clark and Wilson
  • Access Control Models
    • Brewer and Nash

1.16.1 BLP Confidentiality Model

If disclosure protection is the primary concern, one must consider the BLP confidentiality model in their software design. BLP is a confidentiality model that defines the notion of a secure state, i.e., access (read only, write only, or read and write) to information is permitted based on rules and the classification of the information itself (Tipton & Krause, 2007).

BLP rules can be specified using properties. The three properties are the simple security property, which has to do with read access; the star (*) security property, which has to do with write access; and the strong star security property, which has to do with both read and write access capabilities.

The simple security property states that if you have “read” capability, you can read data at your level of secrecy or at a lower level of secrecy, but you must not be allowed to read data at a higher level of secrecy. This is commonly known as the “No Read Up” rule of BLP.

The star (*) security property states that if you have “write” capability, you can write data at your level of secrecy or at a higher level of secrecy without compromising its value, but you must not be allowed to write data at a lower level of secrecy. Writing to a level you cannot read creates a type of covert channel because you cannot read what you write.

The strong star security property states that if you have both “read” and “write” capabilities, you can read and write data only at your level of secrecy and that you must not be allowed to read and write to levels of higher or lower secrecy.

Assume that the completion of your data classification exercise has yielded the following classification in decreasing order of protection needs, namely, Confidential > Restricted > Public.

The BLP confidentiality model will mandate that someone who is allowed to view only Restricted information is not permitted to read information classified as Confidential (“no read up”) and, at the same time, is not allowed to write at the Public level (“no write down”), as depicted in Figure 1.16. BLP is often simplified in its description as the security model that enforces the “no read up” and “no write down” security policy. A minimal code sketch of these two rules follows Figure 1.16.

Figure 1.16 Bell–LaPadula confidentiality model.
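The two BLP rules are simple enough to express directly in code. The following minimal Python sketch (an illustration using the classification levels from the example above, not a production access control implementation) enforces “no read up” and “no write down”:

    # Higher number = higher secrecy.
    LEVELS = {"Public": 0, "Restricted": 1, "Confidential": 2}

    def can_read(subject_level, object_level):
        # Simple security property: no read up.
        return LEVELS[subject_level] >= LEVELS[object_level]

    def can_write(subject_level, object_level):
        # Star (*) property: no write down.
        return LEVELS[subject_level] <= LEVELS[object_level]

    # A subject cleared for Restricted information:
    print(can_read("Restricted", "Confidential"))  # False: no read up
    print(can_write("Restricted", "Public"))       # False: no write down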

BLP has a strong impact on software design. When a thread is prevented from accessing (reading) a thread executing at a higher priority level or from modifying (writing to) a thread executing at a lower priority level, it is operating in accordance with the rules of the BLP confidentiality model.

1.16.2 Biba Integrity Model

While the BLP model deals primarily with confidentiality assurance, the Biba Integrity model was the first to address modification or alteration protection. The BLP model has to do more with “read” capability and the Biba model has to do more with “write” capability. Like the BLP model, the Biba model also has the simple security property and the star (*) security property, and so it can be deemed to be the integrity equivalent of the BLP model (Tipton & Krause, 2007).

The simple security property states that if you have read capability, you can read data at your level of accuracy or from a higher level of accuracy, but you must not be allowed to read data from a lower level of accuracy. Allowing a read down operation can result in the risk of contaminating the accuracy of your data.

The star (*) security property states that if you have write capability, you can write data at your own level of accuracy or to a lower level of accuracy, but you must not be allowed to write data at a higher level of accuracy. Allowing a write up operation can result in the risk of contaminating the data that exist at the higher level.

Assume that the completion of your data classification exercise has yielded the following classification in decreasing order of protection needs: Top Secret > Secret > Unclassified.

The Biba Integrity model will mandate that someone who is allowed to view only Secret information is not permitted to read information classified as Unclassified (“no read down”) and, at the same time, is not allowed to write at the Top Secret level (“no write up”), as depicted in Figure 1.17. Biba is often simplified in its description as the security model that enforces the “no read down” and “no write up” security policy. A minimal sketch of these rules, mirroring the BLP sketch above, follows Figure 1.17.

Figure 1.17 Biba Integrity model.
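The Biba rules can be sketched the same way, as the mirror image of the BLP sketch above (again an illustration using the levels from this example), enforcing “no read down” and “no write up”:

    # Higher number = higher accuracy (integrity).
    LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

    def can_read(subject_level, object_level):
        # Simple property: no read down (avoid contaminating your own data).
        return LEVELS[subject_level] <= LEVELS[object_level]

    def can_write(subject_level, object_level):
        # Star (*) property: no write up (avoid contaminating higher data).
        return LEVELS[subject_level] >= LEVELS[object_level]

    print(can_read("Secret", "Unclassified"))  # False: no read down
    print(can_write("Secret", "Top Secret"))   # False: no write up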

In addition to the simple security and the star (*) security property, Biba adds a third property, unique to the Biba security model, that is known as the invocation property. The invocation property states that subjects cannot send messages (invoke services) to objects with higher integrity.

1.16.3 Clark and Wilson Model (Access Triple Model)

Like the Biba Integrity model, the Clark and Wilson model is an integrity model as well. It not only focuses on unauthorized subjects making modifications to objects, but also addresses integrity aspects of authorized personnel making unauthorized changes. For example, an authenticated employee on your network (authorized personnel) should not be able to make changes to his own salary information and give himself a bonus (unauthorized changes) without being challenged. The Clark and Wilson model is even more exhaustive in the sense that in addition to addressing integrity goals, it also aims at addressing consistency goals by maintaining internal and external consistency by defining well-formed transactions.

Let us take, for example, that customers are allowed to place orders for your company’s products over the Web using the company’s online e-commerce store. After the customer confirms the order submission, the software is designed to first add the customer to the database and then generate an order tied to the customer that is recorded in the customer order table. Order details (the products your customer selected) are subsequently added to the order detail table in the database and are referenced to the customer order table using the order ID. Assume that while the order details are being added to the database, the database connection pools are maxed out and the transaction fails. If your software is designed and developed in accordance with the Clark and Wilson security model, then you can expect the software to roll back the order entry when the order detail insert fails, so that data consistency is ensured. The Clark and Wilson model is also known as an access triple model. The access triple model ensures that access of a subject to an object is restricted and allowed only through a trusted program (which can be your software), as depicted in Figure 1.18. For example, all database operations are allowed only through a software program or application (which preferably is audited) and no direct database access is allowed. The user subject-to-program and program-to-object (data) binding creates a form of separation of duties that ensures integrity. A sketch of such a well-formed transaction follows.
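The well-formed transaction in the order example can be sketched with an atomic database transaction. The schema, table names, and use of SQLite below are illustrative assumptions, not the design described in the text:

    import sqlite3

    # Either the order and all of its detail rows are committed together,
    # or nothing is recorded at all.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer_order (order_id INTEGER PRIMARY KEY,
                                     customer TEXT NOT NULL);
        CREATE TABLE order_detail (order_id INTEGER NOT NULL,
                                   product TEXT NOT NULL);
    """)

    def place_order(customer, products):
        # "with conn" opens one atomic transaction: it commits on success
        # and rolls back everything, including the order row, on failure.
        with conn:
            cur = conn.execute(
                "INSERT INTO customer_order (customer) VALUES (?)",
                (customer,))
            for product in products:
                conn.execute(
                    "INSERT INTO order_detail (order_id, product) "
                    "VALUES (?, ?)", (cur.lastrowid, product))

    place_order("Alice", ["widget", "gadget"])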

Figure 1.18 Clark and Wilson access triple model.

1.16.4 Brewer and Nash Model (Chinese Wall Model)

The Brewer and Nash model is an access control security model that was developed to ensure that the Chinese Wall security policy is met. The Chinese Wall security policy is a set of rules that allows individuals to access proprietary data as long as there is no conflict of interest, i.e., no subject can access objects on the other side of a wall that separates two competing parties, as depicted in Figure 1.19. The motivation for this model came from the need to avoid exposing sensitive information about a company to its competitor, especially in settings where the same financial consultant provides services to both competing organizations. In such a situation, the access rights of the individual must be dynamically established based on the data that the individual has previously accessed.

Figure 1.19 Chinese Wall security model.

The Brewer and Nash Chinese Wall security model is very applicable in today’s software landscape. With the increase in Software as a Service (SaaS) solutions, the need for a definitive wall between your organization’s data and your competitor’s data is mandatory. For example, if you use a Customer Relationship Management (CRM) SaaS solution, such as salesforce.com, to manage your customer and prospective client lists, and your sensitive data are hosted in a shared environment, then there needs to be a wall defined to prevent a competitor who is also using the same SaaS CRM solution from accessing your sensitive information, and vice versa. If access to competitor information is allowed, a conflict of interest situation is created, and this is what the Brewer and Nash model aims to avoid. The Brewer and Nash model is not only an access control model but is also considered an information flow model. A minimal sketch of the dynamic access rule follows.
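The sketch below (the companies and conflict-of-interest classes are hypothetical) shows how a subject’s rights depend on what it has previously accessed:

    # A subject may access an object unless it has already accessed a
    # competitor in the same conflict-of-interest class (the "wall").
    CONFLICT_CLASSES = {"CompanyA": "CRM", "CompanyB": "CRM",
                        "BankX": "Banking"}
    history = {}  # subject -> set of companies already accessed

    def can_access(subject, company):
        accessed = history.setdefault(subject, set())
        for prior in accessed:
            if (CONFLICT_CLASSES[prior] == CONFLICT_CLASSES[company]
                    and prior != company):
                return False  # conflict of interest: access denied
        accessed.add(company)
        return True

    print(can_access("consultant", "CompanyA"))  # True
    print(can_access("consultant", "BankX"))     # True: different class
    print(can_access("consultant", "CompanyB"))  # False: CompanyA competitor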

The security models covered so far are by no means an exhaustive list of all the information security models that exist today. There are other security models, such as the noninterference model, state machine models, the Graham–Denning model, and the Harrison–Ruzzo–Ullman model, with which it is advisable for you to be familiar as a security professional, so that your role as a CSSLP is most effective.

1.17 Trusted Computing

The State-of-the-Art Report (SOAR) on Software Security Assurance starts by accurately stating that the objective of software assurance is to establish a basis for gaining justifiable confidence that software will consistently demonstrate desirable properties. These desirable properties include quality (error free), reliability (functioning as designed), dependability (predictable outputs), usability (nonrestrictive in performing what the user expects), interoperability (functioning in disparate heterogeneous environments), safety (without harm to the user), fault tolerance, and, of course, security (resistance to attack, tolerance upon breach, and quick recovery from an insecure state). “Consistently demonstrate” implies that these properties are evident each time, every time. “Justifiable confidence,” in other words, is trust. So a simple layman’s definition of software assurance is that it is the concept that aims to answer the question, “Can the software be trusted?”

Microsoft is known for its Trustworthy Computing initiative, but trusted computing is not a new concept nor is it a vendor proprietary concept, specific to Microsoft. Microsoft’s Trustworthy Computing initiative aims at covering four tenets, one of which is security, the other three being privacy (individual rights of the user), reliability (predictable, resilient, and recoverable), and business integrity (social responsibility of the organization to its consumers). It was initiated to address security vulnerabilities in Microsoft’s software, and its success has been demonstrated to be directly related to the incorporation of security into the software product development life cycle with the goal to address and avoid design flaws and implementation (coding) bugs before the product is released.

The key thing to note is that software assurance is about trust in general, not just security; security is specifically what software security assurance is about. Security is one of the various desirable properties expected of software under the superset of trust. Trusted computing, in other words, is about ensuring software assurance, and in the context of the CSSLP, we focus primarily on software security assurance.

There are certain concepts that a CSSLP must be familiar with in regard to trusted computing. These include ring protection, the trust boundary (or security perimeter), the Trusted Computing Base (TCB), and the reference monitor.

1.17.1 Ring Protection

Current-day operating systems (OSs) employ a security mechanism known as ring protection. Based on the Honeywell Multics operating system architecture, the ring protection mechanism can be portrayed as a set of concentric numbered rings, as depicted in Figure 1.20.

Figure 1.20 Ring protection.

It is the ring number that determines the level of access that is allowed. The ring number has an inverse relationship with the level of access, i.e., the lower the ring number, the higher the level of access, and vice versa (see the sketch below). Operations performed at the ring 0 level are highly privileged, and this includes OS kernel functionality and access. The ring 3 level is where software applications run. Hackers use the terms root, owned, or pwned when they successfully exploit vulnerabilities and gain the highest privilege level (such as privileges at ring 0) in the system. Rootkits (covered later) also operate by gaining ring 0 level privileges.
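The inverse relationship can be stated in one line of code; the ring labels below follow the four-ring layout described in this chapter:

    # Lower ring number = higher level of access.
    RINGS = {0: "OS kernel", 1: "I/O utilities",
             2: "OS utilities and device drivers", 3: "user applications"}

    def more_privileged(ring_a, ring_b):
        return ring_a < ring_b

    print(more_privileged(0, 3))  # True: the kernel outranks applications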

1.17.2 Trust Boundary (or Security Perimeter)

Trust boundary is the abstract concept that determines the point at which trust levels change. It is also referred to as the security perimeter. There is a very clear-cut trust boundary at each ring level starting with the outermost user-land ring level with low trust to the innermost kernel-land ring level that is highly privileged. The concept of a trust boundary is not just limited to ring protection mechanisms. Trust boundaries must be taken into account in software design and architecture. For example, in architecting software that will be deployed in an Internet environment, trust at different zones must be factored into the design and architecture. Security controls in the Internet zone where there is lower trust must be much more restrictive than what one can expect in the DMZ or the Intranet zone. We will revisit this concept under the context of Threat Modeling in Chapter 3.

1.17.3 Trusted Computing Base (TCB)

Even though the etymology of the term Trusted Computing Base (TCB) is from the Trusted Computer System Evaluation Criteria (TCSEC) more commonly known as the Orange Book, which is considered by some to be dated, its application in the software security world today is not vestigial by any account.

As described earlier, the security policy is the set of security requirements that must be part of the system or software to make it resistant to most attacks, tolerant of the attacks that cannot be resisted, and quickly recoverable from an undesirable state if compromised. The TCB is the abstract concept that ensures the security policy is enforced at all times. The TCB includes all of the components (hardware, software, and firmware), mechanisms (process and interprocess communications), and human factors that provide security, the failure of which would result in a security breach or violation. It is an abstract concept in the sense that software architects and designers must take into account all the hardware, software, and firmware components and their mechanisms to design secure software. The hardware, firmware, and software elements of a TCB are also referred to as the security kernel.

Two important characteristics for the TCB to be effective and efficient are that it must be simple and testable. Testability means that the TCB can be verified as being functionally complete and correct.

The TCB can ensure that the security policy is enforced by monitoring four basic functions. These are:

  1. Process activation
  2. Execution domain switching
  3. Memory protection
  4. Input/output operations

1.17.3.1 Process Activation

In-depth discussion of process activation within a computer is beyond the scope of this book; in this section, process activation is covered at a more generic and basic level. Most of us are probably familiar with an online e-commerce transaction. You add a product to your shopping cart, specify a discount code if available, verify the total amount, and place the order. Behind the scenes, the software in such a scenario is designed to calculate the total price of the order using a few functions: function A computes the subtotal amount (unit price times quantity before discounts), function B computes the discount amount (discount percentage times subtotal amount) if a discount is available, function C calculates the tax amount (tax percentage times the subtotal amount), and function D determines the total price (subtotal amount minus discount amount plus tax), as sketched in the code below. At the bits and bytes level, these functions are translated into executing processes (say, A, B, C, and D) that can be made up of one or many threads (say, A.1 to get the unit price, A.2 to get the quantity, A.3 to get the product of unit price and quantity, etc.). A thread is a single set of instructions and its associated data. The associated data values (such as unit price, quantity, discount code, and tax percentage) are loaded into memory when the instructions call for them. Each of these process threads is controlled by the computer’s central processing unit (CPU), which fills its own registers (holding spaces) with the instructions to execute for the processes to complete. In this case, in order for the total price (process D) to be determined, the process must be interrupted by the computation of the tax (process C), which, in turn, is dependent on the computation of the subtotal (process A). In other words, the instructions for process D in the CPU are interrupted by process C, which, in turn, is interrupted by process A, so that the total operation can complete. A process is said to be activated when it is allowed to interact with the CPU or, in other words, when its own interrupt is called for by the CPU. When a process no longer needs to interact with the CPU upon the completion of all of the instructions within that process, it is said to be deactivated.
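Functions A through D might look like the following sketch (the prices and rates are hypothetical; the point is only that D depends on C and B, which in turn depend on A):

    def subtotal(unit_price, quantity):           # function A
        return unit_price * quantity

    def discount(subtotal_amount, discount_pct):  # function B
        return subtotal_amount * discount_pct

    def tax(subtotal_amount, tax_pct):            # function C
        return subtotal_amount * tax_pct

    def total(unit_price, quantity, discount_pct, tax_pct):  # function D
        # D cannot complete until A (and then B and C) have completed.
        sub = subtotal(unit_price, quantity)
        return sub - discount(sub, discount_pct) + tax(sub, tax_pct)

    order_total = total(unit_price=10.00, quantity=3,
                        discount_pct=0.10, tax_pct=0.08)
    print(f"{order_total:.2f}")  # 29.40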

In the context of software security, it is extremely important for the TCB to ensure that the activation of processes is not circumvented and sabotaged by a malicious process that can result in a compromise with undesirable effects.

1.17.3.2 Execution Domain Switching

Software applications are expected to operate at the outermost ring level with the highest ring number (ring 3 or user-land), and calls for native operating system kernel access at the lowest ring number (ring 0 or kernel-land) must not be directly allowed. There needs to be a strict dichotomy between kernel-land and user-land, and processes executing in one domain must not be allowed to execute in the other domain. The benefits of such isolation are not only confidentiality and integrity, wherein the OS kernel execution is independent and contained, protecting against disclosure of sensitive information (such as cryptographic keys) or alteration of instruction sequences, but also availability, because applications that crash in user-land will not affect the stability of the entire system.

Each process and its set of data values must be isolated from other processes and the TCB must ensure that one process executing at a particular domain cannot switch to another domain that requires a different level of trust for operations to continue and complete, i.e., switching from low trust user-land to highly privileged kernel-land and back is not allowed.

1.17.3.3 Memory Protection

Because each execution domain includes instruction sets in CPU registers and data stored in memory, the TCB monitors memory references to ensure that disclosure, alteration (contamination), and destruction of memory contents are disallowed.

1.17.3.4 Input/Output Operations

Input/output (I/O) utilities execute at ring 1, the ring level closest to kernel-land. This allows the OS to control access to input devices (e.g., keyboard and mouse) and output devices (e.g., monitor, printer, and disks). When your software needs to write to the database stored on a disk, the instruction for this operation has to be passed from ring 3, where your software is executing, to ring 1 to request access to the disk, via ring 2, which is where the OS utilities and disk device drivers (programs) operate. The TCB ensures that the sequence of cross-domain communications for access to I/O devices does not violate the security policy.

1.17.4 Reference Monitor

Subjects are active entities that request a resource. Subjects can be human or nonhuman, such as another program or a batch process. The resources that are requested are referred to as objects. Objects are passive entities; examples include a file, a program, data, or hardware. A subject’s access to an object must be mediated and allowed based on the subject’s privilege level. This access is mediated by what is commonly known as the reference monitor. The reference monitor is an abstract concept that enforces or mediates the access relationships between subjects and objects.

Trusted Computing is only possible when the reference monitor itself is (a toy mediation sketch follows this list):

  • Tamper-proof (disallowing unauthorized modifications)
  • Always invoked (so that other processes cannot circumvent the access checks)
  • Verifiable (correct and complete in its access mediation functionality)
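The following toy Python sketch illustrates the mediation role only; real reference monitors are implemented inside the kernel, and the subjects, objects, and rights here are hypothetical:

    # Every access request is mediated against an access matrix.
    ACCESS_MATRIX = {
        ("batch_job", "payroll_file"): {"read"},
        ("admin", "payroll_file"): {"read", "write"},
    }

    def mediate(subject, obj, right):
        # Always invoked: access is granted only if the matrix allows it.
        return right in ACCESS_MATRIX.get((subject, obj), set())

    print(mediate("batch_job", "payroll_file", "write"))  # False: denied
    print(mediate("admin", "payroll_file", "write"))      # True: granted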

1.17.5 Rootkits

Authors Hoglund and Butler, in their book Rootkits, define a rootkit as “a set (kit) of programs and code that allows an attacker to maintain a permanent or consistent undetectable access to ‘root,’ the most powerful user on a computer.” Because rootkits are programs and code that execute at the highest privilege levels and are undetectable, their malicious use can have a dire and serious impact on trusted computing and security policy enforcement. Malicious users and programs usually use rootkits to modify the operating system (OS), masquerading as legitimate programs such as kernel-loadable modules (*nix OSs) or device drivers (Windows OS). Because they masquerade as legitimate programs, they evade detection. Rootkits can be used to install keyloggers, alter log files, install covert channels, and evade detection and removal.

Rootkits are primarily used for remote control or for software eavesdropping. Hackers and malicious software (malware) such as spyware attempt to exploit vulnerabilities in software in order to install rootkits in unpatched and unhardened systems. It is therefore imperative that the security of the software we build or buy does not allow for the compromise of trusted computing by becoming victims to the malicious use of rootkits.

It must, however, be recognized that intrinsically, rootkits are not a security threat and there are several valid and legitimate reasons for developing this type of technology. These include using the rootkits for remote troubleshooting purposes, sanctioned and consented law enforcement, espionage situations, and also monitoring user behavior.

1.18 Trusted Platform Module (TPM)

Like the TCB, another concept and mechanism that helps ensure trusted computing is the Trusted Platform Module (TPM). Developed by the Trusted Computing Group (TCG), whose mission is to develop and support open industry specifications for trusted computing across multiple platform types, the TPM refers both to a specification used in personal computers and other systems to ensure protection against disclosure of sensitive or private information and to the implementation of that specification. The implementation, currently at version 1.2 of the specification, is a microcontroller, commonly referred to as the TPM chip, usually affixed to the motherboard (hardware) itself.

Although the TPM itself does not control what software runs, the TPM provides generation and tamperproof storage of cryptographic keys that can be used to create and store identity (user or platform) credentials for authentication purposes. A TPM chip can be used to uniquely identify a hardware device and provide hardware-based device authentication. It can be complementary to smartcards and biometrics and in that sense facilitates strong multifactor authentication and enables true machine and user authentication by requiring the presentation of authorization data before disclosing sensitive or private information.

TPM systems offer enhanced and added security and protection against external software attack or physical theft because they take into account hardware-based security aspects in addition to the security capabilities provided by software. It must, however, be understood that keys and sensitive information stored in the TPM chip are still vulnerable to disclosure if the software that is requesting this information for processing is not architected securely, as has been demonstrated in the cold boot side channel attack. This further accentuates the fact that software security is critical to ensure trusted computing. The TPM can also be leveraged by software developers to increase security in the software they write by using the TCG’s Trusted Software Stack (TSS) interface specification. The TCG also publishes the Trusted Server specification (for server security), the Trusted Network Connect architecture (for network security), and the Mobile Trusted Module (for mobile computing security).

Side channel attacks including the cold boot attack will be covered in Chapter 7.

1.19 Acquisitions

Security considerations in software acquisitions will be covered in depth in Chapter 6. This section introduces the reasons for software acquisitions, acquisition mechanisms, and the security aspects to consider when acquiring software.

It is not surprising that not all software is built in-house. In fact, a substantial amount of the software within one’s organization is probably developed by a third party and purchased as commercial off-the-shelf (COTS) software. A buy versus build decision is usually dependent on the time (schedule), resource (scope), and cost (budget) elements of the iron triangle. Generally, when the time to market is short, few appropriately skilled resources are available, and the budget for development is tight, management leans toward a software acquisition (buy) decision. Table 1.5 illustrates some of the questions to ask when evaluating a buy versus build decision.

In addition to the iron triangle elements, two other trends also have a direct effect on acquiring software rather than building it in-house: outsourcing and SaaS. With the abundance of qualified, low-cost resources at software development companies around the world, many organizations jumped onto the outsourcing bandwagon and had their software developed by someone on the other side of the globe, without factoring in the security aspects that need to be part of outsourcing. When software development is outsourced, it is critical that the organization be aware of who is writing its software and whether that software can be trusted. Code developed outside the control of your organization needs to be thoroughly inspected and reviewed for back doors, Trojans, logic bombs, etc., before it is accepted and deployed within your organization. Also, with the change in the way software is sold, as a service instead of as a product hosted within your organization, the software is often hosted externally in a shared environment over which you have little to no control.

Software can be acquired using one or more of the following mechanisms:

  • Direct purchase
  • Original equipment manufacturer licenses
  • System integration (outsourced buy)
  • Partnering (alliance) with the software vendor

While the buy decision has the benefits of readily available software and appropriately skilled resources who work for the software vendor, it also carries costs: customization that is invariably required, vendor dependence, and the need for legal protection mechanisms such as contracts and service level agreements (SLAs) as well as intellectual property (IP) protection mechanisms such as copyrights, trademarks, and patents. Legal and IP protection mechanisms are covered in depth in Chapter 6.

Additionally, if security requirements are not explicitly stated before purchase, it is highly likely that the software product you buy and deploy in-house will not meet your security requirements. When was the last time you saw a request for proposal with security requirements explicitly stated? Not only must security requirements be communicated to the software vendor in advance, but they must also be verified. Unfortunately, in most cases, acquired COTS software is evaluated on its functionality, performance, and integration abilities, and not necessarily on its security. And even when a vendor claims security in its software as a differentiating factor, the claimed security is seldom verified within the organization's computing ecosystem prior to purchase. It is important that COTS software vendors be trusted, but it is even more imperative for secure software assurance that their claims be verified.
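
One lightweight way to keep vendor claims honest is to record each explicitly stated security requirement alongside whether the vendor claims to meet it and whether that claim was independently verified in your own environment. The Python sketch below, with hypothetical requirement entries, illustrates the idea.

    from dataclasses import dataclass

    @dataclass
    class SecurityRequirement:
        text: str                 # requirement as stated in the RFP
        vendor_claims: bool       # vendor asserts the product meets it
        verified_in_house: bool   # claim tested in our own environment

    # Hypothetical RFP entries for illustration.
    rfp = [
        SecurityRequirement("Encrypts stored cardholder data", True, True),
        SecurityRequirement("Enforces role-based access control", True, False),
        SecurityRequirement("Produces tamper-evident audit logs", False, False),
    ]

    unverified = [r.text for r in rfp if r.vendor_claims and not r.verified_in_house]
    unmet = [r.text for r in rfp if not r.vendor_claims]
    print("claimed but unverified:", unverified)
    print("not claimed by vendor:", unmet)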

Software assurance in acquisitions is an emerging discipline that is expected to become integral to software security assurance initiatives. The U.S. Department of Defense, in conjunction with the Department of Homeland Security, is currently working on a reference guide for security-enhanced software acquisition and outsourcing entitled "Software Assurance in Acquisition: Mitigating Risks to the Enterprise," a document with which any CSSLP or security professional engaged in software procurement or purchasing decisions should be familiar.

In essence, whether you buy or build the software, SARs must be part of the process, and under no circumstances can these requirements be ignored.

1.20 Summary

In conclusion, we have established that software security can no longer be on the sidelines and that security and secure design tenets must be factored into the SDLC. The interplay between software security and risk management was demonstrated, with special attention given to the challenges of software risk management. Governance instruments such as policies and standards were covered, along with common methodologies, best practices, and frameworks. We looked at how abstract security models and trusted computing concepts (TCB and TPM) impact software security. Finally, we discussed the reasons for software acquisition, acquisition mechanisms, and the security aspects to consider when acquiring software.

1.21 Review Questions

  1. The primary reason for incorporating security into the software development life cycle is to protect

    A. Against unauthorized disclosure of information

    B. Corporate brand and reputation

    C. Against hackers who intend to misuse the software

    D. Developers from releasing software with security defects

  2. The resiliency of software to withstand attacks that attempt to modify or alter data in an unauthorized manner is referred to as

    A. Confidentiality

    B. Integrity

    C. Availability

    D. Authorization

  3. The main reason why the availability aspects of software must be part of the organization’s software security initiatives is:

    A. Software issues can cause downtime to the business.

    B. Developers need to be trained in the business continuity procedures.

    C. Testing for availability of the software and data is often ignored.

    D. Hackers like to conduct denial of service attacks against the organization.

  4. Developing the software to monitor its functionality and report when the software is down and unable to provide the expected service to the business is a protection to assure which of the following?

    A. Confidentiality

    B. Integrity

    C. Availability

    D. Authentication

  5. When a customer attempts to log into his bank account, he is required to enter a number that is used only once (nonce) from the token device that was issued to the customer by the bank. This type of authentication is also known as which of the following?

    A. Ownership-based authentication

    B. Two factor authentication

    C. Characteristic-based authentication

    D. Knowledge-based authentication

  6. Multifactor authentication is most closely related to which of the following security design principles?

    A. Separation of duties

    B. Defense in-depth

    C. Complete mediation

    D. Open design

  7. Audit logs can be used for all of the following except

    A. Providing evidentiary information

    B. Assuring that the user cannot deny their actions

    C. Detecting the actions that were undertaken

    D. Preventing a user from performing some unauthorized operations

  8. Impersonation attacks such as man-in-the-middle (MITM) attacks in an Internet application can be best mitigated using proper

    A. Configuration management

    B. Session management

    C. Patch management

    D. Exception management

  9. Organizations often predetermine the acceptable number of user errors before recording them as security violations. This number is otherwise known as

    A. Clipping level

    B. Known error

    C. Minimum security baseline

    D. Maximum tolerable downtime

  10. A security principle that maintains the confidentiality, integrity, and availability of the software and data, while also allowing for rapid recovery to the state of normal operations when unexpected events occur, is the security design principle of

    A. Defense in-depth

    B. Economy of mechanisms

    C. Fail safe

    D. Psychological acceptability

  11. Requiring the end user to accept an “as-is” disclaimer clause before installation of your software is an example of risk

    A. Avoidance

    B. Mitigation

    C. Transference

    D. Acceptance

  12. An instrument that is used to communicate and mandate organizational and management goals and objectives at a high level is a

    A. Standard

    B. Policy

    C. Baseline

    D. Guideline

  13. The Systems Security Engineering Capability Maturity Model is an internationally recognized standard that provides guidelines to

    A. Provide metrics for measuring the software and its behavior and using the software in a specific context of use

    B. Evaluate security engineering practices and organizational management processes

    C. Support accreditation and certification bodies that audit and certify information security management systems

    D. Ensure that the claimed identity of personnel is appropriately verified

  14. Which of the following is a framework that can be used to develop a risk-based enterprise security architecture by determining security requirements after analyzing the business initiatives?

    A. Capability Maturity Model Integration (CMMI)

    B. Sherwood Applied Business Security Architecture (SABSA)

    C. Control Objectives for Information and related Technology (COBIT®)

    D. Zachman Framework

  15. The * (star) property of the Biba security model prevents the contamination of data, assuring its integrity, by

    A. Not allowing the process to write above its security level

    B. Not allowing the process to write below its security level

    C. Not allowing the process to read above its security level

    D. Not allowing the process to read below its security level

  16. Which of the following is known to circumvent the ring protection mechanisms in operating systems?

    A. Cross Site Request Forgery (CSRF)

    B. Cold boot

    C. SQL injection

    D. Rootkit

  17. Which of the following is a primary consideration for the software publisher when selling commercial off-the-shelf (COTS) software?

    A. Service level agreements

    B. Intellectual property protection

    C. Cost of customization

    D. Review of the code for backdoors and Trojan horses

  18. The single loss expectancy can be determined using which of the following formulae?

    A. Annualized rate of occurrence (ARO) × exposure factor

    B. Probability × impact

    C. Asset value × exposure factor

    D. Annualized rate of occurrence (ARO) × asset value

  19. Implementing IPSec to assure the confidentiality of data during transmission is an example of risk

    A. Avoidance

    B. Transference

    C. Mitigation

    D. Acceptance

  20. The Federal Information Processing Standard (FIPS) that prescribes guidelines for biometric authentication is

    A. FIPS 46-3

    B. FIPS 140-2

    C. FIPS 197

    D. FIPS 201

  21. Which of the following is a multifaceted security standard that is used to regulate organizations that collect, process, and/or store cardholder data as part of their business operations?

    A. FIPS 201

    B. ISO/IEC 15408

    C. NIST SP 800-64

    D. PCI DSS

  22. Which of the following is the current Federal Information Processing Standard (FIPS) that specifies an approved cryptographic algorithm to ensure the confidentiality of electronic data?

    A. Security Requirements for Cryptographic Modules (FIPS 140-2)

    B. Data Encryption Standard (FIPS 46-3)

    C. Advanced Encryption Standard (FIPS 197)

    D. Digital Signature Standard (FIPS 186-3)

  23. The organization that publishes the 10 most critical Web application security risks (Top Ten) is the

    A. U.S. Computer Emergency Readiness Team (US-CERT)

    B. Web Application Security Consortium (WASC)

    C. Open Web Application Security Project (OWASP)

    D. Forum of Incident Response and Security Teams (FIRST)

References

Common Criteria. n.d. Common Criteria recognition agreement. http://www.commoncriteriaportal.org/ccra/ (accessed Mar. 3, 2011).

Federal Financial Institutions Examination Council. 2010. Authentication in an Internet banking environment. http://www.ffiec.gov (accessed Mar. 10, 2010).

Howard, M., and D. LeBlanc. 2003. Writing Secure Code. Redmond, WA: Microsoft.

International Organization for Standardization. 2010. ISO standards. http://www.iso.org/iso/iso_catalogue.htm (accessed Feb. 10, 2010).

National Institute of Standards and Technology (NIST). 1996. Federal Information Processing Standards Publications. http://www.itl.nist.gov/fipspubs (accessed May 15, 2010).

NIST. 2002. Federal Information Security Management Act (FISMA) implementation project. Computer Security Division, Computer Security Resource Center. http://csrc.nist.gov/groups/SMA/fisma/index.html (accessed June 15, 2010).

NIST. 2007. Special publications (800 Series). http://csrc.nist.gov/publications/PubsSPs.html (accessed June 15, 2010).

PCI Security Standards Council. n.d., a. Payment Application Data Security Standard (PA-DSS). https://www.pcisecuritystandards.org/ (accessed Feb. 10, 2010).

PCI Security Standards Council. n.d., b. Payment Card Industry Data Security Standard (PCI DSS). https://www.pcisecuritystandards.org/ (accessed Feb. 10, 2010).

Schneier, B. 2000. Secrets and Lies: Digital Security in a Networked World. New York, NY: John Wiley.

Tipton, H. F., and M. Krause. 2007. Information Security Management Handbook. Boca Raton, FL: Auerbach.

Trusted Computing Group. 2010. Trusted Platform Module. http://www.trustedcomputinggroup.org/ (accessed June 15, 2010).

Weise, J. 2009. Why security standards? ISSA Journal August: 29–32.
