1
General Security Concepts
In this chapter you will
•  Explore the CSSLP exam objectives
•  Learn basic terminology associated with computer and information security
•  Discover the basic approaches to computer and information security
•  Examine security models used to implement security in systems
•  Explore the types of adversaries associated with software security
So, why should you be concerned with taking the CSSLP exam? There is a growing need for trained secure software professionals, and the (ISC)² Certified Secure Software Lifecycle Professional (CSSLP) exam is a perfect way to validate your knowledge and understanding of the secure software development field. The exam is an appropriate mechanism for many different individuals, including project managers, architects, developers, analysts, and testers, to show proof of professional achievement in secure software development. The exam’s objectives were developed with input and assistance from industry and government agencies.
(ISC)² publishes a Candidate Information Bulletin (CIB) via its website that contains an outline of the topics associated with the exam. This book follows the topic outline of the April 2013 version of the CIB, providing details and illustrative examples of the topics listed in the outline. The structure of this book mirrors the structure of the outline. Readers are advised to download the current version of the CIB and use it as part of their preparation for the exam.
To earn the CSSLP credential, an applicant must have a minimum of four years of direct full-time secure software lifecycle professional work experience in one or more of the seven domains of the (ISC)² CSSLP CBK, or three years of such experience combined with a four-year college degree in an information technology discipline. The requirements used in this text are from the (ISC)² Certified Secure Software Lifecycle Professional (CSSLP) Candidate Information Bulletin, April 2013 version (www.isc2.org).
The CSSLP Knowledge Base
In terms of the exam itself, the CSSLP exam is designed to cover a wide range of secure software development topics that a practicing secure software development professional is expected to know. The test draws from seven knowledge domains. The specific domains covered on the exam, according to (ISC)², are as follows:
•  Secure Software Concepts
•  Secure Software Requirements
•  Secure Software Design
•  Secure Software Implementation/Coding
•  Secure Software Testing
•  Software Acceptance
•  Software Deployment, Operations, Maintenance, and Disposal
The exam consists of a series of questions, each designed to have a single best answer or response. The other choices are plausible distractors, options that an individual with an incomplete knowledge or understanding of the security topic represented by the question might select.
This All-in-One Exam Guide is designed to assist you in preparing for the CSSLP exam. It is organized around the same objectives as the exam and attempts to cover all of the areas the exam includes. Using this guide in no way guarantees that you will pass the exam, but it will greatly assist you in preparing to successfully meet the challenge posed by the CSSLP exam.
General Security Concepts
Secure software development is intimately tied to the information security domain. For members of the software development team to develop secure software, a reasonable knowledge of security principles is required. The first knowledge domain area, Secure Software Concepts, comprises a collection of principles, tenets, and guidelines from the information security domain. Understanding these concepts as they apply to software development is a foundation of secure software development.
Security Basics
Security can be defined in many ways, depending upon the specific discipline that it is being viewed from. From an information and software development point of view, some specific attributes are commonly used to describe the actions associated with security: confidentiality, integrity, and availability. A second set of action-oriented elements, authentication, authorization, and auditing, provide a more complete description of the desired tasks associated with the information security activity. A final term, non-repudiation, describes an act that one can accomplish when using the previous elements. An early design decision is determining what aspects of protection are required for data elements and how they will be employed.
Confidentiality
Confidentiality is the concept of preventing the disclosure of information to unauthorized parties. Keeping secrets secret is the core concept of confidentiality. The identification of authorized parties makes the attainment of confidentiality dependent upon the concept of authorization, which is presented later in this chapter. There are numerous methods of keeping data confidential, including access controls and encryption. The technique employed to achieve confidentiality depends upon whether the data is at rest, in transit, or in use. Access controls are typically preferred for data in use and at rest, while encryption is common for data in transit and at rest.
Integrity
Integrity is similar to confidentiality, except rather than protecting the data from unauthorized access, integrity refers to protecting the data from unauthorized alteration. Unauthorized alteration is a more fine-grained control than simply authorizing access. Users can be authorized to view information but not alter it, so integrity controls require an authorization scheme that controls update and delete operations. For some systems, protecting the data from observation by unauthorized parties is critical, whereas in other systems, it is important to protect the data from unauthorized alteration. Controlling alterations, including deletions, can be an essential element in a system’s stability and reliability. Integrity can also play a role in the determination of authenticity.
Availability
Access to systems by authorized personnel can be expressed as the system’s availability. Availability is an often-misunderstood attribute; the level required is determined by the criticality of the data and its purpose in the system. For systems such as email or web browsing, temporary outages may not be an issue at all. For IT systems that are controlling large industrial plants, such as refineries, availability may be the most important attribute. The challenge in system definition and design is to determine the correct level of availability for the data elements of the system.
The objective of security is to apply the appropriate measures to achieve a desired risk profile for a system. One of the challenges in defining the appropriate control objectives for a system is classifying and determining the appropriate balance of the levels of confidentiality, integrity, and availability in a system across the data elements. Although the attributes are different, they are not necessarily contradictory. They all require resources, and determining the correct balance between them is a key challenge early in the requirements and design process.
EXAM TIP   The term CIA is commonly used in the security industry to refer to confidentiality, integrity, and availability.
Authentication
Authentication is the process of determining the identity of a user. All processes in a computer system have an identity assigned to them so that a differentiation of security functionality can be employed. Authentication is a foundational element of security, as it provides the means to define the separation of users by allowing the differentiation between authorized and unauthorized users. In systems where all users share a particular account, they share the authentication and identity associated with that account.
It is the job of authentication mechanisms to ensure that only valid users are admitted, by verifying the identity of a subject. To understand how this works, consider the example of an individual attempting to log in to a computer system or network. Authentication is the process used to verify to the computer system or network that the individual is who they claim to be. Three general methods are used in authentication. In order to verify your identity, you can provide
•  Something you know
•  Something you have
•  Something about you (something that you are)
The most common authentication mechanism is to provide something that only you, the valid user, should know. The most common example of something you know is the use of a userid (or username) and password. In theory, since you are not supposed to share your password with anybody else, only you should know your password, and thus by providing it you are proving to the system that you are who you claim to be. Unfortunately, for a variety of reasons, such as the fact that people have a tendency to choose very poor and easily guessed passwords or share them, this technique to provide authentication is not as reliable as it should be. Other, more secure, authentication mechanisms are consequently being developed and deployed.
Another common method to provide authentication involves the use of something that only valid users should have in their possession, commonly referred to as a token. A physical-world example of this is the simple lock and key. Only those individuals with the correct key will be able to open the lock and thus achieve admittance to your house, car, office, or whatever the lock was protecting. For computer systems, the token frequently holds a cryptographic element that identifies the user. The problem with tokens is that people can lose them, which means they can’t log in to the system, and somebody else who finds the token may then be able to access the system, even though they are not authorized. To address the lost-token problem, a combination of the something-you-know and something-you-have methods is used, requiring a password or PIN in addition to the token. The token is useless unless you also know this code. An example of this is the ATM card most of us carry. The card is associated with a personal identification number (PIN), which only you should know. Knowing the PIN without having the card is useless, just as having the card without knowing the PIN will not provide you access to your account. Properly configured tokens can provide high levels of security at an expense only slightly higher than passwords.
The third authentication method involves using something that is unique about the user. We are used to this concept from television police dramas, where a person’s fingerprints or a sample of their DNA can be used to identify them. The field of authentication that uses something about you, or something that you are, is known as biometrics. A number of different mechanisms can be used to accomplish this form of authentication, such as a voice print, a retinal scan, or hand geometry. The downside to these methods is the requirement for additional hardware and the fact that biometric matches are approximate, lacking the exactness achievable with the other methods.
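To make the combination of the something-you-know and something-you-have factors concrete, the following Python sketch admits a user only when both a password and a token-generated one-time code check out. It is illustrative only: the names (hash_password, token_code, authenticate) are invented for this example, the code derivation is a simplified HOTP-style construction, and a production system would rely on a vetted authentication library rather than hand-rolled logic.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Something you know: store only a salted hash, never the password itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def token_code(secret: bytes, counter: int) -> str:
    # Something you have: a token derives a six-digit one-time code from a
    # shared secret (simplified HOTP-style derivation, for illustration).
    digest = hmac.new(secret, counter.to_bytes(8, "big"), "sha256").digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def authenticate(password, stored_hash, salt, code, secret, counter) -> bool:
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(code, token_code(secret, counter))
    return knows and has  # both factors must succeed

salt, secret = os.urandom(16), os.urandom(16)
stored = hash_password("correct horse", salt)
ok = authenticate("correct horse", stored, salt, token_code(secret, 1), secret, 1)
bad = authenticate("wrong password", stored, salt, token_code(secret, 1), secret, 1)
print(ok, bad)  # True False
```

Note the use of hmac.compare_digest for both comparisons; it avoids leaking information about how much of a guess matched through timing differences.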
Authorization
After the authentication system identifies a user, the authorization system takes over and applies the predetermined access levels to the user. Authorization is the process of applying access control rules to a user process, determining whether or not a particular user process can access an object. There are numerous forms of access control systems, and these are covered later in the chapter. Three elements are used in the discussion of authorization: the requestor (sometimes referred to as the subject), the object, and the type or level of access to be granted. The authentication system identifies the subject as one of a known set of subjects associated with a system. When a subject requests access to an object, be it a file, a program, an item of data, or any other resource, the authorization system determines whether to grant or deny access. The type of access requested is the third element, with the common forms being read, write, create, delete, or the right to grant access rights to other subjects. The instantiation of authentication and authorization systems into a working identity management system is discussed in Chapter 11.
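The subject, object, and access-type triad can be sketched in a few lines of Python. The access list below is hypothetical, and real systems use far richer access control models (covered later in the chapter), but the core decision is the same: grant or deny a requested access type, and deny by default anything not explicitly granted.

```python
# Hypothetical access control list: (subject, object) -> set of permitted actions.
acl = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob", "payroll.db"): {"read"},
}

def authorize(subject: str, obj: str, access: str) -> bool:
    # Default deny: any (subject, object, access) triple not explicitly
    # granted in the ACL is refused.
    return access in acl.get((subject, obj), set())

print(authorize("bob", "payroll.db", "read"))   # True
print(authorize("bob", "payroll.db", "write"))  # False
print(authorize("eve", "payroll.db", "read"))   # False
```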
Accounting (Auditing)
Accounting is a means of measuring activity. In IT systems, this can be done by logging crucial elements of activity as they occur. With respect to data elements, accounting is needed when activity is determined to be crucial to the degree that it may be audited at a later date and time. Management has a responsibility for ensuring work processes are occurring as designed. Should there be a disconnect between planned and actual operational performance metrics, then it is management’s responsibility to initiate and ensure corrective actions are taken and effective. Auditing is management’s lens to observe the operation in a nonpartisan manner. Auditing is the verification of what actually happened on a system. Security-level auditing can be performed at several levels, from an analysis of the logging function that logs specific activities of a system, to the management verification of the existence and operation of specific controls on a system.
Auditing can be seen as a form of recording historical events in a system. Operating systems have the ability to create audit structures, typically in the form of logs that allow management to review activities at a later point in time. One of the key security decisions is the extent and depth of audit log creation. Auditing takes resources, so by default it is typically set to a minimal level. It is up to a system operator to determine the correct level of auditing required based on a system’s criticality. The system criticality is defined by the information criticality associated with the information manipulated or stored within it. Determination and establishment of audit functionality must occur prior to an incident, as the recording of the system’s actions cannot be accomplished after the fact.
Audit logs are a kind of balancing act. They require resources to create, store, and review. The audit logs in and of themselves do not create security; it is only through the active use of the information contained within them that security functionality can be enabled and enhanced. As a general rule, all critical transactions should be logged, including when they occurred and which authorized user is associated with the event. Additional metadata that can support subsequent investigation of a problem is also frequently recorded.
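As a minimal sketch of transaction-level audit logging, the following Python fragment records who performed what action on which object, with the timestamp supplied by the logging framework. The logger name and field layout are assumptions made for illustration; a real audit trail would additionally be protected from tampering and fed into the monitoring process described below.

```python
import logging

# Dedicated audit logger; the timestamp comes from the formatter.
audit = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record_transaction(user: str, action: str, target: str) -> None:
    # Every critical transaction is logged with the authorized user,
    # the action taken, and the object it was taken on.
    audit.info("user=%s action=%s object=%s", user, action, target)

record_transaction("alice", "delete", "invoice-1042")
```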
NOTE   A key element in audit logs is the employment of a monitoring, detection, and response process. Without mechanisms or processes to “trigger” alerts or notifications to admins based on particular logged events, the value of logging is diminished: the logs become merely a post-incident resource instead of contributing to alerting and incident prevention.
Non-repudiation
Non-repudiation is the concept of preventing a subject from denying a previous action with an object in a system. When authentication, authorization, and auditing are properly configured, the ability to prevent repudiation by a specific subject with respect to an action and an object is ensured. In simple terms, there is a system in place to prevent a user from saying they did not do something, a system that can prove, in fact, whether an event took place or not. Non-repudiation is a very general concept, so security requirements must specify the subjects, objects, and events for which non-repudiation is desired, as this will affect the level of audit logging required. If complete non-repudiation is desired, then every action by every subject on every object must be logged, and this could be a very large log dataset.
System Tenets
The creation of software systems involves the development of several foundational system elements within the overall system. Communication between components requires the management of a communication session, commonly called session management. When a program encounters an unexpected condition, an error can occur. Securely managing error conditions is referred to as exception management. Software systems require configuration in production, and configuration management is a key element in the creation of secure systems.
Session Management
Software systems frequently require communications between program elements, or between users and program elements. The control of the communication session between these elements is essential to prevent the hijacking of an authorized communication channel by an unauthorized party. Session management refers to the design and implementation of controls to ensure that communication channels are secured from unauthorized access and disruption. A common example is the Transmission Control Protocol (TCP), whose handshake establishes sequential numbering of packets; this numbering allows retransmission of missing packets and guards against the introduction of unauthorized packets and the hijacking of the TCP session. Session management requires additional work and carries a level of overhead, and hence may not be warranted in all communication channels. User Datagram Protocol (UDP), a connectionless/sessionless protocol, is an example of a communication channel that would not have session management and session-related overhead. An important decision early in the design process is determining when sessions need to be managed and when they do not. Understanding the use of the channel and its security needs should dictate whether session management is required.
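At the application level, session management often reduces to issuing an unpredictable session identifier at login and validating it on every request. The following Python sketch (with invented function names) shows the essential moves; real implementations add expiry, binding to other connection attributes, and secure transport.

```python
import secrets

sessions = {}  # live sessions: session id -> authenticated user

def open_session(user: str) -> str:
    # Session identifiers must be unpredictable; a guessable id would let an
    # attacker hijack the authorized channel.
    sid = secrets.token_urlsafe(32)
    sessions[sid] = user
    return sid

def request_user(sid: str):
    # Each request maps back to a user only through a live session.
    return sessions.get(sid)

def close_session(sid: str) -> None:
    sessions.pop(sid, None)  # invalidation ends the channel

sid = open_session("alice")
print(request_user(sid))  # alice
close_session(sid)
print(request_user(sid))  # None
```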
Exception Management
There are times when a system encounters an unknown condition, or is given input that results in an error. The process of handling these conditions is referred to as exception management. A remote resource may not respond, or there may be a communication error; whatever the cause of the error, it is important for the system to respond in an appropriate fashion. Several criteria are necessary for secure exception management. First, all exceptions must be detected and handled. Second, the system should be designed so as not to fail to an insecure state. Last, communications associated with the exception must not leak information.
For example, assume a system is connecting to a database to verify user credentials. Should an error occur, such as the database being unavailable when the request is made, the system needs to properly handle the exception. The system should not inadvertently grant access in the event of an error. The system may need to log the error, along with information concerning what caused it, but this information needs to be protected. Releasing the connection string to the database or passing the database credentials with the request would be a security failure.
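A minimal Python sketch of the database scenario above might look like the following. The exception type and function names are hypothetical; the point is that the error path denies access and logs a sanitized message rather than the connection string or credentials.

```python
import logging

log = logging.getLogger("auth")

class DatabaseUnavailable(Exception):
    pass

def lookup_credentials(user: str) -> bool:
    # Stand-in for a real database call; here it always fails so the
    # exception path is exercised.
    raise DatabaseUnavailable("connection refused")

def check_login(user: str) -> bool:
    try:
        return lookup_credentials(user)
    except DatabaseUnavailable:
        # Fail to a secure state: an error must never grant access, and the
        # logged message must not leak connection strings or credentials.
        log.error("credential check failed for user=%s: backend unavailable", user)
        return False

print(check_login("alice"))  # False
```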
Configuration Management
Dependable software in production requires the managed configuration of the functional connectivity associated with today’s complex, integrated systems. Initialization parameters, connection strings, paths, keys and other associated variables are typical examples of configuration items. As these elements can have significant effects upon the operation of a system, they are part of the system and need to be properly controlled for the system to remain secure. The identification and management of these elements is part of the security process associated with a system.
Management has a responsibility to maintain production systems in a secure state, and this requires that configurations be protected from unauthorized changes. This has resulted in the concept of configuration management, change control boards, and a host of workflow systems designed to control the configuration of a system. One important technique frequently employed is the separation of duties between production personnel and development/test personnel. This separation is one method to prevent the contamination of approved configurations in production.
Secure Design Tenets
Secure designs do not happen by accident. They are the product of deliberate architecture, deliberate plans, and structured upon a foundation of secure design principles. These principles have borne the test of time and have repeatedly proven their worth in a wide variety of security situations. The seminal work in this area, the application of secure design principles to computer systems, is Saltzer and Schroeder’s 1975 article “The Protection of Information in Computer Systems.”
Good Enough Security
Security is never an absolute, and there is no such thing as complete or absolute security. This is an important security principle, for it sets the stage for all of the security aspects associated with a system. There is a trade-off between security and other aspects of a system. Secure operation is a requirement for reliable operation, but under what conditions? Every system has some appropriate level of required security, and it is important to determine this level early in the design process. Just as one would not spend $10,000 on a safe to protect a $20 bill, a software designer will not use national security–grade encryption to secure publicly available information.
Least Privilege
One of the most fundamental approaches to security is least privilege. Least privilege means that a subject should have only the necessary rights and privileges to perform its current task with no additional rights and privileges. Limiting a subject’s privileges limits the amount of harm that can be caused, thus limiting a system’s exposure to damage. In the event that a subject may require different levels of security for different tasks, it is better to switch security levels with tasks rather than run all the time with the higher level of privilege.
Another issue that falls under the least privilege concept is the security context in which an application runs. All programs, scripts, and batch files run under the security context of a specific user on an operating system, and they execute with that user’s specific permissions. The infamous Sendmail exploit utilized this specific design issue. Sendmail needs root-level access to accomplish specific functions, and hence the entire program was run as root. If the program was compromised while running with root access, the attacker could obtain a root-level shell. The crux of this issue is that programs should execute only in the security context that is needed for that program to perform its duties successfully.
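The idea of elevating privileges only for the task that needs them can be sketched with a scoped construct. The Context class and rights below are a toy simulation invented for this example, not a real operating system API, but they show the pattern: acquire the extra right, perform the task, and drop the right again even if the task fails.

```python
from contextlib import contextmanager

class Context:
    # Hypothetical security context: a process runs with a set of rights.
    def __init__(self, rights):
        self.rights = set(rights)

@contextmanager
def elevated(ctx: Context, right: str):
    # Grant an extra right only for the duration of the task that needs it,
    # rather than running the whole program elevated (the Sendmail mistake).
    ctx.rights.add(right)
    try:
        yield ctx
    finally:
        ctx.rights.discard(right)  # dropped even if the task raised

proc = Context({"read"})
with elevated(proc, "bind_low_port"):
    assert "bind_low_port" in proc.rights  # elevated only inside the block
print(proc.rights)  # {'read'}
```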
Separation of Duties
Another fundamental approach to security is separation of duties. Separation of duties ensures that for any given task, more than one individual needs to be involved. The critical path of tasks is split into multiple items, which are then spread across more than a single party. By implementing a task in this manner, no single individual can abuse the system. A simple example might be a system where one individual is required to place an order and a separate person is needed to authorize the purchase.
This separation of duties must be designed into a system. Software components enforce separation of duties when they require multiple conditions to be met before a task is considered complete. These multiple conditions can then be managed separately to enforce the checks and balances required by the system.
Defense in Depth
Defense in depth is one of the oldest security principles. If one defense is good, multiple overlapping defenses are better. A castle has a moat, thick walls, restricted access points, high points for defense, multiple chokepoints inside, and so on: in essence, a whole series of defenses all aligned toward a single objective. Defense in depth is also known by the terms layered security (or layered defense) and diversity of defense.
Software should utilize the same type of layered security architecture. There is no such thing as perfect security. No system is 100 percent secure, and nothing is foolproof, so a single specific protection mechanism should never be solely relied upon for security. Every piece of software and every device can be compromised or bypassed in some way, including every encryption algorithm, given enough time and resources. The true goal of security is to make the cost of compromising a system greater in time and effort than the system is worth to an adversary.
Diversity of defense is a related concept that complements the idea of various layers of security. The concept of layered security, illustrated in Figure 1-1, is the application of multiple security defenses to craft an overlapping, more comprehensive solution. For the layers to be diverse, they should be dissimilar in nature so that if an adversary makes it past one layer, another layer may still be effective in maintaining the system in a secure state. Coupling encryption and access control provides multiple layers that are diverse in their protection nature, and yet both can provide confidentiality.
Figure 1-1   Layered security
Defense in depth provides security against many attack vectors. Because no single defense is ever 100 percent effective, employing a series of different defenses multiplies their combined effectiveness. Defense in depth is a concept that can be applied in virtually every security function and instance.
Fail-safe
As mentioned in the exception management section, all systems will experience failures. The fail-safe design principle is that when a system experiences a failure, it should fail to a safe state. One form of implementation is the concept of explicit deny: any function that is not specifically authorized is denied by default. When a system enters a failure state, the security attributes (confidentiality, integrity, and availability) need to be appropriately maintained. Availability is the attribute that tends to cause the greatest design difficulties. Ensuring that the design includes elements to degrade gracefully and return to normal operation through the shortest path assists in maintaining the resilience of the system. During design, it is important to consider the path associated with potential failure modes and how this path can be moderated to maintain system stability and control under failure conditions.
Economy of Mechanism
The terms security and complexity are often at odds with each other. This is because the more complex something is, the harder it is to understand, and you cannot truly secure something if you do not understand it. Another reason complexity is a problem within security is that it usually allows too many opportunities for something to go wrong. If an application has 4000 lines of code, there are a lot fewer places for buffer overflows, for example, than in an application of two million lines of code.
As with any other type of technology or problem in life, when something goes wrong with security mechanisms, a troubleshooting process is used to identify the actual issue. If the mechanism is overly complex, identifying the root of the problem can be overwhelming, if not nearly impossible. Security is already a very complex issue because there are so many variables involved, so many types of attacks and vulnerabilities, so many different types of resources to secure, and so many different ways of securing them. You want your security processes and tools to be as simple and elegant as possible. They should be simple to troubleshoot, simple to use, and simple to administer.
Another application of the principle of keeping things simple concerns the number of services that you allow your system to run. Default installations of computer operating systems often leave many services running. The keep-it-simple principle tells us to eliminate those that we don’t need. This is also a good idea from a security standpoint because it results in fewer applications that can be exploited and fewer services that the administrator is responsible for securing. The general rule of thumb should be to always eliminate all nonessential services and protocols. This, of course, leads to the question, how do you determine whether a service or protocol is essential or not? Ideally, you should know what your computer system or network is being used for, and thus you should be able to identify those elements that are essential and activate only them. For a variety of reasons, this is not as easy as it sounds. Alternatively, a stringent security approach that one can take is to assume that no service is necessary (which is obviously absurd) and activate services and ports only as they are requested. Whatever approach is taken, there is a never-ending struggle to try to strike a balance between providing functionality and maintaining security.
Complete Mediation
The principle of complete mediation states that a subject’s authorization with respect to an object and an action is verified every time the subject requests access to the object. The system must be designed so that the authorization system is never circumvented, even with multiple, repeated accesses. This principle underlies the security kernel in operating systems: functionality that cannot be bypassed and that provides security management of all threads being processed by the operating system (OS). Modern operating systems and IT systems with properly configured authentication systems are very difficult to compromise directly; most routes to compromise involve bypassing a critical system such as authentication. With this in mind, it is important during design to examine potential bypass situations and prevent them from becoming instantiated.
Open Design
Another concept in security that should be discussed is the idea of security through obscurity. This approach has not been effective in the actual protection of the object. Security through obscurity may make someone work a little harder to accomplish a task, but it does not provide actual security. This approach has been used in software to hide objects, like keys and passwords, buried in the source code. Reverse engineering and differential code analysis have proven effective at discovering these secrets, eliminating this form of “security.”
The concept of open design states that the security of a system must not depend on the secrecy of its design. In essence, the algorithm that is used will be open and accessible, and the security must depend not upon the design, but rather on an element such as a key. Modern cryptography has employed this principle effectively; security depends upon the secrecy of the key, not the secrecy of the algorithm being employed.
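Modern message authentication illustrates open design well. In the following Python sketch, the algorithm (HMAC-SHA-256) is completely public, and an attacker who reads every line of the code still cannot forge a valid tag without the key; the function names are chosen for this example.

```python
import hmac
import os

# Open design: HMAC-SHA-256 is a fully published algorithm. Security rests
# entirely in the secrecy of the key, not the secrecy of this code.
key = os.urandom(32)

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, "sha256").digest()

def verify(message: bytes, mac: bytes) -> bool:
    # Constant-time comparison avoids leaking match information via timing.
    return hmac.compare_digest(tag(message), mac)

t = tag(b"transfer $100 to alice")
print(verify(b"transfer $100 to alice", t))    # True
print(verify(b"transfer $900 to mallory", t))  # False
```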
Least Common Mechanism
The concept of least common mechanism refers to a design criterion intended to prevent the inadvertent sharing of information. Having multiple processes share common mechanisms creates a potential information pathway between users or processes. A mechanism that services a wide range of users or processes places a more significant burden on the design of that mechanism to keep all pathways separate. When presented with a choice between a single process that operates on a range of supervisory and subordinate-level objects and specialized processes tailored to each, the separate processes are the better choice.
The concepts of least common mechanism and leveraging existing components can place a designer at a conflicting crossroad. One concept advocates reuse and the other separation. The choice is a case of determining the correct balance associated with the risk from each.
Psychological Acceptability
Users are a key part of a system and its security. To include a user in the security of a system requires that the security aspects be designed so that they are psychologically acceptable to the user. When a user is presented with a security system that appears to obstruct the user, the result will be the user working around the security aspects of the system. For instance, if a system prohibits the emailing of certain types of attachments, the user can encrypt the attachment, masking it from security, and perform the prohibited action anyway.
Ease of use tends to trump many functional aspects. The design of security in software systems needs to be transparent to the user, just like air—invisible, yet always there, serving the need. This places a burden on the designers; security is a critical functional element, yet one that should impose no burden on the user.
Weakest Link
The weakest link is the common point of failure for all systems. Every system by definition has a “weakest” link. Adversaries do not seek out the strongest defense to attempt a breach; they seek the weakest. A system can only be considered as strong as its weakest link. Expending additional resources on the security of a system is most productive when applied to the weakest link. Throughout the software lifecycle, it is important to understand the multitude of weaknesses associated with a system, including the weakest link. Including in the design a series of diverse defenses, sometimes called defense in depth, is critical to hardening a system against exploitation. Managing the security of a system requires understanding the vulnerabilities and defenses employed, including the relative strengths of each, so that they can be properly addressed.
Leverage Existing Components
Component reuse has many business advantages, including increases in efficiency and security. As components are reused, fewer new components are introduced to the system; hence, the opportunity for additional vulnerabilities is reduced. This is a simple form of reducing the attack surface area of a system. The downside of massive reuse is a monoculture environment, in which a single flaw has a much larger footprint because it exists everywhere the component is used.
Single Point of Failure
Just as multiple defenses are a key to a secure system, so, too, is a system design that is not susceptible to a single point of failure. A single point of failure is any aspect of a system whose failure causes the entire system to fail. It is imperative for a secure system not to have any single points of failure. The design of a software system should be such that all points of failure are analyzed and no single failure results in system failure. Single points of failure can exist for any security attribute (confidentiality, integrity, availability, and so on) and may well differ for each attribute. Examining designs and implementations for single points of failure is important to prevent this form of catastrophic failure from being released in a product or system.
Security Models
Models are used to provide insight and explanation. Security models are used to understand the systems and processes developed to enforce security principles. Three key elements play a role in systems with respect to model implementation: people, processes, and technology. Addressing a single element of the three may provide benefits, but more effectiveness can be achieved through addressing multiple elements. Controls that rely on a single element, regardless of the element, are not as effective as controls that address two or all three elements.
Access Control Models
The term access control has been used to describe a mechanism to ensure protection. Access controls define what actions a subject can perform on specific objects. Access controls assume that the identity of the user has been verified through an authentication process. A variety of access control models exist, each emphasizing different aspects of a protection scheme. One of the most common mechanisms used is an access control list (ACL). An ACL is a list that contains the subjects that have access rights to a particular object. An ACL will identify not only the subject, but also the specific access that subject has for the object. Typical types of accesses include read, write, and execute. Several different models are discussed in security literature, including discretionary access control (DAC), mandatory access control (MAC), role-based access control (RBAC), and rule-based access control (RBAC).
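As a minimal illustration, an ACL can be modeled as a per-object mapping from subjects to the specific accesses each is granted. The subjects, objects, and permissions below are invented for the example.

```python
# Illustrative ACL: for each object, the subjects that may access it
# and the specific access types each subject holds.
acl = {
    "report.doc": {"alice": {"read", "write"}, "bob": {"read"}},
    "deploy.sh":  {"carol": {"read", "execute"}},
}

def has_access(subject, obj, access):
    # An ACL entry names both the subject and the access it is granted;
    # anything not explicitly listed is denied.
    return access in acl.get(obj, {}).get(subject, set())
```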
Bell-LaPadula Confidentiality Model
The Bell-LaPadula model is a confidentiality-preserving model. The Bell-LaPadula security model employs both mandatory and discretionary access control mechanisms when implementing its two basic security principles. The first of these principles is called the Simple Security Rule, which states that no subject can read information from an object with a security classification higher than that possessed by the subject itself. This rule is also referred to as the “no-read-up” rule. This means that the system must have its access levels arranged in hierarchical form, with defined higher and lower levels of access. Because the Bell-LaPadula model was designed to preserve confidentiality, it is focused on read and write access. Reading material higher than a subject’s level is a form of unauthorized access.
image
image   
NOTE   The Simple Security Rule is just that: the most basic of security rules. It basically states that in order for you to see something, you have to be authorized to see it.
The second security principle enforced by the Bell-LaPadula security model is known as the *-property (pronounced “star property”). This principle states that a subject can write to an object only if its security classification is less than or equal to the object’s security classification. This is also known as the “no-write-down” principle. This prevents the dissemination of information to users that do not have the appropriate level of access. This can be used to prevent data leakage, such as the publishing of bank balances, presumably protected information, to a public webpage.
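The two Bell-LaPadula rules can be expressed as simple comparisons over a hierarchical set of levels. This is a hedged sketch using an invented three-level lattice.

```python
# Sketch of the two Bell-LaPadula rules over a hierarchical lattice.
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple Security Rule ("no read up"): the subject's level must
    # dominate the object's level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property ("no write down"): the subject may write only to
    # objects at its own level or higher, preventing leakage downward.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

Note the asymmetry: a Secret-cleared subject may read Confidential data but may not write to a Confidential object, which is exactly what prevents high data from leaking to low containers.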
Take-Grant Model
The take-grant model for access control is built upon graph theory. This model is conceptually very different from the other models, but has one distinct advantage: it can be used to definitively determine rights. This model is a theoretical model based on mathematical representation of the controls in the form of a directed graph, with the vertices being the subjects and objects. The edges between them represent the rights between the subject and objects. There are two unique rights to this model: take and grant. The representation of the rights takes the form of {t, g, r, w}, where t is the take right, g is the grant right, r is the read right, and w is the write right. A set of four rules, one each for take, grant, create, and remove, forms part of the algebra associated with this mathematical model.
The take-grant model is not typically used in the implementation of a particular access control system. Its value lies in its ability to analyze an implementation and answer questions concerning whether a specific implementation is complete or might be capable of leaking information.
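As a toy illustration of the graph formulation, the take rule can be sketched over a dictionary of directed edges. The subjects, objects, and rights shown are invented for the example.

```python
# Toy sketch of a take-grant graph: edges map (source, target) to the
# set of rights held, drawn from {t, g, r, w}.
edges = {
    ("s", "x"): {"t"},        # s holds the take right over x
    ("x", "y"): {"r", "w"},   # x holds read and write over y
}

def take(graph, s, x, y, right):
    # Take rule: if s has 't' over x and x has `right` over y,
    # then s may acquire `right` over y directly.
    if "t" in graph.get((s, x), set()) and right in graph.get((x, y), set()):
        graph.setdefault((s, y), set()).add(right)
        return True
    return False
```

Analyses in the real model repeatedly apply rules like this one to ask whether a right can ever reach a given subject, which is how leakage questions are answered definitively.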
Access Control Matrix Model
The access control matrix model is a simplified form of access control notation where the allowed actions a subject is permitted with an object are listed in a matrix format. This is a very general-purpose model, with no constraints on its formulation. The strength of this model is its simplicity in design, but this also leads to its major weakness: difficulty in implementation. Because it has no constraints, it can be very difficult to implement in practice and does not scale well. As the numbers of subjects and objects increase, the intersections grow as the product of the two, leading to large numbers of ACL entries.
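A minimal sketch of the matrix as a nested mapping (subjects to objects to actions; all names invented) shows both the model's simplicity and how its cells multiply.

```python
# Sketch of an access control matrix: rows are subjects, columns are
# objects, and each cell holds the permitted actions. With S subjects
# and O objects there are S x O cells to manage, which is why the
# model scales poorly in practice.
matrix = {
    "alice": {"file1": {"read", "write"}, "file2": {"read"}},
    "bob":   {"file1": {"read"}},
}

def allowed(subject, obj, action):
    # An empty or missing cell means no access.
    return action in matrix.get(subject, {}).get(obj, set())
```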
Role-based Access Control
Access control lists can become long, cumbersome, and take time to administer properly. An access control mechanism that addresses the length and cost of ACLs is the role-based access control (RBAC). In this scheme, instead of each user being assigned specific access permissions for the objects associated with the computer system or network, users are assigned to a set of roles that they may perform. A common example of roles would be developer, tester, production, manager, and executive. In this scheme, a user could be a developer and be in a single role or could be a manager over testers and be in two roles. The assignment of roles need not be exclusionary. The roles are, in turn, assigned the access permissions necessary to perform the tasks associated with them. Users will thus be granted permissions to objects in terms of the specific duties they must perform. An auditor, for instance, can be assigned read access only—allowing audits, but preventing change.
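A hedged sketch of the role-based scheme follows, with invented users, roles, and permission names. Permissions attach to roles, and users acquire them only through role membership.

```python
# Sketch of role-based access control (RBAC): users map to roles,
# and roles (not users) map to permissions.
role_permissions = {
    "developer": {"code:read", "code:write"},
    "auditor":   {"code:read", "logs:read"},  # read-only: audit, no change
}
user_roles = {"dana": {"developer"}, "avery": {"auditor"}}

def permitted(user, permission):
    # A user's effective permissions are the union over all assigned
    # roles; role assignment need not be exclusionary.
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))
```

Administration cost now scales with the number of roles rather than the number of users, which is the scheme's main advantage over long per-user ACLs.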
Rule-based Access Control
A second use of the acronym RBAC is for rule-based access control. Rule-based access control systems are much less common than role-based access control, but they serve a niche. In rule-based access control, we again utilize elements such as access control lists to help determine whether access should be granted or not. In this case, a series of rules is contained in the access control list, and the determination of whether to grant access will be made based on these rules. An example of such a rule might be a rule that states that nonmanagement employees may not have access to the payroll file after hours or on weekends. Rule-based access control can actually be used in addition to, or as a method of, implementing other access control methods. For example, role-based access control may be used to limit access to files based on job assignment, and rule-based controls may be added to control time-of-day or network restrictions.
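The nonmanagement payroll example can be sketched as a simple rule function. The business-hours window and parameter names are illustrative assumptions, not drawn from any specific system.

```python
# Sketch of a rule-based check layered on a role distinction:
# nonmanagement employees are denied the payroll file after hours
# or on weekends.
def payroll_access(is_management, hour, weekday):
    # hour: 0-23; weekday: 0=Monday .. 6=Sunday
    if is_management:
        return True
    business_hours = 9 <= hour < 17   # assumed 9-to-5 window
    weekend = weekday >= 5
    return business_hours and not weekend
```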
MAC Model
A less frequently employed system for restricting access is mandatory access control. MAC has its roots in military control systems, and referring to the Orange Book, we can find a definition for mandatory access controls, which are “a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity.”
In MAC systems, the owner or subject can’t determine whether access is to be granted to another subject; it is the job of the operating system to decide. In MAC, the security mechanism controls access to all objects and individual subjects cannot change that access. This places the onus of determining security access upon the designers of a system, requiring that all object and subject relationships be defined before use in a system. SELinux, a specially hardened form of Linux based on MAC, was developed by the National Security Agency (NSA) to demonstrate the usefulness of this access model.
DAC Model
Both discretionary access control and mandatory access control are terms originally used by the military to describe two different approaches to controlling what access an individual had on a system. As defined by the Orange Book, a Department of Defense document that at one time was the standard for describing what constituted a trusted computing system, discretionary access controls are “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject.”
DAC is really rather simple. In systems that employ discretionary access controls, the owner of an object can decide which other subjects may have access to the object and what specific access they may have. The owner of a file can specify what permissions are granted to which users. Access control lists are the most common mechanism used to implement discretionary access control. The strength of DAC is its simplicity. The weakness is that it is discretionary, or in other words, optional.
Multilevel Security Model
The multilevel security model is a descriptive model of security where separate groups are given labels and these groups act as containers, keeping information and processes separated based on the labels. These can be hierarchical in nature, in which some containers can be considered to be superior to or include lower containers. An example of multilevel security is the military classification scheme: Top Secret, Secret, and Confidential. A document can contain any set of these three, but the “container” assumes the label of the highest item contained. If a document contains any Top Secret information, then the entire document assumes the Top Secret level of protection. For purposes of maintenance, individual items in the document are typically marked with their applicable level, so that if information is “taken” from the document, the correct level can be chosen. Additional markings can be added to the system, such as NOFORN, which prohibits distribution to foreign nationals. Codewords can also be associated with Top Secret material, so that specific compartments of material are kept separated and not stored together.
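The container rule, in which a document assumes the label of its highest item, can be sketched as a maximum over an ordered set of labels.

```python
# Sketch of the multilevel "container" rule: a document is protected
# at the level of the highest-classified item it contains.
ORDER = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def container_label(item_labels):
    # The whole document takes the maximum label of its parts, even if
    # only a single item sits at that level.
    return max(item_labels, key=ORDER.get)
```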
Integrity Models
Integrity-based models are designed to protect the integrity of the information. For some types of information, integrity can be as important as, or even more important than, confidentiality. Public information, such as stock prices, is available to all, but the correctness of their value is crucial, leading to the need to ensure integrity.
Biba Integrity Model
In the Biba model, integrity levels are used to separate permissions. The principle behind integrity levels is that data with a higher integrity level is believed to be more accurate or reliable than data with a lower integrity level. Integrity levels indicate the level of “trust” that can be placed in the accuracy of information based on the level specified.
The Biba model employs two rules to manage integrity. The first rule is the “no-write-up” rule, which in many ways is the opposite of the *-property from the Bell-LaPadula model: it prevents subjects from writing to objects of a higher integrity level. The second rule, known as the low-water-mark policy, states that the integrity level of a subject will be lowered if it acts on an object of a lower integrity level. The reason for this is that if the subject then uses data from that object, the highest integrity level a new object created from it can have is the integrity level of the original object. In other words, the trust you can place in data formed from data at a specific integrity level cannot be higher than the trust you have in the subject creating the new data object, and the trust you have in the subject can be only as high as the trust you had in the original data.
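A hedged sketch of the Biba rules follows, assuming the common formulation in which a no-write-up rule governs writes and the low-water-mark policy drags a subject's level down after it reads lower-integrity data. The three integrity levels are invented for the example.

```python
# Sketch of the Biba integrity rules, the integrity mirror of
# Bell-LaPadula's confidentiality rules.
INTEGRITY = {"low": 1, "medium": 2, "high": 3}

def can_write(subject_level, object_level):
    # "No write up": a subject may not write to higher-integrity
    # objects, so low-trust data cannot contaminate high-trust data.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

def level_after_read(subject_level, object_level):
    # Low-water-mark policy: once a subject reads lower-integrity
    # data, its own level drops to match that data.
    if INTEGRITY[object_level] < INTEGRITY[subject_level]:
        return object_level
    return subject_level
```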
Clark-Wilson Model
The Clark-Wilson security model takes an entirely different approach than the Biba model, using transactions as the basis for its rules. It defines two classes of data items: constrained data items (CDIs) and unconstrained data items (UDIs). CDI data is subject to integrity controls, while UDI data is not. The model then defines two types of processes: integrity verification processes (IVPs) and transformation processes (TPs). IVPs ensure that CDI data meets integrity constraints (to ensure the system is in a valid state). TPs are processes that change the state of data from one valid state to another. Data in this model cannot be modified directly by a user; it can only be changed by trusted TPs.
Using banking as an example, an object with a need for integrity would be an account balance. In the Clark-Wilson model, the account balance would be a CDI because its integrity is a critical function for the bank. Since the integrity of account balances is of extreme importance, changes to a person’s balance must be accomplished through the use of a TP. Ensuring that the balance is correct would be the duty of an IVP. Only certain employees of the bank should have the ability to modify an individual’s account, which can be controlled by limiting the number of individuals who have the authority to execute TPs that result in account modification.
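The banking example can be sketched in code: the balance is the CDI, a transfer method stands in for a TP, and a conservation check stands in for an IVP. All names here are illustrative, not a real banking API.

```python
# Hedged sketch of Clark-Wilson applied to the banking example.
class Account:
    def __init__(self, balance):
        self._balance = balance   # CDI: never modified directly by users

    def tp_transfer(self, other, amount):
        # TP: the only sanctioned way to change balances; it moves the
        # system from one valid state to another.
        if amount <= 0 or amount > self._balance:
            raise ValueError("invalid transfer")
        self._balance -= amount
        other._balance += amount

def ivp(accounts, expected_total):
    # IVP: verifies the integrity constraint -- total funds conserved.
    return sum(a._balance for a in accounts) == expected_total

a, b = Account(100), Account(50)
a.tp_transfer(b, 30)   # users invoke the TP, never set balances directly
```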
Information Flow Models
Another methodology in modeling security is built around the notion of information flows. Information in a system must be protected when at rest, in transit, and in use. Understanding how information flows through a system, the components that act upon it, and how it enters and leaves a system provides critical data on the necessary protection mechanisms. A series of models that explores aspects of data or information flow in a system assist in the understanding of the application of appropriate protection mechanisms.
Brewer-Nash Model (Chinese Wall)
The Brewer-Nash model is designed to enforce confidentiality in commercial enterprise operations. In a commercial enterprise, there are situations where one part of a business may have access to information that cannot be shared with other parts. In a financial consulting firm, personnel in the research arm may become privy to information that would be considered “insider information.” This information cannot, legally or ethically, be shared with other customers. The common term for this model is the Chinese Wall model.
Security is characterized by elements involving technology, people, and processes. The Brewer-Nash model is one where elements associated with all three can be easily understood. Technology can be employed to prevent access to data by conflicting groups. People can be trained not to compromise the separation of information. Policies can be put in place to ensure that the technology and the actions of personnel are properly engaged to prevent compromise. Employing actions in all three domains provides a comprehensive implementation of a security model.
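A minimal sketch of the conflict-of-interest check at the heart of the model follows; the companies and conflict classes are invented for the example.

```python
# Sketch of the Brewer-Nash (Chinese Wall) rule: once an analyst
# accesses one company's data, every competitor in the same
# conflict-of-interest class becomes off limits.
conflict_classes = {"BankA": "banks", "BankB": "banks", "OilCo": "energy"}

history = {"BankA"}   # the analyst has already seen BankA's data

def may_access(accessed, company):
    # Access is allowed unless a different company in the same
    # conflict class has already been accessed.
    for seen in accessed:
        if conflict_classes[seen] == conflict_classes[company] and seen != company:
            return False
    return True
```

Unlike the other models, the wall here is dynamic: the set of forbidden objects grows out of the subject's own access history rather than a fixed label.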
Data Flow Diagrams
The primary issue in security is the protection of information when stored, while in transit, and while being processed. Understanding how data moves through a system is essential in designing and implementing the security measures to ensure appropriate security functionality. Data flow diagrams (DFDs) are specifically designed to document the storage, movement, and processing of data in a system. Data flow diagrams are graphical in nature, and are constructed as a series of levels. The highest level, level 0, is a high-level contextual view of the data flow through the system. The next level, level 1, is created by expanding elements of the level 0 diagram. This level can be expanded further into a level 2 diagram, the lowest-level diagram of a system.
Use Case Models
While a DFD examines a system from the information flow perspective, the use case model examines the system from a functional perspective. Requirements from the behavioral perspective provide a description of how the system utilizes data. Use cases are constructed to demonstrate how the system processes data for each of its defined functions. Use cases can be constructed for both normal use and abnormal use (misuse cases) to facilitate a full description of how the system operates. Use case modeling is a well-defined and mainstream method for system description and analysis. Combined with DFDs, use cases provide a very comprehensive overview of how a system uses and manipulates data, facilitating a complete understanding of the security aspects of a system.
Assurance Models
The Committee on National Security Systems has defined software assurance as the “level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its lifecycle, and that the software functions in the intended manner.” This shift in focus moves toward the preventive element of the operational security model and is driven by a management focus on system design and construction. The current software development methodology employed by many teams is focused on speed to market and functionality. More functions and new versions can drive sales. This focus has relegated security to a lesser position, with patching issues as they are found serving as the primary remediation method. This has proven less than satisfactory for many types of critical programs, and the government has led the effort to push for an assurance-based model of development.
Assurance cases, including misuse and abuse cases, are designed and used to construct a structured set of arguments and a corresponding body of evidence to satisfactorily demonstrate specific claims about a system’s security properties. An assurance case is structured like a legal case. An overall objective is defined. Specific elements of evidence are presented that conclusively demonstrate a boundary of outcomes, eliminating the undesirable ones and preserving the desired ones. When sufficient evidence is presented to eliminate all undesired states or outcomes, the system is considered assured with respect to the claim.
The Operational Model of Security
There are three primary actionable methods of managing security in production: prevention, detection, and response. The operational security model encompasses these three elements in a simple form for management efforts. Most effort is typically put on prevention efforts, for incidents that are prevented are eliminated from further concern. For the issues that are not prevented, the next step is detection. Some issues may escape prevention efforts, and if they escape detection efforts, then they can occur without any intervention on the part of security functions. Elements that are detected still need response efforts. The operational model of security, illustrated in Figure 1-2, shows examples of elements in the model.
image
image
Figure 1-2   Operational model of security
Adversaries
Security is the protection of elements from an adversary. Adversaries come in many shapes and sizes, with widely varying motives and capabilities. The destructive capability of an adversary depends upon many factors, including efforts to protect an element from damage. One of the most damaging adversaries is Mother Nature. Mother Nature strikes in the form of disasters, from the narrow damage associated with storms such as tornados to large-scale events such as hurricanes and ice storms, with resulting power and network outages. The saving grace with Mother Nature is the lack of motive or specific intent, and hence the nonadaptability of an attack. Other classifications of adversaries are built around capabilities, specifically in the form of their adaptability and capability to achieve their objective despite security controls in place.
Adversary Type
Adversaries can be categorized by their skill level and assigned a type. This classification is useful when examining the levels of defenses needed and to what degree they can be effective. As the skill level of adversaries increases, the numbers in each category decline. While little to no specific training is required to practice at the level of a script kiddie, years of dedicated study are required to obtain, and even more to retain, the rank of an elite hacker. Fortunately, the number of elite hackers is very small.
Script Kiddie
The term script kiddie is used to describe the most basic form of attacker. The term is considered derisive by most in the industry, as it describes a user who can only use published scripts to perform attacks. The specific knowledge one has when in this category can be virtually nonexistent, other than the skill to Google a script and then try it against a target. This category is seen to comprise as much as 80 to 85 percent of the attacking community. The good news is that the attack vectors are known and typically there are specific defenses to prevent these attacks from achieving their end. The bad news is there are so many of them that they create a level of background noise that must be addressed, requiring resources to manage.
Hacker
The term hacker has historically referred to a user who is an explorer, one who explores how a system operates and how to circumvent its limits. The term cracker has been used to refer to an individual who was a hacker, but with malicious intent. This level of attacker, by nature, has training in how systems operate and has the ability to manipulate systems to achieve outcomes that may or may not be desired or permitted. The actual skill level can vary, but the higher-skilled individuals have the ability to develop new scripts that can be employed by those of lesser skill levels. This group is seen to be 15 to 20 percent of the attacking population. This group is the key adversary, for between their skill and motivation, the damage they cause may be catastrophic to an organization.
Elite
The elite group of hackers is a very small fraction, 1 to 3 percent of the overall attacking population. The key distinguishing element in this group is truly skill based, with a skill level that would be considered impossible by most users. This group is completely underground, for one of the essential skills to enter this level is the skill to cover one’s tracks to the point of making them virtually undetectable and untraceable. We know they exist, however, because of two factors. First, there are members at this skill level who operate on the side of the good guys, or white hats. Second, there are specific exploits, known as zero-day exploits, where the exploit precedes the “discovery” of the vulnerability. After a vulnerability is found and patched, when later analysis shows cases of long-term exploitation, the obvious answer is a set of highly skilled attackers that maintain an extremely low profile. This group has the skills to make them almost defense proof, and unless you are in an extremely sensitive industry, spending resources to defend against this group is not particularly efficient.
Adversary Groups
Adversaries can be analyzed from the perspective of adversary groups. The grouping of adversaries, based on skill and capability, provides a structure that can be used to analyze the effectiveness of defenses. The least capable form is an unstructured threat, a single actor from the outside. A highly structured threat, or nation-state attack, has significantly greater capability. The delineation of an attacker as either an insider or outsider can also play a role in determining capability. An insider has an advantage in that they already have some form of legitimate access that can be exploited. This, coupled with their internal knowledge of systems and the value of data and its location, places insiders ahead of the pack on essential knowledge in the prosecution of an attack.
Unstructured Threat
Unstructured threats are those with little or no resources. Typically individuals or groups with limited skills, unstructured threats are limited in their ability to focus and pursue a target over time or with a diversity of methods. Most script kiddies act as solo attackers and lose interest in their current target when difficulties in the attack arise. Most unstructured threats pursue targets of opportunity rather than specific targets for motivational reasons. Because the skill level is low, their ability to use comprehensive attack methods is limited, and this form of threat is fairly easily detected and mitigated. Because of the random nature of their attack patterns, searching any target for a given exploit, unstructured threats act as a baseline of attack noise, ever present, yet typically only an annoyance, not a serious threat to operations.
Structured Threat
When attackers become organized and develop a greater resource base, their abilities can grow substantially. The structured threat environment is indicative of a group that has a specific mission, has the resources to commit significant time and energy to an attack, at times for periods extending into months, and has the ability to build a team with the varied skills necessary to exploit an enterprise and its multitude of technologies and components. These groups can employ tools and techniques that are more powerful than simple scripts floating on the Internet. They have the ability to develop specific code and employ and use botnets and other sophisticated elements to perpetrate their criminal activities.
The level of sophistication of modern botnets has demonstrated that structured threats can be real and have significant effects on an enterprise. There have been significant efforts on the part of law enforcement to shut down criminal enterprises utilizing botnets, and security efforts on the part of system owners are needed to ensure that their systems are secured and monitored for activity from this level of threat. Where an unstructured threat may penetrate a system by luck or chance, their main goal is simply to perform an attack. Structured threats are characterized by their goal orientation. It is not enough to penetrate a system; they view penetration as merely a means to their end. And their end is to steal information that is of value and that may result in a loss for the firm under attack.
Highly Structured Threat
A highly structured threat is a structured threat with significantly greater resources. Criminal organizations have been found to employ banks of programmers that develop crimeware. The authors of the modern botnets are not single individuals, but rather structured programming teams producing a product. Organized crime has moved into the identity theft and information theft business, as it is much safer from prosecution than robbing banks and companies the old-fashioned way. The resource base behind several large criminal organizations involved in cybercrime enables them to work on security issues for years, with teams of programmers building and utilizing tools that can challenge even the strongest defenses.
Nation-state Threat
When a highly structured threat is employed by a nation-state, it assumes an even larger resource base, and to some degree, a level of protection. The use of information systems as a conduit for elements of espionage and information operations is a known reality of the modern technology-driven world. In the past few years, a new form of threat, the advanced persistent threat (APT), has arisen. The APT is a blended attack composed of multiple methods, designed to penetrate a network’s defenses and then live undetected as an insider in a system. The objective of the APT is attack-specific, but rather than attempt to gather large amounts of information and chance being caught, the APT has been hallmarked by judicious, limited, almost surgical efforts to carefully extract valuable information, yet leave the system clean and undetected.
Nation-state–level threats will not be detected or deflected by ordinary defense mechanisms. The U.S. government has had to resort to separate networks and strong cryptographic controls to secure itself from this level of attack. The positive news is that nation-state attack vectors are not aimed at every firm, only a select few of typically government-related entities. Espionage is about stealing secrets, and fortunately, the monetization of the majority of these secrets for criminal purposes is currently limited to cases of identity theft, personal information theft, and simple bank account manipulation (wire fraud). The primary defense against the environment of attackers present today is best driven by aiming not at nation-state–level threats or highly structured threats, for they are few in number. Targeting the larger number of structured threat vectors is a more efficient methodology and one that can be shifted to more serious threats as the larger numbers are removed from the picture.
Insider vs. Outsider Threat
Users of computer systems have traditionally been divided into two groups: insiders and outsiders. Insiders are those who have some form of legitimate access to a system; outsiders are those who do not. For many years, security efforts were aimed at preventing outsiders from gaining access, built around a belief that "criminals" were outside the organization. Upon closer examination, however, criminals have been found inside the system as well, and a user with legitimate access has already overcome one of the more difficult steps in a computer attack: gaining initial access. Insiders thus have the inside track for account privilege escalation, and they have additional knowledge of a system's strengths, weaknesses, and the location of its valuable information flows. A modern, comprehensive security solution monitors a system for both external and internal attacks.
Threat Landscape Shift
The threat landscape has for years been defined by the type of actor behind an attack, with motivations unique to each group: from individual hackers who are, in essence, explorers motivated to learn how a system works, to hacktivist groups with some message to proclaim (a common rationale for web defacement), to nation-states whose motivation is espionage. To a great degree, the group defined both the purpose and the activity. Around 2005, a shift occurred in the attack landscape: the criminalization of cyberspace. This was a direct result of criminal groups developing methods of monetizing their efforts, and it has led to an explosion in attack methods, as there is now a market for exploits associated with vulnerabilities. It has also driven actual research into the art of the attack, with criminal organizations funding attacks for financial gain. The botnets in use today are sophisticated, decentralized programs that employ cryptography and techniques for evading antivirus detection, and they are designed to avoid discovery. These botnets are used to gather credentials associated with online financial systems, building profiles of users that include bank access details, stock account details, and other sensitive information.
This shift has a practical effect on all software: everything is now a target. In the past, unless an organization was a bank, a large financial institution, or a government agency, security was dismissed with the question "who would attack me?" That dynamic has changed. Criminals now target smaller businesses, and even individuals, because such crimes are nearly impossible to investigate and prosecute; just as users once believed they could hide among all the bits on the Internet, the criminals now do exactly that. With millions of PCs under botnet control, their nets are cast wide, and even small takings from large numbers of users are profitable. The latest shift is toward targeting individuals, making specific detection even more difficult.
Chapter Review
In this chapter, you became acquainted with the CSSLP exam and the elements of its associated body of knowledge. The first part of the chapter explored basic security concepts and terms. The constructs of confidentiality, integrity, and availability were introduced, together with the supporting elements of authentication, authorization, auditability, and non-repudiation. The basic operational tenets of software systems (session management, exception management, and configuration management) were introduced and placed in context with operational security objectives. Secure design principles were described, including:
•  Good enough security
•  Defense in depth
•  Weakest link
•  Single point of failure
•  Least privilege
•  Separation of duties
•  Psychological acceptability
•  Fail-safe
•  Open design
•  Complete mediation
•  Leverage existing components
•  Least common mechanisms
•  Economy of mechanism
A thorough understanding of these secure design principles is expected knowledge for CSSLP candidates.
Moving from security principles to security models, the chapter covered the principle that security depends upon people, processes, and technology. Access control models, including mandatory access controls, discretionary access controls, role-based access controls, rule-based access controls, and matrix-based access controls, were presented. The Bell-LaPadula, Biba, Clark-Wilson, and Brewer-Nash security models were described, along with their role in designing secure software.
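The Bell-LaPadula and Biba rules just reviewed reduce to simple comparisons of security labels. The following Python fragment is an illustrative sketch only, not drawn from the book; the level names and function names are assumptions chosen for the example, and real implementations also involve categories and discretionary checks.

```python
# Hypothetical sketch: Bell-LaPadula (confidentiality) and Biba
# (integrity) access rules expressed as label comparisons.
# Level names and function names are illustrative assumptions.

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject: str, obj: str) -> bool:
    # Simple security rule: "no read up" -- a subject may read only
    # objects at or below its own clearance.
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject: str, obj: str) -> bool:
    # *-property: "no write down" -- a subject may write only to
    # objects at or above its own clearance.
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_read(subject: str, obj: str) -> bool:
    # Biba: "no read down" -- reading lower-integrity data could
    # contaminate a higher-integrity subject.
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject: str, obj: str) -> bool:
    # Biba: "no write up" -- a subject may not modify data of
    # higher integrity than its own.
    return LEVELS[subject] >= LEVELS[obj]

# A "secret" subject may read "confidential" data but not write to it
# under Bell-LaPadula; under Biba the permissions are reversed.
print(blp_can_read("secret", "confidential"))   # True
print(blp_can_write("secret", "confidential"))  # False
print(biba_can_write("secret", "confidential")) # True
```

Note how the Biba checks are exact mirrors of the Bell-LaPadula checks, which is why the two models are often taught as duals of one another.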
The types and characteristics of threats and adversaries were presented. Understanding the threat landscape is a key element in understanding software assurance.
Quick Tips
•  Information assurance and information security place the security focus on the information and not the hardware or software used to process it.
•  The original elements of security were confidentiality, integrity, and availability—the “CIA” of security.
•  Authentication, authorization, auditability, and non-repudiation have been added to CIA to complete the characterization of operational security elements.
•  Systems have a set of characteristics; session management, exception management, and configuration management provide the elements needed to secure a system in operation.
•  A series of secure design principles describe the key characteristics associated with secure systems.
•  Security models describe key aspects of system operations with respect to desired operational characteristics, including preservation of confidentiality and integrity.
•  The Bell-LaPadula security model preserves confidentiality and includes the simple security rule, the *-property, and the concept of “no read up, no write down.”
•  The Biba integrity model preserves integrity and includes the concept of “no write up and no read down.”
•  Access control models are used to describe how authorization is implemented in practice.
•  Understanding the threat environment educates the software development team on the security environment the system will face in production.
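As an illustration of the discretionary access control model referenced in the tips above, an owner-managed access control list can be sketched as follows. This is a minimal sketch, not the book's material; the class, method, and subject names are hypothetical.

```python
# Hypothetical sketch of discretionary access control (DAC): the
# object's owner decides which other subjects may access the object.
# All names here are illustrative assumptions.

class Resource:
    def __init__(self, owner: str):
        self.owner = owner
        # The owner starts with full access to the object.
        self.acl = {owner: {"read", "write"}}

    def grant(self, granter: str, subject: str, permission: str) -> None:
        # Discretionary: only the object's owner may extend access.
        if granter != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(subject, set()).add(permission)

    def check(self, subject: str, permission: str) -> bool:
        # Access is allowed only if the ACL lists the permission.
        return permission in self.acl.get(subject, set())

doc = Resource(owner="alice")
doc.grant("alice", "bob", "read")
print(doc.check("bob", "read"))   # True
print(doc.check("bob", "write"))  # False
```

Contrast this with mandatory access control, where labels assigned by the system, not the owner, determine access.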
Questions
To further help you prepare for the CSSLP exam, and to provide you with a feel for your level of preparedness, answer the following questions and then check your answers against the list of correct answers found at the end of the chapter.
  1.  Which access control mechanism provides the owner of an object the opportunity to determine the access control permissions for other subjects?
A.  Mandatory
B.  Role-based
C.  Discretionary
D.  Token-based
  2.  The elements UDI and CDI are associated with which access control model?
A.  Mandatory access control
B.  Clark-Wilson model
C.  Biba integrity model
D.  Bell-LaPadula confidentiality model
  3.  The concept of separating elements of a system to prevent inadvertent information sharing is?
A.  Leverage existing components
B.  Separation of duties
C.  Weakest link
D.  Least common mechanism
  4.  Which of the following is true about the Biba integrity model?
A.  No write up, no read down.
B.  No read up, no write down.
C.  It is described by the simple security rule.
D.  It uses the high-water-mark principle.
  5.  The concept of preventing a subject from denying a previous action with an object in a system is a description of?
A.  Simple security rule
B.  Non-repudiation
C.  Defense in depth
D.  Constrained data item (CDI)
  6.  What was described in the chapter as being essential in order to implement discretionary access controls?
A.  Object owner–defined security access
B.  Certificates
C.  Labels
D.  Security classifications
  7.  The CIA of security includes:
A.  Confidentiality, integrity, authentication
B.  Certificates, integrity, availability
C.  Confidentiality, inspection, authentication
D.  Confidentiality, integrity, availability
  8.  Complete mediation is an approach to security that includes:
A.  Protect systems and networks by using defense in depth.
B.  A security design that cannot be bypassed or circumvented.
C.  The use of interlocking rings of trust to ensure protection to data elements.
D.  The use of access control lists to enforce security rules.
  9.  The fundamental approach to security in which an object has only the necessary rights and privileges to perform its task with no additional permissions is a description of:
A.  Layered security
B.  Least privilege
C.  Role-based security
D.  Clark-Wilson model
10.  Which access control technique relies on a set of rules to determine whether access to an object will be granted or not?
A.  Role-based access control
B.  Object and rule instantiation access control
C.  Rule-based access control
D.  Discretionary access control
11.  The security principle that ensures that no critical function can be executed by any single individual (by dividing the function into multiple tasks that can’t all be executed by the same individual) is known as:
A.  Discretionary access control
B.  Security through obscurity
C.  Separation of duties
D.  Implicit deny
12.  The ability of a subject to interact with an object describes:
A.  Authentication
B.  Access
C.  Confidentiality
D.  Mutual authentication
13.  Open design places the focus of security efforts on:
A.  Open-source software components
B.  Hiding key elements (security through obscurity)
C.  Proprietary algorithms
D.  Producing a security mechanism in which its strength is independent of its design
14.  The security principle of fail-safe is related to:
A.  Session management
B.  Exception management
C.  Least privilege
D.  Single point of failure
15.  Using the principle of keeping things simple is related to:
A.  Layered security
B.  Simple Security Rule
C.  Economy of mechanism
D.  Implementing least privilege for access control
Answers
  1.  C. This is the definition of discretionary access control.
  2.  B. Constrained data item (CDI) and unconstrained data item (UDI) are elements of the Clark-Wilson security model.
  3.  D. The key is inadvertent information sharing, a condition that least common mechanism is designed to prevent.
  4.  A. Biba is designed to preserve integrity; hence, no write up (changing elements you don’t have permission to).
  5.  B. This is the definition of non-repudiation.
  6.  A. Object owners define access control in discretionary access control systems.
  7.  D. Don’t forget that even though authentication was described at great length in this chapter, the “A” in the CIA of security represents availability, which refers to both the hardware and data being accessible when the user wants them.
  8.  B. This is the definition of complete mediation.
  9.  B. This was the description supplied for least privilege. Layered security referred to using multiple layers of security (such as at the host and network layers) so that if an intruder penetrates one layer, they still will have to face additional security mechanisms before gaining access to sensitive information.
10.  C. This is a description of rule-based access control.
11.  C. This is a description of the separation of duties principle.
12.  B. This is the definition of access.
13.  D. Open design states that the security of a system must be independent from its design. In essence, the algorithm that is used will be open and accessible, and the security must not be dependent upon the design, but rather on an element such as a key.
14.  B. The principle of fail-safe states that when failure occurs, the system should remain in a secure state and not disclose information. Exception management is the operational tenet associated with fail-safe.
15.  C. The principle of economy of mechanism states that complexity should be limited to make security manageable; in other words, keep things simple.