Appendix A Answers to Practice Questions

Chapter 1—Secure Software Concepts Questions

  1. The primary reason for incorporating security into the software development life cycle is to protect

    A. The unauthorized disclosure of information

    B. The corporate brand and reputation

    C. Against hackers who intend to misuse the software

    D. The developers from releasing software with security defects

    Answer is B

    Rationale/Answer Explanation:

    When security is incorporated into the software development life cycle, confidentiality, integrity, and availability can be assured and external hacker and insider threat attempts thwarted. Developers will generate more hack-resilient software with fewer vulnerabilities, but protection of the organization’s reputation and corporate brand is the primary reason for software assurance.

  2. The resiliency of software to withstand attacks that attempt to modify or alter data in an unauthorized manner is referred to as

    A. Confidentiality

    B. Integrity

    C. Availability

    D. Authorization

    Answer is B

    Rationale/Answer Explanation:

    When the software program operates as expected, it is said to be reliable or internally consistent. Reliability is an indicator of the integrity of software. Hack-resilient software is reliable (functioning as expected), resilient (able to withstand attacks), and recoverable (capable of being restored to normal operations when breached or upon error).

  3. The main reason why the availability aspects of software must be part of the organization’s software security initiatives is that

    A. Software issues can cause downtime to the business

    B. Developers need to be trained in the business’ continuity procedures

    C. Testing for availability of the software and data is often ignored

    D. Hackers like to conduct denial of service (DoS) attacks against the organization

    Answer is A

    Rationale/Answer Explanation:

    One of the tenets of software assurance is “availability.” Software issues can cause software unavailability and downtime to the business. This is often observed as a denial of service (DoS) attack.

  4. Developing software to monitor its functionality and report when the software is down and unable to provide the expected service to the business is a protection to assure which of the following?

    A. Confidentiality

    B. Integrity

    C. Availability

    D. Authentication

    Answer is C

    Rationale/Answer Explanation:

    Confidentiality controls assure protection against unauthorized disclosure.

    Integrity controls assure protection against unauthorized modifications or alterations.

    Availability controls assure protection against downtime/denial of service and destruction of information.

    Authentication is the mechanism to validate the claims/credentials of an entity.

    Authorization covers the subject’s rights and privileges upon requested objects.

  5. When a customer attempts to log into his bank account, he is required to enter a nonce from the token device that was issued to him by the bank. This type of authentication is also known as which of the following?

    A. Ownership-based authentication

    B. Two-factor authentication

    C. Characteristic-based authentication

    D. Knowledge-based authentication

    Answer is A

    Rationale/Answer Explanation:

    Authentication can be achieved in one or more of the following ways: using something one knows (knowledge-based), something one has (ownership-based), and something one is (characteristic-based). Using a token device is ownership-based authentication. When more than one way is used for authentication purposes, it is referred to as multifactor authentication, and this is recommended over single-factor authentication.

  6. Multifactor authentication is most closely related to which of the following security design principles?

    A. Separation of duties

    B. Defense in depth

    C. Complete mediation

    D. Open design

    Answer is B

    Rationale/Answer Explanation:

    Having more than one way of authentication provides for a layered defense, which is the premise of the defense in depth security design principle.

  7. Audit logs can be used for all of the following except

    A. Providing evidentiary information

    B. Assuring that the user cannot deny their actions

    C. Detecting the actions that were undertaken

    D. Preventing a user from performing some unauthorized operations

    Answer is D

    Rationale/Answer Explanation:

    Audit log information can be a detective control (providing evidentiary information) and a deterrent control when the users know that they are being audited, but it cannot prevent any unauthorized actions. When the software logs user actions, it also provides nonrepudiation capabilities because the user cannot deny their actions.

  8. Impersonation attacks, such as man-in-the-middle (MITM) attacks, in an Internet application can be best mitigated using proper

    A. Configuration management

    B. Session management

    C. Patch management

    D. Exception management

    Answer is B

    Rationale/Answer Explanation:

    In an Internet application, managing identities, as would be possible in an Intranet application, is not easy and in some cases infeasible. Internet applications also use stateless protocols, such as HTTP or HTTPS, and this requires the management of user sessions.

  9. Organizations often predetermine the acceptable number of user errors before recording them as security violations. This number is otherwise known as the

    A. Clipping level

    B. Known error

    C. Minimum security baseline

    D. Maximum tolerable downtime

    Answer is A

    Rationale/Answer Explanation:

    The predetermined number of acceptable user errors before recording the error as a potential security incident is referred to as the clipping level. For example, if the number of allowed failed login attempts before the account is locked out is three, then the clipping level for authentication attempts is three.
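The clipping-level idea above can be expressed as a simple counter. This is a minimal sketch; `CLIPPING_LEVEL` and `LoginTracker` are hypothetical names used purely for illustration.

```python
CLIPPING_LEVEL = 3  # allowed failed attempts before a security violation is recorded


class LoginTracker:
    """Tracks failed login attempts per user and flags a violation
    once the clipping level is exceeded (illustrative sketch)."""

    def __init__(self):
        self.failures = {}

    def record_failure(self, user):
        self.failures[user] = self.failures.get(user, 0) + 1
        # Only failures beyond the clipping level are treated as security violations
        return self.failures[user] > CLIPPING_LEVEL


tracker = LoginTracker()
events = [tracker.record_failure("alice") for _ in range(4)]
# the first three failures fall within the clipping level; only the fourth is flagged
```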

  10. A security principle that maintains the confidentiality, integrity, and availability of the software and data, besides allowing for rapid recovery to the state of normal operations, when unexpected events occur is the security design principle of

    A. Defense in depth

    B. Economy of mechanisms

    C. Failsafe

    D. Psychological acceptability

    Answer is C

    Rationale/Answer Explanation:

    The failsafe principle prescribes that access decisions must be based on permission rather than exclusion. This means that the default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. The alternative, in which mechanisms attempt to identify conditions under which access should be refused, presents the wrong psychological base for secure system design. A design or implementation mistake in a mechanism that gives explicit permission tends to fail by refusing permission, which is a safe situation since it will be quickly detected. On the other hand, a design or implementation mistake in a mechanism that explicitly excludes access tends to fail by allowing access, a failure that may go unnoticed in normal use. This principle applies both to the outward appearance of the protection mechanism and to its underlying implementation.

  11. Requiring the end user to accept an “as is” disclaimer clause before installation of your software is an example of risk

    A. Avoidance

    B. Mitigation

    C. Transference

    D. Acceptance

    Answer is C

    Rationale/Answer Explanation:

    When an “as is” disclaimer clause is used, the risk is transferred from the publisher of the software to the user of the software.

  12. An instrument that is used to communicate and mandate organizational and management goals and objectives at a high level is a

    A. Standard

    B. Policy

    C. Baseline

    D. Guideline

    Answer is B

    Rationale/Answer Explanation:

    Policies are high-level documents that communicate the mandatory goals and objectives of company management. Standards are also mandatory, but not quite at the same high level as policy. Guidelines provide recommendations on how to implement a standard. Procedures are usually step-by-step instructions of how to perform an operation. A baseline has the minimum levels of controls or configurations that need to be implemented.

  13. The Systems Security Engineering Capability Maturity Model (SSE-CMM®) is an internationally recognized standard that publishes guidelines to

    A. Provide metrics for measuring the software and its behavior and using the software in a specific context of use

    B. Evaluate security engineering practices and organizational management processes

    C. Support accreditation and certification bodies that audit and certify information security management systems

    D. Ensure that the claimed identities of personnel are appropriately verified

    Answer is B

    Rationale/Answer Explanation:

    The evaluation of security engineering practices and organizational management processes is prescribed as guidelines in the Systems Security Engineering Capability Maturity Model (SSE-CMM®). The SSE-CMM is an internationally recognized standard that is published as ISO/IEC 21827.

  14. Which of the following is a framework that can be used to develop a risk-based enterprise security architecture by determining security requirements after analyzing the business initiatives?

    A. Capability Maturity Model Integration (CMMI)

    B. Sherwood Applied Business Security Architecture (SABSA)

    C. Control Objectives for Information and related Technology (COBIT®)

    D. Zachman Framework

    Answer is B

    Rationale/Answer Explanation:

    SABSA is a proven framework and methodology for Enterprise Security Architecture and Service Management. SABSA ensures that the needs of your enterprise are met completely and that security services are designed, delivered, and supported as an integral part of your business and IT management infrastructure.

  15. The * (star) property of the Biba security model prevents the contamination of data, assuring its integrity, by

    A. Not allowing the process to write above its security level

    B. Not allowing the process to write below its security level

    C. Not allowing the process to read above its security level

    D. Not allowing the process to read below its security level

    Answer is A

    Rationale/Answer Explanation:

    The Biba integrity model prevents unauthorized modification. It states that the maintenance of integrity requires that data not flow from a receptacle of a given integrity to a receptacle of higher integrity. If a process can write above its security level, trustworthy data could be contaminated by the addition of less trustworthy data.

  16. Which of the following is known to circumvent the ring protection mechanisms in operating systems?

    A. Cross site request forgery (CSRF)

    B. Coolboot

    C. SQL injection

    D. Rootkit

    Answer is D

    Rationale/Answer Explanation:

    Rootkits are known to compromise the operating system's ring protection mechanisms and masquerade as the legitimate operating system, taking control of it.

  17. Which of the following is a primary consideration for the software publisher when selling commercial off-the-shelf (COTS) software?

    A. Service level agreements (SLAs)

    B. Intellectual property protection

    C. Cost of customization

    D. Review of the code for backdoors and Trojan horses

    Answer is B

    Rationale/Answer Explanation:

    All of the other options are considerations for the software acquirer (purchaser).

  18. The single loss expectancy can be determined using which of the following formulae?

    A. Annualized rate of occurrence (ARO) ⨯ exposure factor

    B. Probability ⨯ impact

    C. Asset value ⨯ exposure factor

    D. Annualized rate of occurrence (ARO) ⨯ asset value

    Answer is C

    Rationale/Answer Explanation:

    Single loss expectancy is the expected loss of a single disaster. It is computed as the product of asset value and the exposure factor. SLE = asset value ⨯ exposure factor.
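A worked instance of the formula, using illustrative figures only (the asset value, exposure factor, and ARO below are hypothetical), also shows how SLE feeds into the annualized loss expectancy:

```python
# Worked example of the quantitative risk formulas (illustrative figures only)
asset_value = 100_000      # value of the asset, in dollars
exposure_factor = 0.25     # fraction of the asset value lost in a single incident
aro = 2                    # annualized rate of occurrence (incidents per year)

sle = asset_value * exposure_factor  # single loss expectancy: 25,000
ale = sle * aro                      # annualized loss expectancy: 50,000
```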

  19. Implementing IPSec to assure the confidentiality of data when they are transmitted is an example of risk

    A. Avoidance

    B. Transference

    C. Mitigation

    D. Acceptance

    Answer is C

    Rationale/Answer Explanation:

    The implementation of IPSec at the network layer helps to mitigate threats to the confidentiality of transmitted data.

  20. The Federal Information Processing Standard (FIPS) that prescribes guidelines for biometric authentication is

    A. FIPS 46-3

    B. FIPS 140-2

    C. FIPS 197

    D. FIPS 201

    Answer is D

    Rationale/Answer Explanation:

    Personal identity verification (PIV) of federal employees and contractors is published as FIPS 201, and it prescribes some guidelines for biometric authentication.

  21. Which of the following is a multifaceted security standard used to regulate organizations that collect, process, and/or store cardholder data as part of their business operations?

    A. FIPS 201

    B. ISO/IEC 15408

    C. NIST SP 800-64

    D. PCI DSS

    Answer is D

    Rationale/Answer Explanation:

    The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures. This comprehensive standard is intended to help organizations proactively protect customer account data.

  22. Which of the following is the current Federal Information Processing Standard (FIPS) that specifies an approved cryptographic algorithm to ensure the confidentiality of electronic data?

    A. Security Requirements for Cryptographic Modules (FIPS 140-2)

    B. Data Encryption Standard (FIPS 46-3)

    C. Advanced Encryption Standard (FIPS 197)

    D. Digital Signature Standard (FIPS 186-3)

    Answer is C

    Rationale/Answer Explanation:

    The advanced encryption standard (AES) specifies a FIPS-approved cryptographic algorithm that can be used to protect electronic data. The AES algorithm is a symmetric block cipher that can encrypt (encipher) and decrypt (decipher) information. Encryption converts data to an unintelligible form called ciphertext; decrypting the ciphertext converts the data back into their original form, called plaintext. The AES algorithm is capable of using cryptographic keys of 128, 192, and 256 bits to encrypt and decrypt data in blocks of 128 bits.

  23. The organization that publishes the ten most critical Web application security risks is the

    A. Computer Emergency Response Team (CERT)

    B. Web Application Security Consortium (WASC)

    C. Open Web Application Security Project (OWASP)

    D. Forums for Incident Response and Security Teams (FIRST)

    Answer is C

    Rationale/Answer Explanation:

    The Open Web Application Security Project (OWASP) Top Ten provides a powerful awareness document for Web application security. The OWASP Top Ten represents a broad consensus about what the most critical Web application security flaws are.

Chapter 2—Secure Software Requirements Questions

  1. Which of the following must be addressed by software security requirements? Choose the best answer

    A. Technology used in building the application

    B. Goals and objectives of the organization

    C. Software quality requirements

    D. External auditor requirements

    Answer is B

    Rationale/Answer Explanation:

    When determining software security requirements, it is imperative to address the goals and objectives of the organization. Management’s goals and objectives need to be incorporated into the organizational security policies. While external auditor requirements, internal quality requirements, and technology are factors that need consideration, compliance with organizational policies must be the foremost consideration.

  2. Which of the following types of information is exempt from confidentiality requirements?

    A. Directory information

    B. Personally identifiable information (PII)

    C. User’s card holder data

    D. Software architecture and network diagram

    Answer is A

    Rationale/Answer Explanation:

    Information that is public is also known as directory information. The name “directory” information comes from the fact that such information can be found in a public directory, such as a phone book. When information is classified as public information, confidentiality assurance protection mechanisms are not necessary.

  3. Requirements that are identified to protect against the destruction of information or the software itself are commonly referred to as

    A. Confidentiality requirements

    B. Integrity requirements

    C. Availability requirements

    D. Authentication requirements

    Answer is C

    Rationale/Answer Explanation:

    Destruction is the threat against availability, as disclosure is the threat against confidentiality, and alteration is the threat against integrity.

  4. The amount of time by which business operations need to be restored to service levels as expected by the business when there is a security breach or disaster is known as

    A. Maximum tolerable downtime (MTD)

    B. Mean time before failure (MTBF)

    C. Minimum security baseline (MSB)

    D. Recovery time objective (RTO)

    Answer is D

    Rationale/Answer Explanation:

    The maximum tolerable downtime (MTD) is the maximum length of time a business process can be interrupted or unavailable without causing the business itself to fail. The recovery time objective (RTO) is the time period in which the organization should have the interrupted process running again at or near the same capacity and conditions as before the disaster/downtime. MTD and RTO are part of availability requirements. It is advisable to set the RTO to be less than the MTD.

  5. The use of an individual’s physical characteristics, such as retinal blood patterns and fingerprints, for validating and verifying the user’s identity is referred to as

    A. Biometric authentication

    B. Forms authentication

    C. Digest authentication

    D. Integrated authentication

    Answer is A

    Rationale/Answer Explanation:

    Forms authentication has to do with usernames and passwords that are input into a form (e.g., a Web page/form). Basic authentication transmits the credentials in Base64 encoded form, while digest authentication provides the credentials as a hash value (also known as a message digest). Token-based authentication uses credentials in the form of specialized tokens and is often used with a token device. Biometric authentication uses physical characteristics to provide the credential information.
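The contrast between Base64 encoding and a message digest can be shown in a short sketch. SHA-256 is used here purely for illustration; the actual HTTP Digest scheme defines its own hash construction, and the credential string is hypothetical.

```python
import base64
import hashlib

credentials = "alice:s3cret"

# Basic authentication merely encodes the credentials; encoding is reversible,
# so anyone who intercepts the value can recover the original
basic = base64.b64encode(credentials.encode()).decode()
recovered = base64.b64decode(basic).decode()

# Digest-style authentication transmits a one-way hash (message digest) instead,
# from which the original credentials cannot be directly read back
digest = hashlib.sha256(credentials.encode()).hexdigest()
```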

  6. Which of the following policies is most likely to include the following requirement? “All software processing financial transactions need to use more than one factor to verify the identity of the entity requesting access.”

    A. Authorization

    B. Authentication

    C. Auditing

    D. Availability

    Answer is B

    Rationale/Answer Explanation:

    When two factors are used to validate an entity’s claim and/or credentials, it is referred to as two-factor authentication, and when more than two factors are used for authentication purposes, it is referred to as multifactor authentication. It is important to determine first whether there exists a need for two- or multifactor authentication.

  7. A means of restricting access to objects based on the identity of subjects and/or groups to which they belong is the definition of

    A. Nondiscretionary access control (NDAC)

    B. Discretionary access control (DAC)

    C. Mandatory access control (MAC)

    D. Rule-based access control

    Answer is B

    Rationale/Answer Explanation:

    Discretionary access control (DAC) is defined as “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong.” The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject. DAC restricts access to objects based on the identity of the subject and is distinctly characterized by the decision of the owner of the resource regarding who has access and their level of privileges or rights.

  8. Requirements that when implemented can help to build a history of events that occurred in the software are known as

    A. Authentication requirements

    B. Archiving requirements

    C. Auditing requirements

    D. Authorization requirements

    Answer is C

    Rationale/Answer Explanation:

    Auditing requirements are those that assist in building a historical record of user actions. Audit trails can help detect when an unauthorized user makes a change or an authorized user makes an unauthorized change, both of which are cases of integrity violations. Auditing requirements not only help with forensic investigations as a detective control, but can also be used for troubleshooting errors and exceptions, if the actions of the software are tracked appropriately.

  9. Which of the following is the primary reason for an application to be susceptible to a man-in-the-middle (MITM) attack?

    A. Improper session management

    B. Lack of auditing

    C. Improper archiving

    D. Lack of encryption

    Answer is A

    Rationale/Answer Explanation:

    Easily guessable and nonrandom session identifiers can be hijacked and replayed if not managed appropriately, and this can lead to MITM attacks.
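Proper session management starts with unpredictable session identifiers. A minimal sketch using Python's standard `secrets` module (the function name is illustrative):

```python
import secrets


def new_session_id(nbytes: int = 32) -> str:
    """Generate a cryptographically random, URL-safe session identifier.

    Unlike a counter or timestamp-derived value, this cannot be guessed
    by an attacker attempting session hijacking or replay."""
    return secrets.token_urlsafe(nbytes)


sid_a = new_session_id()
sid_b = new_session_id()
# each identifier is unique and unpredictable
```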

  10. The process of eliciting concrete software security requirements from high-level regulatory and organizational directives and mandates in the requirements phase of the SDLC is also known as

    A. Threat modeling

    B. Policy decomposition

    C. Subject–object modeling

    D. Misuse case generation

    Answer is B

    Rationale/Answer Explanation:

    The process of eliciting concrete software security requirements from high-level regulatory and organizational directives and mandates is referred to as policy decomposition. When the policy decomposition process completes, all the gleaned requirements must be measurable components.

  11. The first step in the protection needs elicitation (PNE) process is to

    A. Engage the customer

    B. Model information management

    C. Identify least privilege applications

    D. Conduct threat modeling and analysis

    Answer is A

    Rationale/Answer Explanation:

    IT is there for the business and not the other way around. The first step when determining protection needs is to engage the customer, followed by modeling the information and identifying least privilege scenarios. Once an application profile is developed, we can undertake threat modeling and analysis to determine the risk levels, which can be communicated to the business to prioritize the risk.

  12. A requirements traceability matrix (RTM) that includes security requirements can be used for all of the following except

    A. Ensuring scope creep does not occur

    B. Validating and communicating user requirements

    C. Determining resource allocations

    D. Identifying privileged code sections

    Answer is D

    Rationale/Answer Explanation:

    Identifying privileged code sections is part of threat modeling and not part of an RTM.

  13. Parity bit checking mechanisms can be used for all of the following except

    A. Error detection

    B. Message corruption

    C. Integrity assurance

    D. Input validation

    Answer is D

    Rationale/Answer Explanation:

    Parity bit checking is primarily used for error detection, and it can also be used to detect message corruption and to assure the integrity of transferred files and messages. It is not an input validation mechanism.
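The error-detection property can be demonstrated with a short sketch of even parity (the function name and message are illustrative): flipping any single bit changes the parity and exposes the corruption.

```python
def even_parity_bit(data: bytes) -> int:
    """Return the parity bit that makes the total count of 1 bits even."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2


msg = b"HELLO"
parity = even_parity_bit(msg)

# Flip a single bit in transit: the recomputed parity no longer matches,
# so the corruption is detected at the destination
corrupted = bytes([msg[0] ^ 0b00000001]) + msg[1:]
```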

  14. Which of the following is an activity that can be performed to clarify requirements with the business users by using diagrams that model the expected behavior of the software?

    A. Threat modeling

    B. Use case modeling

    C. Misuse case modeling

    D. Data modeling

    Answer is B

    Rationale/Answer Explanation:

    A use case models the intended behavior of the software or system. In other words, the use case describes behavior that the system owner intended. This behavior describes the sequence of actions and events that are to be taken to address a business need. Use case modeling and diagramming are very useful for specifying requirements. They can be effective in reducing ambiguous and incompletely articulated business requirements by explicitly specifying exactly when and under what conditions certain behaviors occur. Use case modeling is meant to model only the most significant system behavior, not all of it, and so it should not be considered a substitute for requirements specification documentation.

  15. Which of the following is least likely to be identified by misuse case modeling?

    A. Race conditions

    B. Misactors

    C. Attacker’s perspective

    D. Negative requirements

    Answer is A

    Rationale/Answer Explanation:

    Misuse cases, also known as abuse cases, help identify security requirements by modeling negative scenarios. A negative scenario is an unintended behavior of the system, one that the system owner does not want to occur within the context of the use case. Misuse cases provide insight into the threats that can occur to the system or software. They present the hostile user’s point of view and are the inverse of use cases. Misuse case modeling is similar to use case modeling, except that the former models misactors and unintended scenarios or behavior. Misuse cases may be intentional or accidental. One of the most distinctive traits of misuse cases is that they can be used to elicit security requirements, unlike other requirements determination methods that focus on end-user functional requirements.

  16. Data classification is a core activity that is conducted as part of which of the following?

    A. Key management life cycle

    B. Information life cycle management

    C. Configuration management

    D. Problem management

    Answer is B

    Rationale/Answer Explanation:

    Data classification is the conscious effort to assign a level of sensitivity to data assets based on potential impact upon disclosure, alteration, or destruction. The results of the classification exercise can then be used to categorize the data elements into appropriate buckets. Data classification is part of information life cycle management.

  17. Web farm data corruption issues and card holder data encryption requirements need to be captured as part of which of the following requirements?

    A. Integrity

    B. Environment

    C. International

    D. Procurement

    Answer is B

    Rationale/Answer Explanation:

    When determining requirements, it is important to elicit those that are tied to the environment in which the data will be marshaled or processed. Viewstate corruption issues have been observed in Web farm settings where the servers were not all configured identically, and card holder data has gone unencrypted on public networks, when the environmental requirements were not identified or taken into account.

  18. When software is purchased from a third party instead of being built in-house, it is imperative to have contractual protection in place and have the software requirements explicitly specified in which of the following?

    A. Service level agreements (SLA)

    B. Nondisclosure agreements (NDA)

    C. Noncompete agreements

    D. Project plans

    Answer is A

    Rationale/Answer Explanation:

    SLAs should contain the levels of service the software is expected to provide, and this becomes crucial when the software is not developed in-house.

  19. When software is able to withstand attacks from a threat agent and not violate the security policy, it is said to be exhibiting which of the following attributes of software assurance?

    A. Reliability

    B. Resiliency

    C. Recoverability

    D. Redundancy

    Answer is B

    Rationale/Answer Explanation:

    Software is said to be reliable when it is functioning as expected. Resiliency is the measure of the software’s ability to withstand an attack. When the software is breached, its ability to restore itself back to normal operations is known as the recoverability of the software. Redundancy has to do with high availability.

  20. Infinite loops and improper memory calls are often known to cause threats to which of the following?

    A. Availability

    B. Authentication

    C. Authorization

    D. Auditing

    Answer is A

    Rationale/Answer Explanation:

    Improper coding constructs such as infinite loops and improper memory management can lead to denial of service and resource exhaustion issues, which impact availability.

  21. Which of the following is used to communicate and enforce availability requirements of the business or client?

    A. Nondisclosure agreements (NDA)

    B. Corporate contracts

    C. Service level agreements (SLA)

    D. Threat models

    Answer is C

    Rationale/Answer Explanation:

    SLAs should contain the levels of service the software is expected to provide, and this becomes crucial when the software is not developed in-house.

  22. Software security requirements that are identified to protect against disclosure of data to unauthorized users are otherwise known as

    A. Integrity requirements

    B. Authorization requirements

    C. Confidentiality requirements

    D. Nonrepudiation requirements

    Answer is C

    Rationale/Answer Explanation:

    Destruction is the threat against availability, as disclosure is the threat against confidentiality, and alteration is the threat against integrity.

  23. The requirements that assure reliability and prevent alterations are to be identified in which section of the software requirements specifications (SRS) documentation?

    A. Confidentiality

    B. Integrity

    C. Availability

    D. Auditing

    Answer is B

    Rationale/Answer Explanation:

    Destruction is the threat against availability, as disclosure is the threat against confidentiality, and alteration is the threat against integrity.

  24. Which of the following is a covert mechanism that assures confidentiality?

    A. Encryption

    B. Steganography

    C. Hashing

    D. Masking

    Answer is B

    Rationale/Answer Explanation:

    Encryption and hashing are overt mechanisms to assure confidentiality. Masking is an obfuscating mechanism to assure confidentiality. Steganography is the hiding of information within other media as a covert mechanism to assure confidentiality. Commonly likened to invisible ink writing, it is the art of camouflaged or hidden writing, where the information is hidden and the existence of the message itself is concealed. Steganography is primarily useful for covert communications and is prevalent in military espionage communications.
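One common steganographic technique, least-significant-bit (LSB) embedding, can be sketched as follows. This is a toy illustration that assumes the cover image is a flat list of pixel values; the function names are hypothetical.

```python
def embed(pixels, message):
    """Hide the message in the least significant bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out


def extract(pixels, length):
    """Recover `length` bytes from the least significant bits of the pixels."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(length)
    )


cover = list(range(64))        # stand-in for image pixel values
stego = embed(cover, b"hi")    # pixel values change by at most 1, so the
                               # carrier looks unaltered while hiding the message
```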

  25. As a means to assure the confidentiality of copyright information, the security analyst identifies the requirement to embed information inside another digital audio, video, or image signal. This is commonly referred to as

    A. Encryption

    B. Hashing

    C. Licensing

    D. Watermarking

    Answer is D

    Rationale/Answer Explanation:

    Digital watermarking is the process of embedding information into a digital signal. These signals can be audio, video, or pictures.

  26. Checksum validation can be used to satisfy which of the following requirements?

    A. Confidentiality

    B. Integrity

    C. Availability

    D. Authentication

    Answer is B

    Rationale/Answer Explanation:

    Parity bit checking is useful in detecting errors or changes made to data as they are transmitted. A common use of parity bit checking is the cyclic redundancy check (CRC) for data integrity, especially for messages longer than one byte (8 bits). Upon data transmission, each block of data is given a computed CRC value, commonly referred to as a checksum. If there is an alteration between the origin of the data and their destination, the checksum sent at the origin will not match the one computed at the destination. Corrupted media (CDs, DVDs) and incomplete downloads of software yield CRC errors.
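    As a minimal sketch of the mechanism, Python's standard `zlib.crc32` can play the role of the checksum computed at each end:

```python
import zlib

# Origin: compute a CRC-32 checksum over the block being transmitted.
payload = b"transfer $100 to account 12345"
checksum_at_origin = zlib.crc32(payload)

# In transit, a single character is altered.
tampered = b"transfer $900 to account 12345"
checksum_at_destination = zlib.crc32(tampered)

# Destination: a mismatch reveals the alteration.
print(checksum_at_origin == checksum_at_destination)  # → False
```

    Note that a CRC detects accidental corruption only; a deliberate attacker can recompute the checksum over altered data, so tamper detection against an adversary requires a keyed mechanism such as an HMAC.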

  27. A requirements traceability matrix (RTM) that includes security requirements can be used for all of the following except

    A. Ensuring against scope creep

    B. Validating and communicating user requirements

    C. Determining resource allocations

    D. Identifying privileged code sections

    Answer is D

    Rationale/Answer Explanation:

    Identifying privileged code sections is part of threat modeling and not part of an RTM.

Chapter 3—Secure Software Design Questions

  1. During which phase of the software development life cycle (SDLC) is threat modeling initiated?

    A. Requirements analysis

    B. Design

    C. Implementation

    D. Deployment

    Answer is B

    Rationale/Answer Explanation:

    Although it is important to visit the threat model during the development, testing, and deployment phase of the software development life cycle (SDLC), the threat modeling exercise should commence in the design phase of the SDLC.

  2. Certificate authority, registration authority, and certificate revocation lists are all part of which of the following?

    A. Advanced encryption standard (AES)

    B. Steganography

    C. Public key infrastructure (PKI)

    D. Lightweight directory access protocol (LDAP)

    Answer is C

    Rationale/Answer Explanation:

    PKI makes it possible to exchange data securely by hiding or keeping secret a private key on one system while distributing the public key to the other systems participating in the exchange.

  3. The use of digital signatures has the benefit of providing which of the following not provided by symmetric key cryptographic design?

    A. Speed of cryptographic operations

    B. Confidentiality assurance

    C. Key exchange

    D. Nonrepudiation

    Answer is D

    Rationale/Answer Explanation:

    Nonrepudiation and proof of origin (authenticity) are provided by the certificate authority’s (CA) attaching its digital signature, encrypted with the private key of the sender, to the communication that is to be authenticated, and this attests to the authenticity of both the document and the sender.

  4. When passwords are stored in the database, the best defense against disclosure attacks can be accomplished using

    A. Encryption

    B. Masking

    C. Hashing

    D. Obfuscation

    Answer is C

    Rationale/Answer Explanation:

    An important use for hashes is storing passwords. The actual password should never be stored in the database. Using hashing functions, you can store the hash value of the user password and use that value to authenticate the user. Because hashes are one-way (not reversible), they offer a heightened level of confidentiality assurance.
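    A minimal sketch of this pattern, using Python's standard `hashlib.pbkdf2_hmac`; the function names `hash_password` and `verify_password` are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple:
    """Store only the salt and the one-way hash, never the password itself."""
    salt = salt or os.urandom(16)  # a unique salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Authenticate by re-deriving the hash and comparing in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("wrong guess", salt, digest))                   # → False
```

    Because only the salt and digest are stored, even a full database disclosure does not reveal the passwords themselves.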

  5. Nicole is part of the “author” role as well as the “approver” role, allowing her to approve her own articles before they are posted on the company blog site. This violates the principle of

    A. Least privilege

    B. Least common mechanisms

    C. Economy of mechanisms

    D. Separation of duties

    Answer is D

    Rationale/Answer Explanation:

    Separation of duties, or separation of privilege, is the principle that it is better to assign tasks to several specific individuals so that no one user has total control over the task. It is closely related to the principle of least privilege, the idea that a minimum amount of privilege is granted to individuals with a need to know for the minimum (shortest) amount of time.

  6. The primary reason for designing single sign on (SSO) capabilities is to

    A. Increase the security of authentication mechanisms

    B. Simplify user authentication

    C. Have the ability to check each access request

    D. Allow for interoperability between wireless and wired networks

    Answer is B

    Rationale/Answer Explanation:

    The design principle of economy of mechanism states that one must keep the design as simple and small as possible. This well-known principle deserves emphasis for protection mechanisms because design and implementation errors that result in unwanted access paths will not be noticed during normal use. As a result, techniques that implement protection mechanisms, such as line-by-line inspection of software, are necessary. For such techniques to be successful, a small and simple design is essential. SSO supports this principle by simplifying the authentication process.

  7. Database triggers are primarily useful for providing which of the following detective software assurance capabilities?

    A. Availability

    B. Authorization

    C. Auditing

    D. Archiving

    Answer is C

    Rationale/Answer Explanation:

    All stored procedures could be updated to incorporate auditing logic, but a better solution is to use database triggers. You can use triggers to monitor actions performed on the database tables and automatically log auditing information.
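    The pattern can be sketched with SQLite, which ships with Python; the table and trigger names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit_log (
    action    TEXT,
    row_id    INTEGER,
    old_value INTEGER,
    new_value INTEGER,
    logged_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- The trigger fires automatically on every UPDATE, so auditing cannot be
-- skipped by a stored procedure or code path that forgets to log.
CREATE TRIGGER audit_balance AFTER UPDATE ON accounts
BEGIN
    INSERT INTO audit_log (action, row_id, old_value, new_value)
    VALUES ('UPDATE', OLD.id, OLD.balance, NEW.balance);
END;
""")

conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
print(conn.execute("SELECT action, old_value, new_value FROM audit_log").fetchall())
# → [('UPDATE', 100, 250)]
```

    The advantage over auditing in stored procedures is that the trigger is enforced by the database itself, for every caller.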

  8. During a threat modeling exercise, the software architecture is reviewed to identify

    A. Attackers

    B. Business impact

    C. Critical assets

    D. Entry points

    Answer is D

    Rationale/Answer Explanation:

    During threat modeling, the application is dissected into its functional components. The development team analyzes the components at every entry point and traces data flow through all functionality to identify security weaknesses.

  9. A man-in-the-middle (MITM) attack is primarily an expression of which type of the following threats?

    A. Spoofing

    B. Tampering

    C. Repudiation

    D. Information disclosure

    Answer is A

    Rationale/Answer Explanation:

    Although it may seem that an MITM attack is an expression of the threat of repudiation, and it can be, it is primarily a spoofing threat. In a spoofing attack, an attacker impersonates a legitimate user of the system. A spoofing attack is mitigated through authentication so that adversaries cannot become any other user or assume the attributes of another user. When undertaking a threat modeling exercise, it is important to list all possible threats, regardless of whether they have been mitigated, so that you can later generate test cases where necessary. If the threat is not documented, there is a high likelihood that the software will not be tested for those threats. Using a categorized list of threats (such as spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege [STRIDE]) is useful to address all possible threats.

  10. IPSec technology, which helps in the secure transmission of information, operates in which layer of the open systems interconnect (OSI) model?

    A. Transport

    B. Network

    C. Session

    D. Application

    Answer is B

    Rationale/Answer Explanation:

    Although software security has specific implications on layer seven, the application of the OSI stack, the security at other levels of the OSI stack is also important and should be leveraged to provide defense in depth. The seven layers of the OSI stack are physical (1), data link (2), network (3), transport (4), session (5), presentation (6), and application (7). SSL and IPSec can be used to assure confidentiality for data in motion. SSL operates at the transport layer (4), and IPSec operates at the network layer (3) of the OSI model.

  11. When internal business functionality is abstracted into service-oriented contract-based interfaces, it is primarily used to provide for

    A. Interoperability

    B. Authentication

    C. Authorization

    D. Installation ease

    Answer is A

    Rationale/Answer Explanation:

    A distinctive characteristic of SOA is that the business logic is abstracted into discoverable and reusable contract-based interfaces to promote interoperability between heterogeneous computing ecosystems.

  12. At which layer of the open systems interconnect (OSI) model must security controls be designed to mitigate side channel attacks effectively?

    A. Transport

    B. Network

    C. Data link

    D. Physical

    Answer is D

    Rationale/Answer Explanation:

    Side channel attacks use unconventional means to compromise the security of the system and, in most cases, require physical access to the device or system. Therefore, to mitigate side channel attacks, physical protection must be used.

  13. Which of the following software architectures is effective in distributing the load between the client and the server, but increases the attack surface since it includes the client as part of the threat vectors?

    A. Software as a service (SaaS)

    B. Service-oriented architecture (SOA)

    C. Rich Internet application (RIA)

    D. Distributed network architecture (DNA)

    Answer is C

    Rationale/Answer Explanation:

    RIAs require Internet protocol (IP) connectivity to the backend server. Browser sandboxing is recommended since the client is now also susceptible to attack, but it is not a requirement. The workload is shared between the client and the server, and the user's experience and control are increased in the RIA architecture.

  14. When designing software to work in a mobile computing environment, the trusted platform module (TPM) chip can be used to provide which of the following types of information?

    A. Authorization

    B. Identification

    C. Archiving

    D. Auditing

    Answer is B

    Rationale/Answer Explanation:

    Trusted platform module (TPM) is the name assigned to a chip that can store cryptographic keys, passwords, and certificates. It can be used to protect mobile devices in addition to personal computers. It is also used to provide identity information for authentication purposes in mobile computing, and it assures secure startup and integrity. The TPM can be used to generate values used with whole-disk encryption, such as Windows Vista's BitLocker. It is developed to the specifications of the Trusted Computing Group.

  15. When two or more trivial pieces of information are brought together with the aim of gleaning sensitive information, it is referred to as what type of attack?

    A. Injection

    B. Inference

    C. Phishing

    D. Polyinstantiation

    Answer is B

    Rationale/Answer Explanation:

    An inference attack is one in which the attacker combines information available in the database with a suitable analysis to glean information that is presumably hidden or not as evident. This means that individual data elements when viewed collectively can reveal confidential information. It is therefore possible to have public elements in a database reveal private information by inference. The first things to ensure are that the database administrator does not have direct access to the data in the database and that the administrator’s access to the database is mediated by a program (the application) and audited. In situations where direct database access is necessary, it is important to ensure that the database design is not susceptible to inference attacks. Inference attacks can be mitigated by polyinstantiation.
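    A toy illustration of the idea: neither aggregate below names an individual, yet combining them reveals one employee's confidential salary. The data and field names are invented for the example:

```python
employees = [
    {"name": "ana", "dept": "eng", "years": 7, "salary": 90000},
    {"name": "bo",  "dept": "eng", "years": 2, "salary": 70000},
    {"name": "cy",  "dept": "eng", "years": 9, "salary": 80000},
]

def payroll_total(rows):
    """A seemingly harmless aggregate query: no individual row is exposed."""
    return sum(r["salary"] for r in rows)

whole_dept = payroll_total(e for e in employees if e["dept"] == "eng")
veterans = payroll_total(e for e in employees
                         if e["dept"] == "eng" and e["years"] >= 5)

# Only Bo has fewer than five years, so the difference between two
# "public" aggregates is Bo's private salary.
print(whole_dept - veterans)  # → 70000
```

    Defenses such as polyinstantiation, query-size restrictions, and noise in aggregates exist precisely because each query in isolation looks safe.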

  16. The inner workings and internal structure of backend databases can be protected from disclosure using

    A. Triggers

    B. Normalization

    C. Views

    D. Encryption

    Answer is C

    Rationale/Answer Explanation:

    Views provide a number of security benefits. They abstract the source of the data being presented, keeping the internal structure of the database hidden from the user. Furthermore, views can be created on a subset of columns in a table. This capability can allow users granular access to specific data elements. Views can also be used to limit access to specific rows of data.
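    A minimal sketch using SQLite (bundled with Python); the table, view, and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER, ssn TEXT);
INSERT INTO employees VALUES (1, 'Ana', 90000, '123-45-6789');
-- The view exposes only non-sensitive columns and hides the underlying
-- table structure from its users.
CREATE VIEW employee_directory AS
    SELECT id, name FROM employees;
""")

rows = conn.execute("SELECT * FROM employee_directory").fetchall()
print(rows)  # → [(1, 'Ana')]
```

    A user granted access only to `employee_directory` cannot even reference the `salary` or `ssn` columns, because they do not exist in the view.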

  17. Choose the best answer. Configurable settings for logging exceptions, auditing, and credential management must be part of

    A. Database views

    B. Security management interfaces

    C. Global files

    D. Exception handling

    Answer is B

    Rationale/Answer Explanation:

    Security management interfaces (SMIs) are administrative interfaces for your application that have the highest level of privileges on the system and can do tasks such as

    • User provisioning: adding/deleting/enabling user accounts
    • Granting rights to different user roles
    • System restarting
    • Changing system security settings
    • Accessing audit trails, user credentials, exception logs

    Although SMIs are often not explicitly stated in the requirements, and subsequently not threat modeled, strong controls, such as least privilege and access controls, must be designed and built in when developing SMIs, because the compromise of an SMI can be devastating, ranging from complete system compromise and installed backdoors to disclosure, alteration, and destruction (DAD) attacks on audit logs, user credentials, and exception logs. SMIs should never be deployed with the default accounts set by the software publisher, although they often are.

  18. The token that is primarily used for authentication purposes in a single sign on (SSO) implementation between two different organizations is

    A. Kerberos

    B. Security assertion markup language (SAML)

    C. Liberty alliance ID-FF

    D. One-time password (OTP)

    Answer is B

    Rationale/Answer Explanation:

    Federation technology is usually built on a centralized identity management architecture leveraging industry standard identity management protocols, such as SAML, WS Federation (WS-*), and Liberty Alliance. Of the three major protocol families associated with federation, SAML seems to be recognized as the de facto standard for enterprise-to-enterprise federation. SAML works in cross-domain settings, while Kerberos tokens are useful only within a single domain.

  19. Syslog implementations require which additional security protection mechanisms to mitigate disclosure attacks?

    A. Unique session identifier generation and exchange

    B. Transport layer security

    C. Digital rights management (DRM)

    D. Data loss prevention

    Answer is B

    Rationale/Answer Explanation:

    The syslog network protocol has become a de facto standard for logging programs and server information over the Internet. Many routers, switches, and remote access devices will transmit system messages, and there are syslog servers available for Windows and UNIX operating systems. TLS protection mechanisms such as SSL wrappers are needed to protect syslog data in transit as they are transmitted in the clear. SSL wrappers such as stunnel provide transparent SSL functionality.

  20. Rights and privileges for a file can be granularly granted to each client using which of the following technologies?

    A. Data loss prevention (DLP)

    B. Software as a service (SaaS)

    C. Flow control

    D. Digital rights management (DRM)

    Answer is D

    Rationale/Answer Explanation:

    Digital rights management (DRM) solutions give copyright owners control over access to and use of copyright protected material. When users want to access or use digital copyrighted material, they can do so only on the terms of the copyright owner.

Chapter 4—Secure Software Implementation/Coding Questions

  1. Software developers write software programs primarily to

    A. Create new products

    B. Capture market share

    C. Solve business problems

    D. Mitigate hacker threats

    Answer is C

    Rationale/Answer Explanation:

    IT and software development teams function to provide solutions to the business. Manual and inefficient business processes can be automated and made efficient using software programs.

  2. The process of combining necessary functions, variables, and dependency files and libraries required for the machine to run the program is referred to as

    A. Compilation

    B. Interpretation

    C. Linking

    D. Instantiation

    Answer is C

    Rationale/Answer Explanation:

    Linking is the process of combining the necessary functions, variables, and dependency files and libraries required for the machine to run the program. The output that results from the linking process is the executable program or machine code/file that the machine can understand and process. In short, linked object code is the executable. Link editors that combine object codes are known as linkers. Upon the completion of the compilation process, the compiler invokes the linker to perform its function. There are two types of linking: static linking and dynamic linking.

  3. Which of the following is an important consideration to manage memory and mitigate overflow attacks when choosing a programming language?

    A. Locality of reference

    B. Type safety

    C. Cyclomatic complexity

    D. Parametric polymorphism

    Answer is B

    Rationale/Answer Explanation:

    Code is said to be type safe if it accesses only those memory resources that belong to the memory assigned to it. Type safety verification takes place during the just-in-time (JIT) compilation phase and prevents unsafe code from becoming active. Although you can disable type safety verification, doing so can lead to unpredictable results. The best example is that code can make unrestricted calls to unmanaged code, and if that code has malicious intent, the results can be severe. Therefore, the framework allows only fully trusted assemblies to bypass verification. Type safety is a form of “sandboxing” and must be one of the most important considerations with regard to security when selecting a programming language.
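    The rationale can be illustrated in any memory-safe runtime. In Python, for instance, an out-of-bounds write is trapped instead of silently corrupting adjacent memory, and type confusion is rejected instead of reinterpreting raw bytes:

```python
# In a type- and memory-safe runtime, out-of-bounds access raises an error
# rather than overwriting adjacent memory, which is the root cause of
# classic buffer overflow exploits in unsafe languages.
buffer = [0] * 8

try:
    buffer[8] = 0x41  # one past the end: a C-style off-by-one overflow
except IndexError as exc:
    print("access trapped:", exc)

# Mixing incompatible types is likewise rejected at runtime.
try:
    result = "AAAA" + 0x41414141
except TypeError as exc:
    print("type check failed:", exc)
```

    In an unsafe language, the first statement could overwrite a return address; here the runtime's type and bounds checks contain the fault.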

  4. Using multifactor authentication is effective in mitigating which of the following application security risks?

    A. Injection flaws

    B. Cross-site scripting (XSS)

    C. Buffer overflow

    D. Man-in-the-middle (MITM)

    Answer is D

    Rationale/Answer Explanation:

    As a defense against man-in-the-middle (MITM) attacks, authentication and session management need to be in place. Multifactor authentication provides greater defense than single factor authentication and is recommended. Session identifiers that are generated should be unpredictable, random, and nonguessable.

  5. Implementing completely automated public turing test to tell computers and humans apart (CAPTCHA) protection is a means of defending against

    A. SQL injection

    B. Cross-site scripting (XSS)

    C. Cross-site request forgery (CSRF)

    D. Insecure cryptographic storage

    Answer is C

    Rationale/Answer Explanation:

    In addition to assuring that the requestor is a human, CAPTCHAs are useful in mitigating CSRF attacks. Since CSRF is dependent on a preauthenticated token’s being in place, using CAPTCHA as the anti-CSRF token is an effective way of dealing with the inherent XSS problems regarding anti-CSRF tokens, as long as the CAPTCHA image itself is not guessable, predictable, or re-served to the attacker.
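    The "preauthenticated token" defense mentioned above can be sketched with Python's standard `secrets` module; generating the CAPTCHA image itself is out of scope here, and the function names are illustrative:

```python
import hmac
import secrets

def issue_token() -> str:
    """Issue an unpredictable per-session anti-CSRF token to embed in each form."""
    return secrets.token_urlsafe(32)  # cryptographically strong randomness

def is_valid(submitted: str, stored: str) -> bool:
    """Require the token back on every state-changing request; compare in
    constant time so the token is not leaked via timing."""
    return hmac.compare_digest(submitted, stored)

session_token = issue_token()
print(is_valid(session_token, session_token))       # → True
print(is_valid(issue_token(), session_token))       # → False
```

    A forged cross-site request fails because the attacker's page cannot read the victim's token; the same unguessability requirement applies when a CAPTCHA is used in the token's place.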

  6. The findings of a code review indicate that cryptographic operations in code use the Rijndael cipher, which is the original publication of which of the following algorithms?

    A. Skipjack

    B. Data encryption standard (DES)

    C. Triple data encryption standard (3DES)

    D. Advanced encryption standard (AES)

    Answer is D

    Rationale/Answer Explanation:

    Advanced encryption standard (FIPS 197) is published as the Rijndael cipher. Software should be designed so that you should be able to replace one cryptographic algorithm with a stronger one, when needed, without much rework or recoding. This is referred to as cryptographic agility.
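    Cryptographic agility can be sketched by resolving the algorithm from configuration rather than hard-coding it; here with Python's `hashlib.new`, where swapping algorithms becomes a one-line configuration change:

```python
import hashlib

HASH_ALGORITHM = "sha256"  # a config value; change to "sha3_256" without recoding

def fingerprint(data: bytes) -> str:
    # hashlib.new resolves the algorithm by name at runtime, so callers
    # never reference a specific cipher or hash directly.
    return hashlib.new(HASH_ALGORITHM, data).hexdigest()

print(fingerprint(b"release-1.0") == hashlib.sha256(b"release-1.0").hexdigest())
# → True
```

    The same indirection applies to ciphers: code written against an abstract "encrypt/decrypt" interface can retire a broken algorithm without touching business logic.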

  7. Which of the following transport layer technologies can best mitigate session hijacking and replay attacks in a local area network (LAN)?

    A. Data loss prevention (DLP)

    B. Internet protocol security (IPSec)

    C. Secure sockets layer (SSL)

    D. Digital rights management (DRM)

    Answer is C

    Rationale/Answer Explanation:

    SSL provides disclosure protection and protection against session hijacking and replay at the transport layer (layer 4), while IPSec provides confidentiality and integrity assurance operating in the network layer (layer 3). DRM provides some degree of disclosure (primarily IP) protection and operates in the presentation layer (layer 6), and data loss prevention (DLP) technologies prevent the inadvertent disclosure of data to unauthorized individuals, predominantly those external to the organization.

  8. Verbose error messages and unhandled exceptions can result in which of the following software security threats?

    A. Spoofing

    B. Tampering

    C. Repudiation

    D. Information disclosure

    Answer is D

    Rationale/Answer Explanation:

    Information disclosure is primarily a design issue and therefore a language-independent problem, although with accidental leakage, many newer high-level languages can worsen the problem by providing verbose error messages that are helpful to attackers in their information gathering (reconnaissance) efforts. It must be recognized that there is a tricky balance between giving the user helpful information about errors and preventing attackers from learning about the internal details and architecture of the software. From a security standpoint, it is advisable not to disclose verbose error messages and instead to provide users with a helpline for additional support.
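    The balance described above can be sketched as follows: full details go to the server-side log, while the user receives only a generic message and a reference number for the helpline. The fault and function names are invented for the example:

```python
import logging
import uuid

log = logging.getLogger("app")

def process_order(order_id: int) -> str:
    try:
        # Simulated internal fault whose details would aid reconnaissance.
        raise ConnectionError("db01.internal:5432 connection refused")
    except ConnectionError:
        incident = uuid.uuid4().hex[:8]
        # Verbose details, including the traceback, go to the log only.
        log.exception("incident %s while processing order %s", incident, order_id)
        # The user sees nothing about hosts, ports, or stack frames.
        return f"Something went wrong. Please contact support and quote reference {incident}."

message = process_order(42)
print("db01" in message, "refused" in message)  # → False False
```

    The reference number lets support staff correlate the user's report with the detailed log entry without ever exposing internals to the client.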

  9. Code signing can provide all of the following except

    A. Anti-tampering protection

    B. Authenticity of code origin

    C. Runtime permissions for code

    D. Authentication of users

    Answer is D

    Rationale/Answer Explanation:

    Code signing can provide all of the following: anti-tampering protection assuring integrity of code, authenticity (not authentication) of code origin, and runtime permissions for the code to access system resources. The primary benefit of code signing is that it provides users with the identity of the software’s creator, and this is particularly important for mobile code, which is code downloaded from a remote location over the Internet.
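    The anti-tampering half of this can be sketched with a plain digest check; a real code signature additionally signs the digest with the publisher's private key, so origin can be verified with the corresponding public key:

```python
import hashlib

# Publisher side: compute and publish the digest of the released artifact.
released = b"print('hello, world')"
published_digest = hashlib.sha256(released).hexdigest()

# User side: recompute the digest of what was actually downloaded
# before trusting or executing it.
downloaded = b"print('hello, world')\nimport os  # injected by an attacker"
print(hashlib.sha256(downloaded).hexdigest() == published_digest)
# → False: tampering detected
```

    Note that a bare digest gives tamper evidence only; without the asymmetric signature over the digest, there is no proof of who produced the code, which is exactly what distinguishes code signing from a simple checksum.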

  10. When an attacker uses delayed error messages between successful and unsuccessful query probes, he is using which of the following side channel techniques to detect injection vulnerabilities?

    A. Distant observation

    B. Cold boot

    C. Power analysis

    D. Timing

    Answer is D

    Rationale/Answer Explanation:

    Poorly designed and implemented systems are expected to be insecure, but even the most well-designed and well-implemented systems have subtle gaps between their abstract models and their physical realization due to the existence of side channels. A side channel is a potential source of information flow from a physical system to an adversary beyond what is available via the conventional (abstract) model. These include subtle observations of timing, electromagnetic radiation, power usage, analog signals, and acoustic emanations. The use of nonconventional and specialized techniques, along with physical access to the target system, to discover information is characteristic of side channel attacks. The analysis of delayed error messages between successful and unsuccessful queries is a form of timing side channel attack.
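    On the defensive side, one classic timing leak is secret comparison: a naive `==` can return as soon as the first byte differs, so response time reveals how much of a guess was correct. Python's standard `hmac.compare_digest` takes the same time regardless of where the mismatch occurs; the token value here is invented:

```python
import hmac

STORED_TOKEN = "2fd4e1c67a2d28fced849ee1bb76e739"

def token_is_valid(candidate: str) -> bool:
    # compare_digest examines every byte even after a mismatch, denying
    # the attacker a timing side channel on the comparison itself.
    return hmac.compare_digest(candidate, STORED_TOKEN)

print(token_is_valid(STORED_TOKEN))             # → True
print(token_is_valid("0" * len(STORED_TOKEN)))  # → False
```

    The same principle applies to the injection probes in the question: responses that take measurably different times for successful and failed queries leak information even when the message text is identical.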

  11. When the runtime permissions of the code are defined as security attributes in the metadata of the code, it is referred to as

    A. Imperative syntax security

    B. Declarative syntax security

    C. Code signing

    D. Code obfuscation

    Answer is B

    Rationale/Answer Explanation:

    There are two types of security syntax: declarative security and imperative security. Declarative syntax addresses the “what” part of an action, whereas imperative syntax tries to deal with the “how” part. When security requests are made in the form of attributes (in the metadata of the code), it is referred to as declarative security. It does not precisely define the steps as to how the security will be realized. When security requests are made through programming logic within a function or method body, it is referred to as imperative security. Declarative security is an all-or-nothing kind of implementation, while imperative security offers greater levels of granularity and control because the security requests run as lines of code intermixed with the application code.
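    The rationale describes .NET's attribute-based syntax; a rough Python analogue uses a decorator for the declarative, all-or-nothing style and an inline check for the imperative, fine-grained style. The role names and functions are invented for illustration:

```python
import functools

current_roles = {"author"}  # roles of the signed-in user (illustrative)

def requires_role(role):
    """Declarative style: the requirement is metadata attached to the whole
    method, checked before the body runs; it is all or nothing."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if role not in current_roles:
                raise PermissionError(f"requires role: {role}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@requires_role("approver")
def approve_article(article_id):
    return f"article {article_id} approved"

def publish_article(article_id, featured=False):
    """Imperative style: checks are ordinary statements mixed into the body,
    so decisions can depend on runtime conditions."""
    if featured and "editor" not in current_roles:
        raise PermissionError("only editors may feature articles")
    return f"article {article_id} published"
```

    Calling `approve_article` here fails before its body executes, while `publish_article` succeeds or fails depending on its arguments, which is exactly the granularity trade-off the rationale describes.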

  12. When an all-or-nothing approach to code access security is not possible and business rules and permissions need to be set and managed more granularly in inline code functions and modules, a programmer can leverage which of the following?

    A. Cryptographic agility

    B. Parametric polymorphism

    C. Declarative security

    D. Imperative security

    Answer is D

    Rationale/Answer Explanation:

    When security requests are made in the form of attributes, it is referred to as declarative security. It does not precisely define the steps as to how the security will be realized. Declarative syntax actions can be evaluated without running the code because attributes are stored as part of an assembly’s metadata, while the imperative security actions are stored as intermediary language (IL). This means that imperative security actions can be evaluated only when the code is running. Declarative security actions are checks before a method is invoked and are placed at the class level, being applicable to all methods in that class, unlike imperative security. Declarative security is an all-or-nothing kind of implementation, while imperative security offers greater levels of granularity and control, because the security requests run as lines of code intermixed with the application code.

  13. An understanding of which of the following programming concepts is necessary to protect against memory manipulation buffer overflow attacks? Choose the best answer.

    A. Error handling

    B. Exception management

    C. Locality of reference

    D. Generics

    Answer is C

    Rationale/Answer Explanation:

    Computer processors tend to access memory in a very patterned way. For example, in the absence of branching, if memory location X is accessed at time t, there is a high probability that memory location X+1 will also be accessed in the near future. This kind of clustering of memory references into groups is referred to as locality of reference. The basic forms of locality of reference are temporal (based on time), spatial (based on address space), branch (based on conditional branching), and equidistant (somewhere between spatial and branch, using simple linear functions that look for equidistant locations of memory to predict which location will be accessed in the near future). While this is good from a performance vantage point, it can allow an attacker to predict memory address spaces and cause memory corruption and buffer overflows.

  14. Exploit code attempts to take control of dangling pointers that are

    A. References to memory locations of destroyed objects

    B. Nonfunctional code left behind in the source

    C. Payload code that the attacker uploads into memory to execute

    D. References in memory locations used prior to being initialized

    Answer is A

    Rationale/Answer Explanation:

    A dangling pointer, also known as a stray pointer, occurs when a pointer points to an invalid memory address. This is often observed when memory management is left to the developer. Dangling pointers are usually created in one of two ways. An object is destroyed (freed), but the reference to the object is not reassigned and is later used. Or a local object is popped from the stack when the function returns, but a reference to the stack-allocated object is still maintained. Attackers write exploit code to take control of dangling pointers so that they can move the pointer to where their arbitrary shell code is injected.
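    A C dangling pointer cannot be reproduced directly in a managed language, but Python's standard `weakref.proxy` gives a safe analogue: a reference that outlives its object. In C, the equivalent write would silently land in reused memory, which is exactly what exploit code abuses; the class name here is illustrative:

```python
import weakref

class Session:
    pass

session = Session()
handle = weakref.proxy(session)  # a reference that does not keep the object alive

del session  # the object is destroyed; 'handle' now dangles

try:
    handle.user = "alice"  # the use-after-free analogue
except ReferenceError as exc:
    print("dangling reference trapped:", exc)
```

    A managed runtime traps the stale access; in a language with manual memory management, the write would succeed against whatever now occupies that address, allowing an attacker to redirect execution to injected shellcode.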

  15. Which of the following is a feature of the most recent operating systems (OS) that makes it difficult for an attacker to guess the memory address of the program by making the memory address different each time the program is executed?

    A. Data execution prevention (DEP)

    B. Executable space protection (ESP)

    C. Address space layout randomization (ASLR)

    D. Safe security exception handler (/SAFESEH)

    Answer is C

    Rationale/Answer Explanation:

    In the past, the memory manager would try to load binaries at the same location in the linear address space each time the program was run. This behavior made it easier for shell coders by ensuring that certain modules of code would always reside at a fixed address and could be referenced in exploit code using raw numeric literals. Address space layout randomization (ASLR) is a feature in newer operating systems (introduced in Windows Vista) that deals with this predictable and direct referencing issue. ASLR makes the binary load in a random address space each time the program is run.

  16. When the source code is made obscure using special programs in order to make the readability of the code difficult when disclosed, the code is also known as

    A. Object code

    B. Obfuscated code

    C. Encrypted code

    D. Hashed code

    Answer is B

    Rationale/Answer Explanation:

    Reverse engineering is used to infer how a program works by inspecting it. Code obfuscation, which makes the readability of code extremely difficult and confusing, can be used to deter (not prevent) reverse engineering attacks. Obfuscating code is not detective or corrective in its implementation.
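    A toy illustration of the "deter, not prevent" point: a base64 pass makes the source unreadable at a glance, yet the machine runs it unchanged and a determined reverser can trivially decode it. Real obfuscators rename identifiers and transform control flow, but the principle is the same:

```python
import base64

clear_source = "def add(a, b):\n    return a + b"

# The 'obfuscated' form is opaque to a casual reader...
obfuscated = base64.b64encode(clear_source.encode()).decode()
print(obfuscated[:20], "...")

# ...but it executes exactly as before, and decoding reverses it.
namespace = {}
exec(base64.b64decode(obfuscated).decode(), namespace)
print(namespace["add"](2, 3))  # → 5
```

    This is why obfuscation is classified as a deterrent control: it raises the cost of reverse engineering without making it impossible.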

  17. The ability to track ownership, changes in code, and rollback abilities is possible because of which of the following configuration management processes?

    A. Version control

    B. Patching

    C. Audit logging

    D. Change control

    Answer is A

    Rationale/Answer Explanation:

    The ability to track ownership, changes in code, and rollback abilities is possible because of versioning, which is a configuration management process. Release management of software should include proper source code control and versioning. A phenomenon known as “regenerative bugs” is often observed when it comes to improper release management processes. Regenerative bugs are fixed software defects that reappear in subsequent releases of the software. This happens when the software coding defect (bug) is detected in the testing environment (such as user acceptance testing), and the fix is made in that test environment and promoted to production without retrofitting it into the development environment. The latest version in the development environment does not have the fix, and the issue reappears in subsequent versions of the software.

  18. The main benefit of statically analyzing code is that

    A. Runtime behavior of code can be analyzed

    B. Business logic flaws are more easily detectable

    C. Analysis is performed in a production or production-like environment

    D. Errors and vulnerabilities can be detected earlier in the life cycle

    Answer is D

    Rationale/Answer Explanation:

    The one thing that is common in all software is source code, and this source code needs to be reviewed from a security perspective to ensure that security vulnerabilities are detected and addressed before the software is released into the production environment or to customers. Code review is the process of systematically analyzing the code for insecure and inefficient coding issues. In addition to static analysis, which reviews code before it goes live, there are also dynamic analysis tools, which conduct automated scans of applications in production to unearth vulnerabilities. In other words, dynamic tools test from the outside in, while static tools test from the inside out. Just because the code compiles without any errors, it does not necessarily mean that it will run without errors at runtime. Dynamic tests are useful to get a quick assessment of the security of the applications. This also comes in handy when source code is not available for review.
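    A miniature static analyzer can be sketched with Python's standard `ast` module: it flags a dangerous call by walking the parsed source, without ever executing it. The banned-call list is illustrative:

```python
import ast

SOURCE = """
user_input = input()
eval(user_input)        # dangerous: executes attacker-controlled input
"""

def find_dangerous_calls(source, banned=("eval", "exec")):
    """Walk the parsed syntax tree and report banned calls with line numbers."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in banned):
            findings.append((node.lineno, node.func.id))
    return findings

print(find_dangerous_calls(SOURCE))  # → [(3, 'eval')]
```

    Because the code is never run, the flaw is caught at review time, well before the software reaches a production environment, which is the life-cycle benefit the answer highlights.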

  19. Cryptographic protection includes all of the following except

    A. Encryption of data when they are processed

    B. Hashing of data when they are stored

    C. Hiding of data within other media objects when they are transmitted

    D. Masking of data when they are displayed

    Answer is D

    Rationale/Answer Explanation:

    Masking does not use any overt cryptography operations, such as encryption, decryption, or hashing, or covert operations, such as data hiding, as in the case of steganography, to provide disclosure protection.
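    The distinction can be made concrete with a short sketch: masking only limits what is displayed, with no cryptographic operation involved. The `mask_pan` helper below is a hypothetical example.

```python
def mask_pan(pan: str) -> str:
    """Mask all but the last four digits of a card number for display.

    No encryption, hashing, or data hiding occurs; the stored value is
    unchanged and only its on-screen representation is altered.
    """
    return "*" * (len(pan) - 4) + pan[-4:]

print(mask_pan("4111111111111111"))  # ************1111
```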

  20. Assembly and machine language are examples of

    A. Natural language

    B. Very high-level language (VHLL)

    C. High-level language (HLL)

    D. Low-level language

    Answer is D

    Rationale/Answer Explanation:

    A programming language in which there is little to no abstraction from the native instruction codes that the computer can understand is referred to as a low-level language. There is no abstraction from native instruction codes in machine language. Assembly languages are the lowest level in the software chain, which makes them well suited to reverse engineering. It is therefore important to understand low-level programming languages in order to understand how an attacker will attempt to circumvent the security of the application at its lowest level.

Chapter 5—Secure Software Testing Questions

  1. The ability of the software to restore itself to expected functionality when the built-in security protection is breached is also known as

    A. Redundancy

    B. Recoverability

    C. Resiliency

    D. Reliability

    Answer is B

    Rationale/Answer Explanation:

    When the software performs as expected, it is said to be reliable. When errors occur, the reliability of the software is impacted, and the software needs to be able to restore itself to expected operations. The ability of the software to be restored to normal, expected operations is referred to as recoverability. The ability of the software to withstand attacks against its reliability is referred to as resiliency. Redundancy is about availability.

  2. In which of the following software development methodologies does unit testing enable collective code ownership and is critical to assure software assurance?

    A. Waterfall

    B. Agile

    C. Spiral

    D. Prototyping

    Answer is B

    Rationale/Answer Explanation:

    Unit testing enables collective code ownership. Collective code ownership encourages everyone to contribute new ideas to all segments of the project. Any developer can change any line of code to add functionality, fix bugs, or refactor. No one person becomes a bottleneck for changes. The way this works is for developers to work in concert (more commonly in agile methodologies than in the traditional model) to create unit tests for their code as it is developed. All code released into the source code repository includes unit tests. New code, bug fixes, and changes to old functionality are all covered by automated testing.

  3. The use of if-then rules is characteristic of which of the following types of software testing?

    A. Logic

    B. Scalability

    C. Integration

    D. Unit

    Answer is A

    Rationale/Answer Explanation:

    If-then rules are constructs of logic, and when these constructs are used for software testing, it is generally referred to as logic testing.

  4. The implementation of secure features, such as complete mediation and data replication, needs to undergo which of the following types of tests to ensure that the software meets the service level agreements (SLA)?

    A. Stress

    B. Unit

    C. Integration

    D. Regression

    Answer is A

    Rationale/Answer Explanation:

    Tests that assure that service level requirements are met are characteristic of performance testing. Load and stress testing are types of performance tests. Stress testing is done by starving the software of the resources it needs, while load testing is done by subjecting the software to extreme volumes or loads.

  5. Tests conducted to determine the breaking point of the software after which the software will no longer be functional are characteristic of which of the following types of software testing?

    A. Regression

    B. Stress

    C. Integration

    D. Simulation

    Answer is B

    Rationale/Answer Explanation:

    The goal of stress testing is to determine if the software will continue to operate reliably under duress or extreme conditions. Often the resources that the software needs are taken away from the software, and the software’s behavior is observed as part of the stress test.

  6. Which of the following tools or techniques can be used to facilitate the white box testing of software for insider threats?

    A. Source code analyzers

    B. Fuzzers

    C. Banner grabbing software

    D. Scanners

    Answer is A

    Rationale/Answer Explanation:

    White box testing, or structural analysis, is about testing the software with prior knowledge of the code and configuration. Source code review is a type of white box testing. Embedded code issues that are implanted by insiders, such as Trojan horses and logic bombs, can be detected using source code analyzers.

  7. When very limited or no knowledge of the software is made known to the software tester before she can test for its resiliency, it is characteristic of which of the following types of security tests?

    A. White box

    B. Black box

    C. Clear box

    D. Glass box

    Answer is B

    Rationale/Answer Explanation:

    In black box or behavioral testing, test conditions are developed on the basis of the program’s or system’s functionality; that is, the tester requires information about the input data and observed output, but does not know how the program or system works. The tester focuses on testing the program’s behavior (or functionality) against the specification. With black box testing, the tester views the program as a black box and is completely unconcerned with the internal structure of the program or system. In white box or structural testing, the tester knows the internal program structure, such as paths, statement coverage, branching, and logic. White box testing is also referred to as clear box or glass box testing. Gray box testing is a software testing technique that uses a combination of black box and white box testing.

  8. Penetration testing must be conducted with properly defined

    A. Rules of engagement

    B. Role-based access control mechanisms

    C. Threat models

    D. Use cases

    Answer is A

    Rationale/Answer Explanation:

    Penetration testing must be controlled, not ad hoc in nature, with properly defined rules of engagement.

  9. Testing for the randomness of session identifiers and the presence of auditing capabilities provides the software team insight into which of the following security controls?

    A. Availability

    B. Authentication

    C. Nonrepudiation

    D. Authorization

    Answer is C

    Rationale/Answer Explanation:

    When session management is in place, it provides for authentication, and when authentication is combined with auditing capabilities, it provides nonrepudiation. In other words, an authenticated user cannot claim broken sessions or intercepted authentication to deny their actions, because the audit logs record those actions.
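    A tester might screen session identifiers for obvious randomness defects with a quick check like the sketch below (hypothetical helper, using Python’s `secrets` module as the generator). Passing such a check does not prove the identifiers are unpredictable; it only rules out gross defects such as collisions or truncation.

```python
import secrets

def new_session_id() -> str:
    # 16 random bytes from a cryptographically strong source -> 32 hex chars
    return secrets.token_hex(16)

# Crude screening checks: no collisions across many IDs, and full length.
ids = {new_session_id() for _ in range(10_000)}
assert len(ids) == 10_000
assert all(len(i) == 32 for i in ids)
print("session ID screening passed")
```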

  10. Disassemblers, debuggers, and decompilers can be used by security testers primarily to determine which of the following types of coding vulnerabilities?

    A. Injection flaws

    B. Lack of reverse engineering protection

    C. Cross-site scripting

    D. Broken session management

    Answer is B

    Rationale/Answer Explanation:

    Disassemblers, debuggers, and decompilers are utilities that can be used for reverse engineering software, and software testers should have these utilities in their list of tools to validate protection against reversing.

  11. When reporting a security defect in the software, which of the following also needs to be reported so that variance from the intended behavior of the software can be determined?

    A. Defect identifier

    B. Title

    C. Expected results

    D. Tester name

    Answer is C

    Rationale/Answer Explanation:

    Knowledge of the expected results along with the defect information can be used to determine the variance between what the results need to be and what is deficient.

  12. An attacker analyzes the response from the Web server, which indicates that its version is Microsoft Internet Information Server 6.0 (Microsoft-IIS/6.0), but none of the IIS exploits that the attacker attempts to execute on the Web server is successful. Which of the following is the most probable security control that is implemented?

    A. Hashing

    B. Cloaking

    C. Masking

    D. Watermarking

    Answer is B

    Rationale/Answer Explanation:

    Detection of Web server versions is usually done by analyzing HTTP responses. This process is known as banner grabbing. But the administrator can change the information that gets reported, and this process is known as cloaking. Banner cloaking is a security through obscurity approach to protect against version enumeration.
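    The parsing side of banner grabbing can be sketched as below: reading the Server header out of a raw HTTP response. The `server_banner` helper and the sample responses are illustrative only; a real test would grab the banner over a socket.

```python
def server_banner(raw_response: str) -> str:
    """Extract the Server header from a raw HTTP response, if disclosed."""
    for line in raw_response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return "(not disclosed)"

verbose = "HTTP/1.1 200 OK\r\nServer: Microsoft-IIS/6.0\r\nContent-Length: 0\r\n\r\n"
cloaked = "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
print(server_banner(verbose))   # Microsoft-IIS/6.0
print(server_banner(cloaked))   # (not disclosed)
```

    A cloaked server would either omit the header or report a misleading value; either way, the attacker’s version enumeration fails.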

  13. Smart fuzzing is characterized by injecting

    A. Truly random data without any consideration for the data structure

    B. Variations of data structures that are known

    C. Data that get interpreted as commands by a backend interpreter

    D. Scripts that are reflected and executed on the client browser

    Answer is B

    Rationale/Answer Explanation:

    The process of sending random data to test security of an application is referred to as “fuzzing” or “fuzz testing.” There are two levels of fuzzing: dumb fuzzing and smart fuzzing. Sending truly random data, known as dumb fuzzing, often does not yield great results and has the potential of bringing the software down, causing a denial of service (DoS). If the code being fuzzed requires data to be in a certain format but the fuzzer does not create data in that format, most of the fuzzed data will be rejected by the application. The more knowledge the fuzzer has of the data format, the more intelligent it can be at creating data. These more intelligent fuzzers are known as smart fuzzers.
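    The distinction can be sketched as follows: a dumb fuzzer emits bytes with no notion of format, while a smart fuzzer mutates fields of a known message layout so that its output survives initial parsing. The template and boundary values below are illustrative assumptions, not any particular fuzzer’s behavior.

```python
import random

def dumb_fuzz(length: int = 32) -> bytes:
    """Truly random bytes with no knowledge of the expected data format."""
    return bytes(random.randrange(256) for _ in range(length))

def smart_fuzz(template: str = "id={id}&qty={qty}") -> str:
    """Mutate fields of a known message format so the input passes
    the application's initial format checks and reaches deeper code."""
    boundary_values = ["0", "-1", "2147483648", "A" * 1024, "%00"]
    return template.format(id=random.choice(boundary_values),
                           qty=random.choice(boundary_values))

print(dumb_fuzz(8))
print(smart_fuzz())
```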

  14. Which of the following is the most important to ensure when the software is forced to fail as part of security testing? Choose the best answer.

    A. Normal operational functionality is not restored automatically

    B. Access to all functionality is denied

    C. Confidentiality, integrity, and availability are not adversely impacted

    D. End users are adequately trained and self-help is made available for the end user to fix the error on their own

    Answer is C

    Rationale/Answer Explanation:

    As part of security testing, the principle of failsafe must be assured. This means that confidentiality, integrity, and availability are not adversely impacted when the software fails. As part of general software testing, the recoverability of the software, or restoration of the software to normal operational functionality, is an important consideration, but it need not always be an automated process.

  15. Timing and synchronization issues, such as race conditions and resource deadlocks, can most likely be identified by which of the following tests? Choose the best answer.

    A. Integration

    B. Stress

    C. Unit

    D. Regression

    Answer is B

    Rationale/Answer Explanation:

    Race conditions and resource exhaustion issues are more likely to be identified when the software is starved of the resources that it expects, as is done during stress testing.

  16. The primary objective of resiliency testing of software is to determine

    A. The point at which the software will break

    B. If the software can restore itself to normal business operations

    C. The presence and effectiveness of risk mitigation controls

    D. How a blackhat would circumvent access control mechanisms

    Answer is C

    Rationale/Answer Explanation:

    Security testing must include both external (blackhat) and insider threat analysis, and it should be more than just testing for the ability to circumvent access control mechanisms. The resiliency of software is the ability of the software to be able to withstand attacks. The presence and effectiveness of risk mitigation controls increase the resiliency of the software.

  17. The ability of the software to withstand attempts of attackers who intend to breach the built-in security protection is also known as

    A. Redundancy

    B. Recoverability

    C. Resiliency

    D. Reliability

    Answer is C

    Rationale/Answer Explanation:

    Resiliency of software is defined as the ability of the software to withstand attacker attempts.

  18. Drivers and stub-based programming are useful to conduct which of the following tests?

    A. Integration

    B. Regression

    C. Unit

    D. Penetration

    Answer is C

    Rationale/Answer Explanation:

    In order for unit testing to be thorough, the unit/module and the environment for the execution of the module need to be complete. The necessary environment includes the modules that either call or are called by the unit of code being tested. Stubs and drivers are designed to provide the complete environment for a module so that unit testing can be carried out. A stub procedure is a dummy procedure that has the same input/output (I/O) parameters as the given procedure. A driver module should have the code to call the different functions of the module being tested with appropriate parameter values for testing. In layman’s terms, the driver module is akin to the caller, and the stub module can be seen as the callee.
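    The caller/callee relationship described above can be sketched with hypothetical names: `stub_price_lookup` stands in for a real pricing service (same I/O contract), and `test_driver` exercises the unit with chosen parameter values.

```python
# Unit under test: computes an order total using a price lookup it is handed.
def order_total(item_ids, price_lookup):
    return sum(price_lookup(i) for i in item_ids)

# Stub: a dummy callee with the same I/O parameters as the real service.
def stub_price_lookup(item_id):
    return {101: 5.0, 102: 7.5}[item_id]

# Driver: calls the unit with appropriate parameter values and checks results.
def test_driver():
    assert order_total([101, 102], stub_price_lookup) == 12.5
    assert order_total([], stub_price_lookup) == 0
    print("unit test passed")

test_driver()
```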

  19. Assurance that the software meets the expectations of the business as defined in the service level agreements (SLAs) can be demonstrated by which of the following types of tests?

    A. Unit

    B. Integration

    C. Performance

    D. Regression

    Answer is C

    Rationale/Answer Explanation:

    Assurance that the software meets the expectations of the business as defined in the service level agreements (SLAs) can be demonstrated by performance testing. Once the importance of the performance of an application is known, it is necessary to understand how various factors affect the performance. Security features can have an impact on performance, and this must be checked to ensure that service level requirements can be met.

  20. Vulnerability scans are used to

    A. Measure the resiliency of the software by attempting to exploit weaknesses

    B. Detect the presence of loopholes and weaknesses in the software

    C. Detect the effectiveness of security controls that are implemented in the software

    D. Measure the skills and technical know-how of the security tester

    Answer is B

    Rationale/Answer Explanation:

    A vulnerability is a weakness (or loophole), and vulnerability scans are used to detect the presence of weaknesses in software.

Chapter 6—Software Acceptance Questions

  1. Your organization has the policy to attest the security of any software that will be deployed into the production environment. A third party vendor software is being evaluated for its readiness to be deployed. Which of the following verification and validation mechanisms can be employed to attest the security of the vendor’s software?

    A. Source code review

    B. Threat modeling the software

    C. Black box testing

    D. Structural analysis

    Answer is C

    Rationale/Answer Explanation:

    Since third party vendor software is often received in object code form, access to source code is usually not provided, and structural analysis (white box) or source code analysis is not possible. Inspecting the source code, or a source code look-alike produced by reverse engineering, without explicit permission can have legal ramifications. Additionally, without documentation on the architecture and software makeup, a threat modeling exercise would most likely be incomplete. License validation is primarily used for curtailing piracy and is a component of verification and validation mechanisms. Black box testing, or behavioral analysis, would be the best option to attest the security of third party vendor software.

  2. When procuring commercial off-the-shelf (COTS) software for release within your global organization, special attention must be given to multilingual and multicultural capabilities of the software since they are more likely to have

    A. Compilation errors

    B. Canonicalization issues

    C. Cyclomatic complexity

    D. Coding errors

    Answer is B

    Rationale/Answer Explanation:

    The process of canonicalization resolves multiple forms into standard canonical forms. In software that needs to support multilingual, multicultural capabilities such as Unicode, input filtration can be bypassed by a hacker who sends in data in an alternate form from the standard form. Input validation for alternate forms is therefore necessary.
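    The bypass can be sketched in a few lines: a naive filter misses a fullwidth-character variant of a path traversal sequence, but normalizing the input to a canonical form (NFKC here) before validation folds it back to the standard form. The `is_blocked` filter is a deliberately naive, hypothetical example.

```python
import unicodedata

def is_blocked(path: str) -> bool:
    """Naive filter that rejects obvious traversal sequences."""
    return ".." in path

# A fullwidth-dot variant slips past the naive check...
evil = "\uFF0E\uFF0E/etc/passwd"   # U+FF0E is a fullwidth full stop
assert not is_blocked(evil)

# ...but canonicalizing to NFKC first folds it back to the standard form.
canonical = unicodedata.normalize("NFKC", evil)
assert canonical == "../etc/passwd"
assert is_blocked(canonical)
print("bypassed before normalization, blocked after NFKC")
```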

  3. To meet the goals of software assurance, when accepting software from a vendor, the software acquisition phase must include processes to

    A. Verify that installation guides and training manuals are provided

    B. Assess the presence and effectiveness of protection mechanisms

    C. Validate vendors’ software products

    D. Assist the vendor in responding to the request for proposals

    Answer is B

    Rationale/Answer Explanation:

    To maintain the confidentiality, integrity, and availability of software and the data it processes, prior to the acceptance of software, vendor claims of security must be assessed not only for their presence, but also their effectiveness within your computing ecosystem.

  4. Your organization’s software is published as a trial version without any restricted functionality from the paid version. Which of the following must be designed and implemented to ensure that customers who have not purchased the software are limited in the availability of the software?

    A. Disclaimers

    B. Licensing

    C. Validity periods

    D. Encryption

    Answer is C

    Rationale/Answer Explanation:

    Software functionality can be restricted using a validity period, as is often observed in the “try-before-you-buy” or “demo” versions of software. It is recommended to have a stripped-down version of the software for the demo version, and, if feasible, it is advisable to involve the legal team in determining the duration of the validity period (especially in the context of digital signatures and Public Key Infrastructure solutions).

  5. Software escrowing is more closely related to which of the following risk-handling strategies?

    A. Avoidance

    B. Mitigation

    C. Acceptance

    D. Transference

    Answer is D

    Rationale/Answer Explanation:

    Since there is an independent third party engaged in an escrow agreement, business continuity is assured for the acquirer when the escrow agency maintains a copy of the source/object code from the publisher. For the publisher, it protects the intellectual property since the source code is not handed to the acquirer directly, but to the independent third escrow party. For both the acquirer and the publishers, some risk is transferred to the escrow party, who is responsible for maintaining the terms of the escrow agreement.

  6. Which of the following legal instruments assures the confidentiality of software programs, processing logic, database schema, and internal organizational business processes and client lists?

    A. Noncompete agreements

    B. Nondisclosure agreements (NDA)

    C. Service level agreements (SLA)

    D. Trademarks

    Answer is B

    Rationale/Answer Explanation:

    Nondisclosure agreements assure confidentiality of sensitive information, such as software programs, processing logic, database schema, and internal organizational business processes and client lists.

  7. “As is” clauses and disclaimers transfer the risk of using the software from the software publisher to the

    A. Developers

    B. End users

    C. Testers

    D. Business owners

    Answer is B

    Rationale/Answer Explanation:

    Disclaimers, or “as is” clauses, transfer the risk from the software provider to the end user.

  8. Improper implementation of validity periods using length-of-use checks in code can result in which of the following types of security issues for legitimate users?

    A. Tampering

    B. Denial of service

    C. Authentication bypass

    D. Spoofing

    Answer is B

    Rationale/Answer Explanation:

    If the validity period set in the software is not properly implemented, then legitimate users can potentially be denied service. It is therefore imperative to ensure that the duration and checking mechanism of validity periods is properly implemented.

  9. The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of the phase is referred to as

    A. Verification

    B. Validation

    C. Authentication

    D. Authorization

    Answer is A

    Rationale/Answer Explanation:

    Verification is defined as the process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of the phase. In other words, verification ensures that the software performs as it is required and designed to do. Validation is the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. In other words, validation ensures that the software meets required specifications.

  10. When verification activities are used to determine if the software is functioning as expected, it provides insight into which of the following aspects of software assurance?

    A. Redundancy

    B. Reliability

    C. Resiliency

    D. Recoverability

    Answer is B

    Rationale/Answer Explanation:

    Verification ensures that the software performs as it is required and designed to do, which is a measure of the software’s reliability.

  11. When procuring software, the purchasing company can request the evaluation assurance levels (EALs) of the software product, which are determined using which of the following evaluation methodologies?

    A. Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE®)

    B. Security quality requirements engineering (SQUARE)

    C. Common criteria

    D. Comprehensive, lightweight application security process (CLASP)

    Answer is C

    Rationale/Answer Explanation:

    The Common Criteria (ISO 15408) is a security product evaluation methodology with clearly defined ratings, such as evaluation assurance levels (EALs). In addition to assurance validation, the Common Criteria also validates software functionality for the security target. EAL ratings assure the owner of the assurance capability of the software/system, so the Common Criteria is also referred to as an owner assurance model.

  12. The final activity in the software acceptance process is the go/no go decision that can be determined using

    A. Regression testing

    B. Integration testing

    C. Unit testing

    D. User acceptance testing

    Answer is D

    Rationale/Answer Explanation:

    The end users of the business have the final say on whether the software can be deployed/released. User acceptance testing (UAT) determines the readiness of the software for deployment to the production environment or release to an external customer.

  13. Management’s formal acceptance of the system after an understanding of the residual risks to that system in the computing environment is also referred to as

    A. Patching

    B. Hardening

    C. Certification

    D. Accreditation

    Answer is D

    Rationale/Answer Explanation:

    While certification is the assessment of the technical and nontechnical security controls of the software, accreditation is a management activity that assures that the software has adequate levels of software assurance protection mechanisms.

  14. You determine that legacy software running in your computing environment is susceptible to cross site request forgery (CSRF) attacks because of the way it manages sessions. The business has the need to continue use of this software, but you do not have the source code available to implement security controls in code as a mitigation measure against CSRF attacks. What is the best course of action to undertake in such a situation?

    A. Avoid the risk by forcing the business to discontinue use of the software

    B. Accept the risk with a documented exception

    C. Transfer the risk by buying insurance

    D. Ignore the risk since it is legacy software

    Answer is B

    Rationale/Answer Explanation:

    When there are known vulnerabilities in legacy software and there is not much you can do to mitigate the vulnerabilities, it is recommended that the business accept the risk with a documented exception to the security policy. When accepting this risk, the exception to policy process must ensure that there is a contingency plan in place to address the risk by either replacing the software with a new version or discontinuing its use (risk avoidance). Transferring the risk may not be a viable option for legacy software that is already in your production environment, and one must never ignore the risk or take the vulnerable software out of the scope of an external audit.

  15. As part of the accreditation process, the residual risk of software evaluated for deployment must be accepted formally by the

    A. Board members and executive management

    B. Business owner

    C. Information technology (IT) management

    D. Security organization

    E. Developers

    Answer is B

    Rationale/Answer Explanation:

    Risk must always be accepted formally by the business owner.

Chapter 7—Software Deployment, Operations, Maintenance, and Disposal Questions

  1. When software that worked without any issues in the test environments fails to work in the production environment, it is indicative of

    A. Inadequate integration testing

    B. Incompatible environment configurations

    C. Incomplete threat modeling

    D. Ignored code review

    Answer is B

    Rationale/Answer Explanation:

    When the production environment does not mirror the development or test environments, software that works fine in nonproduction environments is observed to experience issues when deployed into the production environment. This underlines the need for simulation testing.

  2. Good security metrics are characterized by all of the following except that they are

    A. Quantitatively expressed

    B. Objectively expressed

    C. Contextually relevant

    D. Collected manually

    Answer is D

    Rationale/Answer Explanation:

    A good security metric is expressed quantitatively and is contextually relevant. Regardless of how many times the metrics are collected, the results should not vary significantly. Good metrics are usually collected in an automated manner so that the collector’s subjectivity does not come into play.

  3. Removal of maintenance hooks, debugging code and flags, and unneeded documentation before deployment are all examples of software

    A. Hardening

    B. Patching

    C. Reversing

    D. Obfuscation

    Answer is A

    Rationale/Answer Explanation:

    Locking down the software by removing unneeded code and documentation to reduce the attack surface of the software is referred to as software hardening. Before hardening the software, it is crucial to harden the operating system of the host on which the software program will be run.

  4. Which of the following has the goal of ensuring that the resiliency levels of software are always above the acceptable risk threshold as defined by the business postdeployment?

    A. Threat modeling

    B. Code review

    C. Continuous monitoring

    D. Regression testing

    Answer is C

    Rationale/Answer Explanation:

    Operations security is about staying secure or keeping the resiliency levels of the software above the acceptable risk levels. It is the assurance that the software will continue to function as expected in a reliable fashion for the business without compromising its state of security by monitoring, managing, and applying the needed controls to protect resources (assets).

  5. Audit logging application events, such as failed login attempts, sales price updates, and user role configuration, are examples of which of the following types of security control?

    A. Preventive

    B. Corrective

    C. Compensating

    D. Detective

    Answer is D

    Rationale/Answer Explanation:

    Audit logging is a type of detective control. When the users are made aware that their activities are logged, audit logging could function as a deterrent control, but it is primarily used for detective purposes. Audit logs can be used to build the sequence of historical events and give insight into who (subject such as user/process) did what (action), where (object), and when (timestamp).
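    An audit record sketch capturing exactly those four elements (who, what, where, when); the field names and the `audit_event` helper are illustrative, not a standard schema.

```python
import json
from datetime import datetime, timezone

def audit_event(subject: str, action: str, obj: str) -> str:
    """Emit one structured audit entry: who did what, where, and when."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "who": subject,     # subject (user or process)
        "what": action,     # action taken
        "where": obj,       # object acted upon
    }
    return json.dumps(entry)

print(audit_event("jdoe", "UPDATE_SALES_PRICE", "pricing_table"))
```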

  6. When a compensating control is to be used, the payment card industry data security standard (PCI DSS) prescribes that the compensating control must meet all of the following guidelines except

    A. Meet the intent and rigor of the original requirement

    B. Provide an increased level of defense over the original requirement

    C. Be implemented as part of a defense in depth measure

    D. Commensurate with additional risk imposed by not adhering to the requirement

    Answer is B

    Rationale/Answer Explanation:

    PCI DSS prescribes that the compensating control must provide a similar level, not an increased level of defense over the original requirement.

  7. Software deployed in a high-trust environment, such as the environment within the organizational firewall, when not continuously monitored is most susceptible to which of the following types of security attacks? Choose the best answer.

    A. Distributed denial of service (DDoS)

    B. Malware

    C. Logic bombs

    D. DNS poisoning

    Answer is C

    Rationale/Answer Explanation:

    Logic bombs can be planted by an insider, and when the internal network is not monitored, the likelihood of this is much higher.

  8. Bastion host systems can be used to monitor the security of the computing environment continuously when it is used in conjunction with intrusion detection systems (IDS) and which other security control?

    A. Authentication

    B. Authorization

    C. Archiving

    D. Auditing

    Answer is D

    Rationale/Answer Explanation:

    IDS and auditing are both detective types of controls that can be used to monitor the security health of the computing environment continuously.

  9. The first step in the incident response process of a reported breach is to

    A. Notify management of the security breach

    B. Research the validity of the alert or event further

    C. Inform potentially affected customers of a potential breach

    D. Conduct an independent third party evaluation to investigate the reported breach

    Answer is B

    Rationale/Answer Explanation:

    Upon the report of a breach, it is important to go into a triaging phase in which the validity and severity of the alert/event is investigated further. This reduces the number of false positives that are reported to management.

  10. Which of the following is the best recommendation to champion security objectives within the software development organization?

    A. Informing the developers that they could lose their jobs if their software is breached

    B. Informing management that the organizational software could be hacked

    C. Informing the project team about the recent breach of the competitor’s software

    D. Informing the development team that there should be no injection flaws in the payroll application

    Answer is D

    Rationale/Answer Explanation:

    Using security metrics over fear, uncertainty, and doubt (FUD) is the best recommendation to champion security objectives within the software development organization.

  11. Which of the following independent processes provides insight into the presence and effectiveness of security and privacy controls and is used to determine the organization’s compliance with the regulatory and governance (policy) requirements?

    A. Penetration testing

    B. Audits

    C. Threat modeling

    D. Code review

    Answer is B

    Rationale/Answer Explanation:

    Periodic audits (both internal and external) can be used to assess the overall state of the organization’s security health.

  12. The process of using regular expressions to parse audit logs into information that indicates security incidents is referred to as

    A. Correlation

    B. Normalization

    C. Collection

    D. Visualization

    Answer is B

    Rationale/Answer Explanation:

    Normalizing logs means that, after the time is synchronized for each log set, duplicate and redundant information is removed and the logs are parsed into a uniform format; patterns in the normalized data are then identified in the correlation phase.
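    The normalization step described above can be sketched in a few lines of Python. This is a hypothetical illustration: the log line format, field names, and event codes are invented for the example, not taken from any real product.

    ```python
    import re

    # Hypothetical sketch: normalize raw log lines into uniform records
    # (and drop duplicates) so a later correlation phase can match
    # patterns across log sources. The line format is illustrative.
    LOG_PATTERN = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
        r"(?P<host>\S+)\s+(?P<event>LOGIN_FAIL|LOGIN_OK)\s+user=(?P<user>\S+)"
    )

    def normalize(lines):
        """Parse raw lines, remove duplicates, and return uniform records."""
        seen = set()
        records = []
        for line in lines:
            match = LOG_PATTERN.search(line)
            if not match:
                continue  # unparseable noise is discarded
            record = (match["ts"], match["host"], match["event"], match["user"])
            if record in seen:  # duplicate/redundant entry removed
                continue
            seen.add(record)
            records.append(record)
        return records

    raw = [
        "2024-01-05T10:00:01 web01 LOGIN_FAIL user=alice",
        "2024-01-05T10:00:01 web01 LOGIN_FAIL user=alice",  # duplicate
        "2024-01-05T10:00:07 web01 LOGIN_OK user=bob",
    ]
    print(normalize(raw))
    ```

    A correlation engine would then scan the normalized records for patterns such as repeated LOGIN_FAIL events from the same host within a short window.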

  13. The final stage of the incident management process is

    A. Detection

    B. Containment

    C. Eradication

    D. Recovery

    Answer is D

    Rationale/Answer Explanation:

    The incident response process involves preparation, detection, analysis, containment, eradication, and recovery. The goal of incident management is to restore (recover) service to normal business operations.

  14. Problem management aims to improve the value of information technology to the business because it improves service by

    A. Restoring service to the expectation of the business user

    B. Determining the alerts and events that need to be continuously monitored

    C. Depicting incident information in an easy-to-understand, user-friendly format

    D. Identifying and eliminating the root cause of the problem

    Answer is D

    Rationale/Answer Explanation:

    The goal of problem management is to identify and eliminate the root cause of the problem. All of the other definitions are related to incident management. The goal of incident management is to restore service, while the goal of problem management is to improve service.

  15. The process of releasing software to fix a recently reported vulnerability without introducing any new features or changing hardware configuration is referred to as

    A. Versioning

    B. Hardening

    C. Patching

    D. Porting

    Answer is C

    Rationale/Answer Explanation:

    Patching is the process of applying updates and hot fixes. Porting is the process of adapting software so that an executable program can be created for a computing environment that is different from the one for which it was originally designed (e.g., different processor architecture, operating system, or third party software library).

  16. Fishbone diagramming is a mechanism that is primarily used for which of the following processes?

    A. Threat modeling

    B. Requirements analysis

    C. Network deployment

    D. Root cause analysis

    Answer is D

    Rationale/Answer Explanation:

    Ishikawa diagrams or fishbone diagrams are used to identify the cause and effect of a problem and are commonly used to determine the root cause of the problem.

  17. As a means to assure the availability of the existing software functionality after the application of a patch, the patch needs to be tested for

    A. Proper functioning of new features

    B. Cryptographic agility

    C. Backward compatibility

    D. Enabling of previously disabled services

    Answer is C

    Rationale/Answer Explanation:

    Regression testing of patches is crucial to ensure that the patch introduces no new side effects and that all previously available functionality still works as expected.
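    A minimal regression test for backward compatibility can be sketched as follows. The discount() function and its rules are hypothetical stand-ins for any patched routine; the idea is that baseline outputs recorded before the patch must still hold afterward.

    ```python
    # Hypothetical sketch: lock in pre-patch behavior so a patched
    # function can be verified for backward compatibility.

    def discount(total):  # the patched function under test (illustrative)
        if total >= 100:
            return total * 0.9
        return total

    def test_existing_behavior_unchanged():
        # Baseline expectations recorded before the patch was applied.
        baseline = {50: 50, 100: 90.0, 200: 180.0}
        for amount, expected in baseline.items():
            assert discount(amount) == expected, f"regression at input {amount}"

    test_existing_behavior_unchanged()
    print("regression suite passed")
    ```

    If the patch changed any of the baseline outputs, the assertion would pinpoint the regressed input before the patch reaches production.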

  18. Which of the following policies needs to be established to dispose of software and associated data and documents securely?

    A. End-of-life

    B. Vulnerability management

    C. Privacy

    D. Data classification

    Answer is A

    Rationale/Answer Explanation:

    End-of-life (EOL) policies are used for the disposal of code, configuration, and documents based on organizational and regulatory requirements.

  19. Discontinuance of software with known vulnerabilities and replacement with a newer version is an example of risk

    A. Mitigation

    B. Transference

    C. Acceptance

    D. Avoidance

    Answer is D

    Rationale/Answer Explanation:

    When software with known vulnerabilities is replaced with a secure version, it is an example of avoiding the risk. It is not transference because the new version may not have the same risks. It is not mitigation since no controls are implemented to address the risk of the old software. It is not acceptance since the risk of the old software is replaced with the risk of the newer version. It is not ignorance, because the risk is not left unhandled.

  20. Printer ribbons, facsimile transmissions, and printed information not securely disposed of are susceptible to disclosure attacks by which of the following threat agents? Choose the best answer.

    A. Malware

    B. Dumpster divers

    C. Social engineers

    D. Script kiddies

    Answer is B

    Rationale/Answer Explanation:

    Dumpster divers are threat agents that can steal information from printed media (e.g., printer ribbons, facsimile transmission, printed paper).

  21. System resources can be protected from malicious file execution attacks by uploading the user supplied file and running it in which of the following environments?

    A. Honeypot

    B. Sandbox

    C. Simulated

    D. Production

    Answer is B

    Rationale/Answer Explanation:

    Preventing malicious file execution attacks takes careful planning, from the architecture and design phases of the SDLC through thorough testing. In general, a well-written application will not use user-supplied input in any filename for any server-based resource (such as images, XML and XSL transform documents, or script inclusions) and will have firewall rules in place preventing new outbound connections to the Internet or internally back to any other server. However, many legacy applications continue to need to accept user-supplied input and files without adequate validation built in. When this is the case, it is advisable to keep such files away from the production environment and upload them to a sandbox environment before they are processed.
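    The safe-upload pattern above can be sketched in Python. This is a hypothetical illustration, not a hardened implementation: the function name, the use of a generated filename, and the quarantine directory are all assumptions for the example.

    ```python
    import os
    import tempfile
    import uuid

    # Hypothetical sketch: never use the user-supplied filename as any
    # part of a server-side path; store the upload under a generated name
    # inside an isolated sandbox/quarantine directory where it can be
    # scanned and processed before reaching production.
    def quarantine_upload(sandbox_dir, user_filename, data):
        sandbox_dir = os.path.abspath(sandbox_dir)
        safe_name = uuid.uuid4().hex  # generated name, not attacker-chosen
        dest = os.path.join(sandbox_dir, safe_name)
        # Defense in depth: refuse any path that resolves outside the sandbox.
        if os.path.commonpath([sandbox_dir, dest]) != sandbox_dir:
            raise ValueError("path escapes sandbox")
        with open(dest, "wb") as fh:
            fh.write(data)
        # The original name is kept only as metadata, never used as a path.
        return dest, user_filename

    demo_dir = tempfile.mkdtemp()  # stands in for the real quarantine mount
    stored_path, original_name = quarantine_upload(
        demo_dir, "../../etc/passwd", b"payload"
    )
    print(os.path.basename(stored_path) != original_name)  # True
    ```

    Because the stored filename is generated server-side, a traversal attempt such as "../../etc/passwd" never influences where the file lands.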

  22. As a means to demonstrate the improvement in the security of code that is developed, one must compute the relative attack surface quotient (RASQ)

    A. At the end of development phase of the project

    B. Before and after the code is implemented

    C. Before and after the software requirements are complete

    D. At the end of the deployment phase of the project

    Answer is B

    Rationale/Answer Explanation:

    To determine whether the resiliency of the software code has improved, the RASQ, which attempts to quantify the number and kinds of attack vectors available to an attacker, must be computed both before the code is implemented and after development is completed and the code is frozen.
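    A RASQ-style computation can be sketched as a weighted sum over attack vector counts. The vector names and weights below are illustrative assumptions, not Microsoft's published attack surface values; the point is only that the same formula applied before and after implementation makes the change in attack surface comparable.

    ```python
    # Hypothetical sketch of a RASQ-style metric: each attack vector type
    # carries a relative weight, and the quotient is the weighted sum of
    # exposed instances. Weights and vector names are illustrative.
    WEIGHTS = {"open_socket": 1.0, "weak_acl": 0.9, "enabled_service": 0.8}

    def rasq(counts):
        return sum(WEIGHTS[vector] * n for vector, n in counts.items())

    before = {"open_socket": 4, "weak_acl": 3, "enabled_service": 5}
    after = {"open_socket": 2, "weak_acl": 1, "enabled_service": 2}
    print(rasq(before), rasq(after))  # a drop indicates a reduced attack surface
    ```

    Comparing the two values quantifies the improvement; the absolute number matters less than the direction and size of the change.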

  23. When the code is not allowed to access memory at arbitrary locations that are out of range of the memory address space that belongs to the object’s publicly exposed fields, it is referred to as which of the following types of code?

    A. Object code

    B. Type safe code

    C. Obfuscated code

    D. Source code

    Answer is B

    Rationale/Answer Explanation:

    Code is said to be type safe if it accesses only those memory resources that belong to the memory assigned to it. Type safety verification takes place during the just-in-time (JIT) compilation phase and prevents unsafe code from becoming active. Although you can disable type safety verification, doing so can lead to unpredictable results. The best example is that code can make unrestricted calls to unmanaged code, and if that code has malicious intent, the results can be severe. Therefore, the framework allows only fully trusted assemblies to bypass verification. Type safety is a form of "sandboxing." Type safety must be one of the most important security considerations when selecting a programming language and when phasing out older generation programming languages.
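    The effect of a type-safe runtime can be illustrated in Python, itself a memory-safe language: an access outside an object's bounds is detected and rejected by the runtime rather than silently reading adjacent memory, as raw pointer arithmetic in an unsafe language could. The buffer and values are illustrative.

    ```python
    # Illustrative sketch: a type-safe/memory-safe runtime checks every
    # access against the object's bounds, so code cannot read memory
    # outside the allocation that belongs to the object.
    buffer = [10, 20, 30]

    try:
        value = buffer[7]  # out-of-range access is detected, not silently read
    except IndexError:
        value = None  # the runtime refused the unsafe access

    print(value)
    ```

    In an unverified, unsafe language the same out-of-range read could return whatever happens to sit at that address, which is exactly the class of behavior type safety verification is designed to prevent.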

  24. Direct modifications to data in the database by developers must be prevented by

    A. Periodically patching database servers

    B. Implementing source code version control

    C. Logging all database access requests

    D. Proper change control management

    Answer is D

    Rationale/Answer Explanation:

    Proper change control management is useful to provide separation of duties as it can prevent direct access to backend databases by developers.

  25. Which of the following documents is the best source of guidance for containing damage, and must be referred to and consulted upon the discovery of a security breach?

    A. Disaster Recovery Plan

    B. Project Management Plan

    C. Incident Response Plan

    D. Quality Assurance and Testing Plan

    Answer is C

    Rationale/Answer Explanation:

    An Incident Response Plan (IRP) must be developed and tested for completeness, as it is the document that one should refer to and follow in the event of a security breach. The effectiveness of an IRP depends on users' awareness of how to respond to an incident, and such awareness can be increased through proper education and training.
