A. The unauthorized disclosure of information
B. The corporate brand and reputation
C. Against hackers who intend to misuse the software
D. The developers from releasing software with security defects
Answer is B
Rationale/Answer Explanation:
When security is incorporated into the software development life cycle, confidentiality, integrity, and availability can be assured and external hacker and insider threat attempts thwarted. Developers will generate more hack-resilient software with fewer vulnerabilities, but protection of the organization’s reputation and corporate brand is the primary reason for software assurance.
A. Confidentiality
B. Integrity
C. Availability
D. Authorization
Answer is B
Rationale/Answer Explanation:
When the software program operates as expected, it is said to be reliable or internally consistent. Reliability is an indicator of the integrity of software. Hack-resilient software is reliable (functioning as expected), resilient (able to withstand attacks), and recoverable (capable of being restored to normal operations when breached or upon error).
A. Software issues can cause downtime to the business
B. Developers need to be trained in the business’ continuity procedures
C. Testing for availability of the software and data is often ignored
D. Hackers like to conduct denial of service (DoS) attacks against the organization
Answer is A
Rationale/Answer Explanation:
One of the tenets of software assurance is “availability.” Software issues can cause software unavailability and downtime to the business; such downtime is, in effect, a denial of service (DoS).
A. Confidentiality
B. Integrity
C. Availability
D. Authentication
Answer is C
Rationale/Answer Explanation:
Confidentiality controls assure protection against unauthorized disclosure.
Integrity controls assure protection against unauthorized modifications or alterations.
Availability controls assure protection against downtime/denial of service and destruction of information.
Authentication is the mechanism to validate the claims/credentials of an entity.
Authorization covers the subject’s rights and privileges upon requested objects.
A. Ownership-based authentication
B. Two-factor authentication
C. Characteristic-based authentication
D. Knowledge-based authentication
Answer is A
Rationale/Answer Explanation:
Authentication can be achieved in one or more of the following ways: using something one knows (knowledge-based), something one has (ownership-based), and something one is (characteristic-based). Using a token device is ownership-based authentication. When more than one way is used for authentication purposes, it is referred to as multifactor authentication, and this is recommended over single-factor authentication.
A. Separation of duties
B. Defense in depth
C. Complete mediation
D. Open design
Answer is B
Rationale/Answer Explanation:
Having more than one way of authentication provides for a layered defense, which is the premise of the defense in depth security design principle.
A. Providing evidentiary information
B. Assuring that the user cannot deny their actions
C. Detecting the actions that were undertaken
D. Preventing a user from performing some unauthorized operations
Answer is D
Rationale/Answer Explanation:
Audit log information can be a detective control (providing evidentiary information) and a deterrent control when the users know that they are being audited, but it cannot prevent any unauthorized actions. When the software logs user actions, it also provides nonrepudiation capabilities because the user cannot deny their actions.
A. Configuration management
B. Session management
C. Patch management
D. Exception management
Answer is B
Rationale/Answer Explanation:
In an Internet application, identities cannot be managed as readily as in an Intranet application, and in some cases managing them is infeasible. Internet applications also run over stateless protocols, such as HTTP or HTTPS, and this requires the management of user sessions.
A. Clipping level
B. Known error
C. Minimum security baseline
D. Maximum tolerable downtime
Answer is A
Rationale/Answer Explanation:
The predetermined number of acceptable user errors before recording the error as a potential security incident is referred to as the clipping level. For example, if the number of allowed failed login attempts before the account is locked out is three, then the clipping level for authentication attempts is three.
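As a sketch of how a clipping level might be enforced in code (the threshold and all names are illustrative):

```python
# Sketch of a clipping level for failed logins (illustrative names/threshold).
CLIPPING_LEVEL = 3  # allowed failures before the event becomes an incident

failed_attempts = {}

def record_failed_login(user):
    """Return True when the clipping level is exceeded (lock out / alert)."""
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    return failed_attempts[user] > CLIPPING_LEVEL

for _ in range(3):
    assert record_failed_login("alice") is False  # within the clipping level
assert record_failed_login("alice") is True       # fourth failure: incident
```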
A. Defense in depth
B. Economy of mechanisms
C. Failsafe
D. Psychological acceptability
Answer is C
Rationale/Answer Explanation:
The failsafe principle prescribes that access decisions must be based on permission rather than exclusion. This means that the default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. The alternative, in which mechanisms attempt to identify conditions under which access should be refused, presents the wrong psychological base for secure system design. A design or implementation mistake in a mechanism that gives explicit permission tends to fail by refusing permission, which is a safe situation since it will be quickly detected. On the other hand, a design or implementation mistake in a mechanism that explicitly excludes access tends to fail by allowing access, a failure that may go unnoticed in normal use. This principle applies both to the outward appearance of the protection mechanism and to its underlying implementation.
A. Avoidance
B. Mitigation
C. Transference
D. Acceptance
Answer is C
Rationale/Answer Explanation:
When an “as is” disclaimer clause is used, the risk is transferred from the publisher of the software to the user of the software.
A. Standard
B. Policy
C. Baseline
D. Guideline
Answer is B
Rationale/Answer Explanation:
Policies are high-level documents that communicate the mandatory goals and objectives of company management. Standards are also mandatory, but not quite at the same high level as policy. Guidelines provide recommendations on how to implement a standard. Procedures are usually step-by-step instructions of how to perform an operation. A baseline has the minimum levels of controls or configurations that need to be implemented.
A. Provide metrics for measuring the software and its behavior and using the software in a specific context of use
B. Evaluate security engineering practices and organizational management processes
C. Support accreditation and certification bodies that audit and certify information security management systems
D. Ensure that the claimed identity of personnel are appropriately verified
Answer is B
Rationale/Answer Explanation:
The evaluation of security engineering practices and organizational management processes is addressed by the guidelines prescribed in the Systems Security Engineering Capability Maturity Model (SSE-CMM®). The SSE-CMM is an internationally recognized standard, published as ISO/IEC 21827.
A. Capability Maturity Model Integration (CMMI)
B. Sherwood Applied Business Security Architecture (SABSA)
C. Control Objectives for Information and related Technology (COBIT®)
D. Zachman Framework
Answer is B
Rationale/Answer Explanation:
SABSA is a proven framework and methodology for Enterprise Security Architecture and Service Management. SABSA ensures that the needs of your enterprise are met completely and that security services are designed, delivered, and supported as an integral part of your business and IT management infrastructure.
A. Not allowing the process to write above its security level
B. Not allowing the process to write below its security level
C. Not allowing the process to read above its security level
D. Not allowing the process to read below its security level
Answer is A
Rationale/Answer Explanation:
The Biba integrity model prevents unauthorized modification. It states that the maintenance of integrity requires that data not flow from a receptacle of a given integrity to a receptacle of higher integrity. If a process can write above its security level, trustworthy data could be contaminated by the addition of less trustworthy data.
A. Cross site request forgery (CSRF)
B. Cold boot
C. SQL injection
D. Rootkit
Answer is D
Rationale/Answer Explanation:
Rootkits are known to compromise the operating system’s ring protection mechanisms and masquerade as legitimate operating system components while taking control of the system.
A. Service level agreements (SLAs)
B. Intellectual property protection
C. Cost of customization
D. Review of the code for backdoors and Trojan horses
Answer is B
Rationale/Answer Explanation:
All of the other options are considerations for the software acquirer (purchaser).
A. Annualized rate of occurrence (ARO) ⨯ exposure factor
B. Probability ⨯ impact
C. Asset value ⨯ exposure factor
D. Annualized rate of occurrence (ARO) ⨯ asset value
Answer is C
Rationale/Answer Explanation:
Single loss expectancy is the expected loss of a single disaster. It is computed as the product of asset value and the exposure factor. SLE = asset value ⨯ exposure factor.
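The computation can be illustrated with hypothetical figures; the final line extends the formula to annualized loss expectancy (ALE = SLE ⨯ ARO), using the ARO that appears in the other options:

```python
# SLE and ALE per the formulas above (all values are hypothetical).
asset_value = 100_000        # dollars
exposure_factor = 0.25       # fraction of asset value lost in one incident
aro = 2                      # annualized rate of occurrence

sle = asset_value * exposure_factor   # single loss expectancy
ale = sle * aro                       # annualized loss expectancy
assert sle == 25_000
assert ale == 50_000
```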
A. Avoidance
B. Transference
C. Mitigation
D. Acceptance
Answer is C
Rationale/Answer Explanation:
The implementation of IPSec at the network layer helps to mitigate threats to the confidentiality of transmitted data.
A. FIPS 46-3
B. FIPS 140-2
C. FIPS 197
D. FIPS 201
Answer is D
Rationale/Answer Explanation:
Personal identity verification (PIV) of federal employees and contractors is published as FIPS 201, and it prescribes some guidelines for biometric authentication.
A. FIPS 201
B. ISO/IEC 15408
C. NIST SP 800-64
D. PCI DSS
Answer is D
Rationale/Answer Explanation:
The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures. This comprehensive standard is intended to help organizations proactively protect customer account data.
A. Security Requirements for Cryptographic Modules (FIPS 140-2)
B. Data Encryption Standard (FIPS 46-3)
C. Advanced Encryption Standard (FIPS 197)
D. Digital Signature Standard (FIPS 186-3)
Answer is C
Rationale/Answer Explanation:
The advanced encryption standard (AES) specifies a FIPS-approved cryptographic algorithm that can be used to protect electronic data. The AES algorithm is a symmetric block cipher that can encrypt (encipher) and decrypt (decipher) information. Encryption converts data to an unintelligible form called ciphertext; decrypting the ciphertext converts the data back into their original form, called plaintext. The AES algorithm is capable of using cryptographic keys of 128, 192, and 256 bits to encrypt and decrypt data in blocks of 128 bits.
A. Computer Emergency Response Team (CERT)
B. Web Application Security Consortium (WASC)
C. Open Web Application Security Project (OWASP)
D. Forums for Incident Response and Security Teams (FIRST)
Answer is C
Rationale/Answer Explanation:
The Open Web Application Security Project (OWASP) Top Ten provides a powerful awareness document for Web application security. The OWASP Top Ten represents a broad consensus about what the most critical Web application security flaws are.
A. Technology used in building the application
B. Goals and objectives of the organization
C. Software quality requirements
D. External auditor requirements
Answer is B
Rationale/Answer Explanation:
When determining software security requirements, it is imperative to address the goals and objectives of the organization. Management’s goals and objectives need to be incorporated into the organizational security policies. While external audit requirements, internal quality requirements, and technology are factors that need consideration, compliance with organizational policies must be the foremost consideration.
A. Directory information
B. Personally identifiable information (PII)
C. User’s card holder data
D. Software architecture and network diagram
Answer is A
Rationale/Answer Explanation:
Information that is public is also known as directory information. The name “directory” information comes from the fact that such information can be found in a public directory, such as a phone book. When information is classified as public information, confidentiality assurance protection mechanisms are not necessary.
A. Confidentiality requirements
B. Integrity requirements
C. Availability requirements
D. Authentication requirements
Answer is C
Rationale/Answer Explanation:
Destruction is the threat against availability, as disclosure is the threat against confidentiality, and alteration is the threat against integrity.
A. Maximum tolerable downtime (MTD)
B. Mean time before failure (MTBF)
C. Minimum security baseline (MSB)
D. Recovery time objective (RTO)
Answer is D
Rationale/Answer Explanation:
The maximum tolerable downtime (MTD) is the maximum length of time a business process can be interrupted or unavailable without causing the business itself to fail. The recovery time objective (RTO) is the time period in which the organization should have the interrupted process running again at or near the same capacity and conditions as before the disaster/downtime. MTD and RTO are part of availability requirements. It is advisable to set the RTO to be less than the MTD.
A. Biometric authentication
B. Forms authentication
C. Digest authentication
D. Integrated authentication
Answer is A
Rationale/Answer Explanation:
Forms authentication has to do with usernames and passwords that are input into a form (e.g., a Web page/form). Basic authentication transmits the credentials in Base64 encoded form, while digest authentication provides the credentials as a hash value (also known as a message digest). Token-based authentication uses credentials in the form of specialized tokens and is often used with a token device. Biometric authentication uses physical characteristics to provide the credential information.
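The difference between Base64-encoded and hashed credentials can be shown in a short sketch (values are illustrative, and this simplifies real HTTP digest authentication, which additionally hashes the realm, nonce, and other fields):

```python
# How the same credentials look under Basic vs. digest-style authentication.
import base64
import hashlib

username, password = "alice", "s3cret"   # illustrative values

# Basic authentication: Base64 is encoding, not encryption -- trivially reversed.
basic = base64.b64encode(f"{username}:{password}".encode()).decode()
assert base64.b64decode(basic).decode() == "alice:s3cret"

# Digest-style authentication: only a one-way hash of the secret is transmitted.
digest = hashlib.sha256(password.encode()).hexdigest()
assert digest != password                # the secret itself never travels
```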
A. Authorization
B. Authentication
C. Auditing
D. Availability
Answer is B
Rationale/Answer Explanation:
When two factors are used to validate an entity’s claims and/or credentials, it is referred to as two-factor authentication, and when more than two factors are used, it is referred to as multifactor authentication. It is important first to determine whether there is a need for two-factor or multifactor authentication.
A. Nondiscretionary access control (NDAC)
B. Discretionary access control (DAC)
C. Mandatory access control (MAC)
D. Rule-based access control
Answer is B
Rationale/Answer Explanation:
Discretionary access control (DAC) is defined as “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong.” The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject. DAC restricts access to objects based on the identity of the subject and is distinctly characterized by the decision of the owner of the resource regarding who has access and their level of privileges or rights.
A. Authentication requirements
B. Archiving requirements
C. Auditing requirements
D. Authorization requirements
Answer is C
Rationale/Answer Explanation:
Auditing requirements are those that assist in building a historical record of user actions. Audit trails can help detect when an unauthorized user makes a change or an authorized user makes an unauthorized change, both of which are cases of integrity violations. Auditing requirements not only help with forensic investigations as a detective control, but can also be used for troubleshooting errors and exceptions, if the actions of the software are tracked appropriately.
A. Improper session management
B. Lack of auditing
C. Improper archiving
D. Lack of encryption
Answer is A
Rationale/Answer Explanation:
Easily guessable and nonrandom session identifiers can be hijacked and replayed if not managed appropriately, and this can lead to man-in-the-middle (MITM) attacks.
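A minimal sketch of generating unguessable session identifiers with a cryptographically secure generator (Python’s `secrets` module):

```python
# Generating unguessable session identifiers with a CSPRNG (a sketch).
import secrets

session_id = secrets.token_urlsafe(32)   # 32 random bytes, URL-safe base64
assert len(session_id) >= 43             # 32 bytes encode to 43 characters

# General-purpose PRNGs (e.g., random.random) are predictable and must not be
# used here; secrets draws from the OS CSPRNG, so identifiers are infeasible
# to guess or replay.
```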
A. Threat modeling
B. Policy decomposition
C. Subject–object modeling
D. Misuse case generation
Answer is B
Rationale/Answer Explanation:
The process of eliciting concrete software security requirements from high-level regulatory and organizational directives and mandates is referred to as policy decomposition. When the policy decomposition process completes, all the gleaned requirements must be measurable components.
A. Engage the customer
B. Model information management
C. Identify least privilege applications
D. Conduct threat modeling and analysis
Answer is A
Rationale/Answer Explanation:
IT is there for the business and not the other way around. The first step when determining protection needs is to engage the customer, followed by modeling the information and identifying least privilege scenarios. Once an application profile is developed, we can undertake threat modeling and analysis to determine the risk levels, which can be communicated to the business to prioritize the risk.
A. Ensuring scope creep does not occur
B. Validating and communicating user requirements
C. Determining resource allocations
D. Identifying privileged code sections
Answer is D
Rationale/Answer Explanation:
Identifying privileged code sections is part of threat modeling and not part of an RTM.
A. Error detection
B. Message corruption
C. Integrity assurance
D. Input validation
Answer is D
Rationale/Answer Explanation:
Parity bit checking is primarily used for error detection, and it can also be used to assure the integrity of transferred files and messages; it plays no role in input validation.
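A minimal even-parity sketch (framing is illustrative):

```python
# Even parity: the parity bit is chosen so the total count of 1-bits is even.
def even_parity_bit(data):
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2

frame = b"\x0f"                        # 00001111 -> four 1-bits
assert even_parity_bit(frame) == 0     # already even, parity bit is 0

# A single flipped bit changes the parity and is detected on receipt.
assert even_parity_bit(b"\x0e") == 1   # 00001110 -> three 1-bits
```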
A. Threat modeling
B. Use case modeling
C. Misuse case modeling
D. Data modeling
Answer is B
Rationale/Answer Explanation:
A use case models the intended behavior of the software or system. In other words, the use case describes behavior that the system owner intended. This behavior describes the sequence of actions and events that are to be taken to address a business need. Use case modeling and diagramming are very useful for specifying requirements. It can be effective in reducing ambiguous and incompletely articulated business requirements by explicitly specifying exactly when and under what conditions certain behaviors occur. Use case modeling is meant to model only the most significant system behavior, not all, and so it should not be considered a substitute for requirements specification documentation.
A. Race conditions
B. Misactors
C. Attacker’s perspective
D. Negative requirements
Answer is A
Rationale/Answer Explanation:
Misuse cases, also known as abuse cases, help identify security requirements by modeling negative scenarios. A negative scenario is an unintended behavior of the system, one that the system owner does not want to occur within the context of the use case. Misuse cases provide insight into the threats that can occur to the system or software. They present the hostile user’s point of view and are the inverse of use cases. Misuse case modeling is similar to use case modeling, except that the former models misactors and unintended scenarios or behavior. Misuse cases may be intentional or accidental. One of the most distinctive traits of misuse cases is that they can be used to elicit security requirements, unlike other requirements determination methods that focus on end-user functional requirements.
A. Key management life cycle
B. Information life cycle management
C. Configuration management
D. Problem management
Answer is B
Rationale/Answer Explanation:
Data classification is the conscious effort to assign a level of sensitivity to data assets based on potential impact upon disclosure, alteration, or destruction. The results of the classification exercise can then be used to categorize the data elements into appropriate buckets. Data classification is part of information life cycle management.
A. Integrity
B. Environment
C. International
D. Procurement
Answer is B
Rationale/Answer Explanation:
When determining requirements, it is important to elicit those tied to the environment in which the data will be marshaled or processed. For example, viewstate corruption has been observed in Web farm settings where the servers were not configured identically, and cardholder data has been transmitted unencrypted over public networks, in cases where environmental requirements were not identified or taken into account.
A. Service level agreements (SLA)
B. Nondisclosure agreements (NDA)
C. Noncompete agreements
D. Project plans
Answer is A
Rationale/Answer Explanation:
SLAs should contain the levels of service the software is expected to provide, and this becomes crucial when the software is not developed in-house.
A. Reliability
B. Resiliency
C. Recoverability
D. Redundancy
Answer is B
Rationale/Answer Explanation:
Software is said to be reliable when it is functioning as expected. Resiliency is the measure of the software’s ability to withstand an attack. When the software is breached, its ability to restore itself back to normal operations is known as the recoverability of the software. Redundancy has to do with high availability.
A. Availability
B. Authentication
C. Authorization
D. Auditing
Answer is A
Rationale/Answer Explanation:
Improper coding constructs such as infinite loops and improper memory management can lead to denial of service and resource exhaustion issues, which impact availability.
A. Nondisclosure agreements (NDA)
B. Corporate contracts
C. Service level agreements (SLA)
D. Threat models
Answer is C
Rationale/Answer Explanation:
SLAs should contain the levels of service the software is expected to provide, and this becomes crucial when the software is not developed in-house.
A. Integrity requirements
B. Authorization requirements
C. Confidentiality requirements
D. Nonrepudiation requirements
Answer is C
Rationale/Answer Explanation:
Destruction is the threat against availability, as disclosure is the threat against confidentiality, and alteration is the threat against integrity.
A. Confidentiality
B. Integrity
C. Availability
D. Auditing
Answer is B
Rationale/Answer Explanation:
Destruction is the threat against availability, as disclosure is the threat against confidentiality, and alteration is the threat against integrity.
A. Encryption
B. Steganography
C. Hashing
D. Masking
Answer is B
Rationale/Answer Explanation:
Encryption and hashing are overt mechanisms to assure confidentiality, and masking is an obfuscating mechanism to assure confidentiality. Steganography hides information within other media as a covert mechanism to assure confidentiality: often likened to invisible ink writing, it is the art of camouflaged or hidden writing, where the information is hidden and the existence of the message itself is concealed. Steganography is primarily useful for covert communications and is prevalent in military espionage communications.
A. Encryption
B. Hashing
C. Licensing
D. Watermarking
Answer is D
Rationale/Answer Explanation:
Digital watermarking is the process of embedding information into a digital signal. These signals can be audio, video, or pictures.
A. Confidentiality
B. Integrity
C. Availability
D. Authentication
Answer is B
Rationale/Answer Explanation:
Parity bit checking is useful in the detection of errors or changes made to data when they are transmitted. A common use of parity bit checking is to do a cyclic redundancy check (CRC) for data integrity as well, especially for messages longer than one byte (8 bits) long. Upon data transmission, each block of data is given a computed CRC value, commonly referred to as a checksum. If there is an alteration between the origin of data and their destination, the checksum sent at the origin will not match the one computed at the destination. Corrupted media (CDs, DVDs) and incomplete downloads of software yield CRC errors.
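The checksum comparison described above can be sketched with Python’s `zlib.crc32` (a CRC-32 implementation):

```python
# CRC-based integrity check: compare the origin checksum with one recomputed
# at the destination.
import zlib

data = b"payload transmitted across the wire"
checksum = zlib.crc32(data)              # computed at the origin

received = data                          # intact transmission
assert zlib.crc32(received) == checksum  # checksums match

corrupted = b"payload transmitted across the wirE"  # one altered byte
assert zlib.crc32(corrupted) != checksum            # alteration is detected
```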
A. Ensuring against scope creep
B. Validating and communicating user requirements
C. Determining resource allocations
D. Identifying privileged code sections
Answer is D
Rationale/Answer Explanation:
Identifying privileged code sections is part of threat modeling and not part of an RTM.
A. Requirements analysis
B. Design
C. Implementation
D. Deployment
Answer is B
Rationale/Answer Explanation:
Although it is important to visit the threat model during the development, testing, and deployment phase of the software development life cycle (SDLC), the threat modeling exercise should commence in the design phase of the SDLC.
A. Advanced encryption standard (AES)
B. Steganography
C. Public key infrastructure (PKI)
D. Lightweight directory access protocol (LDAP)
Answer is C
Rationale/Answer Explanation:
PKI makes it possible to exchange data securely by hiding or keeping secret a private key on one system while distributing the public key to the other systems participating in the exchange.
A. Speed of cryptographic operations
B. Confidentiality assurance
C. Key exchange
D. Nonrepudiation
Answer is D
Rationale/Answer Explanation:
Nonrepudiation and proof of origin (authenticity) are provided by digital signatures: the sender signs the communication with its private key, and a certificate issued by a certificate authority (CA) binds that public key to the sender’s identity. Together, these attest to the authenticity of both the document and the sender.
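A toy RSA signature illustrates the sign-with-private-key, verify-with-public-key flow; the key size is deliberately insecure for readability, and real systems use vetted cryptographic libraries with 2048-bit or larger keys:

```python
# Toy RSA signature to illustrate nonrepudiation (insecure by design).
import hashlib

p, q = 61, 53                  # hypothetical demo primes
n = p * q                      # modulus
e = 17                         # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

message = b"wire $100 to account 42"
# Digest reduced mod n only because of the toy modulus.
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

signature = pow(digest, d, n)           # sender signs with the PRIVATE key
assert pow(signature, e, n) == digest   # anyone verifies with the PUBLIC key
```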
A. Encryption
B. Masking
C. Hashing
D. Obfuscation
Answer is C
Rationale/Answer Explanation:
An important use for hashes is storing passwords. The actual password should never be stored in the database. Using hashing functions, you can store the hash value of the user password and use that value to authenticate the user. Because hashes are one-way (not reversible), they offer a heightened level of confidentiality assurance.
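A minimal sketch of storing and verifying a salted, iterated password hash (the iteration count is illustrative; a maintained library is preferable in production):

```python
# Store a salted, slow hash of the password -- never the password itself.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)   # unique salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)   # constant-time compare

salt, stored = hash_password("correct horse")
assert verify_password("correct horse", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```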
A. Least privilege
B. Least common mechanisms
C. Economy of mechanisms
D. Separation of duties
Answer is D
Rationale/Answer Explanation:
Separation of duties, or separation of privilege, is the principle that it is better to assign tasks to several specific individuals so that no one user has total control over the task. It is closely related to the principle of least privilege, the idea that a minimum amount of privilege is granted to individuals with a need to know for the minimum (shortest) amount of time.
A. Increase the security of authentication mechanisms
B. Simplify user authentication
C. Have the ability to check each access request
D. Allow for interoperability between wireless and wired networks
Answer is B
Rationale/Answer Explanation:
The design principle of economy of mechanism states that one must keep the design as simple and small as possible. This well-known principle deserves emphasis for protection mechanisms because design and implementation errors that result in unwanted access paths will not be noticed during normal use. As a result, techniques such as line-by-line inspection of the software that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential. SSO supports this principle by simplifying the authentication process.
A. Availability
B. Authorization
C. Auditing
D. Archiving
Answer is C
Rationale/Answer Explanation:
All stored procedures could be updated to incorporate auditing logic, but a better solution is to use database triggers. You can use triggers to monitor actions performed on the database tables and automatically log auditing information.
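The trigger-based approach can be sketched with `sqlite3` (table and column names are hypothetical):

```python
# Database-trigger auditing: every UPDATE is logged automatically, regardless
# of which stored procedure or application issued it.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit_log (action TEXT, account_id INTEGER,
                            old_balance REAL, new_balance REAL,
                            at TEXT DEFAULT CURRENT_TIMESTAMP);
    CREATE TRIGGER audit_balance AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO audit_log (action, account_id, old_balance, new_balance)
        VALUES ('UPDATE', OLD.id, OLD.balance, NEW.balance);
    END;
""")
db.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
db.execute("UPDATE accounts SET balance = 42.0 WHERE id = 1")
rows = db.execute(
    "SELECT action, old_balance, new_balance FROM audit_log").fetchall()
assert rows == [("UPDATE", 100.0, 42.0)]   # the change was logged
```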
A. Attackers
B. Business impact
C. Critical assets
D. Entry points
Answer is D
Rationale/Answer Explanation:
During threat modeling, the application is dissected into its functional components. The development team analyzes the components at every entry point and traces data flow through all functionality to identify security weaknesses.
A. Spoofing
B. Tampering
C. Repudiation
D. Information disclosure
Answer is A
Rationale/Answer Explanation:
Although it may seem that an MITM attack is an expression of the threat of repudiation, and it can be, it is primarily a spoofing threat. In a spoofing attack, an attacker impersonates a legitimate user of the system. A spoofing attack is mitigated through authentication so that adversaries cannot become any other user or assume the attributes of another user. When undertaking a threat modeling exercise, it is important to list all possible threats, regardless of whether they have been mitigated, so that you can later generate test cases where necessary. If the threat is not documented, there is a high likelihood that the software will not be tested for those threats. Using a categorized list of threats (such as spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege [STRIDE]) is useful to address all possible threats.
A. Transport
B. Network
C. Session
D. Application
Answer is B
Rationale/Answer Explanation:
Although software security has specific implications at layer 7, the application layer of the OSI stack, security at the other layers is also important and should be leveraged to provide defense in depth. The seven layers of the OSI stack are physical (1), data link (2), network (3), transport (4), session (5), presentation (6), and application (7). SSL and IPSec can be used to assure confidentiality for data in motion. SSL operates at the transport layer (4), and IPSec operates at the network layer (3) of the OSI model.
A. Interoperability
B. Authentication
C. Authorization
D. Installation ease
Answer is A
Rationale/Answer Explanation:
A distinctive characteristic of SOA is that the business logic is abstracted into discoverable and reusable contract-based interfaces to promote interoperability between heterogeneous computing ecosystems.
A. Transport
B. Network
C. Data link
D. Physical
Answer is D
Rationale/Answer Explanation:
Side channel attacks use unconventional means to compromise the security of the system and, in most cases, require physical access to the device or system. Therefore, to mitigate side channel attacks, physical protection must be used.
A. Software as a service (SaaS)
B. Service-oriented architecture (SOA)
C. Rich Internet application (RIA)
D. Distributed network architecture (DNA)
Answer is C
Rationale/Answer Explanation:
RIAs require Internet protocol (IP) connectivity to the backend server. Browser sandboxing is recommended since the client is also susceptible to attack now, but it is not a requirement. The workload is shared between the client and the server, and the user’s experience and control is increased in RIA architecture.
A. Authorization
B. Identification
C. Archiving
D. Auditing
Answer is B
Rationale/Answer Explanation:
Trusted platform module (TPM) is the name assigned to a chip that can store cryptographic keys, passwords, and certificates. It can be used to protect mobile devices other than personal computers. It is also used to provide identity information for authentication purposes in mobile computing. It also assures secure startup and integrity. The TPM can be used to generate values used with whole-disk encryption, such as the Windows Vista’s BitLocker. It is developed to specifications of the Trusted Computing Group.
A. Injection
B. Inference
C. Phishing
D. Polyinstantiation
Answer is B
Rationale/Answer Explanation:
An inference attack is one in which the attacker combines information available in the database with a suitable analysis to glean information that is presumably hidden or not as evident. This means that individual data elements when viewed collectively can reveal confidential information. It is therefore possible to have public elements in a database reveal private information by inference. The first things to ensure are that the database administrator does not have direct access to the data in the database and that the administrator’s access to the database is mediated by a program (the application) and audited. In situations where direct database access is necessary, it is important to ensure that the database design is not susceptible to inference attacks. Inference attacks can be mitigated by polyinstantiation.
A. Triggers
B. Normalization
C. Views
D. Encryption
Answer is C
Rationale/Answer Explanation:
Views provide a number of security benefits. They abstract the source of the data being presented, keeping the internal structure of the database hidden from the user. Furthermore, views can be created on a subset of columns in a table. This capability can allow users granular access to specific data elements. Views can also be used to limit access to specific rows of data.
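The column- and row-level restriction that views provide can be sketched in a few lines. The table and view names here are illustrative, using SQLite for a self-contained example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, ssn TEXT, dept TEXT)")
conn.execute("INSERT INTO employees VALUES (1, 'alice', '123-45-6789', 'eng')")
conn.execute("INSERT INTO employees VALUES (2, 'bob', '987-65-4321', 'hr')")

# The view exposes only non-sensitive columns (hiding ssn) and only one
# department's rows, keeping the base table's structure hidden from users.
conn.execute(
    "CREATE VIEW eng_directory AS "
    "SELECT id, name FROM employees WHERE dept = 'eng'"
)

rows = conn.execute("SELECT * FROM eng_directory").fetchall()
print(rows)  # [(1, 'alice')]
```

Granting users access to `eng_directory` rather than `employees` gives them exactly the data elements the view selects and nothing more.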
A. Database views
B. Security management interfaces
C. Global files
D. Exception handling
Answer is B
Rationale/Answer Explanation:
Security management interfaces (SMIs) are administrative interfaces for your application that have the highest level of privileges on the system and can perform highly privileged administrative tasks.
Although SMIs are often not explicitly stated in the requirements, and subsequently not threat modeled, strong controls, such as least privilege and access controls, must be designed and built in when developing SMIs, because the compromise of an SMI can be devastating, ranging from complete system compromise and the installation of backdoors to disclosure, alteration, and destruction (DAD) attacks on audit logs, user credentials, exception logs, etc. SMIs should not be deployed with the default accounts set by the software publisher, although they are often observed to be.
A. Kerberos
B. Security assertion markup language (SAML)
C. Liberty alliance ID-FF
D. One-time password (OTP)
Answer is B
Rationale/Answer Explanation:
Federation technology is usually built on a centralized identity management architecture leveraging industry standard identity management protocols, such as SAML, WS Federation (WS-*), and Liberty Alliance. Of the three major protocol families associated with federation, SAML seems to be recognized as the de facto standard for enterprise-to-enterprise federation. SAML works in cross-domain settings, while Kerberos tokens are useful only within a single domain.
A. Unique session identifier generation and exchange
B. Transport layer security
C. Digital rights management (DRM)
D. Data loss prevention
Answer is B
Rationale/Answer Explanation:
The syslog network protocol has become a de facto standard for logging programs and server information over the Internet. Many routers, switches, and remote access devices will transmit system messages, and there are syslog servers available for Windows and UNIX operating systems. TLS protection mechanisms such as SSL wrappers are needed to protect syslog data in transit as they are transmitted in the clear. SSL wrappers such as stunnel provide transparent SSL functionality.
A. Data loss prevention (DLP)
B. Software as a service (SaaS)
C. Flow control
D. Digital rights management (DRM)
Answer is D
Rationale/Answer Explanation:
Digital rights management (DRM) solutions give copyright owners control over access and use of copyright-protected material. When users want to access or use digital copyrighted material, they can do so on the terms of the copyright owner.
A. Create new products
B. Capture market share
C. Solve business problems
D. Mitigate hacker threats
Answer is C
Rationale/Answer Explanation:
IT and software development teams function to provide solutions to the business. Manual and inefficient business processes can be automated and made efficient using software programs.
A. Compilation
B. Interpretation
C. Linking
D. Instantiation
Answer is C
Rationale/Answer Explanation:
Linking is the process of combining the necessary functions, variables, and dependencies files and libraries required for the machine to run the program. The output that results from the linking process is the executable program or machine code/file that the machine can understand and process. In short, linked object code is the executable. Link editors that combine object codes are known as linkers. Upon the completion of the compilation process, the compiler invokes the linker to perform its function. There are two types of linking: static linking and dynamic linking.
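Dynamic linking, the second type mentioned above, resolves library symbols at load/run time rather than combining them into the executable at build time. Python's `ctypes` module makes this visible: it loads a shared library and binds a symbol by name. This sketch assumes a Unix-like system where the shared C math library can be located; the fallback name is an assumption.

```python
import ctypes
import ctypes.util

# Locate and load the shared C math library at run time: this is
# load-time dynamic linking in action. "libm.so.6" is a common Linux
# fallback name and is an assumption here.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path or "libm.so.6")

# Bind the sqrt symbol by name and declare its C signature.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # 3.0
```

With static linking, by contrast, the linker would have copied the library's object code into the executable at build time, so no lookup would occur when the program runs.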
A. Locality of reference
B. Type safety
C. Cyclomatic complexity
D. Parametric polymorphism
Answer is B
Rationale/Answer Explanation:
Code is said to be type safe if it accesses only the memory resources that have been assigned to it. Type safety verification takes place during the just-in-time (JIT) compilation phase and prevents unsafe code from becoming active. Although you can disable type safety verification, it can lead to unpredictable results. The best example is that code can make unrestricted calls to unmanaged code, and if that code has malicious intent, the results can be severe. Therefore, the framework only allows fully trusted assemblies to bypass verification. Type safety is a form of “sandboxing.” Type safety should be one of the most important considerations with regard to security when selecting a programming language.
A. Injection flaws
B. Cross-site scripting (XSS)
C. Buffer overflow
D. Man-in-the-middle (MITM)
Answer is D
Rationale/Answer Explanation:
As a defense against man-in-the-middle (MITM) attacks, authentication and session management need to be in place. Multifactor authentication provides greater defense than single factor authentication and is recommended. Session identifiers that are generated should be unpredictable, random, and nonguessable.
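Generating unpredictable session identifiers, as the rationale requires, means drawing them from a cryptographically strong source rather than from timestamps or counters. A minimal sketch using Python's standard `secrets` module:

```python
import secrets

# Session identifiers must come from a cryptographically strong random
# source so they are unpredictable, random, and nonguessable.
def new_session_id() -> str:
    # 32 random bytes encoded URL-safe -> a 43-character token.
    return secrets.token_urlsafe(32)

a, b = new_session_id(), new_session_id()
print(len(a), a != b)  # 43 True
```

By contrast, seeding `random.Random()` with the clock, or incrementing an integer per login, produces identifiers an attacker can predict and use to hijack sessions.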
A. SQL injection
B. Cross-site scripting (XSS)
C. Cross-site request forgery (CSRF)
D. Insecure cryptographic storage
Answer is C
Rationale/Answer Explanation:
In addition to assuring that the requestor is a human, CAPTCHAs are useful in mitigating CSRF attacks. Since CSRF is dependent on a preauthenticated token’s being in place, using CAPTCHA as the anti-CSRF token is an effective way of dealing with the inherent XSS problems regarding anti-CSRF tokens, as long as the CAPTCHA image itself is not guessable, predictable, or re-served to the attacker.
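The anti-CSRF token mechanism the rationale depends on can be sketched as follows. The session dictionary and function names are illustrative assumptions; the essential points are that the token is generated server-side per session, embedded in the form, and compared in constant time on submission.

```python
import hmac
import secrets

# Per-session anti-CSRF token: generated server-side and stored in the
# session state. (A non-guessable CAPTCHA can play the same role.)
session = {"csrf_token": secrets.token_hex(16)}

def render_form() -> str:
    # The token travels to the browser as a hidden form field.
    return f'<input type="hidden" name="csrf" value="{session["csrf_token"]}">'

def handle_post(submitted_token: str) -> bool:
    # compare_digest avoids leaking match length via timing differences.
    return hmac.compare_digest(session["csrf_token"], submitted_token)

print(handle_post(session["csrf_token"]))   # True
print(handle_post("forged-token-value"))    # False
```

A cross-site forged request fails because the attacker's page cannot read the victim's token, so the submitted value will not match the session's copy.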
A. Skipjack
B. Data encryption standard (DES)
C. Triple data encryption standard (3DES)
D. Advanced encryption standard (AES)
Answer is D
Rationale/Answer Explanation:
The advanced encryption standard (FIPS 197) is based on the Rijndael cipher. Software should be designed so that you are able to replace one cryptographic algorithm with a stronger one, when needed, without much rework or recoding. This is referred to as cryptographic agility.
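Cryptographic agility, as described above, means the algorithm is configuration rather than hard-coded at every call site. A minimal sketch using Python's `hashlib` (the setting name is an illustrative assumption):

```python
import hashlib

# The algorithm is a single configurable setting: swapping it for a
# stronger digest means changing one value, not recoding call sites.
HASH_ALGORITHM = "sha256"  # could later become "sha3_256" with no code change

def fingerprint(data: bytes, algorithm: str = None) -> str:
    h = hashlib.new(algorithm or HASH_ALGORITHM)
    h.update(data)
    return h.hexdigest()

d1 = fingerprint(b"payload")                      # uses the configured default
d2 = fingerprint(b"payload", algorithm="sha3_256")  # stronger drop-in
print(len(d1), len(d2))  # 64 64
```

The same pattern applies to ciphers: code written against an abstract "encrypt/decrypt" interface with the algorithm name in configuration can migrate from, say, 3DES to AES without touching business logic.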
A. Data loss prevention (DLP)
B. Internet protocol security (IPSec)
C. Secure sockets layer (SSL)
D. Digital rights management (DRM)
Answer is C
Rationale/Answer Explanation:
SSL provides disclosure protection and protection against session hijacking and replay at the transport layer (layer 4), while IPSec provides confidentiality and integrity assurance operating in the network layer (layer 3). DRM provides some degree of disclosure (primarily IP) protection and operates in the presentation layer (layer 6), and data loss prevention (DLP) technologies prevent the inadvertent disclosure of data to unauthorized individuals, predominantly those external to the organization.
A. Spoofing
B. Tampering
C. Repudiation
D. Information disclosure
Answer is D
Rationale/Answer Explanation:
Information disclosure is primarily a design issue and therefore is a language-independent problem, although with accidental leakage, many newer high-level languages can worsen the problem by providing verbose error messages that might be helpful to attackers in their information gathering (reconnaissance) efforts. It must be recognized that there is a tricky balance between providing the user with helpful information about errors and preventing attackers from learning about the internal details and architecture of the software. From a security standpoint, it is advisable not to disclose verbose error messages and to provide the users with a helpline to get additional support.
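The balance described above is usually implemented by logging full details internally while returning only a generic message and a support reference to the user. A sketch, with the error-reference string and table name as illustrative assumptions:

```python
import logging
import sqlite3

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def lookup_user(conn, user_id):
    try:
        return conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
    except sqlite3.Error as exc:
        # Full details (exception type, SQL error) go to the internal log...
        log.error("user lookup failed: %r", exc)
        # ...but the user sees only a generic message plus a support pointer,
        # revealing nothing about tables, queries, or stack traces.
        return "An error occurred. Please contact support (ref: ERR-1001)."

conn = sqlite3.connect(":memory:")  # no 'users' table: the query will fail
print(lookup_user(conn, 1))
```

The attacker's reconnaissance gains nothing from the response, while the operations team can still correlate `ERR-1001` reports with the detailed log entry.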
A. Anti-tampering protection
B. Authenticity of code origin
C. Runtime permissions for code
D. Authentication of users
Answer is D
Rationale/Answer Explanation:
Code signing can provide all of the following: anti-tampering protection assuring integrity of code, authenticity (not authentication) of code origin, and runtime permissions for the code to access system resources. The primary benefit of code signing is that it provides users with the identity of the software’s creator, and this is particularly important for mobile code, which is code downloaded from a remote location over the Internet.
A. Distant observation
B. Cold boot
C. Power analysis
D. Timing
Answer is D
Rationale/Answer Explanation:
Poorly designed and implemented systems are expected to be insecure, but even the most well-designed and well-implemented systems have subtle gaps between their abstract models and their physical realization due to the existence of side channels. A side channel is a potential source of information flow from a physical system to an adversary beyond what is available via the conventional (abstract) model. These include subtle observation of timing, electromagnetic radiation, power usage, analog signals, and acoustic emanations. The use of nonconventional and specialized techniques along with physical access to the target system to discover information is characteristic of side channel attacks. The analysis of delayed error messages between successful and unsuccessful queries is a form of timing side channel attack.
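A common software-level instance of the timing side channel is secret comparison: naive equality returns as soon as one character differs, so response time leaks how much of a secret a guess got right. The standard mitigation is a constant-time comparison, as sketched here:

```python
import hmac

def naive_equals(secret: str, guess: str) -> bool:
    # Early exit on the first mismatch: execution time depends on how
    # many leading characters match, which an attacker can measure.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equals(secret: str, guess: str) -> bool:
    # compare_digest examines every byte regardless of mismatches,
    # removing the timing signal.
    return hmac.compare_digest(secret.encode(), guess.encode())

print(naive_equals("s3cret", "s3cret"), constant_time_equals("s3cret", "guess!"))
```

The same principle applies to the delayed-error-message example in the rationale: making success and failure paths take indistinguishable time removes the channel.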
A. Imperative syntax security
B. Declarative syntax security
C. Code signing
D. Code obfuscation
Answer is B
Rationale/Answer Explanation:
There are two types of security syntax: declarative security and imperative security. Declarative syntax addresses the “what” part of an action, whereas imperative syntax tries to deal with the “how” part. When security requests are made in the form of attributes (in the metadata of the code), it is referred to as declarative security. It does not precisely define the steps as to how the security will be realized. When security requests are made through programming logic within a function or method body, it is referred to as imperative security. Declarative security is an all-or-nothing kind of implementation, while imperative security offers greater levels of granularity and control because the security requests run as lines of code intermixed with the application code.
A. Cryptographic agility
B. Parametric polymorphism
C. Declarative security
D. Imperative security
Answer is D
Rationale/Answer Explanation:
When security requests are made in the form of attributes, it is referred to as declarative security. It does not precisely define the steps as to how the security will be realized. Declarative syntax actions can be evaluated without running the code because attributes are stored as part of an assembly’s metadata, while the imperative security actions are stored as intermediary language (IL). This means that imperative security actions can be evaluated only when the code is running. Declarative security actions are checks before a method is invoked and are placed at the class level, being applicable to all methods in that class, unlike imperative security. Declarative security is an all-or-nothing kind of implementation, while imperative security offers greater levels of granularity and control, because the security requests run as lines of code intermixed with the application code.
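The declarative/imperative distinction above is usually shown with .NET attributes, but the same contrast can be sketched in Python, where a decorator plays the role of a declarative attribute checked before the method body runs, and an inline check is the imperative form. All names here are illustrative assumptions:

```python
import functools

current_user = {"name": "alice", "roles": {"admin"}}

# Declarative style: the requirement is stated as an attribute (decorator)
# on the method, and the check runs before the method body is invoked.
def requires_role(role):
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if role not in current_user["roles"]:
                raise PermissionError(f"{role} role required")
            return fn(*args, **kwargs)
        return checked
    return wrap

@requires_role("admin")
def delete_account(account_id):
    return f"deleted {account_id}"

# Imperative style: the check is ordinary code inside the method body,
# which permits finer-grained, conditional logic (here, only when forced).
def close_account(account_id, force=False):
    if force and "admin" not in current_user["roles"]:
        raise PermissionError("admin role required for forced closure")
    return f"closed {account_id}"

print(delete_account(42), close_account(42, force=True))
```

The decorator is all-or-nothing for the whole call, while the inline check can be mixed with application logic, mirroring the granularity trade-off the rationale describes.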
A. Error handling
B. Exception management
C. Locality of reference
D. Generics
Answer is C
Rationale/Answer Explanation:
Computer processors tend to access memory in a very patterned way. For example, in the absence of branching, if memory location X is accessed at time t, there is a high probability that memory location X+1 will also be accessed in the near future. This kind of clustering of memory references into groups is referred to as locality of reference. The basic forms of locality of reference are temporal (based on time), spatial (based on address space), branch (based on conditional branching), and equidistant (somewhere between spatial and branch, using simple linear functions that look for equidistant memory locations to predict which location will be accessed in the near future). While this is good from a performance vantage point, it can lead to an attacker’s predicting memory address spaces and causing memory corruption and buffer overflow.
A. References to memory locations of destroyed objects
B. Nonfunctional code left behind in the source
C. Payload code that the attacker uploads into memory to execute
D. References in memory locations used prior to being initialized
Answer is A
Rationale/Answer Explanation:
A dangling pointer, also known as a stray pointer, occurs when a pointer points to an invalid memory address. This is often observed when memory management is left to the developer. Dangling pointers are usually created in one of two ways. An object is destroyed (freed), but the reference to the object is not reassigned and is later used. Or a local object is popped from the stack when the function returns, but a reference to the stack-allocated object is still maintained. Attackers write exploit code to take control of dangling pointers so that they can move the pointer to where their arbitrary shell code is injected.
A. Data execution prevention (DEP)
B. Executable space protection (ESP)
C. Address space layout randomization (ASLR)
D. Safe security exception handler (/SAFESEH)
Answer is C
Rationale/Answer Explanation:
In the past, the memory manager would try to load binaries at the same location in the linear address space each time the program was run. This behavior made it easier for shell coders by ensuring that certain modules of code would always reside at a fixed address and could be referenced in exploit code using raw numeric literals. Address space layout randomization (ASLR) is a feature in newer operating systems (introduced in Windows Vista) that deals with this predictable and direct referencing issue. ASLR makes the binary load in a random address space each time the program is run.
A. Object code
B. Obfuscated code
C. Encrypted code
D. Hashed code
Answer is B
Rationale/Answer Explanation:
Reverse engineering is used to infer how a program works by inspecting it. Code obfuscation, which makes the readability of code extremely difficult and confusing, can be used to deter (not prevent) reverse engineering attacks. Obfuscating code is not detective or corrective in its implementation.
A. Version control
B. Patching
C. Audit logging
D. Change control
Answer is A
Rationale/Answer Explanation:
The ability to track ownership, changes in code, and rollback abilities is possible because of versioning, which is a configuration management process. Release management of software should include proper source code control and versioning. A phenomenon known as “regenerative bugs” is often observed when it comes to improper release management processes. Regenerative bugs are fixed software defects that reappear in subsequent releases of the software. This happens when the software coding defect (bug) is detected in the testing environment (such as user acceptance testing), and the fix is made in that test environment and promoted to production without retrofitting it into the development environment. The latest version in the development environment does not have the fix, and the issue reappears in subsequent versions of the software.
A. Runtime behavior of code can be analyzed
B. Business logic flaws are more easily detectable
C. Analysis is performed in a production or production-like environment
D. Errors and vulnerabilities can be detected earlier in the life cycle
Answer is D
Rationale/Answer Explanation:
The one thing that is common in all software is source code, and this source code needs to be reviewed from a security perspective to ensure that security vulnerabilities are detected and addressed before the software is released into the production environment or to customers. Code review is the process of systematically analyzing the code for insecure and inefficient coding issues. In addition to static analysis, which reviews code before it goes live, there are also dynamic analysis tools, which conduct automated scans of applications in production to unearth vulnerabilities. In other words, dynamic tools test from the outside in, while static tools test from the inside out. Just because the code compiles without any errors, it does not necessarily mean that it will run without errors at runtime. Dynamic tests are useful to get a quick assessment of the security of the applications. This also comes in handy when source code is not available for review.
A. Encryption of data when they are processed
B. Hashing of data when they are stored
C. Hiding of data within other media objects when they are transmitted
D. Masking of data when they are displayed
Answer is D
Rationale/Answer Explanation:
Masking does not use any overt cryptography operations, such as encryption, decryption, or hashing, or covert operations, such as data hiding, as in the case of steganography, to provide disclosure protection.
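Masking can be sketched in a few lines: most of the value is replaced with a fixed character at display time, while the stored value is unchanged and no cryptographic operation is involved. The function name and defaults are illustrative:

```python
# Masking replaces all but the last few characters of a sensitive value
# with a fixed mask character at display time. No encryption, hashing,
# or data hiding is involved; the stored value itself is untouched.
def mask(value: str, visible: int = 4, mask_char: str = "*") -> str:
    if len(value) <= visible:
        return mask_char * len(value)
    return mask_char * (len(value) - visible) + value[-visible:]

print(mask("4111111111111111"))  # ************1111
```

This is why masking counts as disclosure protection only at the presentation layer: anyone with access to the underlying store still sees the full value.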
A. Natural language
B. Very high-level language (VHLL)
C. High-level language (HLL)
D. Low-level language
Answer is D
Rationale/Answer Explanation:
A programming language in which there is little to no abstraction from the native instruction codes that the computer can understand is also referred to as low-level language. There is no abstraction from native instruction codes in machine language. Assembly languages are the lowest level in the software chain, which makes them incredibly suitable for reversing. It is therefore important to have an understanding of low-level programming languages to understand how an attacker will attempt to circumvent the security of the application at its lowest level.
A. Redundancy
B. Recoverability
C. Resiliency
D. Reliability
Answer is B
Rationale/Answer Explanation:
When the software performs as it is expected to, it is said to be reliable. When errors occur, the reliability of software is impacted, and the software needs to be able to restore itself to expected operations. The ability of the software to be restored to normal, expected operations is referred to as recoverability. The ability of the software to withstand attacks against its reliability is referred to as resiliency. Redundancy is about availability, and reconnaissance is related to information gathering, as in fingerprinting/footprinting.
A. Waterfall
B. Agile
C. Spiral
D. Prototyping
Answer is B
Rationale/Answer Explanation:
Unit testing enables collective code ownership. Collective code ownership encourages everyone to contribute new ideas to all segments of the project. Any developer can change any line of code to add functionality, fix bugs, or re-factor. No one person becomes a bottleneck for changes. The way this works is for each developer to work in concert (usually more in agile methodologies than the traditional model) to create unit tests for his code as it is developed. All code released into the source code repository includes unit tests. Code that is added, bugs as they are fixed, and old functionality as it is changed will be covered by automated testing.
A. Logic
B. Scalability
C. Integration
D. Unit
Answer is A
Rationale/Answer Explanation:
If-then rules are constructs of logic, and when these constructs are used for software testing, it is generally referred to as logic testing.
A. Stress
B. Unit
C. Integration
D. Regression
Answer is A
Rationale/Answer Explanation:
Tests that assure that the service level requirements are met are characteristic of performance testing. Load and stress testing are types of performance tests. While stress testing is testing by starving the software, load testing is done by subjecting the software to extreme volumes or loads.
A. Regression
B. Stress
C. Integration
D. Simulation
Answer is B
Rationale/Answer Explanation:
The goal of stress testing is to determine if the software will continue to operate reliably under duress or extreme conditions. Often the resources that the software needs are taken away from the software, and the software’s behavior is observed as part of the stress test.
A. Source code analyzers
B. Fuzzers
C. Banner grabbing software
D. Scanners
Answer is A
Rationale/Answer Explanation:
White box testing, or structural analysis, is about testing the software with prior knowledge of the code and configuration. Source code review is a type of white box testing. Embedded code issues that are implanted by insiders, such as Trojan horses and logic bombs, can be detected using source code analyzers.
A. White box
B. Black box
C. Clear box
D. Glass box
Answer is B
Rationale/Answer Explanation:
In black box or behavioral testing, test conditions are developed on the basis of the program’s or system’s functionality; that is, the tester requires information about the input data and observed output, but does not know how the program or system works. The tester focuses on testing the program’s behavior (or functionality) against the specification. With black box testing, the tester views the program as a black box and is completely unconcerned with the internal structure of the program or system. In white box or structural testing, the tester knows the internal program structure, such as paths, statement coverage, branching, and logic. White box testing is also referred to as clear box or glass box testing. Gray box testing is a software testing technique that uses a combination of black box and white box testing.
A. Rules of engagement
B. Role-based access control mechanisms
C. Threat models
D. Use cases
Answer is A
Rationale/Answer Explanation:
Penetration testing must be controlled, not ad hoc in nature, with properly defined rules of engagement.
A. Availability
B. Authentication
C. Nonrepudiation
D. Authorization
Answer is C
Rationale/Answer Explanation:
When session management is in place, it provides for authentication, and when authentication is combined with auditing capabilities, it provides nonrepudiation. In other words, the authenticated user cannot claim broken sessions or intercepted authentication and deny their user actions due to the audit logs’ recording their actions.
A. Injection flaws
B. Lack of reverse engineering protection
C. Cross-site scripting
D. Broken session management
Answer is B
Rationale/Answer Explanation:
Disassemblers, debuggers, and decompilers are utilities that can be used for reverse engineering software, and software testers should have these utilities in their list of tools to validate protection against reversing.
A. Defect identifier
B. Title
C. Expected results
D. Tester name
Answer is C
Rationale/Answer Explanation:
Knowledge of the expected results along with the defect information can be used to determine the variance between what the results need to be and what is deficient.
A. Hashing
B. Cloaking
C. Masking
D. Watermarking
Answer is B
Rationale/Answer Explanation:
Detection of Web server versions is usually done by analyzing HTTP responses. This process is known as banner grabbing. But the administrator can change the information that gets reported, and this process is known as cloaking. Banner cloaking is a security through obscurity approach to protect against version enumeration.
A. Truly random data without any consideration for the data structure
B. Variations of data structures that are known
C. Data that get interpreted as commands by a backend interpreter
D. Scripts that are reflected and executed on the client browser
Answer is B
Rationale/Answer Explanation:
The process of sending random data to test security of an application is referred to as “fuzzing” or “fuzz testing.” There are two levels of fuzzing: dumb fuzzing and smart fuzzing. Sending truly random data, known as dumb fuzzing, often does not yield great results and has the potential of bringing the software down, causing a denial of service (DoS). If the code being fuzzed requires data to be in a certain format but the fuzzer does not create data in that format, most of the fuzzed data will be rejected by the application. The more knowledge the fuzzer has of the data format, the more intelligent it can be at creating data. These more intelligent fuzzers are known as smart fuzzers.
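The dumb/smart distinction can be demonstrated with a toy parser. This sketch assumes a hypothetical `KEY=VALUE` input format: dumb fuzzing sends arbitrary characters, most of which fail the format gate, while smart fuzzing mutates a known-valid template so the interesting parsing logic is actually exercised.

```python
import random

random.seed(7)  # deterministic for the example

def parse(line: str):
    # The code under test expects "KEY=VALUE" and rejects anything else.
    if "=" not in line:
        raise ValueError("bad format")
    key, value = line.split("=", 1)
    return key, value

def dumb_fuzz_input() -> str:
    # Truly random printable characters, no knowledge of the format.
    return "".join(chr(random.randrange(32, 127)) for _ in range(12))

def smart_fuzz_input() -> str:
    # Mutate only the value inside a known-valid "user=..." template.
    payload = "".join(chr(random.randrange(32, 127)) for _ in range(8))
    return "user=" + payload

dumb_ok = smart_ok = 0
for _ in range(100):
    try:
        parse(dumb_fuzz_input())
        dumb_ok += 1
    except ValueError:
        pass
    try:
        parse(smart_fuzz_input())
        smart_ok += 1
    except ValueError:
        pass

print(dumb_ok, smart_ok)  # far fewer dumb inputs get past the format gate
```

Every smart input passes the format check, while most dumb inputs are rejected before reaching the parsing logic, which is exactly why format-aware fuzzers find deeper bugs.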
A. Normal operational functionality is not restored automatically
B. Access to all functionality is denied
C. Confidentiality, integrity, and availability are not adversely impacted
D. End users are adequately trained and self-help is made available for the end user to fix the error on their own
Answer is C
Rationale/Answer Explanation:
As part of security testing, the principle of failsafe must be assured. This means that confidentiality, integrity, and availability are not adversely impacted when the software fails. As part of general software testing, the recoverability of the software, or restoration of the software to normal operational functionality, is an important consideration, but it need not always be an automated process.
A. Integration
B. Stress
C. Unit
D. Regression
Answer is B
Rationale/Answer Explanation:
Race conditions and resource exhaustion issues are more likely to be identified when the software is starved of the resources that it expects, as is done during stress testing.
A. The point at which the software will break
B. If the software can restore itself to normal business operations
C. The presence and effectiveness of risk mitigation controls
D. How a blackhat would circumvent access control mechanisms
Answer is C
Rationale/Answer Explanation:
Security testing must include both external (blackhat) and insider threat analysis, and it should be more than just testing for the ability to circumvent access control mechanisms. The resiliency of software is the ability of the software to be able to withstand attacks. The presence and effectiveness of risk mitigation controls increase the resiliency of the software.
A. Redundancy
B. Recoverability
C. Resiliency
D. Reliability
Answer is C
Rationale/Answer Explanation:
Resiliency of software is defined as the ability of the software to withstand attacker attempts.
A. Integration
B. Regression
C. Unit
D. Penetration
Answer is C
Rationale/Answer Explanation:
In order for unit testing to be thorough, the unit/module and the environment for the execution of the module need to be complete. The necessary environment includes the modules that either call or are called by the unit of code being tested. Stubs and drivers are designed to provide the complete environment for a module so that unit testing can be carried out. A stub procedure is a dummy procedure that has the same input/output (I/O) parameters as the given procedure. A driver module should have the code to call the different functions of the module being tested with appropriate parameter values for testing. In layman’s terms, the driver module is akin to the caller, and the stub module can be seen as the callee.
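The caller/callee relationship described above can be sketched directly. The unit under test depends on an external service; the stub stands in for that callee with the same interface, and the driver plays the caller, invoking the unit with chosen parameters. All names are illustrative:

```python
# Unit under test: depends on an external tax service (the callee).
def compute_total(amount, tax_service):
    return amount + tax_service(amount)

def tax_service_stub(amount):
    # Stub (dummy callee): same input/output interface as the real
    # service, but returns a canned value instead of doing real work.
    return round(amount * 0.10, 2)

def driver():
    # Driver (the caller): invokes the unit with appropriate parameter
    # values and checks the results.
    assert compute_total(100.0, tax_service_stub) == 110.0
    assert compute_total(0.0, tax_service_stub) == 0.0
    return "all unit tests passed"

print(driver())
```

Together the stub and driver provide the complete environment the module needs, so the unit can be tested before the real caller and callee modules exist.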
A. Unit
B. Integration
C. Performance
D. Regression
Answer is C
Rationale/Answer Explanation:
Assurance that the software meets the expectations of the business as defined in the service level agreements (SLAs) can be demonstrated by performance testing. Once the importance of the performance of an application is known, it is necessary to understand how various factors affect the performance. Security features can have an impact on performance, and this must be checked to ensure that service level requirements can be met.
A. Measure the resiliency of the software by attempting to exploit weaknesses
B. Detect the presence of loopholes and weaknesses in the software
C. Detect the effectiveness of security controls that are implemented in the software
D. Measure the skills and technical know-how of the security tester
Answer is B
Rationale/Answer Explanation:
A vulnerability is a weakness (or loophole), and vulnerability scans are used to detect the presence of weaknesses in software.
A. Source code review
B. Threat modeling the software
C. Black box testing
D. Structural analysis
Answer is C
Rationale/Answer Explanation:
Since third party vendor software is often received in object code form, access to source code is usually not provided, and structural analysis (white box) or source code analysis is not possible. Looking into the source code or a source code look-alike by reverse engineering without explicit permission can have legal ramifications. Additionally, without documentation on the architecture and software makeup, a threat modeling exercise would most likely be incomplete. License validation is primarily used for curtailing piracy and is a component of verification and validation mechanisms. Black box testing or behavioral analysis would be the best option to attest to the security of third party vendor software.
A. Compilation errors
B. Canonicalization issues
C. Cyclomatic complexity
D. Coding errors
Answer is B
Rationale/Answer Explanation:
The process of canonicalization resolves multiple forms into standard canonical forms. In software that needs to support multilingual, multicultural capabilities such as Unicode, input filtration can be bypassed by a hacker who sends in data in an alternate form from the standard form. Input validation for alternate forms is therefore necessary.
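The bypass described above can be shown with Unicode normalization from Python's standard library. A naive filter blocks `<script>`, but fullwidth compatibility characters slip through until the input is reduced to its canonical form with NFKC:

```python
import unicodedata

def naive_filter(text: str) -> bool:
    # Checks only the literal ASCII form; alternate Unicode forms pass.
    return "<script>" not in text

def canonical_filter(text: str) -> bool:
    # NFKC folds compatibility variants (e.g. fullwidth '＜') into the
    # standard canonical characters before the check is applied.
    return "<script>" not in unicodedata.normalize("NFKC", text)

attack = "\uff1cscript\uff1e"  # fullwidth '<' and '>'
print(naive_filter(attack), canonical_filter(attack))  # True False
```

Canonicalize first, then validate: applying the filter before normalization leaves every alternate encoding of the payload as a bypass.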
A. Verify that installation guides and training manuals are provided
B. Assess the presence and effectiveness of protection mechanisms
C. Validate vendors’ software products
D. Assist the vendor in responding to the request for proposals
Answer is B
Rationale/Answer Explanation:
To maintain the confidentiality, integrity, and availability of software and the data it processes, prior to the acceptance of software, vendor claims of security must be assessed not only for their presence, but also their effectiveness within your computing ecosystem.
A. Disclaimers
B. Licensing
C. Validity periods
D. Encryption
Answer is C
Rationale/Answer Explanation:
Software functionality can be restricted using a validity period as is often observed in the “try-before-you-buy” or “demo” versions of software. It is recommended to have a stripped down version of the software for the demo version, and if feasible, it is advisable to include the legal team to determine the duration of the validity period (especially in the context of digital signatures and Public Key Infrastructure solutions).
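A validity-period check in a trial build reduces to comparing the current date against an embedded expiry. The date and function names below are illustrative assumptions:

```python
import datetime

# The trial build carries an embedded expiry date; startup checks it.
TRIAL_EXPIRES = datetime.date(2030, 1, 1)

def trial_is_valid(today: datetime.date) -> bool:
    # Functionality is enabled only through the end of the validity period.
    return today <= TRIAL_EXPIRES

print(trial_is_valid(datetime.date(2029, 12, 31)))  # True
print(trial_is_valid(datetime.date(2030, 1, 2)))    # False
```

As a later question in this set notes, an incorrectly implemented check of this kind can deny service to legitimate users, so the comparison and its clock source need the same care as any other control.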
A. Avoidance
B. Mitigation
C. Acceptance
D. Transference
Answer is D
Rationale/Answer Explanation:
Since there is an independent third party engaged in an escrow agreement, business continuity is assured for the acquirer when the escrow agency maintains a copy of the source/object code from the publisher. For the publisher, it protects the intellectual property, since the source code is not handed to the acquirer directly but to the independent third-party escrow agent. For both the acquirer and the publisher, some risk is transferred to the escrow party, which is responsible for maintaining the terms of the escrow agreement.
A. Noncompete agreements
B. Nondisclosure agreements (NDA)
C. Service level agreements (SLA)
D. Trademarks
Answer is B
Rationale/Answer Explanation:
Nondisclosure agreements assure confidentiality of sensitive information, such as software programs, processing logic, database schema, and internal organizational business processes and client lists.
A. Developers
B. End users
C. Testers
D. Business owners
Answer is B
Rationale/Answer Explanation:
Disclaimers, or “as is” clauses, transfer the risk from the software provider to the end user.
A. Tampering
B. Denial of service
C. Authentication bypass
D. Spoofing
Answer is B
Rationale/Answer Explanation:
If the validity period set in the software is not properly implemented, then legitimate users can potentially be denied service. It is therefore imperative to ensure that the duration and checking mechanism of validity periods is properly implemented.
A. Verification
B. Validation
C. Authentication
D. Authorization
Answer is A
Rationale/Answer Explanation:
Verification is defined as the process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of the phase. In other words, verification ensures that the software performs as it is required and designed to do. Validation is the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. In other words, validation ensures that the software meets required specifications. A common mnemonic is that verification asks “are we building the product right?” while validation asks “are we building the right product?”
A. Redundancy
B. Reliability
C. Resiliency
D. Recoverability
Answer is B
Rationale/Answer Explanation:
Verification ensures that the software performs as it is required and designed to do, which is a measure of the software’s reliability.
A. Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)
B. Security quality requirements engineering (SQUARE)
C. Common criteria
D. Comprehensive, lightweight application security process (CLASP)
Answer is C
Rationale/Answer Explanation:
The Common Criteria (ISO/IEC 15408) is a security product evaluation methodology with clearly defined ratings, namely evaluation assurance levels (EALs). In addition to assurance validation, the Common Criteria also validate software functionality for the security target. EAL ratings assure the owner of the assurance capability of the software/system, so the Common Criteria are also referred to as an owner assurance model.
A. Regression testing
B. Integration testing
C. Unit testing
D. User acceptance testing
Answer is D
Rationale/Answer Explanation:
The end users of the business have the final say on whether the software can be deployed/released. User acceptance testing (UAT) determines the readiness of the software for deployment to the production environment or release to an external customer.
A. Patching
B. Hardening
C. Certification
D. Accreditation
Answer is D
Rationale/Answer Explanation:
While certification is the assessment of the technical and nontechnical security controls of the software, accreditation is a management activity that assures that the software has adequate levels of software assurance protection mechanisms.
A. Avoid the risk by forcing the business to discontinue use of the software
B. Accept the risk with a documented exception
C. Transfer the risk by buying insurance
D. Ignore the risk since it is legacy software
Answer is B
Rationale/Answer Explanation:
When there are known vulnerabilities in legacy software and there is not much you can do to mitigate the vulnerabilities, it is recommended that the business accept the risk with a documented exception to the security policy. When accepting this risk, the exception to policy process must ensure that there is a contingency plan in place to address the risk by either replacing the software with a new version or discontinuing its use (risk avoidance). Transferring the risk may not be a viable option for legacy software that is already in your production environment, and one must never ignore the risk or take the vulnerable software out of the scope of an external audit.
A. Board members and executive management
B. Business owner
C. Information technology (IT) management
D. Security organization
E. Developers
Answer is B
Rationale/Answer Explanation:
Risk must always be accepted formally by the business owner.
A. Inadequate integration testing
B. Incompatible environment configurations
C. Incomplete threat modeling
D. Ignored code review
Answer is B
Rationale/Answer Explanation:
When the production environment does not mirror the development or test environments, software that works fine in nonproduction environments is often observed to experience issues when deployed to production. This underlines the need for simulation testing.
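One simple way to catch such mismatches before deployment is to diff the environment configurations. A minimal sketch, with a hypothetical `config_drift` helper and invented settings:

```python
def config_drift(prod, test):
    """Report settings whose values differ between two environments.

    Hypothetical helper: returns {key: (test_value, prod_value)} for
    every setting that differs; a key missing on one side shows None.
    """
    keys = set(prod) | set(test)
    return {k: (test.get(k), prod.get(k))
            for k in keys if test.get(k) != prod.get(k)}

# Invented example configs: test works, but production differs.
test_env = {"db_pool": 5, "debug": True, "timeout": 30}
prod_env = {"db_pool": 50, "debug": False, "timeout": 30}
```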
A. Quantitatively expressed
B. Objectively expressed
C. Contextually relevant
D. Collected manually
Answer is D
Rationale/Answer Explanation:
A good security metric is expressed quantitatively and is contextually relevant. Regardless of how many times the metrics are collected, the results do not vary significantly. Good metrics are usually collected in an automated manner so that the collector's subjectivity does not come into play.
A. Hardening
B. Patching
C. Reversing
D. Obfuscation
Answer is A
Rationale/Answer Explanation:
Locking down the software by removing unneeded code and documentation to reduce the attack surface of the software is referred to as software hardening. Before hardening the software, it is crucial to harden the operating system of the host on which the software program will be run.
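The attack-surface-reduction idea can be sketched as an allowlist pass over enabled services: anything not explicitly needed is disabled. The service names and the `harden_services` helper are illustrative assumptions, not a real hardening tool.

```python
def harden_services(enabled, allowlist):
    """Reduce attack surface by disabling every service not on the allowlist.

    Returns (kept, removed): the services left running, in their
    original order, and the ones disabled, sorted for reporting.
    """
    removed = sorted(set(enabled) - set(allowlist))
    kept = [s for s in enabled if s in allowlist]
    return kept, removed

# Invented example: only ssh and http are actually needed.
enabled = ["ssh", "telnet", "http", "ftp"]
allowlist = ["ssh", "http"]
```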
A. Threat modeling
B. Code review
C. Continuous monitoring
D. Regression testing
Answer is C
Rationale/Answer Explanation:
Operations security is about staying secure or keeping the resiliency levels of the software above the acceptable risk levels. It is the assurance that the software will continue to function as expected in a reliable fashion for the business without compromising its state of security by monitoring, managing, and applying the needed controls to protect resources (assets).
A. Preventive
B. Corrective
C. Compensating
D. Detective
Answer is D
Rationale/Answer Explanation:
Audit logging is a type of detective control. When the users are made aware that their activities are logged, audit logging could function as a deterrent control, but it is primarily used for detective purposes. Audit logs can be used to build the sequence of historical events and give insight into who (subject such as user/process) did what (action), where (object), and when (timestamp).
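The who/what/where/when structure of an audit record can be sketched as a small helper that emits one structured log entry; `audit_record` and its field names are assumptions for illustration.

```python
import json
from datetime import datetime, timezone

def audit_record(subject, action, obj):
    """Build a structured audit-log entry: who (subject) did what
    (action), where (object), and when (UTC timestamp)."""
    return json.dumps({
        "who": subject,
        "what": action,
        "where": obj,
        "when": datetime.now(timezone.utc).isoformat(),
    })

# Example entry for a user updating a payroll table.
entry = json.loads(audit_record("alice", "UPDATE", "payroll.salaries"))
```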
A. Meet the intent and rigor of the original requirement
B. Provide an increased level of defense over the original requirement
C. Be implemented as part of a defense in depth measure
D. Commensurate with additional risk imposed by not adhering to the requirement
Answer is B
Rationale/Answer Explanation:
PCI DSS prescribes that the compensating control must provide a similar level, not an increased level of defense over the original requirement.
A. Distributed denial of service (DDoS)
B. Malware
C. Logic bombs
D. DNS poisoning
Answer is C
Rationale/Answer Explanation:
Logic bombs can be planted by an insider, and when the internal network is not monitored, the likelihood of this is much higher.
A. Authentication
B. Authorization
C. Archiving
D. Auditing
Answer is D
Rationale/Answer Explanation:
IDS and auditing are both detective types of controls that can be used to monitor the security health of the computing environment continuously.
A. Notify management of the security breach
B. Research the validity of the alert or event further
C. Inform potentially affected customers of a potential breach
D. Conduct an independent third party evaluation to investigate the reported breach
Answer is B
Rationale/Answer Explanation:
Upon the report of a breach, it is important to go into a triaging phase in which the validity and severity of the alert/event is investigated further. This reduces the number of false positives that are reported to management.
A. Informing the developers that they could lose their jobs if their software is breached
B. Informing management that the organizational software could be hacked
C. Informing the project team about the recent breach of the competitor’s software
D. Informing the development team that there should be no injection flaws in the payroll application
Answer is D
Rationale/Answer Explanation:
Using security metrics over fear, uncertainty, and doubt (FUD) is the best recommendation to champion security objectives within the software development organization.
A. Penetration testing
B. Audits
C. Threat modeling
D. Code review
Answer is B
Rationale/Answer Explanation:
Periodic audits (both internal and external) can be used to assess the overall state of the organization’s security health.
A. Correlation
B. Normalization
C. Collection
D. Visualization
Answer is B
Rationale/Answer Explanation:
Normalizing logs means that duplicate and redundant information is removed from the logs after the time across log sets is synchronized. The normalized logs are then parsed so that patterns can be identified in the subsequent correlation phase.
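The two normalization steps above (clock synchronization, then deduplication) can be sketched as follows; the log tuples and the per-source `skew` table are invented for illustration.

```python
from datetime import datetime, timezone, timedelta

def normalize(logs, skew):
    """Shift each source's timestamps by its known clock skew, then
    drop events whose (time, message) pair has already been seen --
    the normalization that precedes the correlation phase."""
    seen, out = set(), []
    for source, ts, message in logs:
        ts = ts + skew.get(source, timedelta(0))  # synchronize clocks
        key = (ts, message)
        if key not in seen:                       # remove redundant entries
            seen.add(key)
            out.append((ts, source, message))
    return sorted(out)

t0 = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
logs = [
    ("fw",  t0,                        "deny tcp 10.0.0.5"),
    ("ids", t0 - timedelta(minutes=5), "deny tcp 10.0.0.5"),  # ids clock 5 min slow
    ("fw",  t0,                        "deny tcp 10.0.0.5"),  # exact duplicate
]
skew = {"ids": timedelta(minutes=5)}
```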
A. Detection
B. Containment
C. Eradication
D. Recovery
Answer is D
Rationale/Answer Explanation:
The incident response process involves preparation, detection, analysis, containment, eradication, and recovery. The goal of incident management is to restore (recover) service to normal business operations.
A. Restoring service to the expectation of the business user
B. Determining the alerts and events that need to be continuously monitored
C. Depicting incident information in easy to understand user friendly format
D. Identifying and eliminating the root cause of the problem
Answer is D
Rationale/Answer Explanation:
The goal of problem management is to identify and eliminate the root cause of the problem. All of the other definitions are related to incident management. The goal of incident management is to restore service, while the goal of problem management is to improve service.
A. Versioning
B. Hardening
C. Patching
D. Porting
Answer is C
Rationale/Answer Explanation:
Patching is the process of applying updates and hot fixes. Porting is the process of adapting software so that an executable program can be created for a computing environment that is different from the one for which it was originally designed (e.g., different processor architecture, operating system, or third party software library).
A. Threat modeling
B. Requirements analysis
C. Network deployment
D. Root cause analysis
Answer is D
Rationale/Answer Explanation:
Ishikawa diagrams or fishbone diagrams are used to identify the cause and effect of a problem and are commonly used to determine the root cause of the problem.
A. Proper functioning of new features
B. Cryptographic agility
C. Backward compatibility
D. Enabling of previously disabled services
Answer is C
Rationale/Answer Explanation:
Regression testing of patches is crucial to ensure that the patch introduced no new side effects and that all previous functionality still works as expected.
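A minimal sketch of what such a regression check looks like in practice: a set of previously recorded input/output pairs is replayed against the patched routine. The `apply_discount` function and its baseline cases are invented for illustration.

```python
def apply_discount(price, pct):
    """Patched pricing routine; the patch added the range check."""
    if not 0 <= pct <= 100:
        raise ValueError("pct out of range")  # new validation from the patch
    return round(price * (100 - pct) / 100, 2)

# Regression baseline: inputs and expected outputs recorded pre-patch.
baseline = [((100.0, 10), 90.0), ((19.99, 0), 19.99)]

def regression_passes():
    """Replay the recorded cases to confirm the patch broke nothing."""
    return all(apply_discount(*args) == expected
               for args, expected in baseline)
```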
A. End-of-life
B. Vulnerability management
C. Privacy
D. Data classification
Answer is A
Rationale/Answer Explanation:
End-of-life (EOL) policies are used for the disposal of code, configuration, and documents based on organizational and regulatory requirements.
A. Mitigation
B. Transference
C. Acceptance
D. Avoidance
Answer is D
Rationale/Answer Explanation:
When software with known vulnerabilities is replaced with a secure version, it is an example of avoiding the risk. It is not transference because the new version may not have the same risks. It is not mitigation since no controls are implemented to address the risk of the old software. It is not acceptance since the risk of the old software is replaced with the risk of the newer version. It is not ignorance, because the risk is not left unhandled.
A. Malware
B. Dumpster divers
C. Social engineers
D. Script kiddies
Answer is B
Rationale/Answer Explanation:
Dumpster divers are threat agents that can steal information from printed media (e.g., printer ribbons, facsimile transmission, printed paper).
A. Honeypot
B. Sandbox
C. Simulated
D. Production
Answer is B
Rationale/Answer Explanation:
Preventing malicious file execution attacks takes careful planning, from the architecture and design phases of the SDLC through thorough testing. In general, a well written application will not use user-supplied input in any filename for any server-based resource (such as images, XML and XSL transform documents, or script inclusions) and will have firewall rules in place preventing new outbound connections to the Internet or internally back to any other server. However, many legacy applications continue to need to accept user-supplied input and files without adequate levels of validation built in. When this is the case, it is advisable to keep such files out of the production environment and upload them to a sandbox environment before they are processed.
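One defensive building block for the upload path is to strip user-controlled directory components and confine every file to the sandbox directory. The `UPLOAD_DIR` value and the `safe_upload_path` helper are hypothetical; this is a sketch of the technique, not a complete upload handler.

```python
import os

UPLOAD_DIR = "/var/app/sandbox"  # hypothetical quarantine directory

def safe_upload_path(filename):
    """Confine a user-supplied filename to the sandbox directory."""
    name = os.path.basename(filename)      # strip any directory components
    if not name or name.startswith("."):
        raise ValueError("invalid filename")
    path = os.path.normpath(os.path.join(UPLOAD_DIR, name))
    # Defense in depth: refuse anything that resolved outside the sandbox.
    if not path.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("path traversal attempt")
    return path
```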
A. At the end of development phase of the project
B. Before and after the code is implemented
C. Before and after the software requirements are complete
D. At the end of the deployment phase of the project
Answer is B
Rationale/Answer Explanation:
In order to understand whether there is an improvement in the resiliency of the software, the RASQ, which attempts to quantify the number and kinds of vectors available to an attacker, needs to be computed both before the code is implemented and after development is completed and the code is frozen.
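The idea of a relative attack surface quotient can be sketched as a weighted count of attack vectors; the vector categories and weights below are invented for illustration and are not the published RASQ figures.

```python
# Illustrative per-vector weights (hypothetical values, not the
# actual RASQ weightings).
WEIGHTS = {"open_socket": 1.0, "weak_acl": 0.9, "enabled_service": 0.8}

def rasq(vectors):
    """Sum weighted vector counts into a relative score; a lower
    score after hardening indicates a reduced attack surface."""
    return sum(WEIGHTS[v] * n for v, n in vectors.items())

# Invented counts before and after the code is implemented/hardened.
before = {"open_socket": 4, "weak_acl": 2, "enabled_service": 5}
after = {"open_socket": 2, "weak_acl": 0, "enabled_service": 2}
```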
A. Object code
B. Type safe code
C. Obfuscated code
D. Source code
Answer is B
Rationale/Answer Explanation:
Code is said to be type safe if it accesses only memory resources that belong to the memory assigned to it. Type safety verification takes place during the just-in-time (JIT) compilation phase and prevents unsafe code from becoming active. Although you can disable type safety verification, doing so can lead to unpredictable results. The best example is that code can make unrestricted calls to unmanaged code, and if that code has malicious intent, the results can be severe. Therefore, the framework allows only fully trusted assemblies to bypass verification. Type safety is a form of "sandboxing." Type safety must be one of the most important security considerations when selecting a programming language and phasing out older generation programming languages.
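As a small illustration of the behavior a type-safe runtime enforces: an out-of-bounds access is refused with an error rather than silently reading adjacent memory, as unchecked code in an unmanaged language can. The `read_element` helper is an assumption for illustration.

```python
def read_element(buf, i):
    """In a type-safe runtime, an out-of-bounds read raises an error
    instead of returning whatever bytes happen to sit past the buffer."""
    try:
        return buf[i]
    except IndexError:
        return None  # access outside the memory assigned to buf is refused

data = [10, 20, 30]
```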
A. Periodically patching database servers
B. Implementing source code version control
C. Logging all database access requests
D. Proper change control management
Answer is D
Rationale/Answer Explanation:
Proper change control management is useful to provide separation of duties as it can prevent direct access to backend databases by developers.
A. Disaster Recovery Plan
B. Project Management Plan
C. Incident Response Plan
D. Quality Assurance and Testing Plan
Answer is C
Rationale/Answer Explanation:
An Incident Response Plan (IRP) must be developed and tested for completeness as it is the document that one should refer to and follow in the event of a security breach. The effectiveness of an IRP is dependent on the awareness of users on how to respond to an incident, and increased awareness can be achieved by proper education and training.