Appendix A
Answers to Sample Questions

Domain 1: Access Controls

  1. What type of controls are used in a rule set based access control system?
    1. Discretionary
    2. Mandatory
    3. Role based
    4. Compensating

    Answer: A

    Rule set based access controls (RSBAC) are discretionary controls giving data owners the discretion to determine the rules necessary to facilitate access.

  2. What framework is the rule set based access controls logic based upon?
    1. Logical framework for access control
    2. Specialized framework for access control
    3. Technical framework for access control
    4. Generalized framework for access control

    Answer: D

    The RSBAC framework logic is based on the work done for the generalized framework for access control (GFAC) by Abrams and LaPadula.

  3. View based access controls are an example of a:
    1. Audit control
    2. Constrained user interface
    3. Temporal constraint
    4. Side Channel

    Answer: B

    View based access controls (VBAC) are most commonly found in database applications to control access to specific parts of a database. The constrained user interface in VBAC restricts or limits an access control subject’s ability to view or perhaps act on “components” of an access control object based on the access control subject’s assigned level of authority. Views are dynamically created by the system for each user-authorized access.

    Simply put, VBAC separates a given access control object into subcomponents and then permits or denies access for the access control subject to view or interact with specific subcomponents of the underlying access control object.
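    The view mechanism described above can be sketched with a few lines of SQL through Python’s sqlite3 module (the table, column, and view names are illustrative, not from the text):

```python
import sqlite3

# Minimal VBAC sketch: a view exposes only a subcomponent of the underlying
# access control object, so a restricted subject never sees the full row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blood_tests (patient TEXT, glucose REAL, hiv_result TEXT)")
conn.execute("INSERT INTO blood_tests VALUES ('pat1', 5.4, 'negative')")

# The constrained user interface: general staff query the view, not the table.
conn.execute("CREATE VIEW blood_tests_general AS SELECT patient, glucose FROM blood_tests")

row = conn.execute("SELECT * FROM blood_tests_general").fetchone()
print(row)  # the hiv_result column is not visible through the view
```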

  4. Which of the following are supported authentication methods for iSCSI? (Choose two.)
    1. Kerberos
    2. Transport layer security (TLS)
    3. Secure remote password (SRP)
    4. Layer 2 tunneling protocol (L2TP)

    Answer: A and C

    There are a number of authentication methods supported with iSCSI:

    • Kerberos—Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.
    • SRP (Secure Remote Password)—SRP is a secure password-based authentication and key-exchange protocol. SRP exchanges a cryptographically-strong secret as a byproduct of successful authentication, which enables the two parties to communicate securely.
    • SPKM1/2 (Simple Public-Key Mechanism)—SPKM provides authentication, key establishment, data integrity, and data confidentiality in an online distributed application environment using a public-key infrastructure. The use of a public-key infrastructure allows digital signatures supporting non-repudiation to be employed for message exchanges.
    • CHAP (Challenge Handshake Authentication Protocol)—CHAP is used to periodically verify the identity of the peer using a 3-way handshake. This is done upon initial link establishment, and may be repeated any time after the link has been established.
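    The CHAP handshake above can be sketched in Python; per RFC 1994, the response is an MD5 hash over the one-octet identifier, the shared secret, and the challenge, so the secret itself never crosses the wire (the secret value below is illustrative):

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP with MD5: hash the one-octet identifier, the shared
    secret, and the random challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator sends a random challenge; the peer answers with the hash.
secret = b"shared-secret"          # provisioned out of band (illustrative value)
challenge = os.urandom(16)
response = chap_response(1, secret, challenge)

# The authenticator computes the same hash and compares the two values.
authenticated = response == chap_response(1, secret, challenge)
print(authenticated)
```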
  5. According to the following scenario, what would be the most appropriate access control model to deploy?

    Scenario: A medical records database application is used by a health-care worker to access blood test records. If a record contains information about an HIV test, the health-care worker may be denied access to the existence of the HIV test and the results of the HIV test. Only specific hospital staff would have the necessary access control rights to view blood test records that contain any information about HIV tests.
    1. Discretionary access control
    2. Context based access control
    3. Content dependent access control
    4. Role based access control

    Answer: C

    Content dependent access control is used to protect databases containing sensitive information. Content dependent access control works by permitting or denying the access control subjects access to access control objects based on the explicit content within the access control object.

    Context based access control is often confused with content dependent access control but they are two completely different methodologies. While content dependent access control makes decisions based on the content within an access control object, context based access control is not concerned with the content; it is only concerned with the context or the sequence of events leading to the access control object being allowed through the firewall.

    In the example of blood test records for content dependent access control above, the access control subject would be denied access to the access control object because it contained information about an HIV test. Context based access control could be used to limit the total number of requests for access to any blood test records over a given period of time. Hence, a health-care worker may be limited to no more than 100 accesses to the blood test database in a 24-hour period.

    While context based access control does not require that permissions be configured for individual access control objects, it requires that rules be created in relation to the sequence of events that precede an access attempt.

  6. Which of the following is NOT one of the three primary rules in a Biba formal model?
    1. An access control subject cannot request services from an access control object that has a higher integrity level.
    2. An access control subject cannot modify an access control object that has a higher integrity level.
    3. An access control subject cannot access an access control object that has a lower integrity level.
    4. An access control subject cannot access an access control object that has a higher integrity level.

    Answer: D

    The statement “an access control subject cannot access an access control object that has a higher integrity level” is not one of the three primary rules in the Biba formal model.
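    The three primary rules can be sketched as simple level comparisons (the helper names and integer levels are assumptions for illustration; a higher number means higher integrity):

```python
def can_invoke(subject_level: int, object_level: int) -> bool:
    # Invocation property: no requesting services from a higher integrity level.
    return object_level <= subject_level

def can_write(subject_level: int, object_level: int) -> bool:
    # Star integrity axiom: no modifying an object at a higher integrity level.
    return object_level <= subject_level

def can_read(subject_level: int, object_level: int) -> bool:
    # Simple integrity axiom: no accessing an object at a lower integrity level.
    return object_level >= subject_level

# A mid-level subject (2) against a low (1) and a high (3) integrity object:
print(can_read(2, 1), can_write(2, 1))  # may write down, may not read down
print(can_read(2, 3), can_write(2, 3))  # may read up, may not write up
```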

  7. Which of the following is an example of a firewall that does not use context based access control?
    1. Static packet filter
    2. Circuit gateway
    3. Stateful inspection
    4. Application proxy

    Answer: A

    Context based access control also considers the “state” of the connection, and in a static packet filter no consideration is given to the connection state. Each and every packet is compared to the rule base regardless of whether it had previously been allowed or denied.
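    The distinction can be sketched as follows: the static filter consults only the rule base for every packet, while the stateful filter also remembers established connections (the packet fields and the single rule are illustrative):

```python
RULES = [("tcp", 80, "allow")]  # protocol, destination port, action

def rule_allows(proto: str, dport: int) -> bool:
    for r_proto, r_port, action in RULES:
        if proto == r_proto and dport == r_port:
            return action == "allow"
    return False  # implicit deny

def static_filter(pkt: dict) -> bool:
    # Static packet filter: every packet faces the rule base; no memory of
    # whether a previous packet opened the conversation.
    return rule_allows(pkt["proto"], pkt["dport"])

class StatefulFilter:
    # Stateful inspection: remembers permitted connections (the "state"), so
    # replies are matched against the connection table, not the rule base.
    def __init__(self):
        self.table = set()

    def check(self, pkt: dict) -> bool:
        flow = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])
        reply = (pkt["dst"], pkt["dport"], pkt["src"], pkt["sport"], pkt["proto"])
        if reply in self.table:               # part of an established connection
            return True
        if rule_allows(pkt["proto"], pkt["dport"]):
            self.table.add(flow)              # record new permitted connection
            return True
        return False

out = {"src": "10.0.0.5", "sport": 40000, "dst": "93.184.216.34", "dport": 80, "proto": "tcp"}
back = {"src": "93.184.216.34", "sport": 80, "dst": "10.0.0.5", "dport": 40000, "proto": "tcp"}
fw = StatefulFilter()
print(fw.check(out))        # allowed by the rule base; state recorded
print(fw.check(back))       # allowed because of the remembered connection
print(static_filter(back))  # the static filter denies the reply (dport 40000)
```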

  8. Where would you find a singulation protocol being used?
    1. Where there is a radio frequency ID system deployed, and tag collisions are a problem
    2. Where there is router that has gone offline in a multi-path storage network
    3. Where there is a radio frequency ID system deployed, and reader collisions are a problem
    4. Where there is switch that has gone offline in a multi-path storage network

    Answer: A

    Some common problems with RFID are reader collision and tag collision. Tag collision occurs when many tags are present in a small area and respond to a reader’s query simultaneously, so the reader cannot distinguish the individual replies. Systems must be carefully set up to avoid this problem; many systems use an anti-collision protocol (also called a singulation protocol). Anti-collision protocols enable the tags to take turns in transmitting to a reader.
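    A toy slotted-ALOHA round, one common singulation approach, can be simulated as below: each tag picks a random slot, and only slots holding exactly one tag are read (the tag names, slot count, and seeded generator are illustrative):

```python
import random

def singulate(tags, slots, rng):
    """Repeat frames of random slot choices until every tag has been read."""
    read = set()
    pending = set(tags)
    while pending:
        frame = {}
        for tag in pending:
            frame.setdefault(rng.randrange(slots), []).append(tag)
        for occupants in frame.values():
            if len(occupants) == 1:      # no collision in this slot: tag read
                read.add(occupants[0])
        pending -= read
    return read

rng = random.Random(42)  # seeded for repeatability
tags = {f"tag{i}" for i in range(8)}
print(sorted(singulate(tags, slots=4, rng=rng)))
```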

  9. Which of the following are not principal components of access control systems? (Choose two.)
    1. Objects
    2. Biometrics
    3. Subjects
    4. Auditing

    Answer: B and D

    While biometric devices are used in some access control systems to confirm an individual’s identity, they are not considered to be one of the principal components of an access control system.

    While auditing is used in many access control systems, it is not a mandatory feature or function of all systems, and is not always enabled.

    Both objects and subjects are the building blocks of all access control systems.

  10. Which of the following are behavioral traits in a biometric device?
    1. Voice pattern and keystroke dynamics
    2. Signature dynamics and iris scan
    3. Retina scan and hand geometry
    4. Fingerprint and facial recognition

    Answer: A

    Voice pattern, signature dynamics, and keystroke dynamics all are behavioral traits in biometric devices.

  11. In the measurement of biometric accuracy, which of the following is commonly referred to as a “type 2 error”?
    1. Cross-over error rate (CER)
    2. Rate of false rejection—false rejection rate (FRR)
    3. Input/Output per second (IOPS)
    4. Rate of false acceptance—false acceptance rate (FAR)

    Answer: D

    The false rejection rate (FRR) is a type 1 error, the false acceptance rate (FAR) is a type 2 error, and the cross-over error rate (CER) is the point at which the FRR equals the FAR.
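    The three measures can be sketched from a set of matcher scores; at the threshold where the two error curves cross, FAR equals FRR (the scores and thresholds below are illustrative values, not from the text):

```python
genuine  = [0.90, 0.85, 0.80, 0.40]   # scores from legitimate users
impostor = [0.10, 0.30, 0.55, 0.70]   # scores from impostors

def far(threshold):
    # Type 2 error: fraction of impostors accepted at this threshold.
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    # Type 1 error: fraction of legitimate users rejected at this threshold.
    return sum(s < threshold for s in genuine) / len(genuine)

# The cross-over error rate (CER) is the threshold where FAR == FRR.
for t in [0.2, 0.5, 0.6, 0.8]:
    print(f"threshold={t:.1f}  FAR={far(t):.2f}  FRR={frr(t):.2f}")
```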

  12. What is the difference between a synchronous and asynchronous password token?
    1. Asynchronous tokens contain a password which is physically hidden and then transmitted for each authentication while synchronous tokens do not
    2. Synchronous tokens are generated with the use of a timer while asynchronous tokens do not use a clock for generation
    3. Synchronous tokens contain a password which is physically hidden and then transmitted for each authentication while asynchronous tokens do not
    4. Asynchronous tokens are generated with the use of a timer while synchronous tokens do not use a clock for generation

    Answer: B

    Security tokens are used to prove one's identity electronically (as in the case of a customer trying to access their bank account). The token is used in addition to or in place of a password to prove that the customer is who they claim to be. The token acts like an electronic key to access something. All tokens contain some secret information that is used to prove identity. There are four different ways in which this information can be used:

    • Static password token. The device contains a password which is physically hidden (not visible to the possessor), but which is transmitted for each authentication. This type is vulnerable to replay attacks.
    • Synchronous dynamic password token. A timer is used to rotate through various combinations produced by a cryptographic algorithm. The token and the authentication server must have synchronized clocks.
    • Asynchronous password token. A one-time password is generated without the use of a clock, either from a one-time pad or cryptographic algorithm.
    • Challenge response token. Using public key cryptography, it is possible to prove possession of a private key without revealing that key. The authentication server encrypts a challenge (typically a random number, or at least data with some random parts) with a public key; the device proves it possesses a copy of the matching private key by providing the decrypted challenge.
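    The synchronous dynamic password token can be sketched with an HOTP/TOTP-style derivation: token and server compute the same one-time code from a shared secret and a synchronized 30-second counter (the secret below is an illustrative value):

```python
import hashlib
import hmac
import struct

def token_code(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a one-time code from the shared secret and the current
    30-second time window (HOTP-style dynamic truncation)."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # illustrative shared secret
print(token_code(secret, 59))      # token side
print(token_code(secret, 59))      # server side: same window, same code
```

    Because both sides derive the code independently, nothing secret is transmitted; the codes simply match when the clocks (and the secret) agree.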
  13. What is an authorization table?
    1. A matrix of access control objects, access control subjects and their respective rights
    2. A service or program where access control information is stored and where access control decisions are made
    3. A listing of access control objects and their respective rights
    4. A listing of access control subjects and their respective rights

    Answer: A

    An authorization table is a matrix of access control objects, access control subjects, and their respective rights. The authorization table is used in some DAC systems to provide for a simple and intuitive user interface for the definition of access control rules.
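    Such a table can be sketched as a nested mapping of subjects to objects to rights (the subjects, objects, and rights below are illustrative):

```python
# Authorization table: subjects are rows, objects are columns, and each
# cell holds the rights that subject has over that object.
authz_table = {
    "alice": {"payroll.db": {"read", "write"}, "audit.log": {"read"}},
    "bob":   {"payroll.db": {"read"}},
}

def is_authorized(subject: str, obj: str, right: str) -> bool:
    # Missing rows or cells mean no rights (implicit deny).
    return right in authz_table.get(subject, {}).get(obj, set())

print(is_authorized("alice", "payroll.db", "write"))  # True
print(is_authorized("bob", "payroll.db", "write"))    # False
print(is_authorized("carol", "audit.log", "read"))    # False: no row at all
```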

  14. What ports are used during Kerberos authentication?
    1. 53 and 25
    2. 169 and 88
    3. 53 and 88
    4. 443 and 21

    Answer: C

    Table A-1: Network Ports Used During Kerberos Authentication

    Service Name    UDP    TCP
    DNS              53     53
    Kerberos         88     88
  15. What are the five areas that make up the identity management lifecycle?
    1. Authorization, proofing, provisioning, maintenance, and establishment
    2. Accounting, proofing, provisioning, maintenance, and entitlement
    3. Authorization, proofing, provisioning, monitoring, and entitlement
    4. Authorization, proofing, provisioning, maintenance, and entitlement

    Answer: D

    In essence, identity management is the process for managing the entire life cycle of digital identities, including the profiles of people, systems, and services, as well as the use of emerging technologies to control access to company resources. A digital identity is the representation of a set of claims made by a digital subject including, but not limited to, computers, resources, or persons about itself or another digital subject. The goal of identity management, therefore, is to improve companywide productivity and security, while lowering the costs associated with managing users and their identities, attributes, and credentials.

    There are five areas that make up the identity management lifecycle:

    • Authorization
    • Proofing
    • Provisioning
    • Maintenance
    • Entitlement

Domain 2: Security Operations

  1. Security awareness training aims to educate users on:
    1. What they can do to maintain the organization’s security posture
    2. How to secure their home computer systems
    3. The work performed by the information security organization
    4. How attackers defeat security safeguards

    Answer: A

    The aim of security awareness is to make the organization more resistant to security vulnerabilities and therefore maintain the organization’s security posture.

  2. Which of the following are operational aspects of configuration management (CM)?
    1. Identification, documentation, control, and auditing
    2. Documentation, control, accounting, and auditing
    3. Control, accounting, auditing, and reporting
    4. Identification, control, accounting, and auditing

    Answer: D

    Configuration management begins with identification of the baseline configuration as one or more configuration items within the configuration management database (CMDB). Once the baseline is established, all changes are controlled throughout the lifecycle of the component. Each change is accounted for by capturing, tracking, and reporting on change requests, configurations, and change history. Finally, auditing ensures integrity by verifying that the actual configuration matches the information captured and tracked in the CMDB and that all changes have been appropriately recorded and tracked.
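    The auditing step can be sketched as a comparison of the actual configuration against the baseline recorded in the CMDB (component names and settings are illustrative):

```python
# Baseline recorded in the CMDB versus the configuration actually observed.
cmdb_baseline = {"web01": {"os": "linux", "ssh": "disabled", "tls": "1.2"}}
actual_config = {"web01": {"os": "linux", "ssh": "enabled", "tls": "1.2"}}

def audit(component: str) -> list:
    """Return (setting, expected, actual) for every deviation from baseline."""
    baseline = cmdb_baseline[component]
    actual = actual_config[component]
    return [(k, baseline[k], actual.get(k))
            for k in baseline if actual.get(k) != baseline[k]]

# Any non-empty result is an unrecorded change that must be investigated.
print(audit("web01"))
```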

  3. The systems certification process can best be described as a:
    1. Process for obtaining stakeholder signoff on system configuration
    2. Method of validating adherence to security requirements
    3. Means of documenting adherence to security standards
    4. Method of testing a system to assure that vulnerabilities have been addressed

    Answer: B

    Certification reviews the system against the requirements specified in the system security plan to ensure that all required control specifications are met. Answer D is incorrect because while the process may include vulnerability testing, this is only one component of the process and residual risks do not preclude certification against requirements. Answer C is incorrect because the controls specified in the security plan are requirements, not standards. The answer is not A, because the certification process results in a recommendation only; accreditation is the process of obtaining signoff to operate the system.

  4. Which of the following is a degausser used to do?
    1. Render media that contain sensitive data unusable.
    2. Overwrite sensitive data with zeros so that it is unreadable.
    3. Eliminate magnetic data remanence on a disk or tape.
    4. Reformat a disk or tape for subsequent reuse.

    Answer: C

    Degaussing eliminates remanence by applying and then removing a strong magnetic field, removing magnetic signals from media. Reformatting does not actually erase data, so D is incorrect. While degaussing may render some media unusable, this is not the aim, so A is also incorrect. Answer B is incorrect because degaussing does not overwrite data; it removes them.

  5. A web application software vulnerability that allows an attacker to extract sensitive information from a backend database is known as a:
    1. Cross-site scripting vulnerability
    2. Malicious file execution vulnerability
    3. Injection flaw
    4. Input validation failure

    Answer: C

    An injection flaw occurs when user-supplied data can be sent directly to a command processor or query interpreter; attackers can exploit this flaw by supplying a query string as input to a web application to extract data from a database. Answer A is incorrect; cross-site scripting vulnerabilities allow an attacker to execute scripts, typically in a user’s browser. Malicious file execution (B) is a vulnerability in applications that accept file names or object references as input and then execute the files or objects. Answer D is also incorrect, as failures in input validation can have many adverse consequences (not necessarily disclosure of database content).

  6. The security practice that restricts user access based on need to know is called:
    1. Mandatory access control
    2. Default deny configuration
    3. Role-based access control
    4. Least privilege

    Answer: D

    Least privilege grants users and processes only those privileges they require to perform authorized functions, that is, “need to know.” Mandatory access control (A) limits access based on the clearance of the subject and the sensitivity of the object, and may provide access to objects of lower sensitivity where there is no business purpose for access; therefore, A is incorrect. Answer B is also incorrect; a “default deny” configuration refers to rule-based access control in which only that which is explicitly authorized is allowed; this is implemented in most firewall rule sets and is the premise behind whitelisting. Role-based access control (C) is not correct because while it provides access based on a role a user is associated with, it does not allow for granting individuals access to specific objects based on a need to know and may provide more access than is required to perform a certain function.

  7. A security guideline is a:
    1. Set of criteria that must be met to address security requirements
    2. Tool for measuring the effectiveness of security safeguards
    3. Statement of senior management expectations for managing the security program
    4. Recommended security practice

    Answer: D

    A guideline is a recommended security practice, but is not required (as in A or C) or enforced. As a recommended practice, there is no standard of measurement, so B is also incorrect.

  8. A security baseline is a:
    1. Measurement of security effectiveness when a control is first implemented
    2. Recommended security practice
    3. Minimum set of security requirements for a system
    4. Measurement used to determine trends in security activity

    Answer: C

    A baseline is a special type of security standard that specifies the minimum security controls or requirements for a system. Answer A is incorrect because this more accurately describes a configuration baseline established in a CMDB, but does not indicate whether requirements have been met. Answer B refers to a guideline and is incorrect. Answer D is incorrect; a benchmark, not a baseline, is a value used in metrics against which to measure variations in performance.

  9. An antifraud measure that requires two people to complete a transaction is an example of the principle of:
    1. Separation of duties
    2. Dual control
    3. Role-based access control
    4. Defense in depth

    Answer: B

    Dual control requires two people to physically or logically complete a process, such that one initiates and the other approves, or completes, the process. Dual control operates under the theory that controls that require more than one person operating together to circumvent are more secure than those under the control of a single individual. Answer A is incorrect; under separation of duties, two individuals may perform two separate, although perhaps similar, processes; that is, they perform separate functions. Answer C is incorrect; this refers to an access control model. Answer D is incorrect because dual control is actually a single control mechanism, not a series of layered controls.

  10. The waterfall model is a:
    1. Development method that follows a linear sequence of steps
    2. Iterative process used to develop secure applications
    3. Development method that uses rapid prototyping
    4. Extreme programming model used to develop web applications

    Answer: A

    The waterfall method is a linear sequence of seven steps used in application development. It is not iterative, does not make use of prototypes, and does not use rapid application development (RAD) or extreme programming techniques; thus B, C, and D are incorrect.

  11. Code signing is a technique used to:
    1. Ensure that software is appropriately licensed for use
    2. Prevent source code tampering
    3. Identify source code modules in a release package
    4. Support verification of source code authenticity

    Answer: D

    Code signing using hash functions and a digital signature is used in the release process to ensure that the code that is moved to production is the same as that which was approved for production release. Answer A is not correct because the signature is not the same as a license key. Answer B is not correct because signing itself does not prevent tampering, although it can be used to detect tampering. Answer C is not correct; code signing verifies authenticity of the signed code, but does not identify discrete components or packages.
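    The release check can be sketched as a digest comparison; real code signing additionally signs the digest with a private key so that the recorded hash itself cannot be tampered with, a step omitted here for brevity (the artifacts below are illustrative):

```python
import hashlib

def digest(artifact: bytes) -> str:
    # SHA-256 fingerprint of the release artifact.
    return hashlib.sha256(artifact).hexdigest()

approved = b"print('release 1.2.3')\n"
approved_digest = digest(approved)           # recorded at approval time

deployed = b"print('release 1.2.3')\n"
tampered = b"print('release 1.2.3')  # backdoor\n"

print(digest(deployed) == approved_digest)   # same code that was approved
print(digest(tampered) == approved_digest)   # tampering detected, not prevented
```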

  12. The role of information owner in the system security plan includes:
    1. Maintaining the system security plan
    2. Determining privileges that will be assigned to users of the system
    3. Assessing the effectiveness of security controls
    4. Authorizing the system for operation

    Answer: B

    The information owner determines who can access the system and the privileges that will be granted to users. The system owner is responsible for A, maintaining the system security plan. The approver or authorizing official is responsible for D, authorizing the system for operation, at the end of the certification and accreditation process. Assessing the effectiveness of security controls (C) is the responsibility of the system security officer.

  13. What are the mandatory tenets of the (ISC)² Code of Ethics? (Choose all that apply.)
    1. Protect society, the commonwealth, and the infrastructure.
    2. Act honorably, honestly, justly, responsibly, and legally.
    3. Promote and preserve public trust and confidence in information and systems.
    4. Advance and protect the profession.

    Answer: A, B, and D

    There are four mandatory tenets of the Code of Ethics:

    1. Protect society, the commonwealth, and the infrastructure.
    2. Act honorably, honestly, justly, responsibly, and legally.
    3. Provide diligent and competent service to principals.
    4. Advance and protect the profession.

    “Promote and preserve public trust and confidence in information and systems” is part of the Code of Ethics canons, but it is not one of the four mandatory tenets.

  14. What principle does confidentiality support?
    1. Due diligence
    2. Due care
    3. Least privilege
    4. Collusion

    Answer: C

    Confidentiality supports the principle of least privilege by providing that only authorized individuals, processes, or systems should have access to information on a need-to-know basis. The level of access that an authorized individual should have is at the level necessary for them to do their job. In recent years, much press has been dedicated to the privacy of information and the need to protect it from individuals who may be able to commit crimes by viewing the information. Identity theft is the act of assuming another person's identity through knowledge of confidential information obtained from various sources.

  15. What two things are used to accomplish non-repudiation?
    1. Proofing and provisioning
    2. Encryption and authorization
    3. Monitoring and private keys
    4. Digital signatures and public key infrastructure

    Answer: D

    Non-repudiation can be accomplished with digital signatures and PKI. The message is signed using the sender’s private key. When the recipient receives the message, they may use the sender's public key to validate the signature. While this proves the integrity of the message, it does not explicitly define the ownership of the private key. A certificate authority must have an association between the private key and the sender (meaning only the sender has the private key) for the non-repudiation to be valid.

  16. What are the elements that make up information security risks?
    1. Requirements, threats, and exposures
    2. Threats, vulnerabilities, and impacts
    3. Assessments, vulnerabilities, and expenses
    4. Impacts, probabilities, and known errors

    Answer: B

    Information security risk can be thought of as the likelihood of loss due to threats exploiting vulnerabilities, that is:

    RISK = THREAT × VULNERABILITY × IMPACT
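    One common way to operationalize this is to rate each element on an ordinal scale and combine the ratings multiplicatively (the 1-3 scale and scenario values below are assumptions for illustration, not from the text):

```python
def risk_score(threat: int, vulnerability: int, impact: int) -> int:
    # Each element rated 1 (low) to 3 (high); the product ranks the risks.
    return threat * vulnerability * impact

scenarios = {
    "unpatched web server, active exploits": risk_score(3, 3, 2),
    "locked server room, tailgating":        risk_score(1, 2, 2),
}
# Highest-scoring risks are addressed first.
for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{score:2d}  {name}")
```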
  17. What is an example of a compensating control?
    1. A fence
    2. Termination
    3. Job rotation
    4. Warning banner

    Answer: C

    Compensating controls are introduced when the existing capabilities of a system do not support the requirements of a policy. Compensating controls can be technical, procedural, or managerial. Although an existing system may not support the required controls, there may exist other technology or processes that can supplement the existing environment, closing the gap in controls, meeting policy requirements, and reducing overall risk.

                   Administrative                 Technical                                    Physical
    Directive      Policy                         Configuration Standards                      Authorized Personnel Only Signs, Traffic Lights
    Deterrent      Policy                         Warning Banner                               Beware of Dog Sign
    Preventative   User Registration Procedure    Password-Based Login                         Fence
    Detective      Review Violation Reports       Logs                                         Sentry, CCTV
    Corrective     Termination                    Unplug, isolate, and terminate connection    Fire Extinguisher
    Recovery       DR Plan                        Backups                                      Rebuild
    Compensating   Supervision, Job Rotation      Logging, Keystroke Logging                   CCTV, Layered Defense
  18. What is remote attestation?
    1. A form of integrity protection that makes use of a hashed copy of hardware and software configuration to verify that configurations have not been altered.
    2. A form of confidentiality protection that makes use of a cached copy of hardware and software configuration to verify that configurations have not been altered.
    3. A form of integrity protection that makes use of a cached copy of hardware and software configuration to verify that configurations have not been altered.
    4. A form of confidentiality protection that makes use of a hashed copy of hardware and software configuration to verify that configurations have not been altered.

    Answer: A

    Trusted Platform Module (TPM) chips provide additional security features such as platform authentication and remote attestation, a form of integrity protection that makes use of a hashed copy of hardware and software configuration to verify that configurations have not been altered.

  19. With regard to the change control policy document and change management, where should the analysis/impact assessment take place?
    1. After the decision making and prioritization activities, but before approval.
    2. After the recording of the proposed change(s), but before decision making and prioritization activities.
    3. After the approval, but before status tracking activities.
    4. After the request submission, but before recording of the proposed change(s).

    Answer: B

    The change control policy document covers the following aspects of the change process under management control:

    1. Request Submission—A request for change is submitted to the Change Control Board for review, prioritization, and approval. Included in the request should be a description of the change and rationale or objectives for the request, a change implementation plan, impact assessment, and a backout plan to be exercised in the event of a change failure or unanticipated outcome.
    2. Recording—Details of the request are recorded for review, communication, and tracking purposes.
    3. Analysis/Impact Assessment—Changes are typically subject to peer review for accuracy and completeness, and to identify any impacts on other systems or processes that may arise as a result of the change.
    4. Decision Making and Prioritization—The team reviews the request, implementation and backout plans, and impacts and determines whether the change should be approved, denied, or put on hold. Changes are scheduled and prioritized, and any communication plans are put in place.
    5. Approval—Formal approval for the change is granted and recorded.
    6. Status Tracking—The change is tracked through completion. A post-implementation review may be performed.

Domain 3: Risk Identification, Monitoring, and Analysis

  1. Which of the following terms refers to a function of the likelihood of a given threat source exercising a potential vulnerability, and the resulting impact of that adverse event on the organization?
    1. Threat
    2. Risk
    3. Vulnerability
    4. Asset

    Answer: B

    A threat (A) is the potential for a threat source to exercise (accidentally trigger or intentionally exploit) a specific vulnerability. A vulnerability (C) is a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy. An asset (D) is anything of value that is owned by an organization.

  2. The process of an authorized user analyzing system security by attempting to exploit vulnerabilities to gain access to systems and data is referred to as:
    1. Vulnerability assessment
    2. Intrusion detection
    3. Risk management
    4. Penetration testing

    Answer: D

    Vulnerability assessments (A) only attempt to determine if vulnerabilities exist but do not attempt to actively exploit identified vulnerabilities. Intrusion detection (B) is an automated technique for identifying active intrusion attempts. Risk management (C) is the process of assessing and mitigating risk.

  3. The process for assigning a dollar value to anticipated losses resulting from a threat source successfully exploiting a vulnerability is known as:
    1. Qualitative risk analysis
    2. Risk mitigation
    3. Quantitative risk analysis
    4. Business impact analysis

    Answer: C

    A qualitative risk analysis (A) assesses impact in relative terms such as high, medium, and low impact without assigning a dollar value. Risk mitigation (B) describes a process of applying risk mitigation strategies to reduce risk exposure to levels that are acceptable to the organization. A business impact analysis (D) assesses financial and nonfinancial impacts to an organization that would result from a business disruption.

  4. When initially responding to an incident, it is critical for the SSCP to:
    1. Notify executive management.
    2. Restore affected data from backup.
    3. Follow organizational incident response procedures.
    4. Share information related to the incident with everyone in the organization.

    Answer: C

    You should always follow the organization’s incident response policy when responding to an incident. Information related to the incident should only be shared on a need-to-know basis. Many types of incidents will not require notification to executive management. The incident response policy and procedure should define when notification to executive management is required. Data restoration should not be performed until forensic analysis and evidence gathering are complete.

  5. Which of the following are threat sources to information technology systems? (Choose all that apply.)
    1. Natural threats
    2. Human threats
    3. Environmental threats
    4. Software bugs

    Answer: A, B, C, and D

    Natural threats, human threats, environmental threats and software bugs are all potential threat sources to information technology systems.

  6. The expected monetary loss to an organization from a threat to an asset is referred to as:
    1. Single loss expectancy
    2. Asset value
    3. Annualized rate of occurrence
    4. Exposure factor

    Answer: A

    Asset value (B) is the value of a specific asset to the organization. Annualized rate of occurrence (C) represents the expected number of occurrences of a specific threat to an asset in a given year. Exposure factor (D) represents the portion of an asset that would be lost if a risk to the asset was realized.
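    The arithmetic relating these terms can be sketched as follows; the asset value, exposure factor, and rate of occurrence below are invented purely for illustration:

    ```python
    # Quantitative risk analysis: illustrative calculation with made-up numbers.
    asset_value = 100_000          # AV: value of the asset in dollars
    exposure_factor = 0.25         # EF: fraction of the asset lost per incident
    annualized_rate = 2            # ARO: expected incidents per year

    # Single loss expectancy: dollar loss from one occurrence of the threat
    sle = asset_value * exposure_factor
    # Annualized loss expectancy: expected yearly loss from this threat
    ale = sle * annualized_rate

    print(sle)  # 25000.0
    print(ale)  # 50000.0
    ```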

  7. Which risk-mitigation strategy would be appropriate if an organization decided to implement additional controls to decrease organizational risk?
    1. Risk avoidance
    2. Risk reduction
    3. Risk transference
    4. Risk acceptance

    Answer: B

    Risk avoidance (A) is a strategy that is used to reduce risk by avoiding risky behaviors. Risk transference (C) is a strategy to transfer risk from the organization to a third party by methods such as insurance or outsourcing. Risk acceptance (D) is a strategy in which the organization decides to accept the risk associated with the potential occurrence of a specific event.

  8. During which phase of the risk assessment process are technical settings and configurations documented?
    1. Risk determination
    2. Results documentation
    3. System characterization
    4. Control analysis

    Answer: C

    In the risk determination phase (A), overall risk to an IT system is assessed. During the results documentation phase (B), results of the risk assessment are documented. In the control analysis phase (D), controls are assessed to evaluate their effectiveness.

  9. What is the correct order of steps for the NIST risk assessment process?
    1. Communicate, prepare, conduct, and maintain
    2. Prepare, conduct, communicate, and maintain
    3. Conduct, communicate, prepare, and maintain
    4. Maintain, communicate, prepare, and conduct

    Answer: B

    NIST Special Publication 800-30 R1, “Guide for Conducting Risk Assessments,” details a four-step risk assessment process. The risk assessment process described by NIST is composed of the following steps (as shown in Figure A-1):

    1. Prepare for the assessment
    2. Conduct the assessment
    3. Communicate the assessment results
    4. Maintain the assessment

    Figure A-1: The NIST Risk Assessment Process

  10. Cross-referencing and stimulus and response algorithms are qualities associated with what activity?
    1. Vulnerability testing
    2. Penetration testing
    3. Static application security testing
    4. Dynamic application security testing

    Answer: A

    Vulnerability testing usually employs software specific to the activity and tends to have the following qualities:

    • OS Fingerprinting—This technique is used to identify the operating system in use on a target. OS fingerprinting is the process whereby a scanner determines the operating system of a host by analyzing its TCP/IP stack flag settings, which vary from vendor to vendor. A complementary technique is banner grabbing, which is reading the response banner presented on ports such as FTP, HTTP, and Telnet. These functions are sometimes built into mapping software and sometimes into vulnerability software.
    • Stimulus and Response Algorithms—These are techniques to identify application software versions, then referencing these versions with known vulnerabilities. Stimulus involves sending one or more packets at the target. Depending on the response, the tester can infer information about the target’s applications. For example, to determine the version of the HTTP server, the vulnerability testing software might send an HTTP GET request to a web server, just like a browser would (the stimulus), and read the reply information it receives back (the response) for information that details the fact that it is Apache version X, IIS version Y, etc.
    • Privileged Logon Ability—The ability to automatically log onto a host or group of hosts with user credentials (administrator-level or other level) for a deeper “authorized” look at systems is desirable.
    • Cross-Referencing—OS and applications/services (discovered during the port-mapping phase) should be cross-referenced to identify possible vulnerabilities. For example, if OS fingerprinting reveals that the host runs Red Hat Linux 8.0 and that portmapper is one of the listening programs, any pre-8.0 portmapper vulnerabilities can likely be ruled out. Keep in mind that old vulnerabilities have resurfaced in later versions of code even though they were patched at one time. While these instances may occur, the filtering based on OS and application fingerprinting will help the security practitioner better target systems and use the security practitioner's time more effectively.
    • Update Capability—Scanners must be kept up-to-date with the latest vulnerability signatures; otherwise, they will not be able to detect newer problems and vulnerabilities. Commercial tools that do not have quality personnel dedicated to updating the product are of reduced effectiveness. Likewise, open-source scanners should have a qualified following to keep them up-to-date.
    • Reporting Capability—Without the ability to report, a scanner does not serve much purpose. Good scanners can export scan data in a variety of formats, including HTML or PDF for viewing or to third-party reporting software, and are configurable enough to filter reports into high-, mid-, and low-level detail depending on the intended audience. Reports are used as the basis for determining mitigation activities later. Additionally, many scanners now feed automated risk management dashboards through application portal interfaces.
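    The banner-grabbing step described above can be sketched without a live target by parsing a canned response header; the banner string and regular expression here are illustrative assumptions, not output from a real scan:

    ```python
    import re

    # Toy banner parser: extract server software and version from an HTTP
    # response header, as a scanner's banner-grabbing step might.
    # The banner below is a canned example, not captured from a live host.
    banner = "HTTP/1.1 200 OK\r\nServer: Apache/2.4.41 (Ubuntu)\r\n\r\n"

    match = re.search(r"Server:\s*(?P<product>[\w-]+)/(?P<version>[\d.]+)", banner)
    if match:
        # The scanner would cross-reference product/version against a
        # vulnerability database at this point.
        print(match.group("product"), match.group("version"))  # Apache 2.4.41
    ```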
  11. What is the correct order for the phases of penetration testing?
    1. Information gathering, preparation, information evaluation and risk analysis, active penetration and analysis, and reporting
    2. Preparation, information gathering, active penetration and analysis, information evaluation, and risk analysis and reporting
    3. Preparation, active penetration and analysis, information gathering, information evaluation, and risk analysis and reporting
    4. Preparation, information gathering, information evaluation and risk analysis, active penetration, and analysis and reporting

    Answer: D

    Penetration testing consists of five different phases:

    • Phase 1—Preparation
    • Phase 2—Information gathering
    • Phase 3—Information evaluation and risk analysis
    • Phase 4—Active penetration
    • Phase 5—Analysis and reporting
  12. Where do the details as to how the security objectives of a security baseline are to be fulfilled come from?
    1. The system security plan
    2. A security implementation document
    3. The enterprise system architecture
    4. Authorization for the system to operate

    Answer: B

    A security baseline defines a set of basic security objectives that must be met by any given service or system. The objectives are chosen to be pragmatic and complete, and do not impose technical means. Therefore, details on how these security objectives are fulfilled by a particular service or system must be documented in a separate security implementation document. These details depend on the operational environment into which a service or system is deployed and may creatively apply any relevant security measure. Deviations from the baseline are possible and expected, but must be explicitly marked.

  13. Shoulder surfing, Usenet searching, and Dumpster diving are examples of what kind of activity?
    1. Risk analysis
    2. Social engineering
    3. Penetration testing
    4. Vulnerability assessment

    Answer: B

    Social engineering is an activity that involves the manipulation of persons or physical reconnaissance to get information for use in exploitation or testing activities.

  14. What is the most important reason to analyze event logs from multiple sources?
    1. They will help you obtain a more complete picture of what is happening on your network and how you go about addressing the problem.
    2. The log server could have been compromised.
    3. Because you cannot trust automated scripts to capture everything.
    4. To prosecute the attacker once he can be traced.

    Answer: A

    By analyzing various log sources, it is possible to piece together a timeline of events and user activity. Answer (B) is partially correct but not the best answer; a compromised log server accounts for only a small part of what an attacker could have done. Answer (C) may be true, but again is not the most important reason and is meant to distract. Answer (D) may apply in some cases, but prosecution is not the primary goal of correlating event logs.

  15. Security testing includes which of the following activities? (Choose all that apply.)
    1. Performing a port scan to check for up-and-running services.
    2. Gathering publicly available information.
    3. Counterattacking systems determined to be hostile.
    4. Posing as technical support to gain unauthorized information.

    Answer: A, B, and D

    Counterattacking systems determined to be hostile is never something an organization wants to do, and it does not constitute security testing. Answer (A) is part of security testing; using a network mapping tool such as Nmap can reveal security holes. Answer (B) can involve googling an organization to gather information for future attacks. Answer (D) is an example of social engineering and should be a part of an organization's security testing process.

  16. Why is system fingerprinting part of the security testing process?
    1. It is one of the easiest things to determine when performing a security test.
    2. It shows what vulnerabilities the system may be subject to.
    3. It tells an attacker that a system is automatically insecure.
    4. It shows the auditor whether a system has been hardened.

    Answer: B

    Some versions of an OS or software may be vulnerable, and this information is useful to an attacker. Answer (A) may or may not be true depending on the system, and ease is not a reason to determine the OS or other system details. Answer (C) is not true; identifying a system does not automatically make it insecure, and in any case it does not explain why fingerprinting is part of the testing process. Answer (D) is not true. Just because a machine is running a particular OS, for example, does not mean it has not been updated and patched to prevent certain vulnerabilities.

Domain 4: Incident Response and Recovery

  1. Creating incident response policies for an organization would be an example of:
    1. A technical control
    2. An administrative control
    3. A logical control
    4. A physical control

    Answer: B

    Administrative controls are “managerial” controls and are part of corporate security policy. Technical controls (A) implement specific technologies; a policy would not constitute a specific technological process. Physical controls (D) are elements such as closed-circuit television (CCTV), padlocks, or any other physical barrier or device that bars access. Logical controls (C) are generally synonymous with technical controls and likewise implement technology rather than policy.

  2. A security audit is best defined as:
    1. A covert series of tests designed to test network authentication, hosts, and perimeter security.
    2. A technical assessment that measures how well an organization uses strategic security policies and tactical security controls for protecting its information assets.
    3. Employing intrusion detection systems (IDS) to monitor anomalous traffic on a network segment and logging attempted break-ins.
    4. Hardening systems before deploying them on the corporate network.

    Answer: B

    Answers (A and C) are good examples of a type of security audit, but do not answer the question of what a security audit is. Answer (D) is a security control check used to harden a host against vulnerabilities.

  3. What is the primary purpose of testing an intrusion detection system?
    1. To observe that the IDS is observing and logging an appropriate response to a suspicious activity.
    2. To determine if the IDS is capable of discarding suspect packets.
    3. To analyze processor utilization to verify whether hardware upgrades are necessary.
    4. To test whether the IDS can log every possible event on the network.

    Answer: A

    The primary purpose of an IDS is to detect known attacks or anomalous activity. Answer (B) falls more along the lines of an intrusion prevention system or a firewall. Answer (C) is not correct because processor utilization relates to capacity planning, not to the detection mission of an IDS. Answer (D) is an unrealistic and storage-consuming goal unrelated to the primary purpose of an IDS.

  4. Which of the following is true regarding computer intrusions?
    1. Covert attacks such as a distributed denial of service (DDoS) attack harm public opinion of an organization.
    2. Overt attacks are easier to defend against because they can be readily identified.
    3. Network intrusion detection systems (NIDS) help mitigate computer intrusions by notifying personnel in real-time.
    4. Covert attacks are less effective because they take more time to accomplish.

    Answer: C

    NIDS can monitor data in real-time and notify appropriate personnel. Answer (A), a DDoS attack, is an example of an overt attack. Answer (B) is not true, as overt attacks can be just as complex and hard to defend against as covert attacks. Answer (D) is certainly not true. A waiter can steal a credit card number just as fast as any overt method.

  5. This documents the steps that should be performed to restore IT functions after a business disruption event:
    1. Critical business functions
    2. Business continuity plan
    3. Disaster recovery plan
    4. Crisis communications plan

    Answer: C

    Critical business functions (A) are functions that are integral to the success of an organization, without which the organization is incapable of operating. Business continuity plans (B) focus on the continuity and recovery of critical business functions during and after disaster. A crisis communications plan (D) details how organizations will communicate internally and externally during a disaster situation.

  6. During which phase of incident response are the results of incident response activities documented and communicated to the appropriate parties?
    1. Post-incident activity
    2. Detection and analysis
    3. Containment, eradication, and recovery
    4. Preparation

    Answer: A

    During the detection and analysis phase (B), security incidents are initially identified and analyzed to determine if an actual incident has occurred. During the containment, eradication, and recovery phase (C), security incidents are contained, corrected, and systems are restored to normal operations. During the preparation phase (D), incident response policies and procedures are documented and training is provided to enable the incident response team to be prepared to respond to an incident.

  7. What is the first type of disaster recovery testing that should be performed when initially testing a disaster recovery plan?
    1. Simulation
    2. Structured walkthrough
    3. Parallel
    4. Full interruption

    Answer: B

    Simulation testing (A) simulates an actual disaster and is a more in-depth testing approach than structured walkthrough testing. Parallel testing (C) uses testing performed at alternate data processing sites. This test involves significant cost to the organization and should not be performed before structured walkthrough testing. Full interruption testing (D) requires that business operations are actually interrupted at the primary processing facility.

  8. This concept refers to the point in time to which data could be restored in the event of a business disruption:
    1. Recovery time objective (RTO)
    2. Business impact analysis (BIA)
    3. Recovery point objective (RPO)
    4. Maximum tolerable downtime (MTD)

    Answer: C

    The recovery time objective (A) indicates the period of time within which a business function or information technology system must be restored after a business disruption. A business impact analysis (B) assesses financial and nonfinancial impacts to an organization that would result from a business disruption. Maximum tolerable downtime (D) is the maximum amount of time that a business function can be unavailable before an organization is harmed to the degree that puts the survivability of the organization at risk.

  9. Which RAID level uses block-level striping with parity information distributed across multiple disks?
    1. RAID 0
    2. RAID 1
    3. RAID 4
    4. RAID 5

    Answer: D

    RAID 0 (A) stripes data across multiple disks but includes no parity information. RAID 1 (B) uses mirroring to store identical copies of data on multiple disks. RAID 4 (C) implements striping at the block level with a dedicated parity disk and is rarely used in practice.
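    The parity idea behind RAID 5 can be sketched with XOR: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. The byte values here are arbitrary examples:

    ```python
    # RAID 5 parity sketch: parity = XOR of the data blocks in a stripe.
    block_a = bytes([0x12, 0x34, 0x56])
    block_b = bytes([0xAB, 0xCD, 0xEF])

    parity = bytes(a ^ b for a, b in zip(block_a, block_b))

    # Simulate losing block_a: XOR the surviving block with the parity block
    # to reconstruct the missing data.
    recovered = bytes(p ^ b for p, b in zip(parity, block_b))
    assert recovered == block_a
    print("recovered:", recovered.hex())  # recovered: 123456
    ```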

  10. The type of data backup that only backs up files that have been changed since the last full backup is called:
    1. Full backup
    2. Incremental backup
    3. Partial backup
    4. Differential backup

    Answer: D

    In a full backup (A), the entire system is copied to backup media. Incremental backups (B) copy only files changed since the most recent backup of any kind, whether full or incremental. A partial backup (C) is not a widely accepted backup type.
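    The selection rule that distinguishes differential from incremental backups can be sketched with hypothetical file modification times (all values below are invented for illustration):

    ```python
    # Backup selection sketch with made-up timestamps (hours since some epoch).
    # Differential: everything changed since the LAST FULL backup.
    # Incremental: everything changed since the LAST BACKUP of any kind.
    files = {"a.txt": 5, "b.txt": 12, "c.txt": 20}  # name -> last-modified time

    last_full = 10         # time of the last full backup
    last_any_backup = 15   # time of the most recent backup (an incremental)

    differential = [f for f, mtime in files.items() if mtime > last_full]
    incremental = [f for f, mtime in files.items() if mtime > last_any_backup]

    print(differential)  # ['b.txt', 'c.txt']
    print(incremental)   # ['c.txt']
    ```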

  11. Selecting this type of alternate processing site would be appropriate when an organization needs a low-cost recovery strategy and does not have immediate system recovery requirements:
    1. Cold site
    2. Warm site
    3. Hot site
    4. Mobile site

    Answer: A

    A cold site is the lowest cost type of alternative processing site. The warm site, hot site, and mobile site are all higher-cost solutions that support quicker recovery requirements.

  12. What are the phases of the incident response process? (Choose all that apply.)
    1. Preparation
    2. Detection and analysis
    3. Assessment and recovery
    4. Authorization

    Answer: A and B

    Phases of the incident response process include:

    1. Preparation
    2. Detection and analysis
    3. Containment, eradication, and recovery
    4. Post-incident activity
  13. This data backup strategy allows data backup to an offsite location via a WAN or Internet connection:
    1. Remote journaling
    2. Electronic vaulting
    3. RAID
    4. Clustering

    Answer: B

    RAID (C) refers to a method for writing data across multiple disks to provide redundancy or improve performance. Remote journaling (A) transfers journals and database transaction logs electronically to an offsite location. Clustering (D) uses multiple systems to reduce the risk associated with a single point of failure.

Domain 5: Cryptography

  1. Applied against a given block of data, a hash function creates:
    1. A chunk of the original block used to ensure its confidentiality
    2. A block of new data used to ensure the original block’s confidentiality
    3. A chunk of the original block used to ensure its integrity
    4. A block of new data used to ensure the original block’s integrity

    Answer: D

    Applied against a block of data, a hash function generates a fixed-length digest of the original data that can be used to verify the data has not been modified from its original form.
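    The integrity check can be sketched with the standard library; SHA-256 is used here only as a convenient example of a cryptographic hash, and the messages are made up:

    ```python
    import hashlib

    # Integrity check: the digest of unmodified data matches; any change,
    # however small, produces a completely different digest.
    original = b"wire transfer: $100.00"
    tampered = b"wire transfer: $900.00"

    digest_original = hashlib.sha256(original).hexdigest()
    digest_received = hashlib.sha256(original).hexdigest()   # unmodified copy
    digest_tampered = hashlib.sha256(tampered).hexdigest()

    assert digest_original == digest_received   # integrity verified
    assert digest_original != digest_tampered   # modification detected
    ```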

  2. In symmetric key cryptography, each party should use:
    1. A publicly available key
    2. A previously exchanged secret key
    3. A randomly generated value unknown to everyone
    4. A secret key exchanged with the message

    Answer: B

    In symmetric key cryptography, the parties must exchange a secret key in advance of establishing encrypted communications.

  3. Nonrepudiation of a message ensures that the message:
    1. Can be attributed to a particular author.
    2. Is always sent to the intended recipient.
    3. Can be attributed to a particular recipient.
    4. Is always received by the intended recipient.

    Answer: A

    The idea of nonrepudiation is to link an individual to his or her actions with a great deal of certainty, so the author of a message cannot later deny having sent it.

  4. In Electronic Code Book (ECB) mode, data are encrypted using:
    1. A cipher-based on the previous block of a message
    2. A user-generated variable-length cipher for every block of a message
    3. A different cipher for every block of a message
    4. The same cipher for every block of a message

    Answer: D

    ECB applies the same cipher independently to every block, so encrypting identical plaintext blocks produces identical ciphertext blocks.
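    The pattern leak can be sketched with a toy 4-byte "block cipher" (XOR with a fixed key; purely illustrative and not a real cipher):

    ```python
    # ECB weakness demo with a toy 4-byte "block cipher" (XOR with the key --
    # NOT a real cipher, just enough to show the pattern leak).
    key = b"\x5A\x5A\x5A\x5A"

    def toy_encrypt_block(block: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(block, key))

    plaintext = b"AAAABBBBAAAA"  # blocks 1 and 3 are identical
    blocks = [plaintext[i:i + 4] for i in range(0, len(plaintext), 4)]

    # ECB: each block is encrypted independently, with no chaining.
    ciphertext_blocks = [toy_encrypt_block(b) for b in blocks]

    # Identical plaintext blocks produce identical ciphertext blocks.
    assert ciphertext_blocks[0] == ciphertext_blocks[2]
    ```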

  5. In cipher block chaining (CBC) mode, the key is constructed by:
    1. Generating new key material completely at random
    2. Cycling through a list of user defined choices
    3. Modifying the previous block of ciphertext
    4. Reusing the previous key in the chain of message blocks

    Answer: C

    CBC XORs the previous block of ciphertext with the current block of plaintext before encryption, chaining each block to the one before it so that identical plaintext blocks no longer produce identical ciphertext.
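    The chaining can be sketched with the same kind of toy 4-byte cipher (XOR with a fixed key; illustrative only, not secure):

    ```python
    # CBC chaining sketch: each plaintext block is XORed with the previous
    # CIPHERTEXT block before encryption, so identical plaintext blocks
    # encrypt differently.
    key = b"\x5A\x5A\x5A\x5A"
    iv = b"\x01\x02\x03\x04"  # initialization vector seeds the chain

    def toy_encrypt_block(block: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(block, key))  # toy cipher, not secure

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    plaintext = b"AAAAAAAA"  # two identical 4-byte blocks
    previous = iv
    ciphertext_blocks = []
    for i in range(0, len(plaintext), 4):
        chained = xor(plaintext[i:i + 4], previous)  # mix in previous ciphertext
        encrypted = toy_encrypt_block(chained)
        ciphertext_blocks.append(encrypted)
        previous = encrypted

    # Unlike ECB, the identical plaintext blocks encrypt to different values.
    assert ciphertext_blocks[0] != ciphertext_blocks[1]
    ```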

  6. Stream ciphers are normally selected over block ciphers because of:
    1. The high degree of strength behind the encryption algorithms
    2. The high degree of speed behind the encryption algorithms
    3. Their ability to use large amounts of padding in encryption functions
    4. Their ability to encrypt large chunks of data at a time

    Answer: B

    Stream ciphers tend to be faster than block ciphers while generally being less robust and operating on single bits of information.

  7. A key escrow service is intended to allow for the reliable:
    1. Recovery of inaccessible private keys
    2. Recovery of compromised public keys
    3. Transfer of inaccessible private keys between users
    4. Transfer of compromised public keys between users

    Answer: A

    Key escrow services are third-party organizations that can provide a customer organization with archived keys should the recovery of the customer organization’s encrypted data be required.

  8. The correct choice for encrypting the entire original data packet in a tunneled mode for an IPSec solution is:
    1. Generic routing encapsulation (GRE)
    2. Authentication header (AH)
    3. Encapsulating security payload (ESP)
    4. Point-to-point tunneling protocol (PPTP)

    Answer: C

    An IPSec solution that uses ESP will encapsulate the entire original data packet when implemented in a tunnel mode.

  9. When implementing an MD5 solution, what randomizing cryptographic function should be used to help avoid collisions?
    1. Multistring concatenation
    2. Modular addition
    3. Message pad
    4. Salt

    Answer: D

    A cryptographic salt is a series of random bits added to a password or passphrase to help avoid a possible hash collision.
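    The effect of salting can be sketched with the standard library; MD5 is used only because the question names it, and real password storage should use a purpose-built password hashing scheme instead:

    ```python
    import hashlib
    import os

    # Salting sketch: the same password hashed with two different random
    # salts yields two different digests, so identical passwords do not
    # collide in a credential store.
    password = b"correct horse battery staple"

    salt1, salt2 = os.urandom(16), os.urandom(16)
    digest1 = hashlib.md5(salt1 + password).hexdigest()
    digest2 = hashlib.md5(salt2 + password).hexdigest()

    assert digest1 != digest2  # same password, different stored hashes
    ```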

  10. Key clustering represents the significant failure of an algorithm because:
    1. A single key should not generate different ciphertext from the same plaintext, using the same cipher algorithm.
    2. Two different keys should not generate the same ciphertext from the same plaintext, using the same cipher algorithm.
    3. Two different keys should not generate different ciphertext from the same plaintext, using the same cipher algorithm.
    4. A single key should not generate the same ciphertext from the same plaintext, using the same cipher algorithm.

    Answer: B

    In key clustering, two different keys end up generating the same ciphertext from the same plaintext while using the same cipher algorithm.

  11. Asymmetric key cryptography is used for the following:
    1. Asymmetric key cryptography is used for the following:
    2. Encryption of data, nonrepudiation, access control
    3. Nonrepudiation, steganography, encryption of data
    4. Encryption of data, access control, steganography

    Answer: B

    Steganography is the hiding of a message inside of another medium, and does not rely on the use of asymmetric key cryptography.

  12. Which of the following algorithms supports asymmetric key cryptography?
    1. Diffie-Hellman
    2. Blowfish
    3. SHA-256
    4. Rijndael

    Answer: A

    Diffie-Hellman is an asymmetric algorithm used for key agreement. Blowfish (B) and Rijndael (D) are symmetric block ciphers, and SHA-256 (C) is a hash function.
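    The Diffie-Hellman exchange can be sketched with modular exponentiation; the modulus below is a toy value far too small for real use, where primes of 2048 bits or more are expected:

    ```python
    # Diffie-Hellman sketch. Each party raises the generator to its private
    # value, exchanges the result, and raises the other's public value to
    # its own private value -- both arrive at the same shared secret.
    p = 0xFFFFFFFB  # toy modulus, far too small for real deployments
    g = 5           # generator

    alice_private, bob_private = 123456789, 987654321
    alice_public = pow(g, alice_private, p)
    bob_public = pow(g, bob_private, p)

    alice_shared = pow(bob_public, alice_private, p)
    bob_shared = pow(alice_public, bob_private, p)

    assert alice_shared == bob_shared  # both derive g^(a*b) mod p
    ```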

  13. A certificate authority (CA) provides which benefit to a user?
    1. Protection of public keys of all users
    2. History of symmetric keys
    3. Proof of nonrepudiation of origin
    4. Validation that a public key is associated with a particular user

    Answer: D

    A certificate authority “signs” an entity's digital certificate to certify that the certificate content accurately represents the certificate owner. Answer (A) is not a certificate authority function, because public keys are not meant to be kept secret. Answer (B) is a function of key management. Answer (C) is a function of a digital certificate.

  14. What is the output length of a RIPEMD-160 hash?
    1. 150 bits
    2. 128 bits
    3. 160 bits
    4. 104 bits

    Answer: C

    Research and Development in Advanced Communications Technologies in Europe (RACE) Integrity Primitives Evaluation Message Digest (RIPEMD) is a hash function that produces 160-bit message digests using a 512-bit block size.

  15. ANSI X9.17 is concerned primarily with:
    1. Financial records and retention of encrypted data
    2. The lifespan of master key-encrypting keys (KKMs)
    3. Formalizing a key hierarchy
    4. Protection and secrecy of keys

    Answer: D

    ANSI X9.17 was developed to address the need of financial institutions to transmit securities and funds securely using an electronic medium. Specifically, it describes the means to ensure the secrecy of keys. The ANSI X9.17 approach is based on a hierarchy of keys. At the bottom of the hierarchy are data keys (DKs). Data keys are used to encrypt and decrypt messages. They are given short lifespans, such as one message or one connection. At the top of the hierarchy are master key-encrypting keys (KKMs).

    KKMs, which must be distributed manually, are afforded longer lifespans than data keys. Using the two-tier model, the KKMs are used to encrypt the data keys. The data keys are then distributed electronically to encrypt and decrypt messages. The two-tier model may be enhanced by adding another layer to the hierarchy. In the three-tier model, the KKMs are not used to encrypt data keys directly, but to encrypt other key-encrypting keys (KKs). The KKs, which are exchanged electronically, are used to encrypt the data keys.

  16. What is the input that controls the operation of the cryptographic algorithm?
    1. Decoder wheel
    2. Encoder
    3. Cryptovariable
    4. Cryptographic routine

    Answer: C

    The key or cryptovariable is the input that controls the operation of the cryptographic algorithm. It determines the behavior of the algorithm and permits the reliable encryption and decryption of the message.

  17. AES is a block cipher with variable key lengths of?
    1. 128, 192 or 256 bits
    2. 32, 128 or 448 bits
    3. 8, 64, 128 bits
    4. 128, 256 or 448 bits

    Answer: A

    AES is a block cipher with variable key lengths of 128, 192, or 256 bits. It encrypts data blocks of 128 bits in 10, 12, or 14 rounds, depending on the key size.

  18. A hashed message authentication code (HMAC) works by:
    1. Adding a non-secret key value to the input function along with the source message.
    2. Adding a secret key value to the output function along with the source message.
    3. Adding a secret key value to the input function along with the source message.
    4. Adding a non-secret key value to the output function along with the source message.

    Answer: C

    A MAC based on DES is one of the most common methods of creating a MAC; however, it is slow in operation compared to a hash function. A hash function such as MD5 does not have a secret key, so it cannot serve as a MAC by itself. RFC 2104 was therefore issued to provide a hashed MACing system that has become the process now used in IPSec and many other secure Internet protocols, such as SSL/TLS.

    HMAC implements a freely available hash algorithm as a component (black box) within the HMAC construction. This allows easy replacement of the hashing module if a new hash function becomes necessary, and the use of proven cryptographic hash algorithms provides assurance of the security of HMAC implementations.

    HMACs work by adding a secret key value to the hash input function along with the source message. The HMAC operation provides cryptographic strength similar to a hashing algorithm, except that it now has the additional protection of a secret key, and it still operates nearly as rapidly as a standard hash operation.
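    The HMAC construction described above is available directly in the standard library; the key and message below are made-up examples:

    ```python
    import hashlib
    import hmac

    # HMAC per RFC 2104: the secret key is mixed into the hash input, so
    # only a holder of the key can produce or verify the MAC.
    key = b"shared-secret-key"
    message = b"transfer $100 to account 42"

    mac = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Verification recomputes the MAC with the shared key and compares in
    # constant time to resist timing attacks.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    assert hmac.compare_digest(mac, expected)

    # Without the key, a plain hash of the message does not match the MAC.
    assert hashlib.sha256(message).hexdigest() != mac
    ```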

  19. The main types of implementation attacks include: (Choose all that apply.)
    1. Linear
    2. Side-channel analysis
    3. Fault analysis
    4. Probing

    Answer: B, C, and D

    Implementation attacks are some of the most common and popular attacks against cryptographic systems due to their ease and reliance on system elements outside of the algorithm. The main types of implementation attacks include:

    • Side-channel analysis
    • Fault analysis
    • Probing attacks

    Side-channel attacks are passive attacks that rely on a physical attribute of the implementation such as power consumption/emanation. These attributes are studied to determine the secret key and the algorithm function. Some examples of popular side-channels include timing analysis and electromagnetic differential analysis.

    Fault analysis attempts to force the system into an error state to produce erroneous results. By forcing an error, capturing the results, and comparing them with known good results, an attacker may learn about the secret key and the algorithm.

    Probing attacks attempt to watch the circuitry surrounding the cryptographic module in hopes that the complementary components will disclose information about the key or the algorithm. Additionally, new hardware may be added to the cryptographic module to observe and inject information.

  20. What is the process of using a key encrypting key (KEK) to protect session keys called?
    1. Key distribution
    2. Key escrow
    3. Key generation
    4. Key wrapping

    Answer: D

    The process of using a KEK to protect session keys is called key wrapping. Key wrapping uses symmetric ciphers to securely encrypt (thus encapsulating) a plaintext key along with any associated integrity information and data. One application for key wrapping is protecting session keys in untrusted storage or when sending them over an untrusted transport. Key wrapping or encapsulation using a KEK can be accomplished with either symmetric or asymmetric ciphers. If the KEK is a symmetric key, both the sender and the receiver need a copy of the same key. If an asymmetric cipher is used, the sender encrypts the session key with the receiver's public key, and the receiver recovers it with the corresponding private key.

    Protocols such as SSL, PGP, and S/MIME use the services of KEKs to provide session key confidentiality, integrity, and sometimes to authenticate the binding of the session key originator and the session key itself to make sure the session key came from the real sender and not an attacker.
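    The wrap/unwrap idea can be sketched as follows. Real systems use AES Key Wrap (RFC 3394); since the standard library has no AES, this toy derives a keystream from the KEK with SHA-256 and XORs it over the session key, for illustration only:

    ```python
    import hashlib

    # Toy key-wrapping sketch -- NOT RFC 3394, illustration only.
    kek = b"key-encrypting-key"
    session_key = bytes(range(32))  # the key being protected

    def toy_wrap(kek: bytes, data: bytes) -> bytes:
        # Derive a keystream from the KEK and XOR it over the data.
        stream = hashlib.sha256(kek).digest()
        return bytes(d ^ s for d, s in zip(data, stream))

    wrapped = toy_wrap(kek, session_key)   # safe to store or transmit
    unwrapped = toy_wrap(kek, wrapped)     # XOR is its own inverse

    assert unwrapped == session_key        # holder of the KEK recovers the key
    assert wrapped != session_key          # the wrapped form hides the key
    ```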

Domain 6: Networks and Communications Security

  1. Which of the following is typically deployed as a screening proxy for web servers?
    1. Intrusion prevention system
    2. Kernel proxies
    3. Packet filters
    4. Reverse proxies

    Answer: D

    A reverse proxy is a device or service placed between a client and a server in a network infrastructure. Incoming requests are handled by the proxy, which interacts on behalf of the client with the desired server or service residing on the server. The most common use of a reverse proxy is to provide load balancing for web applications and APIs. Reverse proxies can also be deployed to offload services from applications as a way to improve performance through SSL acceleration, intelligent compression, and caching. They can also enable federated security services for multiple applications.

    A reverse proxy may act either as a simple forwarding service or actively participate in the exchange between client and server. When the proxy treats the client and server as separate entities by implementing dual network stacks, it is called a full proxy. A full reverse proxy is capable of intercepting, inspecting, and interacting with requests and responses. Interacting with requests and responses enables more advanced traffic management services such as application layer security, web acceleration, page routing, and secure remote access.

  2. A customer wants to keep cost to a minimum and has only ordered a single static IP address from the ISP. Which of the following must be configured on the router to allow for all the computers to share the same public IP address?
    1. Virtual private network (VPN)
    2. Port address translation (PAT)
    3. Virtual local area network (VLAN)
    4. Power over Ethernet (PoE)

    Answer: B

    An extension to network address translation (NAT) is to translate all addresses to one routable IP address and translate the source port number in the packet to a unique value. The port translation allows the router to keep track of multiple sessions that are using PAT.
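
    A minimal sketch of the translation table a PAT device maintains, assuming a single public IP and an arbitrary illustrative port pool starting at 50000:

```python
import itertools

class PATTable:
    """Toy port address translation table (illustrative sketch)."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._next_port = itertools.count(50000)  # illustrative translated-port pool
        self._outbound = {}   # (private_ip, private_port) -> public_port
        self._inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self._outbound:        # reuse the mapping for an existing session
            port = next(self._next_port)
            self._outbound[key] = port
            self._inbound[port] = key
        return (self.public_ip, self._outbound[key])

    def translate_in(self, public_port: int):
        return self._inbound[public_port]    # restore the original endpoint on replies
```

    Two inside hosts share the one public address but get distinct translated ports, which is how the device keeps their sessions apart.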

  3. Sayge installs a new wireless access point (WAP) and users are able to connect to it. However, once connected, users cannot access the Internet. Which of the following is the MOST likely cause of the problem?
    1. An incorrect subnet mask has been entered in the WAP configuration.
    2. Users have specified the wrong encryption type and packets are being rejected.
    3. The signal strength has been degraded and latency is increasing hop count.
    4. The signal strength has been degraded and packets are being lost.

    Answer: A

    The subnet mask divides an IP address into two parts, the network ID and the host ID. The network ID identifies the network that the device is connected to. If, for example, the subnet mask was supposed to be 255.224.0.0 but was instead entered as 255.240.0.0, the device would compute a different network ID and would only be able to see other devices that fall within that subnet. When the wrong subnet mask is entered, the device cannot communicate with any devices outside of the subnet it has computed, including the default gateway, until the correct subnet mask is entered.
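
    The effect of a wrong mask can be demonstrated with Python's ipaddress module. The addresses below are illustrative; under the intended 255.224.0.0 mask the two hosts share a network, while the mistyped 255.240.0.0 mask places them in different networks:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, mask: str) -> bool:
    # Hosts can reach each other directly only when the mask places
    # them on the same network.
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a.network_address == net_b.network_address

intended_mask = "255.224.0.0"   # /11
mistyped_mask = "255.240.0.0"   # /12
host, gateway = "10.47.1.5", "10.56.1.1"   # illustrative addresses
```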

  4. Which of the following devices should be part of a network’s perimeter defense?
    1. Web server, host based intrusion detection system (HIDS), and a firewall
    2. DNS server, firewall, and a boundary router
    3. Switch, firewall, and a proxy server
    4. Firewall, proxy server, and a host based intrusion detection system (HIDS)

    Answer: D

    The security perimeter is the first line of defense between trusted and untrusted networks. In general it will include a firewall and a router to help filter traffic. Security perimeters may also include proxies and devices such as intrusion detection systems to warn of suspicious traffic flows.

  5. A security incident event management (SIEM) service performs which of the following function(s)? (Choose all that apply.)
    1. Coordinates software for security conferences and seminars
    2. Aggregates logs from security devices and application servers looking for suspicious activity
    3. Gathers firewall logs for archiving
    4. Reviews access control logs on servers and physical entry points to match user system authorization with physical access permissions

    Answer: B, C, and D

    SIEM is a solution that involves harvesting logs and event information from a variety of different sources on individual servers or assets, and analyzing it as a consolidated view with sophisticated reporting. Similarly, entire IT infrastructures can have their logs and event information centralized and managed by large-scale SIEM deployments. SIEM will not only aggregate logs but will perform analysis and issue alerts (e-mail, pager, audible, etc.) according to suspicious patterns.

  6. A botnet can be characterized as a:
    1. Type of virus
    2. Group of dispersed, compromised machines controlled remotely for illicit reasons
    3. Automatic security alerting tool for corporate networks
    4. Network used solely for internal communications

    Answer: B

    A bot is a type of malware that an attacker can use to control an infected computer or mobile device. A group or network of machines that have been co-opted this way and are under the control of the same attacker is known as a botnet.

  7. During a disaster recovery test, several billing representatives need to be temporarily set up to take payments from customers. It has been determined that this will need to occur over a wireless network, with security being enforced where possible. Which of the following configurations should be used in this scenario?
    1. WPA2, SSID disabled, and 802.11a
    2. WEP, SSID disabled, and 802.11g
    3. WEP, SSID enabled, and 802.11b
    4. WPA2, SSID enabled, and 802.11n

    Answer: A

    WPA2 is a security technology commonly used on Wi-Fi wireless networks. WPA2 (Wi-Fi Protected Access 2) replaced the original WPA technology on all certified Wi-Fi hardware since 2006 and is based on the IEEE 802.11i technology standard for data encryption. WPA was used to replace WEP, which is not considered a secure protocol for wireless systems due to numerous issues with its implementation. Disabling SSID broadcast will further enhance the security of the solution, as it requires a user who wants to connect to the WAP to know the exact SSID, as opposed to selecting it from a list.

  8. A new installation requires a network in a heavy manufacturing area with substantial amounts of electromagnetic radiation and power fluctuations. Which media is best suited for this environment if little traffic degradation is tolerated?
    1. Shielded twisted pair
    2. Coax
    3. Fiber
    4. Wireless

    Answer: C

    Since fiber optic cabling relies on light as the transmission mechanism, electromagnetic interference will not affect it.

  9. What is the network ID portion of the IP address 191.154.25.66 if the default subnet mask is used?
    1. 191
    2. 191.154.25
    3. 191.154

    Answer: C

    If the default subnet mask is used, then the network ID portion of the IP address 191.154.25.66 is 191.154. The first octet, 191, indicates that this is a class B address. In a class B address, the first two octets of the address represent the network portion. The default subnet mask for a Class B network address is 255.255.0.0.
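
    A short sketch of the classful rules described above; the first octet alone determines how many octets form the network ID:

```python
def classful_network_id(ip: str) -> str:
    # Determine the classful network portion from the first octet (pre-CIDR rules).
    octets = ip.split(".")
    first = int(octets[0])
    if first < 128:          # Class A: 0-127, default mask 255.0.0.0
        return ".".join(octets[:1])
    if first < 192:          # Class B: 128-191, default mask 255.255.0.0
        return ".".join(octets[:2])
    if first < 224:          # Class C: 192-223, default mask 255.255.255.0
        return ".".join(octets[:3])
    raise ValueError("class D/E addresses have no default host portion")
```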

  10. Given the address 192.168.10.19/28, which of the following are valid host addresses on this subnet? (Choose two.)
    1. 192.168.10.31
    2. 192.168.10.17
    3. 192.168.10.16
    4. 192.168.10.29

    Answer: B and D

    192.168.10.19/28 belongs to the 192.168.10.16 network with a mask of 255.255.255.240. This offers 14 usable host addresses, ranging from 192.168.10.17 to 192.168.10.30.
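
    The same arithmetic can be checked with Python's ipaddress module, which derives the containing /28 network from the host address:

```python
import ipaddress

# strict=False lets a host address stand in for its containing network.
net = ipaddress.ip_network("192.168.10.19/28", strict=False)
hosts = list(net.hosts())  # excludes the network (.16) and broadcast (.31) addresses
```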

  11. Circuit-switched networks do which of the following tasks?
    1. Divide data into packets and transmit it over a virtual network.
    2. Establish a dedicated circuit between endpoints.
    3. Divide data into packets and transmit it over a shared network.
    4. Establish an on-demand circuit between endpoints.

    Answer: B

    Circuit-switched networks establish a dedicated circuit between endpoints. These circuits consist of dedicated switch connections. Neither endpoint starts communicating until the circuit is completely established. The endpoints have exclusive use of the circuit and its bandwidth. Carriers base the cost of using a circuit-switched network on the duration of the connection, which makes this type of network only cost-effective for a steady communication stream between the endpoints. Examples of circuit-switched networks are the plain old telephone service (POTS), Integrated Services Digital Network (ISDN), and Point-to-Point Protocol (PPP).

  12. What is the biggest security issue associated with the use of a multiprotocol label switching (MPLS) network?
    1. Lack of native encryption services
    2. Lack of native authentication services
    3. Support for the wired equivalent privacy (WEP) and data encryption standard (DES) algorithms
    4. The need to establish peering relationships to cross Tier 1 carrier boundaries

    Answer: A

    MPLS is often referred to as “IP VPN” because of the ability to couple highly deterministic routing with IP services. In effect, this creates a VPN-type service that makes it logically impossible for data from one network to be mixed or routed over to another network without compromising the MPLS routing device itself. MPLS does not include encryption services; therefore, any MPLS service called “IP VPN” does not in fact contain any cryptographic services. The traffic on these links would be visible to the service providers.

  13. The majority of DNS traffic is carried using User Datagram Protocol (UDP); what types of DNS traffic are carried using Transmission Control Protocol (TCP)? (Choose all that apply)
    1. Query traffic
    2. Response traffic
    3. DNSSEC traffic that exceeds single packet size maximum
    4. Secondary zone transfers

    Answer: C and D

    Most of the attention paid to DNS security focuses on the DNS query and response transaction. This transaction is a UDP transaction; however, DNS utilizes both UDP and TCP transport mechanisms. DNS TCP transactions are used for secondary zone transfers and for DNSSEC traffic that exceeds the maximum single packet size. The original single packet size was 512 bytes, but there is an extension available to DNS that allows the single packet size to be set to 4096 bytes.

  14. What is the command that a client would need to issue to initialize an encrypted FTP session using Secure FTP as outlined in RFC 4217?
    1. ENABLE SSL
    2. ENABLE TLS
    3. AUTH TLS
    4. AUTH SSL

    Answer: C

    Secure FTP with TLS is an extension to the FTP standard that allows clients to request that the FTP session be encrypted. This is done by sending the AUTH TLS command. The server has the option of allowing or denying connections that do not request TLS. This protocol extension is defined in the proposed standard RFC 4217.
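
    The command sequence below sketches the explicit FTPS negotiation; the PBSZ and PROT commands shown alongside AUTH TLS come from the RFC 2228 security extensions that RFC 4217 builds on. (Python's standard library exposes this flow through ftplib.FTP_TLS.)

```python
def ftps_handshake_commands(user: str) -> list:
    """Commands a client sends to negotiate explicit FTPS per RFC 4217.
    Illustrative sequence only; a real client also processes server replies."""
    return [
        "AUTH TLS",      # request that the control channel be upgraded to TLS
        "PBSZ 0",        # protection buffer size; always 0 when TLS is used
        "PROT P",        # protect the data channel as well ("P" = private)
        f"USER {user}",  # credentials now travel over the encrypted channel
    ]
```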

  15. What is the IEEE designation for Priority-based Flow Control (PFC) as defined in the Data Center Bridging (DCB) Standards?
    1. 802.1Qbz
    2. 802.1Qau
    3. 802.1Qaz
    4. 802.1Qbb

    Answer: D

    The DCB standards define four new technologies:

    • Priority-based flow control (PFC), 802.1Qbb allows the network to pause different traffic classes.
    • Enhanced transmission selection (ETS), 802.1Qaz defines the scheduling behavior of multiple traffic classes, including strict priority and minimum guaranteed bandwidth capabilities. This should enable fair sharing of the link, better performance, and metering.
    • Quantized congestion notification (QCN), 802.1Qau supports end-to-end flow control in a switched LAN infrastructure and helps eliminate sustained, heavy congestion in an Ethernet fabric. Before the network can use QCN, you must implement QCN in all components in the converged enhanced Ethernet (CEE) data path (converged network adapters (CNAs), switches, and so on). QCN networks must also use PFC to avoid dropping packets and ensure a lossless environment.
    • Data Center Bridging Exchange Protocol (DCBX), 802.1Qaz supports discovery and configuration of network devices that support PFC, ETS, and QCN.
  16. What is the integrity protection hashing function that the Session Initiation Protocol (SIP) uses?
    1. SHA-160
    2. MD4
    3. MD5
    4. SHA-256

    Answer: C

    SIP provides integrity protection through MD5 hash functions.
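
    SIP reuses the HTTP digest authentication scheme (RFC 3261), in which the MD5 response is computed from two intermediate hashes. A sketch with illustrative parameter values:

```python
import hashlib

def sip_digest_response(user, realm, password, method, uri, nonce):
    # Digest response per the HTTP digest scheme that SIP reuses:
    # MD5(MD5(user:realm:password) : nonce : MD5(method:uri))
    md5 = lambda s: hashlib.md5(s.encode()).hexdigest()
    ha1 = md5(f"{user}:{realm}:{password}")
    ha2 = md5(f"{method}:{uri}")
    return md5(f"{ha1}:{nonce}:{ha2}")
```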

  17. Layer 2 Tunneling Protocol (L2TP) is a hybrid of:
    1. Cisco’s Layer 2 Forwarding (L2F) and Microsoft’s Point to Point Tunneling Protocol (PPTP)
    2. Microsoft’s Layer 2 Forwarding (L2F) and Cisco’s Point to Point Tunneling Protocol (PPTP)
    3. Cisco’s Layer 2 Forwarding (L2F) and Point to Point Protocol (PPP)
    4. Microsoft’s Layer 2 Forwarding (L2F) and Point to Point Protocol (PPP)

    Answer: A

    Layer 2 Tunneling Protocol (L2TP) is a hybrid of Cisco’s Layer 2 Forwarding (L2F) and Microsoft’s Point to Point Tunneling Protocol (PPTP).

  18. With regards to LAN-based security, what is the key difference between the control plane and the data plane?
    1. The data plane is where forwarding/routing decisions are made, while the control plane is where commands are implemented.
    2. The control plane is where APIs are used to monitor and oversee, while the data plane is where commands are implemented.
    3. The control plane is where forwarding/routing decisions are made, while the data plane is where commands are implemented.
    4. The data plane is where APIs are used to monitor and oversee, while the control plane is where commands are implemented.

    Answer: C

    The control plane is where forwarding/routing decisions are made. Switches and routers have to figure out where to send frames (L2) and packets (L3). The switches and routers that run the network run as discrete components, but since they are in a network, they have to exchange information such as host reachability, and status with neighbors. This is done in the control plane using protocols like spanning tree, OSPF, BGP, QoS enforcement, etc.

    The data plane is where the action takes place. It includes things like the forwarding tables, routing tables, ARP tables, queues, tagging and re-tagging, etc. The data plane carries out the commands of the control plane.

    For example, in the control plane, you set up IP networking and routing (routing protocols, route preferences, static routes, etc.) and connect hosts and switches/routers together. Each switch/router figures out what is directly connected to it and then tells its neighbors what it can reach and how it can reach it. The switches/routers also learn how to reach hosts and networks not attached to them. Once all of the routers/switches have a coherent picture, shared via the control plane, the network is converged.

    In the data plane, the routers/switches use what the control plane built to dispose of incoming and outgoing frames and packets. Some get sent to another router, for example. Some may get queued up when congested. Some may get dropped if congestion gets bad enough.
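
    The division of labor described above can be sketched in a few lines: one set of methods builds the routing table (control plane) and another consults it per packet (data plane). The route names and prefixes are illustrative:

```python
import ipaddress

class TinyRouter:
    """Sketch separating the control plane (building the table)
    from the data plane (using it to forward packets)."""
    def __init__(self):
        self.routing_table = []  # list of (network, next_hop)

    # --- control plane: decide where traffic *should* go ---
    def learn_route(self, prefix: str, next_hop: str):
        self.routing_table.append((ipaddress.ip_network(prefix), next_hop))
        # keep longest prefixes first, as a real forwarding lookup prefers
        self.routing_table.sort(key=lambda r: r[0].prefixlen, reverse=True)

    # --- data plane: forward each packet using what the control plane built ---
    def forward(self, dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        for network, next_hop in self.routing_table:
            if addr in network:
                return next_hop
        return "drop"
```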

  19. There are several record types associated with the use of DNSSEC. What does the DS record type represent?
    1. A private key
    2. A public key
    3. A hash of a key
    4. A signature of an RRSet

    Answer: C

    The DNSSEC trust chain is a sequence of records that identify either a public key or a signature of a set of resource records. The root of this chain of trust is the root key which is maintained and managed by the operators of the DNS root. DNSSEC is defined by the IETF in RFCs 4033, 4034, and 4035.

    There are several important new record types:

    • DNSKEY: a public key, used to sign a set of resource records (RRset).
    • DS: delegation signer, a hash of a key.
    • RRSIG: a signature of an RRset that shares the same name/type/class.
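
    As a sketch of the DS relationship, the digest (type 2, SHA-256 per RFC 4509) is computed over the owner name in wire format concatenated with the DNSKEY RDATA. The byte strings in the test are illustrative placeholders, not a real key:

```python
import hashlib

def ds_digest(owner_wire: bytes, dnskey_rdata: bytes) -> str:
    # DS digest type 2: SHA-256 over owner name (wire format) + DNSKEY RDATA.
    # Wire encoding of the owner name is assumed to be done by the caller.
    return hashlib.sha256(owner_wire + dnskey_rdata).hexdigest().upper()
```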
  20. MACsec (IEEE 802.1AE) is used to provide secure communication for all traffic on Ethernet links. How is MACsec configured?
    1. Through key distribution
    2. Using connectivity groups
    3. Using key generation
    4. Using connectivity associations

    Answer: D

    MACsec is configured in connectivity associations. MACsec is enabled when a connectivity association is assigned to an interface.

    MACsec provides security through the use of secured point-to-point Ethernet links. The point-to-point links are secured after matching security keys—a user-configured pre-shared key when you enable MACsec using static connectivity association key (CAK) security mode or a user-configured static secure association key when you enable MACsec using static secure association key (SAK) security mode—are exchanged and verified between the interfaces at each end of the point-to-point Ethernet link. Other user-configurable parameters, such as MAC address or port, must also match on the interfaces on each side of the link to enable MACsec.

Domain 7: Systems and Application Security

  1. “VBS” is used at the beginning of most antivirus vendors’ threat names to represent what component of the CARO general structure?
    1. Family
    2. Platform
    3. Modifier
    4. Suffix

    Answer: B

    VBS is short for Visual Basic Script and is a prefix commonly associated with VBS threats. The general structure of CARO as presented in this chapter is Platform.Type.Family_Name.Variant[:Modifier]@Suffix.
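
    A sketch of splitting a CARO-style name into its fields. Real vendor names vary considerably, so this parser only handles the simple Platform.Family.Variant[@Suffix] shape seen in these questions:

```python
def parse_caro(name: str) -> dict:
    """Split a CARO-style name: Platform.Type.Family_Name.Variant[:Modifier]@Suffix.
    Illustrative parser; real vendor names differ in which fields they include."""
    suffix = modifier = None
    if "@" in name:
        name, suffix = name.rsplit("@", 1)      # e.g. @mm = mass mailer
    if ":" in name:
        name, modifier = name.rsplit(":", 1)    # optional modifier field
    parts = name.split(".")
    return {
        "platform": parts[0],
        "type": parts[1] if len(parts) > 3 else None,  # only present in longer names
        "family": parts[-2],
        "variant": parts[-1],                   # the variant is the last dotted element
        "modifier": modifier,
        "suffix": suffix,
    }
```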

  2. W64.Root.AC is what variant of this malcode?
    1. W64
    2. AC
    3. Root
    4. C

    Answer: B

    The variant is commonly the last element added to a malcode name, AC in this example.

  3. W64.Slober.Z@mm spreads through what primary vector, according to Symantec naming conventions?
    1. Mass Mailer
    2. Windows 64-bit
    3. Windows 8/8.1
    4. E-mail

    Answer: A

    Symantec uses the @SUFFIX mailing convention to identify how malcode spreads. In this case the suffix is @mm, which stands for mass mailer. Answers (B and C) are specific to the platform, not to how the malcode spreads. Answer (D) is also used by Symantec, but would be specified as @m.

  4. A SSCP discovers an antivirus message indicating detection and removal of Backdoor.win64.Agent.igh. What should the SSCP do to monitor the threat?
    1. Use rootkit detection software on the host
    2. Update antivirus signature files
    3. Run a full host scan
    4. Monitor egress traffic from the computer

    Answer: D

    The CARO name indicates that this is a backdoor Trojan. Backdoor Trojans provide attackers with remote access to the computer. Monitoring of network communications is critical in identifying egress communications related to the Trojan. Installation or use of various rootkit or antivirus solutions is not helpful in monitoring the threat. Additionally, antivirus has already detected the threat on the system.

  5. Malcode that infects existing files on a computer in order to spread is called what?
    1. Rootkit
    2. Worms
    3. Viruses
    4. Trojans

    Answer: C

    Viruses require a host file to infect. Trojans do not replicate but masquerade as something legitimate. Worms create copies of themselves as they spread. Rootkits are used for stealth to increase survivability in the wild.

  6. A Trojan that executes a destructive payload when certain conditions are met is called what?
    1. Data diddler
    2. Rootkit
    3. Logic bomb
    4. Keylogger

    Answer: C

    Keyloggers are not destructive but merely steal keystrokes on a system. Data diddler is defined online by Virus Bulletin as a destructive overwriting Trojan, but it does not have a “time” or conditional component to when the payload is executed like that of a logic bomb. A rootkit is a stealthy type of software, typically malicious, designed to hide the existence of certain processes or programs from normal methods of detection and enable continued privileged access to a computer.

  7. How does a cavity virus infect a file with malcode?
    1. Appends code
    2. Injects code
    3. Removes code
    4. Prepends code

    Answer: B

    Cavity viruses inject code into various locations of a file. Prepending is to put code before the body of a file. Appending is to put code following the body of a file.

  8. Mebroot is unique because it modifies what component of a computer to load on system startup?
    1. Windows registry keys
    2. Kernel
    3. Master boot record
    4. Startup folder

    Answer: C

    Mebroot is a kernel-level rootkit that modifies the master boot record to load before the operating system even runs in memory. Modifications made to the Windows registry keys and startup folder are not unique, as they are used by many programs to load specified settings with the operating system.

  9. SYS and VXD hostile codes are commonly associated with what type of threat?
    1. Trojans
    2. Userland rootkits
    3. Worms
    4. Kernel rootkits

    Answer: D

    Kernel-level rootkits normally have SYS and VXD filenames. Userland rootkits are typically a DLL extension. Trojans and worms are general classifications for malcode that are not as specific as the answer Kernel rootkits.

  10. A potentially unwanted program (PUP) refers to software that may include what? (Choose all that apply.)
    1. Monitoring capability
    2. End User License Agreement (EULA)
    3. Patch management capability
    4. Ability to capture data

    Answer: A, B and D

    This is technically legal software that includes an End User License Agreement (EULA) but may monitor or capture sensitive data.

  11. 0.0.0.0 avp.ch is a string found within a Trojan binary, indicating that it likely performs this type of change to a system upon infection:
    1. Downloads code from avp.ch
    2. Modifies the HOSTS file to prevent access to avp.ch
    3. Communicates with a remote C&C at avp.ch
    4. Contains a logic bomb that activates immediately

    Answer: B

    The structure of the string is that of a HOSTS file, indicating that it likely modifies the HOSTS file on the computer.

  12. What does it mean when a SSCP does not see explorer.exe in the Windows Task Manager on a host machine?
    1. It is normal for explorer.exe to not appear in Windows Task Manager.
    2. explorer.exe is likely injected and hidden by a Windows rootkit.
    3. explorer.exe does not need to be visible if svchost.exe is visible.
    4. Internet Explorer is open and running in memory.

    Answer: B

    explorer.exe provides the Windows desktop graphical user interface (GUI) and should always be visible within the Windows Task Manager. If it is not visible, a Windows rootkit is likely concealing the process after having injected into it.

  13. If a SSCP attempts to analyze suspicious code using a VMware based test environment and nothing executes, what might be the next steps to take to further analyze the code?
    1. Submit the code to an online sandbox scanner to compare behavioral results.
    2. Modify advanced settings to disable hardware acceleration and similar components and execute the code again.
    3. Call VMware technical support for help in identifying the problem(s) causing the code not to execute.
    4. Run the malcode in a native, non-virtualized test environment to see if it is anti-VMware.

    Answer: A, B and D

    Answer (C) will not help to identify what the suspicious code may be, to analyze it further, or to learn anything of value about the code, as VMware technical support will not be able to answer any questions regarding the suspicious code itself.

  14. What does the “vector of attack” refer to?
    1. Software that can infect multiple hosts
    2. The primary action of a malicious code attack
    3. How the transmission of malcode takes place
    4. The directions used to control the placement of the malcode

    Answer: C

    The vector of attack is how the transmission of malcode takes place, such as e-mail, a link sent to an instant messenger user, or a hostile website attempting to exploit vulnerable software on a remote host. This is one of the most important components of a malcode incident for a security practitioner to understand to properly protect against reinfection or additional attacks on the infrastructure of a corporate network.

  15. What is direct kernel object modification an example of?
    1. A technique used by persistent mode rootkits to modify data structures
    2. A technique used by memory based rootkits to modify data structures
    3. A technique used by user mode rootkits to modify data structures
    4. A technique used by kernel mode rootkits to modify data structures

    Answer: D

    Kernel-mode rootkits are considered to be more powerful than other kinds of rootkits because they can not only intercept the native API in kernel mode but also directly manipulate kernel-mode data structures. A common technique for hiding the presence of a malware process is to remove the process from the kernel's list of active processes. Since process management APIs rely on the contents of the list, the malware process will not display in process management tools like Task Manager or Process Explorer. For example, kernel memory must keep a list of all running processes, and a rootkit can simply remove itself and any other malicious processes it wishes to hide from this list. This technique is known as direct kernel object modification (DKOM).
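
    DKOM can be modeled in miniature: unlinking a process from the enumeration list hides it from list-walking tools, while a cross-view comparison against an independent structure (the basis of many rootkit detectors) still reveals it. Everything here is an illustrative toy, not a real kernel structure:

```python
class Process:
    def __init__(self, pid, name):
        self.pid, self.name = pid, name

class Kernel:
    """Toy model: the active-process list vs. a second bookkeeping view."""
    def __init__(self):
        self.active_list = []   # what Task Manager-style enumeration walks
        self.handle_table = {}  # an independent kernel structure, keyed by pid

    def spawn(self, pid, name):
        p = Process(pid, name)
        self.active_list.append(p)
        self.handle_table[pid] = p
        return p

def dkom_hide(kernel: Kernel, pid: int):
    # DKOM in miniature: unlink the process from the list that
    # enumeration APIs rely on; other kernel structures still see it.
    kernel.active_list = [p for p in kernel.active_list if p.pid != pid]

def cross_view_detect(kernel: Kernel):
    # Rootkit detectors compare two views and flag the discrepancy.
    listed = {p.pid for p in kernel.active_list}
    return [pid for pid in kernel.handle_table if pid not in listed]
```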

  16. What kind of an attack is the following sample code indicative of?

      ../../../
    1. Covert channel
    2. Buffer overflow
    3. Directory traversal
    4. Pointer overflow

    Answer: C

    A directory traversal exploits a lack of security in web applications and allows an attacker to access files. The directory traversal:

    • Uses a common means of representing a parent directory, ../ (dot dot slash) to access files not intended to be accessed.
    • Consists of adding the characters ../ to a URL. An example is: ../../../../<filename>.
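
    A common server-side defense is to normalize the requested path and confirm it still falls under the web root. A minimal sketch, assuming POSIX-style paths:

```python
import os.path

def resolve_safely(web_root: str, requested: str) -> str:
    # Normalize the path, then confirm the result stays under the web root;
    # ../ sequences that would escape the root are rejected.
    candidate = os.path.normpath(os.path.join(web_root, requested.lstrip("/")))
    if os.path.commonpath([web_root, candidate]) != web_root:
        raise PermissionError(f"directory traversal attempt: {requested}")
    return candidate
```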
  18. What does a second generation antivirus scanner use to search for probable malware instances?
    1. Heuristic rules
    2. Malware signatures
    3. Malware signatures
    4. Generic decryption

    Answer: A

    A second-generation scanner does not rely on a specific signature. Rather, the scanner uses heuristic rules to search for probable malware instances. One class of such scanners looks for fragments of code that are often associated with malware. An example of this type of scanner would be a scanner that may look for the beginning of an encryption loop used in a polymorphic virus and discover the encryption key. Once the key is discovered, the scanner can decrypt the malware to identify it, then remove the infection and return the program to service. Another second-generation approach is integrity checking. A checksum can be appended to each program. If malware alters or replaces some program without changing the checksum, then an integrity check will catch this change. To counter malware that is sophisticated enough to change the checksum when it alters a program, an encrypted hash function can be used. The encryption key is stored separately from the program so that the malware cannot generate a new hash code and encrypt that. By using a hash function rather than a simpler checksum, the malware is prevented from adjusting the program to produce the same hash code as before.
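
    The keyed-hash integrity check described above can be sketched with HMAC, with the key assumed to be stored separately from the program:

```python
import hashlib
import hmac

def sign_program(program: bytes, secret_key: bytes) -> str:
    # Keyed hash (HMAC) instead of a plain checksum: without the key,
    # malware cannot recompute a matching value after altering the program.
    return hmac.new(secret_key, program, hashlib.sha256).hexdigest()

def integrity_ok(program: bytes, secret_key: bytes, stored: str) -> bool:
    return hmac.compare_digest(sign_program(program, secret_key), stored)
```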

  19. What type of botnet detection and mitigation technique is NetFlow used for?
    1. Anomaly detection
    2. DNS log analysis
    3. Data monitoring
    4. Honeypots

    Answer: C

    The most common detection and mitigation techniques include:

    • Flow data monitoring—This technique uses flow-based protocols to get summary network and transport-layer information from network devices. Cisco NetFlow is often used by service providers and enterprises to identify command-and-control traffic for compromised workstations or servers that have been subverted and are being remotely controlled as members of botnets used to launch DDoS attacks, perform keystroke logging, and other forms of illicit activity.
    • Anomaly detection—While signature-based approaches try to have a signature for every vulnerability, anomaly detection (or behavioral approaches) try to do the opposite. They characterize what normal traffic is like, and then look for deviations. Any burst of scanning activity on the network from zombie machines can be detected and blocked. Anomaly detection can be effectively used on the network as well as on endpoints (such as servers and laptops). On endpoints, suspicious activity and policy violations can be identified and infections prevented.
    • DNS log analysis—Botnets often rely on free DNS hosting services to point a subdomain to IRC servers that have been hijacked by the botmaster, and that host the bots and associated exploits. Botnet code often contains hard-coded references to a DNS server, which can be spotted by any DNS log analysis tool. If such services are identified, the entire botnet can be crippled by the DNS server administrator by directing offending subdomains to a dead IP address (a technique known as null-routing). While this technique is effective, it is also the hardest to implement since it requires cooperation from third-party hosting providers and name registrars.
    • Honeypots—A honeypot is a trap that mimics a legitimate network, resource, or service, but is in fact a self-contained, secure, and monitored area. Its primary goal is to lure and detect malicious attacks and intrusions. Effective more as a surveillance and early warning system, it can also help security researchers understand emerging threats. Due to the difficulty in setup and the active analysis required, the value of honeypots on large-scale networks is rather limited.
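
    A toy version of flow-based anomaly detection: flag source hosts whose fan-out (count of distinct destinations) exceeds a baseline, as a scanning zombie would. The flow tuples and the threshold are illustrative placeholders, not recommended values:

```python
from collections import defaultdict

def flag_scanners(flow_records, baseline_fanout=25):
    """Flag hosts whose fan-out (distinct destinations) exceeds a baseline.
    flow_records: iterable of (src_ip, dst_ip) pairs, e.g. summarized from
    a NetFlow export. Illustrative sketch only."""
    fanout = defaultdict(set)
    for src, dst in flow_records:
        fanout[src].add(dst)
    return sorted(src for src, dsts in fanout.items() if len(dsts) > baseline_fanout)
```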
  20. What kind of tool should be used to check for cross-site scripting (XSS) vulnerabilities?
    1. Rootkit revealer
    2. Web vulnerability scanner
    3. Terminal emulator
    4. Decompiler

    Answer: B

    To check for cross-site scripting vulnerabilities, use a web vulnerability scanner. A web vulnerability scanner crawls an entire website and automatically checks for cross-site scripting vulnerabilities, indicating which URLs/scripts are vulnerable to these attacks. Besides cross-site scripting vulnerabilities, a web application scanner will also check for SQL injection and other web vulnerabilities.

  21. What are the five phases of an advanced persistent threat (APT)?
    1. Reconnaissance, capture, incursion, discovery, and exfiltration
    2. Reconnaissance, discovery, incursion, capture, and exfiltration
    3. Incursion, reconnaissance, discovery, capture, and exfiltration
    4. Reconnaissance, incursion, discovery, capture, and exfiltration

    Answer: D

    The five phases of an APT are detailed below:

    1. Reconnaissance—Attackers leverage information from a variety of areas to understand their target.
    2. Incursion—Attackers break into the target network by using social engineering to deliver targeted malware to vulnerable systems and people.
    3. Discovery—The attacker maps the organization's defenses from the inside out, allowing them to have a complete picture of the strengths and weaknesses of the network. This allows the attacker to pick and choose what vulnerabilities and weaknesses they will attempt to exploit through the deployment of multiple parallel vectors to ensure success.
    4. Capture—Attackers access unprotected systems and capture information over an extended period of time. They will also traditionally install malware to allow for the secret acquisition of data and potential disruption of operations if required.
    5. Exfiltration—Captured information is sent back to the attackers for analysis and potentially further exploitation.
  22. What would a malware author need to do in order to prevent the heuristic technology used by antivirus vendors from detecting the malware code hidden inside of a program file?
    1. Use a runtime packer that is virtual environment aware
    2. Encrypt the malware files
    3. Decompile the malware files
    4. Use a runtime packer that is not virtual environment aware

    Answer: A

    A simple explanation of software packers, or compression, is that symbols are used to represent repeated patterns in the software code. A packed file can contain malware, and unless your antivirus product knows how to unpack the file, the malware will not be detected. That would seem to be the end of the story, except that we have something called runtime packers. Here is how they work. The packed file is an executable program that is only partially packed: a tiny bit of the program, the beginning, is left unpacked, so when the packed executable is run, it starts unpacking the rest of the file. The unpacker is built right in.

    Runtime packers are used by malware authors because they make it much harder to detect the malware. Antivirus vendors use heuristic technologies that create a virtual computer inside the scanning engine and then run the program inside the virtualized environment. This can force a runtime-packed program to unpack itself. There is always a catch, though: the malware programmer can make the program detect that it is running in a virtual environment, and the program may then not unpack itself, or may unpack only harmless parts of itself, to fool the virus scanning program.
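    The packing-plus-evasion trick can be sketched with ordinary compression standing in for a packer. This is a toy model only: the marker string and function names are invented for illustration, and real packers fingerprint the environment (CPUID hypervisor bits, virtual MAC prefixes, guest-tools drivers, timing) rather than reading a flag.

    ```python
    import zlib

    # Stand-in marker for real virtual-machine fingerprints.
    VM_MARKER = "VIRTUAL_ENV_PRESENT"

    def pack(payload):
        """'Pack' a payload. On disk only the compressed bytes are visible,
        so a signature scanner matching the plaintext pattern misses it."""
        return zlib.compress(payload)

    def run_packed(packed, environment):
        """The unpacking stub built into a runtime-packed program. An
        environment-aware packer refuses to unpack inside the scanner's
        emulated machine, so the heuristic engine never sees the real code."""
        if VM_MARKER in environment:
            return None                  # play dead inside the emulator
        return zlib.decompress(packed)   # on real hardware: unpack (and run)
    ```

    The scanner's countermove is to make its emulated environment indistinguishable from real hardware, which is why this remains an arms race.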

  23. What is a goat machine used for?
    1. Configuration management
    2. Hosting of network monitoring software
    3. Testing of suspicious software
    4. Creation of baseline images

    Answer: C

    Goat machines must be able to be restored quickly through imaging solutions such as Acronis or Ghost. They should ideally mirror the images used in the corporate environment and are placed on a separate network for the security practitioner to use when running laboratory tests. It is a good idea for the security practitioner to create multiple goat images, both patched and unpatched, to test for exploitation success, as well as up-to-date builds from the network.

  24. Identify whether each of the following activities is strategic or tactical:
    • Defense in depth
    • Hardening systems
    • Senior management support
    • Backing up data
    • The formation of a CERT/CSIRT team

    Answer:

    • Defense in depth: Strategic
    • Hardening systems: Tactical
    • Senior management support: Strategic
    • Backing up data: Tactical
    • The formation of a CERT/CSIRT Team: Strategic
  25. What is the correct description of the relationship between a data controller and a data processor role with regards to privacy and data protection (P&DP) laws?
    1. The processor determines the purposes and means of processing of public data, while the controller processes public data on behalf of the processor.
    2. The controller determines the purposes and means of processing of public data, while the processor processes public data on behalf of the controller.
    3. The controller determines the purposes and means of processing of personal data, while the processor processes personal data on behalf of the controller.
    4. The processor determines the purposes and means of processing of personal data, while the controller processes personal data on behalf of the processor.

    Answer: C

    The ultimate goal of P&DP laws is to provide safeguards to individuals (data subjects) for the processing of their personal data, with respect to their privacy and their will. This is achieved through the definition of principles and rules to be fulfilled by the operators involved in the data processing. These operators can process the data in the role of data controller or data processor.

    Following are typical meanings for common privacy terms:

    • Data Subject—An identifiable person: one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his or her physical, physiological, mental, economic, cultural, or social identity (telephone number, IP address, etc.).
    • Personal Data—Any information relating to an identified or identifiable natural person. There are many types of personal data, such as sensitive/health data, biometric data, and telephone/telematic traffic data. Depending on the type of personal data, the P&DP laws usually set out specific privacy and data protection obligations (e.g., security measures, data subject’s consent for the processing, etc.).
    • Processing—Any operation performed upon personal data, whether or not by automatic means, such as collection, recording, organization, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure, or destruction. Processing is carried out for specific purposes and scopes (e.g., marketing, selling products, the purpose of justice, the management of employer-employee work relationships, public administration, health services). Depending on the purpose and scope of a processing, the P&DP laws usually set out specific privacy and data protection obligations (e.g., security measures, data subject’s consent for the processing).
    • Controller—The natural or legal person, public authority, agency, or any other body which alone or jointly with others determines the purposes and means of the processing of personal data; where the purposes and means of processing are determined by national or community laws or regulations, the controller, or the specific criteria for his nomination, may be designated by national or community law.
    • Processor—A natural or legal person, public authority, agency, or any other body which processes personal data on behalf of the controller.
  26. According to the NIST Definition of Cloud Computing (NIST SP 800-145), what are the three Cloud Service Models?
    1. Software as a service (SaaS), platform as a service (PaaS) and internet of things as a service (TaaS)
    2. Software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS)
    3. Software as a service (SaaS), business process as a service (BPaaS) and infrastructure as a service (IaaS)
    4. Security as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS)

    Answer: B

    Service Models:

    • Software as a Service (SaaS)—The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
    • Platform as a Service (PaaS)—The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
    • Infrastructure as a Service (IaaS)—The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).
  27. Which of the following are storage types used with an infrastructure as a service solution?
    1. Volume and block
    2. Structured and object
    3. Unstructured and ephemeral
    4. Volume and object

    Answer: D

    IaaS uses the following storage types:

    • Volume storage—A virtual hard drive that can be attached to a virtual machine instance and be used to host data within a file system. Volumes attached to IaaS instances behave just like a physical drive or an array does. Examples include VMware VMFS, Amazon EBS, RackSpace RAID, and OpenStack Cinder.
    • Object storage—Object storage is like a file share accessed via APIs or a web interface. Examples include Amazon S3 and Rackspace cloud files.
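    The practical difference is access style: a volume is mounted and used through a file system, while object storage is addressed by key through an API over HTTP. The toy in-memory store below mimics the put/get/list calls an S3-style service exposes; the class and method names are illustrative, not any real provider's SDK.

    ```python
    class ToyObjectStore:
        """In-memory stand-in for an S3-style object storage API."""

        def __init__(self):
            self._buckets = {}          # bucket name -> {object key: bytes}

        def create_bucket(self, bucket):
            self._buckets.setdefault(bucket, {})

        def put_object(self, bucket, key, data):
            self._buckets[bucket][key] = bytes(data)

        def get_object(self, bucket, key):
            return self._buckets[bucket][key]

        def list_objects(self, bucket, prefix=""):
            # Object stores have no real directories; "folders" are just
            # shared key prefixes.
            return sorted(k for k in self._buckets[bucket]
                          if k.startswith(prefix))

    store = ToyObjectStore()
    store.create_bucket("backups")
    store.put_object("backups", "2024/01/db.dump", b"\x00\x01")
    store.put_object("backups", "2024/02/db.dump", b"\x02\x03")
    ```

    Nothing here is mounted or formatted: every interaction is an API call against a flat key namespace, which is what distinguishes object storage from a volume attached to an instance.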
  28. What is the Cloud Security Alliance Cloud Controls Matrix?
    1. A set of regulatory requirements for cloud service providers.
    2. An inventory of cloud service security controls that are arranged into separate security domains.
    3. A set of software development life cycle requirements for cloud service providers.
    4. An inventory of cloud service security controls that are arranged into a hierarchy of security domains.

    Answer: B

    The Cloud Security Alliance Cloud Controls Matrix (CCM) is an essential and up-to-date security controls framework addressed to the cloud community and stakeholders. A fundamental richness of the CCM is its ability to provide mapping/cross-relationships with the main industry-accepted security standards, regulations, and controls frameworks such as ISO 27001/27002, ISACA’s COBIT, and PCI DSS. The CCM can be seen as an inventory of cloud service security controls.

  29. Which of the following are attributes of cloud computing?
    1. Minimal management effort and shared resources
    2. High cost and unique resources
    3. Rapid provisioning and slow release of resources
    4. Limited access and service provider interaction

    Answer: A

    “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

    NIST Definition of Cloud Computing (SP 800-145)

  30. When using an infrastructure as a service solution, what is the capability provided to the customer?
    1. To provision processing, storage, networks, and other fundamental computing resources where the consumer is not able to deploy and run arbitrary software, which can include operating systems and applications.
    2. To provision processing, storage, networks, and other fundamental computing resources where the provider is able to deploy and run arbitrary software, which can include operating systems and applications.
    3. To provision processing, storage, networks, and other fundamental computing resources where the auditor is able to deploy and run arbitrary software, which can include operating systems and applications.
    4. To provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.

    Answer: D

    According to the NIST Definition of Cloud Computing, in IaaS, “the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).”

  31. When using a platform as a service solution, what is the capability provided to the customer?
    1. To deploy onto the cloud infrastructure provider-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
    2. To deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The provider does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
    3. To deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
    4. To deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the consumer. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

    Answer: C

    According to the NIST Definition of Cloud Computing, in PaaS, “the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.”

  32. What are the four cloud deployment models?
    1. Public, internal, hybrid, and community
    2. External, private, hybrid, and community
    3. Public, private, joint, and community
    4. Public, private, hybrid, and community

    Answer: D

    According to the NIST Definition of Cloud Computing, the cloud deployment models are:

    • Private cloud—The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
    • Community cloud—The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
    • Public cloud—The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.
    • Hybrid cloud—The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
  33. When setting up resource sharing within a host cluster, which option would you choose to mediate resource contention?
    1. Reservations
    2. Limits
    3. Clusters
    4. Shares

    Answer: D

    Within a host cluster, resources are allocated and managed as if they are pooled or jointly available to all members of the cluster. The use of resource sharing concepts such as reservations, limits, and shares may be used to further refine and orchestrate the allocation of resources according to requirements imposed by the cluster administrator.

    • Reservations guarantee that a certain minimum amount of the cluster’s pooled resources is made available to a specified virtual machine.
    • Limits set a maximum ceiling on the amount of the cluster’s pooled resources that a specified virtual machine may consume.
    • Shares govern the provisioning of the resources left in a cluster when there is resource contention. Specifically, once the cluster’s reservations have been allocated, shares distribute any remaining resources available to members of the cluster through a prioritized, percentage-based allocation mechanism.
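    The interplay of the three settings can be sketched as a one-shot allocator: reservations are honored first, then the remainder is split in proportion to shares, with limits capping each virtual machine. This is a simplified model of the scheme just described, not any hypervisor's actual scheduler, and it assumes reservations fit within capacity and do not exceed limits.

    ```python
    def allocate(capacity, vms):
        """Split one pooled resource (e.g. MHz of CPU) across the VMs in a
        cluster. Each entry of `vms` maps a VM name to a dict with:
          reservation - guaranteed minimum amount,
          limit       - hard ceiling the VM may never exceed,
          shares      - relative priority under contention.
        """
        # Step 1: reservations are honored first, unconditionally.
        alloc = {name: float(vm["reservation"]) for name, vm in vms.items()}
        remaining = capacity - sum(alloc.values())

        # Step 2: distribute what is left in proportion to shares, capping
        # each VM at its limit; repeat until nothing remains or every VM
        # is capped (a capped VM's unused cut flows to the others).
        while remaining > 1e-9:
            eligible = [n for n in vms if alloc[n] < vms[n]["limit"]]
            total_shares = sum(vms[n]["shares"] for n in eligible)
            if not eligible or total_shares == 0:
                break
            pool = remaining
            for name in eligible:
                cut = pool * vms[name]["shares"] / total_shares
                grant = min(cut, vms[name]["limit"] - alloc[name])
                alloc[name] += grant
                remaining -= grant
        return alloc
    ```

    For example, with 1000 units of capacity, a VM holding twice the shares of its peers receives twice as large a cut of whatever remains after all reservations are satisfied.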