Chapter 7

Domain 6: Security Assessment and Testing (Designing, Performing, and Analyzing Security Testing)

Abstract

Domain 6 discusses security assessment and testing, which are critical components of any information security program. Organizations must accurately assess their real-world security, focus on the most critical components, and make necessary changes to improve. This domain describes two major components of assessment and testing: overall security assessments (including vulnerability scanning, penetration testing, security assessments, and security audits), and testing software via static and dynamic methods. Static testing tests the code passively: the code is not running. This includes walkthroughs, syntax checking, and code reviews. Dynamic methods include fuzzing, a type of black box testing that submits random, malformed data as inputs into software programs to determine if they will crash.

Keywords

Dynamic Testing
Fuzzing
Penetration Testing
Static Testing
Synthetic Transactions

Exam objectives in this chapter

Assessing Access Control
Software Testing Methods

Unique Terms and Definitions

Dynamic Testing – Tests code while executing it
Fuzzing – A type of black box testing that submits random, malformed data as inputs into software programs to determine if they will crash
Penetration Testing – Authorized attempt to break into an organization’s physical or electronic perimeter (and sometimes both)
Static Testing – Tests code passively: the code is not running.
Synthetic Transactions – Also called synthetic monitoring: involves building scripts or tools that simulate activities normally performed in an application

Introduction

Security assessment and testing are critical components of any information security program. Organizations must accurately assess their real-world security, focus on the most critical components, and make necessary changes to improve.
In this domain we will discuss two major components of assessment and testing: overall security assessments (including vulnerability scanning, penetration testing, and security audits), and testing software via static and dynamic methods.

Assessing Access Control

A number of processes exist to assess the effectiveness of access control. Tests with a narrower scope include penetration tests, vulnerability assessments, and security audits. A security assessment is a broader test that may include narrower tests, such as penetration tests, as subsections.

Penetration Testing

A penetration tester is a white hat hacker who receives authorization to attempt to break into an organization’s physical or electronic perimeter (and sometimes both). Penetration tests (called “pen tests” for short) are designed to determine whether black hat hackers could do the same. They are a narrow, but often useful, test, especially if the penetration tester is successful.
Penetration tests may include the following tests:
Network (Internet)
Network (internal or DMZ)
War dialing
Wireless
Physical (attempt to gain entrance into a facility or room)
Network attacks may leverage client-side attacks, server-side attacks, or Web application attacks. See Chapter 4, Domain 3: Security Engineering for more information on these attacks. War dialing uses a modem to dial a series of phone numbers, looking for an answering modem carrier tone (the penetration tester then attempts to access the answering system); the name derives from the 1983 movie WarGames.
Social engineering is a no-tech or low-tech method that uses the human mind to bypass security controls. Social engineering may be used in combination with many types of attacks, especially client-side attacks or physical tests. An example of a social engineering attack combined with a client-side attack is emailing malware with a Subject line of “Category 5 Hurricane is about to hit Florida!” A physical social engineering attack (used to tailgate an authorized user into a building) is described in Chapter 4, Domain 3: Security Engineering.
A zero-knowledge (also called black box) test is “blind”; the penetration tester begins with no external or trusted information, and begins the attack with public information only. A full-knowledge test (also called crystal-box) provides internal information to the penetration tester, including network diagrams, policies and procedures, and sometimes reports from previous penetration testers. Partial-knowledge tests are in between zero and full knowledge: the penetration tester receives some limited trusted information.
Some clients prefer the zero knowledge approach, feeling this will lead to a more accurate simulation of a real attacker’s process. This may be a false premise: a real attacker may be an insider, or have access to inside information.
Full-knowledge testing can be far more efficient, allowing the penetration tester to find weaker areas more quickly. Most penetration tests have a scope that includes a limitation on the time spent conducting the test. Limited testing time may lead to a failed test, where more time could lead to success. Full-knowledge tests are also safer: systems are less likely to crash if the penetration tester has extensive information about the targets before beginning the test.

Penetration Testing Tools and Methodology

Penetration testers often use penetration testing tools, which include the open source Metasploit (http://www.metasploit.org), and closed source Core Impact (http://www.coresecurity.com) and Immunity Canvas (http://www.immunitysec.com). Pen testers also use custom tools, as well as malware samples and code posted to the Internet.
Penetration testers use the following methodology:
Planning
Reconnaissance
Scanning (also called enumeration)
Vulnerability assessment
Exploitation
Reporting
Black hat hackers typically follow a similar methodology (though they may perform less planning, and obviously omit reporting). Black hats will also cover their tracks (erase logs and other signs of intrusion), and frequently violate system integrity by installing back doors (in order to maintain access). A penetration tester should always protect data and system integrity.

Note

Penetration tests are sometimes controversial. Some argue that a penetration test really tests the skill of the penetration tester, and not the perimeter security of an organization. If a pen test is successful, there is value to the organization. But what if the penetration test fails? Did it fail because there is no perimeter risk? Or did it fail because the penetration tester lacked the skill or the time to complete the test? Or did it fail because the scope of the penetration test was too narrow?

Assuring Confidentiality, Data Integrity and System Integrity

Penetration testers must ensure the confidentiality of any sensitive data that is accessed during the test. If the target of a penetration test is a credit card database, the penetration tester may have no legal right to view or download the credit cards. Testers will often request that a dummy file containing no regulated or sensitive data (sometimes called a flag) be placed in the same area of the system as the credit card data, and protected with the same permissions. If the tester can read and/or write to that file, then they prove they could have done the same to the credit card data.
Penetration testers must be sure to ensure the system integrity and data integrity of their client’s systems. Any active attack (where data is sent to a system, as opposed to a passive read-only attack) against a system could potentially cause damage: this can be true even for an experienced penetration tester. This risk must be clearly understood by all parties: tests are often performed during change maintenance windows for this reason.
One potential issue that should be discussed before the penetration test commences is the risk of encountering signs of a previous or current successful malicious attack. Penetration testers sometimes discover that they are not the first attacker to compromise a system: someone has beaten them to it. Attackers will often become more malicious if they believe they have been discovered, sometimes violating data and system integrity. The integrity of the system is at risk in this case, and the penetration tester should end the penetration test, and immediately escalate the issue.
Finally, the penetration test report itself should be protected at a very high level: it contains a roadmap for attacking the organization.

Vulnerability Testing

Vulnerability scanning (also called vulnerability testing) scans a network or system for a list of predefined vulnerabilities such as system misconfiguration, outdated software, or a lack of patching. A vulnerability testing tool such as Nessus (http://www.tenable.com/products/nessus-vulnerability-scanner) or OpenVAS (http://www.openvas.org) may be used to identify the vulnerabilities.
We learned that Risk = Threat × Vulnerability in Chapter 2, Domain 1: Security and Risk Management. It is important to remember that vulnerability scanners only show half of the risk equation: their output must be matched to threats to map true risk. This is an important half to identify, but these tools only perform part of the total job. Many organizations fall into the trap of viewing vulnerabilities without matching them to threats, and thus do not understand or mitigate true business risk.
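The matching of scanner output to threats can be sketched in a few lines; the hosts, vulnerabilities, and scores below are purely hypothetical illustrations, not the output of any real scanner:

```python
# Sketch: pairing vulnerability scanner findings with a per-host threat
# likelihood to rank true risk (Risk = Threat x Vulnerability).
findings = [
    {"host": "web01", "vuln": "outdated TLS library", "severity": 7},
    {"host": "hr-db", "vuln": "missing OS patch", "severity": 9},
    {"host": "kiosk", "vuln": "default credentials", "severity": 9},
]

# Hypothetical threat likelihood per host (e.g., internet-facing vs. isolated).
threat = {"web01": 9, "hr-db": 4, "kiosk": 2}

for f in findings:
    f["risk"] = threat[f["host"]] * f["severity"]

ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
for f in ranked:
    print(f["host"], f["vuln"], "risk score:", f["risk"])
```

Note how the highest-severity finding is not necessarily the highest risk once threat likelihood is factored in.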

Security Audits

A security audit is a test against a published standard. Organizations may be audited for PCI-DSS (Payment Card Industry Data Security Standard, discussed in Chapter 3, Domain 2: Asset Security) compliance, for example. PCI-DSS includes many required controls, such as firewalls, specific access control models, and wireless encryption. An auditor then verifies a site or organization meets the published standard.

Security Assessments

Security assessments are a holistic approach to assessing the effectiveness of access control. Instead of looking narrowly at penetration tests or vulnerability assessments, security assessments have a broader scope.
Security assessments view many controls across multiple domains, and may include the following:
Policies, procedures, and other administrative controls
Assessing the real-world effectiveness of administrative controls
Change management
Architectural review
Penetration tests
Vulnerability assessments
Security audits
As the above list shows, a security assessment may include other distinct tests, such as penetration tests. The goal is to broadly cover many other specific tests, to ensure that all aspects of access control are considered.

Internal and Third Party Audits

Security professionals routinely play a significant role in audits. In an audit, the expectation is that an organization is being measured against a particular standard. While looser usage of the word audit is common, even with purely internal auditing the organization is assessing adherence to practices it has deemed appropriate.
Organizations routinely undergo a variety of audits against various standards on an almost continuous basis. Some of these audits might simply involve self-reporting to a third party, or be carried out solely for internal use by the organization; such audits may be conducted using only internal resources. Quite often, however, external auditors will perform their own evaluation of an organization for report purposes. In either case, security professionals frequently play a role in collecting and communicating answers to specific requests, responding to and remediating audit findings, and demonstrating effective mitigations that might prevent a negative finding.

Log Reviews

As a security control, logs can and should play a vital role in detecting security issues, informing incident response, and supporting later forensic review. From an assessment and testing standpoint, the goal is to review logs to ensure they can support information security as effectively as possible.
Reviewing security audit logs within an IT system is one of the easiest ways to verify that access control mechanisms are performing adequately. Reviewing audit logs is primarily a detective control.
According to NIST Special Publication 800-92 (http://csrc.nist.gov/publications/nistpubs/800-92/SP800-92.pdf), the following log types should be collected:
Network Security Software/Hardware:
Antivirus logs
IDS/IPS logs
Remote Access Software (such as VPN logs)
Web proxy
Vulnerability management
Authentication servers
Routers and firewalls
Operating System:
System events
Audit records
Applications
Client requests and server responses
Usage information
Significant operational actions [1]
The intelligence gained from proactive audit log management and monitoring can be very beneficial: the collected antivirus logs of thousands of systems can give a very accurate picture of the current state of malware. Antivirus alerts combined with a spike in failed authentication alerts from authentication servers or a spike in outbound firewall denials may indicate that a password-guessing worm is attempting to spread on a network.
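This kind of correlation can be sketched simply; the log lines, hosts, and threshold below are hypothetical, and real parsing would of course depend on the actual log formats in use:

```python
# Sketch: correlating two log sources to spot a password-guessing worm.
# A host with both an antivirus alert and a spike in failed logins is
# far more suspicious than either signal alone.
from collections import Counter

auth_log = [
    "10.0.0.5 FAILED login for admin",
    "10.0.0.5 FAILED login for root",
    "10.0.0.5 FAILED login for guest",
    "10.0.0.9 FAILED login for alice",
]
av_log = ["10.0.0.5 worm.generic detected"]

failures = Counter(line.split()[0] for line in auth_log)
infected = {line.split()[0] for line in av_log}

THRESHOLD = 3  # hypothetical alerting threshold
suspects = [h for h, n in failures.items() if n >= THRESHOLD and h in infected]
print("Hosts likely spreading a password-guessing worm:", suspects)
```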
According to “Five mistakes of Log Analysis” by Anton Chuvakin (see http://www.computerworld.com/s/article/96587/Five_mistakes_of_log_analysis), audit record management typically faces five distinct problems:
1. Logs are not reviewed on a regular and timely basis.
2. Audit logs and audit trails are not stored for a long enough time period.
3. Logs are not standardized or viewable by correlation toolsets—they are only viewable from the system being audited.
4. Log entries and alerts are not prioritized.
5. Audit records are only reviewed for the “bad stuff.” [2]
Many organizations collect audit logs, and then commit one or more of these types of mistakes. The useful intelligence referenced in the previous section (identifying worms via antivirus alerts, combined with authentication failures or firewall denials) is only possible if these mistakes are avoided.

Centralized Logging

Centralized log storage should be configured. Having logs in a central repository allows for more scalable security monitoring and intrusion detection capabilities. A centralized log repository can also help to verify the integrity of log information should the endpoint’s view of the logs be corrupted or intentionally altered. Ensuring the integrity of log information should be considered when transmitting and storing log data.

Note

Syslog, the most widely used logging subsystem, by default transmits log data in plaintext over UDP/514 when sending data to a remote server. UDP is a transport protocol that does not guarantee delivery, which has implications for the continuity of logging: the central log server might not receive all the log data, and the endpoint has no way of knowing that delivery failed. The plaintext nature of Syslog means that a suitably positioned adversary could see the (potentially sensitive) log data as it traverses the network. Syslog messages may also be spoofed due to the lack of authentication, lack of encryption, and use of UDP as the layer 4 transport protocol.
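A minimal sketch of this plaintext UDP behavior follows, using a local socket pair in place of a remote syslog server listening on UDP/514:

```python
# Sketch: a syslog-style message sent in plaintext over UDP, as the classic
# protocol does on port 514. A loopback socket stands in for a remote log
# server; note the message crosses the wire fully readable.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port instead of 514
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# <34> encodes the syslog facility/severity; the payload is plain text.
client.sendto(b"<34>Oct 11 22:14:15 host su: auth failure", addr)

data, _ = server.recvfrom(1024)
print(data)  # anyone positioned on the network path could read this
client.close()
server.close()
```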
In addition to the centralized logs, at least some limited recent logs should preferably be maintained on the endpoint system itself. Having local logs in addition to the centralized log store can help in several ways. Should the continuity of logging be disrupted, the logs might still be recoverable from the endpoint. If an adversary intentionally corrupts or edits the logs on the endpoint, comparing the differences can guide incident responders to the adversary's activities.

Log Retention

A retention and rotation policy for log information should be created and maintained. The retention and rotation should vary depending upon the source of the log, the type of logged information, and the practical value of the log information. Having a tremendous volume of log data that is categorically ignored provides very little value, and can also make finding meaningful data in the rest of the logs more challenging. While the security value of the log information is important, log retention can also be relevant to legal or regulatory compliance matters. Legal or regulatory considerations must be accounted for when considering log retention.

Software Testing Methods

In addition to testing the features and stability of the software, software testing increasingly focuses on discovering specific programmer errors (such as a lack of bounds checking) that could lead to vulnerabilities that increase the risk of system compromise.
Unlike off-the-shelf applications, custom developed applications don’t have a vendor providing security patches on a routine basis. The onus is on the organization developing the application to discover these flaws. Source code review of custom developed applications is one of the key approaches employed in application security.
Two general approaches to automated code review exist: static and dynamic analysis. The CISSP also calls out manual code review, which simply implies a knowledgeable person reviewing the code by hand. Pair programming, employed in agile software development shops (discussed in Chapter 9, Domain 8: Software Development Security), could be considered an example of manual source code review.

Static and Dynamic Testing

Static testing tests the code passively; the code is not running. This includes walkthroughs, syntax checking, and code reviews. Static analysis tools review the raw source code itself looking for evidence of known insecure practices, functions, libraries, or other characteristics having been used in the source code. The Unix program ‘lint’ performed static testing for C programs.
Code compiler warnings can also be considered a ‘lite’ form of static analysis. The C compiler GCC (Gnu Compiler Collection, see: https://gcc.gnu.org) contains static code analysis features: “The gcc compiler includes many of the features of lint, the classic C program verifier, and then some… The gcc compiler can identify many C program constructs that pose potential problems, even for programs that conform to the syntax rules of the language. For instance, you can request that the compiler report whether a variable is declared but not used, a comment is not properly terminated, or a function returns a type not permitted in older versions of C.” Please note that GCC itself is not testable; it is given as an example of a compiler with static testing capabilities. [3]
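As a sketch of the static approach, the following passively inspects source text, without executing it, for calls to Python's eval() function; real static analysis tools check far more insecure patterns, but the principle of analyzing code at rest is the same:

```python
# Sketch of static analysis: parse source into an abstract syntax tree and
# flag calls to eval(), a commonly discouraged function, without ever
# running the code under review.
import ast

source = """
user_input = input()
result = eval(user_input)
"""

tree = ast.parse(source)
findings = [
    node.lineno
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "eval"
]
print("eval() calls found on lines:", findings)
```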
Dynamic testing tests the code while executing it. With dynamic testing, security checks are performed while actually running or executing the code or application under review.
Both approaches are appropriate and complement each other. Static analysis tools might uncover flaws in code that have not even yet been fully implemented in a way that would expose the flaw to dynamic testing. However, dynamic analysis might uncover flaws that exist in the particular implementation and interaction of code that static analysis missed.
White box software testing gives the tester access to program source code, data structures, variables, etc. Black box testing gives the tester no internal details: the software is treated as a black box that receives inputs.

Traceability Matrix

A Traceability Matrix (sometimes called a Requirements Traceability Matrix, or RTM) can be used to map customers’ requirements to the software testing plan: it “traces” the “requirements,” and ensures that they are being met. It does this by mapping customer use cases to test cases. Figure 7.1 shows a sample Requirements Traceability Matrix.
image
Figure 7.1 Sample Requirements Traceability Matrix [4]
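At its simplest, a traceability matrix can be modeled as a mapping from requirements to the test cases that cover them; the requirement and test-case names below are hypothetical:

```python
# Sketch: a Requirements Traceability Matrix as a mapping from customer
# requirements to covering test cases. An empty list reveals a coverage gap.
rtm = {
    "REQ-1 user can log in":      ["TC-01", "TC-02"],
    "REQ-2 session times out":    ["TC-03"],
    "REQ-3 passwords are hashed": [],  # gap: no test case yet
}

uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements lacking test coverage:", uncovered)
```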

Synthetic Transactions

Synthetic transactions, also called synthetic monitoring, involve building scripts or tools that simulate activities normally performed in an application. The typical goal of using synthetic transactions/monitoring is to establish expected norms for the performance of these transactions. These synthetic transactions can be automated to run on a periodic basis to ensure the application is still performing as expected. These types of transactions can also be useful for testing application updates prior to deployment to ensure the functionality and performance will not be negatively impacted. This type of testing or monitoring is most commonly associated with custom developed web applications.
The Microsoft TechNet article Monitoring by Using Synthetic Transactions describes synthetic transactions: “For example, for a Web site, you can create a synthetic transaction that performs the actions of a customer connecting to the site and browsing through its pages. For databases, you can create transactions that connect to the database. You can then schedule these actions to occur at regular intervals to see how the database or Web site reacts and to see whether your monitoring settings, such as alerts and notifications, also react as expected.” [5]
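A minimal synthetic-transaction sketch might time a scripted application action against an expected norm. Here check_balance() is a hypothetical stand-in for a real scripted sequence of application requests:

```python
# Sketch: a synthetic transaction run on a schedule and compared to an
# expected performance norm.
import time

EXPECTED_MAX_SECONDS = 2.0  # hypothetical norm established from prior runs

def check_balance():
    # In practice this would drive the real application (log in, browse,
    # query a balance); here it just simulates a small amount of work.
    time.sleep(0.01)
    return 1042.17

start = time.monotonic()
balance = check_balance()
elapsed = time.monotonic() - start

assert balance is not None, "transaction failed outright"
if elapsed > EXPECTED_MAX_SECONDS:
    print("ALERT: transaction slower than expected norm")
else:
    print(f"OK: completed in {elapsed:.3f}s")
```

Scheduling such a script to run periodically (and alerting on failures or slow runs) gives early warning that the application has degraded.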

Software Testing Levels

It is usually helpful to approach the challenge of testing software from multiple angles, addressing various testing levels, from low to high. The software testing levels of Unit Testing, Installation Testing, Integration Testing, Regression Testing, and Acceptance Testing are designed to accomplish that goal:
Unit Testing: Low-level tests of software components, such as functions, procedures or objects
Installation Testing: Testing software as it is installed and first operated
Integration Testing: Testing multiple software components as they are combined into a working system. Subsets may be tested, or Big Bang integration testing tests all integrated software components
Regression Testing: Testing software after updates, modifications, or patches
Acceptance Testing: testing to ensure the software meets the customer’s operational requirements. When this testing is done directly by the customer, it is called User Acceptance Testing.
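As a sketch of the lowest level, a unit test exercises a single component in isolation. The function mask_card() below is a hypothetical unit under test, exercised with Python's built-in unittest module:

```python
# Sketch: unit testing one function in isolation, independent of the rest
# of the system it would eventually be integrated into.
import unittest

def mask_card(number: str) -> str:
    """Show only the last four digits of a card number."""
    return "*" * (len(number) - 4) + number[-4:]

class MaskCardTest(unittest.TestCase):
    def test_masks_all_but_last_four(self):
        self.assertEqual(mask_card("4111111111111111"), "************1111")

    def test_short_input_unchanged(self):
        self.assertEqual(mask_card("1234"), "1234")

# exit=False lets the test run complete without terminating the interpreter.
unittest.main(argv=["example"], exit=False)
```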

Fuzzing

Fuzzing (also called fuzz testing) is a type of black box testing that submits random, malformed data as inputs into software programs to determine if they will crash. A program that crashes when receiving malformed or unexpected input is likely to suffer from a boundary checking issue, and may be vulnerable to a buffer overflow attack.
Fuzzing is typically automated, repeatedly presenting random input strings as command line switches, environment variables, and program inputs. Any program that crashes or hangs has failed the fuzz test.
Fuzzing can be considered a particular type of dynamic testing. Fuzzers are simply used to automate providing input to the application. Many people commonly associate fuzzers specifically with uncovering simple buffer overflow conditions. However, advanced and custom fuzzers will do more than simply provide tremendous volume of input to an application. Fuzzers can and have been used to uncover much more complex flaws than the traditional buffer overflow flaws.
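A minimal fuzzer can be sketched as a loop that feeds random bytes to a target and records any crash. Here parse_record() is a deliberately buggy, hypothetical parser that trusts an attacker-controlled length field:

```python
# Sketch: fuzzing a parser with random, malformed inputs and recording
# every input that makes it crash (raise an exception).
import random

def parse_record(data: bytes) -> int:
    # Buggy parser: trusts the first byte as a length field, with no
    # bounds checking against the actual size of the input.
    length = data[0]
    checksum = 0
    for i in range(length):
        checksum += data[1 + i]  # IndexError when the length field lies
    return checksum

random.seed(0)  # deterministic for the example
crashes = []
for _ in range(200):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        parse_record(blob)
    except Exception as exc:
        crashes.append((blob, type(exc).__name__))

print(f"{len(crashes)} crashing inputs found")
```

Each crashing input is evidence of a missing bounds check; a real fuzzer would save these inputs for the developers to reproduce and fix.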

Combinatorial Software Testing

Combinatorial software testing is a black-box testing method that seeks to identify and test all unique combinations of software inputs. An example of combinatorial software testing is pairwise testing (also called all pairs testing).
NIST gives the following example of pairwise testing (see: http://csrc.nist.gov/groups/SNS/acts/documents/kuhn-kacker-lei-hunter09.pdf), “Suppose we want to demonstrate that a new software application works correctly on PCs that use the Windows or Linux operating systems, Intel or AMD processors, and the IPv4 or IPv6 protocols. This is a total of 2 × 2 × 2 = 8 possibilities but, as (Table 7.1) shows, only four tests are required to test every component interacting with every other component at least once. In this most basic combinatorial method, known as pairwise testing, at least one of the four tests covers all possible pairs (t = 2) of values among the three parameters.” [6]

Table 7.1

NIST Pairwise Testing Example [7]

image
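The NIST example can be checked mechanically: the sketch below verifies that four tests cover every pair of values across the three two-valued parameters, even though only half of the eight possible combinations are run:

```python
# Sketch: verifying pairwise (all-pairs) coverage for the NIST example of
# OS, CPU, and IP protocol, each with two possible values.
from itertools import combinations

# One possible set of four pairwise tests.
tests = [
    ("Windows", "Intel", "IPv4"),
    ("Windows", "AMD",   "IPv6"),
    ("Linux",   "Intel", "IPv6"),
    ("Linux",   "AMD",   "IPv4"),
]

# Every (parameter-position, value) pair actually exercised by the tests.
covered = set()
for t in tests:
    for (i, a), (j, b) in combinations(enumerate(t), 2):
        covered.add(((i, a), (j, b)))

# Every pair that must be exercised.
params = [("Windows", "Linux"), ("Intel", "AMD"), ("IPv4", "IPv6")]
required = set()
for i, j in combinations(range(3), 2):
    for a in params[i]:
        for b in params[j]:
            required.add(((i, a), (j, b)))

print("All pairs covered:", required <= covered)
```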

Misuse Case Testing

Software design has historically focused on developing code to provide desired or required functionality. While security requirements might well be defined for an application in development, they are rarely required to achieve the desired goals for the application’s design.
Use cases for applications spell out how various functionality is going to be leveraged within an application. Formal use cases are typically built as a flow diagram, written in UML (Unified Modeling Language), and are created to help model expected behavior and functionality.
The idea of misuse case testing is to formally model, again most likely using UML, how security impact could be realized by an adversary abusing the application. This can be seen simply as a different type of use case, but the reason for calling out misuse case testing specifically is to highlight the general lack of considering attacks against the application.
A more formal and commonly recognized way to consider negative security outcomes in software development is threat modeling. Threat modeling has become significantly more prominent in recent years given Microsoft’s highlighting its importance in their Security Development Lifecycle (SDL).

Test Coverage Analysis

Test or code coverage analysis attempts to identify the degree to which code testing applies to the entire application. The goal is to ensure there are no significant gaps where a lack of testing could allow for bugs or security issues to be present that otherwise should have been discovered.
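The idea can be sketched with hand-instrumented branch tracking; real coverage tools automate this, but the principle of surfacing untested paths is the same (the function and branch names below are hypothetical):

```python
# Sketch: each branch records itself in a set when executed; after the test
# suite runs, unvisited branches reveal the coverage gaps.
visited = set()

def classify(age):
    if age < 0:
        visited.add("negative")
        return "invalid"
    elif age < 18:
        visited.add("minor")
        return "minor"
    else:
        visited.add("adult")
        return "adult"

# A test suite that forgets the negative-input case.
classify(30)
classify(10)

all_branches = {"negative", "minor", "adult"}
missed = all_branches - visited
print("Untested branches:", missed)
```

The missed branch is exactly the kind of gap where a bug or security issue could hide undetected.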

Interface Testing

Traditional interface testing within applications is primarily concerned with appropriate functionality being exposed across all the ways users can interact with the application. From a security-oriented vantage point, the goal is to ensure that security is uniformly applied across the various interfaces. Effectively, this type of testing considers varied potential attack vectors an adversary could leverage.
A simplified example of this might be a web application that uses Adobe Flash when a client presents with that capability, but will present an alternative view to clients that lack support for Adobe Flash. If testing was only performed with a desktop browser that had Flash support built-in, then security flaws that are present in the mobile version of the application presented to iPhones might well be missed. While interface testing encompasses more than just desktop vs. mobile browser, the concept still applies. An application’s security requirements must be implemented regardless of how a person or machine is interfacing with the code.

Analyze and Report Test Outputs

Accumulating vast quantities of security test results is easy; actually improving security based on those results is much more difficult. Many organizations, for example, perform vulnerability scans on an almost continuous basis, but simply producing scan reports does nothing to improve the situation. Producing the security testing data is a necessary first step, but is not sufficient on its own to improve future test results.
The volume of data to be analyzed is likely staggering, but an approach should be employed to prioritize reviewing and acting on some results before others. As with many things in security, the approach to triage should be informed by an understanding of risk. Imagine the exact same flaw or vulnerability existed on every system in an organization. Would the risk associated with each vulnerability be the same? No, of course not. Even though the exact same flaw exists the risk could be drastically different based upon, for example, the criticality of the system or data, and the likelihood of an adversary being able to exploit each particular manifestation of the flaw.
The organization should already have significant data that speaks to confidentiality, integrity, and availability concerns for business assets. This data should be used to inform the analysis of security testing output. Depending upon how easily consumable the risk data is, some basic prioritization and analysis might be able to be automated. Certainly other data will require manual review, at least initially, but to the extent possible should be documented in a way that helps better automate future test data review.

Summary of Exam Objectives

In this domain we have learned about various methods to test real-world security of an organization, including vulnerability scanning, penetration testing, security assessments, and audits. Vulnerability scanning determines one half of the “Risk = Threat × Vulnerability” equation. Penetration tests seek to match those vulnerabilities with threats, to demonstrate real-world risk. Assessments provide a broader view of the security picture, and audits demonstrate compliance with a published specification, such as PCI-DSS.
We discussed testing code security, including static methods such as source code analysis, walkthroughs, syntax checking, and use of secure compilers. We discussed dynamic methods used on running code, including fuzzing and various forms of black box testing. We also discussed Synthetic transactions, which attempt to emulate real-world uses of an application through the use of scripts or tools that simulate activities normally performed in an application.

Self Test

Note

Please see the Self Test Appendix for explanations of all correct and incorrect answers.
1. Which software testing level tests software after updates, modifications, or patches?
A. Acceptance testing
B. Integration testing
C. Regression testing
D. Unit testing
2. What is a type of testing that submits random malformed data as inputs into software programs to determine if they will crash?
A. Black box testing
B. Combinatorial testing
C. Fuzzing
D. Pairwise testing
3. What type of software testing tests code passively?
A. Black box testing
B. Dynamic testing
C. Static testing
D. White box testing
4. What type of penetration test begins with no external or trusted information, and begins the attack with public information only?
A. Full knowledge
B. Partial knowledge
C. Grey box
D. Zero knowledge
5. What type of assessment would best demonstrate an organization's compliance with PCI-DSS (Payment Card Industry Data Security Standard)?
A. Audit
B. Penetration test
C. Security assessment
D. Vulnerability assessment
6. What type of test provides internal information to the penetration tester, including network diagrams, policies and procedures, and sometimes reports from previous penetration testers?
A. Full knowledge
B. Partial knowledge
C. Grey box
D. Zero knowledge
7. What can be used to ensure software meets the customer’s operational requirements?
A. Integration testing
B. Installation testing
C. Acceptance testing
D. Unit testing
8. What term describes a no-tech or low-tech method that uses the human mind to bypass security controls?
A. Fuzzing
B. Social engineering
C. War dialing
D. Zero-knowledge test
9. What term describes a black-box testing method that seeks to identify and test all unique combinations of software inputs?
A. Combinatorial software testing
B. Dynamic testing
C. Misuse case testing
D. Static Testing
10. What term describes a holistic approach for determining the effectiveness of access control, and has a broad scope?
A. Security assessment
B. Security audit
C. Penetration test
D. Vulnerability assessment
Use the following scenario to answer questions 11 through 14:
You are the CISO of a large bank and have hired a company to provide an overall security assessment, and also provide a penetration test of your organization. Your goal is to determine overall information security effectiveness. You are specifically interested in determining if theft of financial data is possible.
Your bank has recently deployed a custom-developed three-tier web application that allows customers to check balances, make transfers, and deposit checks by taking a photo with their smartphone and then uploading the check image. In addition to a traditional browser interface, your company has developed a smartphone app for both Apple iOS and Android devices.
The contract has been signed, and both scope and rules of engagement have been agreed upon. A 24/7 operational IT contact at the bank has been made available in case of any unexpected developments during the penetration test, including potential accidental disruption of services.
11. Assuming the penetration test is successful: what is the best way for the penetration testing firm to demonstrate the risk of theft of financial data?
A. Instruct the penetration testing team to conduct a thorough vulnerability assessment of the server containing financial data
B. Instruct the penetration testing team to download financial data, redact it, and report accordingly
C. Instruct the penetration testing team that they may only download financial data via an encrypted and authenticated channel
D. Place a harmless ‘flag’ file in the same location as the financial data, and inform the penetration testing team to download the flag
12. What type of penetration test will result in the most efficient use of time and hourly consultant expenses?
A. Automated knowledge
B. Full knowledge
C. Partial Knowledge
D. Zero Knowledge
13. You would like to have the security firm test the new web application, but have decided not to share the underlying source code. What type of test could be used to help determine the security of the custom web application?
A. Secure compiler warnings
B. Fuzzing
C. Static testing
D. White box testing
14. During the course of the penetration test, the testers discover signs of an active compromise of the new custom-developed three-tier web application. What is their best course of action?
A. Attempt to contain and eradicate the malicious activity
B. Continue the test
C. Quietly end the test, immediately call the operational IT contact, and escalate the issue
D. Shut the server down
15. Drag and drop: Which of the following statements about Syslog are true? Drag and drop all correct answers from left to right.
image
Figure 7.2 Drag and Drop

Self Test Quick Answer Key

1. C
2. C
3. C
4. D
5. A
6. A
7. C
8. B
9. A
10. A
11. D
12. B
13. B
14. C
15.
image
Figure 7.3 Drag and Drop – Answer