A3 MOCK CTFL EXAMINATION COMMENTARY

Question 1

This is a K1 question relating to Learning Objective FL-1.5.1 – Identify the psychological factors that influence the success of testing.

Options a, b and d all help to build a ‘them and us’ culture between testers and developers, whereas option c is explicitly mentioned in the syllabus as helping to lead to successful testing. Option c is therefore the required answer.

Question 2

This is a K1 question relating to the keywords from Chapter 4.

Option a is about creating tables to facilitate testing but is not related to testing of decision tables.

Option c is a different way of expressing the ideas in option a and is not about decision table testing.

Option d is about a white-box technique for testing conditions in code or other logic.

Decision table testing is a black-box technique related to the testing of decision tables as expressed in option b, which is taken from the glossary.

Question 3

This is a K1 question relating to Learning Objective FL-2.3.2 – Recognize that functional, non-functional, and white-box tests occur at any test level.

The correct answer is c, as confirmed by the glossary.

Option a is incorrect because some types of non-functional testing, such as performance testing, require specialist skills, but not all do.

Option b is also incorrect, because whilst some non-functional testing can be carried out by developers, such as looking for memory leaks, non-functional testing should also be carried out at the higher test levels.

Option d is incorrect because the coverage can be measured against the targets set.

Question 4

This is a K1 question relating to Learning Objective FL-3.2.2 – Recognize the different roles and responsibilities in a formal review.

Option b incorrectly allocates the task of assigning staff to the review leader (this is a management function).

Option c incorrectly suggests that reviewers must be subject matter experts, but reviewers can be drawn from any stakeholders as well as other specialists.

Option d incorrectly suggests that moderators decide who will be involved and the time/place of a review (these are the review leader’s responsibilities).

Option a correctly identifies the responsibilities of management as defined in the syllabus (section 3.2.2).

Question 5

This is a K2 question relating to Learning Objective FL-2.2.1 – Compare the different test levels from the perspective of objectives, test basis, test objects, typical defects and failures, and approaches and responsibilities.

Option b – a database module – is incorrect; a database module could be a test object for component testing, but not a test basis.

Option c – an interface definition – is incorrect; this could be a test basis for integration testing.

Option d – a business process – is incorrect; this could be a test basis for acceptance testing.

Option a – a detailed design – is a suitable test basis for component testing.

Question 6

This is a K2 question relating to Learning Objective FL-1.2.1 – Give examples of why testing is necessary.

Testing cannot ensure that the right version of software is delivered – that is the task of configuration management – so option a is incorrect.

Testing, in itself, does not improve quality (option c). Testing can identify defects, but it is in the fixing of defects that quality is actually improved.

Testing cannot show that software is error free – it can only show the presence of defects (this is one of the seven testing principles), which rules out option d.

Testing can be used to assess quality; for example, by measuring defect density or reliability, so option b is correct.

Question 7

This is a K2 question relating to Learning Objective FL-3.1.2 – Use examples to describe the value of static testing.

Option a is a straightforward example of project delay; there is no indication of whether or not static testing was employed, so the value of static testing cannot be determined.

Option c indicates that test specifications were rigorously reviewed, but this did not prevent project overrun, so does not indicate the value of static testing.

Option d does not, in itself, demonstrate the value of static testing because it identifies a situation in which delay and overspend occurred, whether or not static testing was deployed.

Option b is the best option because it identifies a situation in which, although delay occurred early in the life cycle because static testing was deployed, the project still completed on time and on budget. This suggests that the use of static techniques may have made the development phase more efficient and effective.

Question 8

This is a K1 question relating to Learning Objective FL-5.2.5 – Identify factors that influence the effort related to testing.

Option b is incorrect because the overall effort required is not related to the number of testers. It may take longer with fewer testers, but the overall effort will be the same.

Option c is incorrect because, although testing cannot proceed without test environments, this does not affect the effort required to complete the testing.

Option d is incorrect because the effort required is not affected by the cost of any tools, though the testing budget may be.

Option a is correct because it will determine how much testing is needed at each stage and how many stages there will be.

Question 9

This is a K1 question relating to Learning Objective FL-5.3.1 – Recall metrics used for testing.

Option a is not a measure of testing, just a head count.

Option b is a count of how many test cases were written; it does not count test cases actually used in the tests.

Option d is a measure of the size of the test basis but not a metric of testing.

Option c is correct; it measures the total number of tests prepared and identifies how many were actually run and how many were not run.

Question 10

This is a K1 question relating to Learning Objective FL-6.2.1 – Identify the main principles for selecting a tool.

Options a, c and d are correct; all are taken directly from the syllabus, section 6.2.1.

Option b is drawn from syllabus section 6.2.2 and is about the use of pilot projects for introducing a tool into an organisation. Tool evaluation can only be achieved by using the tool in the organisation, so could not be part of the initial selection.

Question 11

This is a K2 question relating to Learning Objective FL-1.2.3 – Distinguish between error, defect, and failure.

Option b is incorrect because problems occur in software and software projects even when the work is undertaken by professionals; this does not in itself imply unprofessional behaviour. Option b really describes an attitude rather than what was done, and we have no way of knowing anything about the attitude of those working on this website.

An error is the underlying cause (perhaps the person writing the program specification misunderstood what was written in the requirements document) of a failure but not the failure itself, so option c is incorrect.

A defect may be the cause of a failure (e.g. perhaps the developer used ‘>’ rather than ‘>=’ in a condition in the code) but is not the actual failure, so option d is incorrect.

Incorrect tickets being issued is an observed consequence, which is the failure itself – so option a is the correct answer.
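
The chain from error to defect to failure can be illustrated with a short sketch (the discount rule and code below are invented for illustration and are not taken from the question):

```python
# Hypothetical sketch of the error -> defect -> failure chain.
# Assumed rule: passengers aged 60 or over receive a discounted fare.

def ticket_price(age: int) -> float:
    # Defect: the developer's error (using '>' where '>=' was needed)
    # has introduced a faulty condition into the code.
    if age > 60:          # should be: age >= 60
        return 5.00       # discounted fare
    return 10.00          # full fare

# Failure: the observed wrong behaviour when the defect is executed -
# a 60-year-old is charged the full fare instead of the discount.
print(ticket_price(60))   # prints 10.0
```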

Question 12

This is a K2 question relating to Learning Objective FL-4.2.5 – Explain how to derive test cases from a use case.

Options a, c and d are all derived from the syllabus, section 4.2.5, and accurately describe how some aspect of use case testing is carried out.

Behaviours that have not been defined could not be systematically tested by use case testing, so option b is the correct answer because it does not describe how to derive test cases from a use case.

Question 13

This is a K2 question relating to Learning Objective FL-2.3.1 – Compare functional, non-functional, and white-box testing.

Option a is incorrect because code coverage is normally measured in white-box testing, not when carrying out functional testing.

Option c is incorrect because white-box testing focuses on program behaviour, not system behaviour.

Option d is incorrect because functional testing should be done at all levels, not just at system and acceptance testing.

Option b is correct because non-functional testing can, and usually does, make use of black-box techniques.

Question 14

This is a K2 question relating to Learning Objective FL-5.2.1 – Summarize the purpose and content of a test plan.

Options b, c and d are all directly lifted from the syllabus, section 5.2.1.

Option a incorrectly states that a master test plan must be completed before a project starts. This explicitly contradicts the correct statement in option b. A master test plan may be used, with separate test plans for different test levels or test types, but since these can be updated throughout the project the master test plan must also change.

Question 15

This is a K2 question relating to Learning Objective FL-4.3.2 – Explain decision coverage.

Option a is incorrect because it measures only the number of white-box tests executed and not the coverage achieved.

Option c is incorrect because it calculates the number of decision outcomes achieved per test rather than the coverage achieved.

Option d is incorrect because it measures the inverse of decision coverage.

Option b correctly defines decision coverage in line with section 4.3.2 of the syllabus.
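
As a worked illustration of the definition (the code is a minimal invented example, not taken from the question): decision coverage is the number of decision outcomes exercised by the tests, divided by the total number of decision outcomes, expressed as a percentage.

```python
# A function with one decision has two decision outcomes:
# the condition evaluating to True and to False.

def classify(x: int) -> str:
    if x > 0:           # decision with two outcomes: True, False
        return "positive"
    return "non-positive"

# A single test exercising only the True outcome achieves
# 1/2 = 50% decision coverage; adding a test for the False
# outcome raises this to 2/2 = 100%.
classify(5)    # True outcome exercised  -> 50% so far
classify(-5)   # False outcome exercised -> 100% achieved
```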

Question 16

This is a K2 question relating to Learning Objective FL-4.4.1 – Explain error guessing.

Option b is incorrect because, while such lists may be of value in avoiding defects, they are not used in error guessing.

Option c is incorrect because, while error guessing may be based partly on an individual tester’s experience, the technique utilises other sources, such as failures that have occurred in other applications.

Option d is incorrect because this is an example of retesting, not error guessing.

Option a is correct and reflects the syllabus, section 4.4.1.

Question 17

This is a K2 question relating to Learning Objective FL-5.2.3 – Give examples of potential entry and exit criteria.

Option a is a project management issue and not an entry criterion for a testing phase.

Option c is not necessarily relevant, since some earlier testing phases may not have delivered components or sub-systems for the part of the system about to be tested.

Option d is incorrect because not all defects reported in previous phases may be relevant, and clearance of defects that are relevant should be addressed by exit criteria from previous phases.

Option b is correct because testing cannot begin until a test environment is available.

Question 18

This is a K2 question relating to Learning Objective FL-3.2.1 – Summarize the activities of the work product review process.

Option a incorporates two planning activities, omits the initiation phase, includes the reviewing phase and swaps the issue communication and analysis phase with the fixing and reporting phase.

Option b also includes two items from the planning phase and one correct item from the initiation phase, but then omits the individual review phase before moving on to issue communication and analysis and fixing defects phases.

Option d correctly includes items from the planning phase, the initiation phase and the individual review phase, but then moves straight to fixing defects before the details of defects have been communicated (part of ‘issue communication and analysis’).

Option c is correct because it includes one item from each phase in the correct sequence.

Question 19

This is a K1 question relating to the keyword ‘test oracle’.

Option a identifies a test expert or test guru, which is not the same thing as a test oracle, so option a is incorrect.

Option c encapsulates the process of test analysis, so option c is incorrect.

Option d is a test estimation method, so option d is incorrect.

Option b is clearly defined in the syllabus and is the correct answer.

Question 20

This is a K2 question relating to Learning Objective FL-4.4.2 – Explain exploratory testing.

Option a is incorrect because exploratory testing does not use predefined tests.

Option b is incorrect because exploratory testing is not associated with model-based test strategies but may sometimes be associated with reactive test strategies.

Option d is incorrect because, while exploratory testing may sometimes use session-based testing, this is not characteristic of exploratory testing. When session-based testing is used, this is to structure the testing activity rather than to ensure that tests are documented.

Option c is correct and corresponds to a statement in section 4.4.2 of the syllabus.

Question 21

This is a K2 question relating to Learning Objective FL-1.4.2 – Describe the test activities and respective tasks within the test process.

All of the options list activities from one of the groups of testing activities, so the task is to identify activities for test execution.

Option a is incorrect because it lists test-planning activities.

Option b is incorrect because it lists test implementation activities.

Option d is incorrect because it lists test analysis activities.

Option c is correct because it lists test execution activities.

Beware of option b – the activities listed are preparation for test execution, rather than the running of the tests themselves.

Question 22

This is a K2 question relating to Learning Objective FL-5.2.6 – Explain the difference between two estimation techniques: the metrics-based technique and the expert-based technique.

Option a is incorrect because a mathematical equation is not necessarily based on data from previous projects, as the metrics-based approach requires; nor does the expert-based approach necessarily use data from previous projects.

Option b is incorrect because the expert-based approach requires an expert or at least an experienced tester.

Option c is incorrect because it bases metrics on previous estimates rather than on data about what actually happened.

Option d is correct as defined in the syllabus, section 5.2.6.

Question 23

This is a K2 question relating to Learning Objective FL-4.4.3 – Explain checklist-based testing.

Option a is incorrect because checklists are high-level lists and some variability in testing is likely to occur.

Option b is incorrect because checklists can be used to support a wide variety of test types.

Option c is incorrect because testers may create new checklists, expand existing checklists or use an existing checklist.

Option d is correct as defined in section 4.4.3 of the syllabus.

Question 24

This is a K2 question relating to Learning Objective FL-6.1.1 – Classify test tools according to their purpose and the test activities they support.

Option a is incorrect because model-based testing tools are categorised as tools for test design and implementation and they are suitable for use by testers.

Option b is incorrect because test-driven development tools are best suited to developers but are categorised as tool support for test design and implementation.

Option c is incorrect because test data preparation tools are categorised as tool support for test design and implementation and are suitable for use by testers.

Option d is the correct answer because coverage tools are classified as tool support for test execution and logging and are most suitable for developers.

Question 25

This is a K2 question relating to Learning Objective FL-5.4.1 – Summarize how configuration management supports testing.

Option a is correct but incomplete: it mentions the testware and the system components only separately, with no mention of the relationships between them.

Option c is correct but incomplete because it relates only to documentation.

Option d is incorrect because it relates only to test items.

Option b correctly refers to managing the system components, the testware and the relationship between them.

Question 26

This is a K3 question relating to Learning Objective FL-4.2.1 – Apply equivalence partitioning to derive test cases from given requirements.

The required partitions are:

> £10,000

£8,001–£10,000

£5,001–£8,000

£3,001–£5,000

< £3,001

There are five partitions, so option b is correct.
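
A short sketch shows how one representative value might be drawn from each partition (the chosen values are illustrative; any value within a partition is assumed to be processed in the same way):

```python
# One representative test value per equivalence partition.
partitions = {
    "over £10,000":      15_000,
    "£8,001 - £10,000":   9_000,
    "£5,001 - £8,000":    6_500,
    "£3,001 - £5,000":    4_000,
    "£3,000 and below":   1_500,
}

for name, value in partitions.items():
    print(f"Test the '{name}' partition with £{value:,}")
```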

Question 27

This is a K2 question relating to Learning Objective FL-1.4.4 – Explain the value of maintaining traceability between the test basis and test work products.

Option a is incorrect. This could help in clarifying whether all requirements are covered by one or more test cases, but the requirements could be included in a test case that was not actually run.

Option c is incorrect because traceability has no direct bearing on whether the project will be delivered on time, and it is not something that is stated (in the syllabus or elsewhere) as a ‘benefit’ of traceability.

Option d is incorrect and inappropriate in that it implies a ‘blame culture’ rather than one where cooperation and product quality are to the forefront.

Option b is correct and is specifically mentioned in section 1.4.4 of the syllabus.

Question 28

This is a K3 question relating to Learning Objective FL-4.2.2 – Apply boundary value analysis to derive test cases from given requirements.

Option a is incorrect because it starts each time zone on the boundary rather than just under, but ends each time zone correctly.

Option b is incorrect because it starts each time zone correctly but does not check the end of each time zone correctly.

Option c is incorrect because it starts each time zone on the boundary rather than before the boundary and ends each time zone too early.

Option d is correct because it correctly tests the values just before and on the lower boundaries, and on and just over the higher boundaries.
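
The two-value boundary approach described here can be sketched as follows (the 20–29 range is invented for illustration; the question’s actual time zones are not reproduced):

```python
# Hypothetical inclusive range 20-29 standing in for one time zone.
LOWER, UPPER = 20, 29

boundary_values = [
    LOWER - 1,  # just below the lower boundary (invalid)
    LOWER,      # on the lower boundary (valid)
    UPPER,      # on the upper boundary (valid)
    UPPER + 1,  # just above the upper boundary (invalid)
]

def in_range(x: int) -> bool:
    return LOWER <= x <= UPPER

for v in boundary_values:
    print(v, "->", "valid" if in_range(v) else "invalid")
```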

Question 29

This is a K2 question relating to Learning Objective FL-3.2.5 – Explain the factors that contribute to a successful review.

Option a is incorrect: it is always important for users to review requirements documents; the presence of designers, developers and testers may help with any technical terms.

Option c is incorrect: an inspection is not likely to be appropriate for this document and one key success factor is that an appropriate review type is applied. Metrics are not important at this stage, but removal of ambiguity, clarity of expression and understandability for users are vital.

Option d is incorrect: reviews should always be scheduled with adequate notice and time for participants to prepare.

Option b is the correct answer (in line with section 3.2.5 in the syllabus): it will allow reviews to begin earlier and will provide earlier feedback to authors to enable improvements to be made continually.

Question 30

This is a K2 question relating to Learning Objective FL-2.4.1 – Summarize triggers for maintenance testing.

Option a is incorrect because a new feature required for an iteration implies development rather than maintenance.

Option b is incorrect because it again implies development rather than maintenance.

Option c is incorrect for the same reason.

Option d is correct because data migration is a typical maintenance activity.

Question 31

This is a K2 question relating to Learning Objective FL-5.5.3 – Describe, by using examples, how product risk analysis may influence the thoroughness and scope of testing.

Options b, c and d are incorrect because they are all related to managing the project.

Option a specifically relates to the testing activities and how these need to be driven by risk levels.

Question 32

This is a K3 question relating to Learning Objective FL-4.2.2 – Apply boundary value analysis to derive test cases from given requirements.

Option a is incorrect because it wrongly includes tests for 0 and 1 and also identifies incorrect boundary values for the partitions.

Option c is incorrect because it tests the lower boundaries incorrectly (though it tests the upper boundaries correctly).

Option d incorrectly uses three values at each boundary.

Option b is correct, testing below and on the lower boundaries and on and above the upper boundaries.

Question 33

This is a K3 question relating to Learning Objective FL-5.2.4 – Apply knowledge of prioritization, and technical and logical dependencies, to schedule test execution for a given set of test cases.

Option a was the original schedule, but this has now been altered, so option a is incorrect.

Option c places test cases 5, 3 and 2 in the correct sequence but test case 6, which is high priority, is relegated to fifth place, so option c is incorrect.

Option d places 6 in second position but does not make test case 3 dependent on test case 5, so option d is incorrect.

Option b is the correct answer because it correctly sequences 5, 3 and 2 and places test case 6 ahead of this trio.
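
The underlying scheduling idea can be sketched as follows (the relative priorities and the dependency of test case 3 on test case 5 follow the commentary above; the remaining values are assumed for illustration): always run the highest-priority test case whose dependencies have already been satisfied.

```python
# Assumed priorities: 1 = highest. Test case 6 is high priority;
# test case 3 depends on test case 5, and (assumed here) test
# case 2 depends on test case 3.
priority = {6: 1, 5: 2, 3: 2, 2: 3}
depends_on = {3: [5], 2: [3]}

schedule = []
remaining = set(priority)
while remaining:
    # Test cases whose dependencies have all been scheduled.
    ready = [t for t in remaining
             if all(d in schedule for d in depends_on.get(t, []))]
    nxt = min(ready, key=lambda t: priority[t])   # highest priority first
    schedule.append(nxt)
    remaining.remove(nxt)

print(schedule)   # [6, 5, 3, 2] - test case 6 ahead of the 5, 3, 2 chain
```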

Question 34

This is a K3 question relating to Learning Objective FL-4.2.4 – Apply state transition testing to derive test cases from given requirements.

Option b is incorrect because an invalid transition is represented (test case 6).

Option c is incorrect because all of the valid transitions are represented (test cases 1–5 correspond to the five valid transitions shown in the diagram).

Option d is incorrect because all valid transitions are represented, and one invalid transition is addressed in test case 6.

Option a is correct because the table correctly identifies the five valid transitions (test cases 1–5) and one invalid transition (test case 6).
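
The structure of such a test suite can be sketched with an invented state machine (the states and events below are hypothetical; the question’s actual diagram is not reproduced):

```python
# Hypothetical state machine: (state, event) -> next state.
# Any (state, event) pair absent from the table is an invalid
# transition, which a test case may deliberately attempt.
transitions = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
    ("paused",  "stop"):  "idle",
}

def next_state(state: str, event: str) -> str:
    if (state, event) not in transitions:
        raise ValueError(f"invalid transition: '{event}' in state '{state}'")
    return transitions[(state, event)]

# Five test cases would cover the five valid transitions; a sixth
# attempts an invalid one, mirroring the structure described above.
assert next_state("idle", "start") == "running"
try:
    next_state("idle", "stop")     # invalid: not in the table
except ValueError as e:
    print("rejected as expected:", e)
```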

Question 35

This is a K3 question relating to Learning Objective FL-5.6.1 – Write a defect report, covering defects found during testing.

Options a, c and d are all incorrect because, although they list valid fields on a defect report, omitting them would not prevent action being taken to resolve the problem.

Option b is the correct answer because the actual result is needed to enable corrective action to be correctly applied.

Question 36

This is a K3 question relating to Learning Objective FL-3.2.4 – Apply a review technique to a work product to find defects.

Option a is not the best answer. Checklists can help to focus attention on specific aspects of a system and to ensure that typical defect types are addressed, but this is not an ideal mechanism for addressing defects that will affect multiple stakeholders.

Option b is a better option, in that it provides specific scenarios for reviewers to consider, but it still does not provide the breadth of involvement required.

Option c is unlikely to be effective in that it provides little or no guidance to reviewers.

Option d is the best answer because it encourages individual reviewers to take on multiple stakeholder viewpoints. This makes the overall review more effective and could be modified to enable reviewers to cooperate in working through scenarios incorporated into the requirements document (and possibly also identify new scenarios to be considered).

Question 37

This is a K2 question relating to Learning Objective FL-2.4.2 – Describe the role of impact analysis in maintenance testing.

Option a is incorrect because out-of-date test cases are a hindrance to maintenance, but this is not the purpose of impact analysis.

Option c is incorrect because achieving maintainability is an issue for development, not maintenance.

Option d is incorrect; this might be part of a retrospective or post-implementation review but is not related to impact analysis.

Option b is correct because a decision should be based on the likely consequences of the change, which is one purpose of impact analysis. Impact analysis is also about determining what testing will be needed following a change.

Question 38

This is a K2 question relating to Learning Objective FL-1.5.2 – Explain the difference between the mindset required for test activities and the mindset required for development activities.

All of the choices are mentioned in section 1.5.2 of the syllabus – but not all in a positive light from the perspective of testers.

Two of the choices (i and ii) are listed as favourable for testers. Items iii and iv in the list are possible aspects of a developer’s mindset that are detrimental to a tester perspective. Item v is a positive component of a developer mindset that has no particular bearing on testing.

Option d is the correct answer because it is the only option that mentions items i and ii.

Question 39

This is a K3 question relating to Learning Objective FL-4.2.3 – Apply decision table testing to derive test cases from given requirements.

Option a is feasible and will apply to most applicants, so it is not the correct answer.

Options c and d both involve penalising for 3 points and for 6+ points, which is valid within the rules, so neither of these is the correct answer.

Option b is infeasible because it is not possible to incur 6 penalty points without also incurring 3 penalty points.

Option b is therefore the correct answer.
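
The infeasibility can be demonstrated with a short sketch that applies the domain constraint to every condition combination (the condition names are assumed for illustration):

```python
from itertools import product

# Domain constraint: an applicant cannot reach 6+ penalty points
# without having passed the 3-point threshold on the way.
def feasible(has_3_points: bool, has_6_points: bool) -> bool:
    return not (has_6_points and not has_3_points)

for three, six in product([True, False], repeat=2):
    status = "feasible" if feasible(three, six) else "INFEASIBLE"
    print(f"3 points={three}, 6+ points={six}: {status}")
# Only the combination (3 points=False, 6+ points=True) is infeasible.
```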

Be careful in questions like this one to note that the question asked for infeasible test cases. It is easy, especially under time pressure, to opt for the more usual expectation of identifying feasible test cases.

Question 40

This is a K2 question relating to Learning Objective FL-1.3.1 – Explain the seven testing principles.

Option a appears to reflect the defect clustering principle, but there is no reason to assume that defect clustering will be associated with the work of the most junior developer, so option a is incorrect.

Option b is an example of the ‘absence of errors’ fallacy – the fact that no errors have been found does not mean that code can be released into production – so option b is incorrect.

Option d is a direct statement of the ‘absence of errors’ fallacy, so option d is incorrect.

Option c is the best statement of the defect clustering principle – defects are often found in the same places, so testing should focus first on areas where defects are expected or where defect density is high. Option c is therefore the correct answer.
