Glossary

This glossary contains terminology related to software testing and test management, as well as other terms relevant to the topic of this book.

The first occurrence in the book of a term that is defined in this glossary is preceded by an arrow “→”. Terminology that has already been defined in »Software Testing Foundations« [Spillner 07] will not be repeated here. Refer to [URL: ISTQB Glossary] for an up-to-date version of the ISTQB glossary.

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610.12]

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610.12]

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044].

application expert: expert who, as a result of training or professional function, has comprehensive expertise in the application domain of the system under test.

assessment: see audit.

audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

(1) the form or content of the products to be produced

(2) the process by which the products shall be produced

(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610.12]

build: An operational version of a system or component that incorporates a specified subset of the capabilities that the final product will provide. [After IEEE 610.12]

CAST: Acronym for Computer Aided Software Testing. See also test automation.

configuration: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
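
A minimal arithmetic sketch of this definition; the counts below are hypothetical, chosen only for illustration:

    # Coverage as the percentage of exercised coverage items.
    exercised_items = 42   # coverage items hit by the test suite (hypothetical)
    total_items = 60       # all coverage items of the specified kind (hypothetical)
    coverage = 100.0 * exercised_items / total_items
    print(f"coverage: {coverage:.1f} %")   # -> coverage: 70.0 %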

coverage item: An entity or property used as a basis for test coverage, e.g., equivalence partitions or code statements.

defect analysis: part of debugging; tracing a failure back to its causal defect and perhaps even to the original error.

defect classification: practice of assigning defects to defect classes.

defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g., lines-of-code, number of classes or function points).
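
For illustration, defect density is often expressed per thousand lines of code (KLOC); the numbers below are hypothetical:

    # Defect density per KLOC.
    defects_found = 96        # defects identified in the component (hypothetical)
    lines_of_code = 48_000    # size of the component (hypothetical)
    defect_density = defects_found / (lines_of_code / 1000)
    print(f"defect density: {defect_density:.1f} defects/KLOC")  # -> 2.0 defects/KLOC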

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
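
A sketch of the DDP calculation with hypothetical counts:

    # DDP of a test phase (e.g., system testing).
    found_in_phase = 180    # defects found by the test phase (hypothetical)
    found_afterwards = 20   # defects found by any other means afterwards (hypothetical)
    ddp = 100.0 * found_in_phase / (found_in_phase + found_afterwards)
    print(f"DDP: {ddp:.0f} %")  # -> DDP: 90 %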

defect distribution: percentage share of incident reports of a particular nature in relation to the total number of all recorded reports (e.g., defects in a particular component).
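
A sketch of a defect distribution by affected component; the report data is hypothetical:

    # Percentage share of incident reports per component.
    from collections import Counter
    reports = ["GUI", "GUI", "database", "GUI", "interface", "database"]  # hypothetical
    for component, n in Counter(reports).items():
        print(f"{component}: {100.0 * n / len(reports):.0f} %")
    # -> GUI: 50 %, database: 33 %, interface: 17 %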

defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

defect management tool: A tool that facilitates the recording and status tracking of defects and changes. Such tools often have workflow-oriented facilities to track and control the allocation, correction, and re-testing of defects, and provide reporting facilities. See also incident management tool.

defect management system: see defect management tool.

defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

defect trend: progression of the number of incident reports or defects discovered by testing (e.g., number of newly created incident reports, number of corrected defects).

domain expert: see application expert.

equivalence class: See equivalence partition.

equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

failure: Deviation of the component or system from its expected delivery, service or result. [After Fenton]

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis that identifies possible modes of failure and attempts to prevent their occurrence.

failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g., failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610.12]
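
A sketch using failures per unit of operating time; the figures are hypothetical:

    # Failure rate as failures per hour of operation.
    failures = 3               # failures of the given category (hypothetical)
    operating_hours = 1500.0   # accumulated operating time (hypothetical)
    failure_rate = failures / operating_hours
    print(f"failure rate: {failure_rate:.4f} failures/hour")  # -> 0.0020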

fault days: number of days from the injection of a defect into the system until proof that a resulting failure has occurred.
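
A sketch of the fault days computation for a single defect; the dates are hypothetical:

    # Fault days: time span between defect injection and proven failure.
    from datetime import date
    injected = date(2024, 3, 1)    # defect introduced into the system (hypothetical)
    detected = date(2024, 4, 15)   # resulting failure proven (hypothetical)
    print(f"fault days: {(detected - injected).days}")  # -> fault days: 45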

feature: An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

handover test: A subset of all the tests of a test level that serves either as an acceptance test for the receiving test level (or from development) or as a → release test to the next test level (or release).

horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g., test plan, test design specification, test case specification, and test procedure specification or test script).

image: byte-identical copy of the contents of a hard drive, used for data backup or for saving a particular system state or system configuration.

imaging: creating a hard drive image (see image).

IT expert: expert who, as a result of training or professional function, has comprehensive expertise in the areas of computer science, information technology, telecommunications, etc.

measure: The number or category assigned to an attribute of an entity by making a measurement. [ISO 14598]

measurement: The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]

measurement scale: A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]

metric: A measurement scale and the method used for measurement. [ISO 14598]

migration testing: See conversion testing.

operational profile: expected distribution of the frequency with which product functions are used during operation.

PREview: early, upstream review in which testers participate in order to check documents with regard to their applicability to testing, specifically for the creation of test cases.

priority: The level of (business) importance assigned to an item, e.g., defect.

product risk: A risk directly related to the test object.

project: A unique set of coordinated and controlled activities with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost, and resources. [ISO 9000]

project risk: A risk related to management and control of the (test) project, e.g., lack of staffing, strict deadlines, changing requirements, etc.

quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610.12]

quality gate: milestone focusing on quality control.

quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]

quality policy: overall intentions and direction of an organization as regards quality.

release candidate: Build considered sufficiently stable and mature to be a candidate for (external) release. If this evaluation is confirmed by testing, the release candidate becomes a “real” release.

release plan: schedule (issued by product management) indicating at what time and in what frequency development is going to deliver builds and releases (either for testing or for external release).

release test: test subset of a test level; used to either pass the test object on to the next test level or to release it.

requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610.12]

requirements coverage: percentage share of all (system) requirements that are validated by at least one test case. See also coverage.
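
A sketch that derives requirements coverage from a traceability mapping; the requirement IDs and test case links are hypothetical:

    # Requirements coverage from a requirements-to-test-cases mapping.
    traceability = {
        "REQ-1": ["TC-01", "TC-02"],
        "REQ-2": ["TC-03"],
        "REQ-3": [],              # not yet validated by any test case
        "REQ-4": ["TC-04"],
    }
    covered = sum(1 for test_cases in traceability.values() if test_cases)
    print(f"requirements coverage: {100.0 * covered / len(traceability):.0f} %")
    # -> requirements coverage: 75 %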

requirements tracing: see traceability.

residual defect estimation: estimation of the total number of defects remaining in the system after code reviews, testing, and other quality assurance activities.

risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

risk-based testing: Testing oriented towards exploring and providing information about product risks.

risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

risk inventory: A project’s current and historical risk-related information including the risk management context, along with the chronological record of risks, priority ordering, risk-related measures, treatment status, contingency plans, and risk action requests.

risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

risk mitigation: See risk control.

risk priority number (RPN): used in FMEA to quantify defect opportunities by calculating the product of probability of occurrence, expected damage, and probability of detection.
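
A sketch of the RPN calculation, assuming the rating scales commonly used in FMEA (1 to 10 for each factor); the ratings below are hypothetical:

    # RPN = probability of occurrence * expected damage * probability of detection.
    occurrence = 7   # probability of occurrence rating (hypothetical)
    severity = 8     # expected damage rating (hypothetical)
    detection = 4    # detection rating; in FMEA a higher value usually
                     # means the defect is harder to detect (hypothetical)
    print(f"RPN: {occurrence * severity * detection}")  # -> RPN: 224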

risk profile: see risk inventory.

severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610.12]

stakeholder: the objective of system development is to satisfy the needs and requirements of various people, groups, institutions, or documents (e.g., legal documents). These needs and requirements can differ widely and may even contradict each other. All involved persons, groups, institutions, and documents are stakeholders.

standards: Mandatory requirements employed and enforced to prescribe a disciplined, uniform approach to software development; that is, mandatory conventions and practices are in fact standards. [After IEEE 610.12]

survival probability: The probability that no failure will occur within a specified time interval.
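
As a sketch, assuming the common reliability model with a constant failure rate λ (an assumption, not part of the definition above), the survival probability over a time interval of length t is

    R(t) = P(no failure in [0, t]) = e^(−λt)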

system configuration: See configuration.

system failure: See failure.

test activity: activity or part of an activity performed as part of the test process.

test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria, and the test types to be performed.

test center: institution, or organizational unit, which provides testing as an external or internal service for development projects.

test control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management.

test coverage: See coverage.

test design: See test design specification.

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

test design technique: Procedure used to derive and/or select test cases.

test framework: See test harness.

test handbook: See test policy.

test harness: A test environment comprised of stubs and drivers needed to execute a test.

test intensity: intensity with which a particular quality attribute is checked by a number of test cases. This may be determined quantitatively by coverage measurements or purely qualitatively by comparing different testing techniques (a business-process-based test combined with a thorough equivalence analysis has a significantly higher test intensity than one without it).

test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

test management tool: A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.

test manager: The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

test metric: A quantitative measure of a test case, test run or test cycle including measurement instructions.

test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to what was planned.

test object scope: size of the test object, which can be measured by means of different metrics, e.g., lines of code, function points, and others.

test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

test planning: The activity of establishing or updating a test plan.

test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

test progress: metric for the status of a test project. See also test monitoring.

test progress report: See test monitoring and test summary report.

test project: a temporary endeavor to achieve specified test goals within a defined period of time. Usually, a test project is part of a software or system development project.

test report: See test summary report.

test schedule: A schedule that identifies all tasks required for a successful testing effort, a schedule of all test activities, and their corresponding resource requirements.

test specification technique: See test design technique.

test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

test technique: See test design technique.

test tool: A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST.

test topic: a group of test cases that are collectively executed and/or managed because they test related aspects of the test object or share the same test objectives.

test type: A group of test activities aimed at testing a component or system focused on a specific test objective, e.g., functional test, usability test, regression test, etc. A test type may take place on one or more test levels or test phases. [After TMap]

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

TTCN-3: (Testing and Test Control Notation, Version 3) A flexible and powerful language applicable to the specification of all types of reactive system tests over a variety of communication interfaces.

usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

use case: A sequence of transactions in a dialogue between a user and the system with a tangible result.

vertical traceability: The tracing of requirements through the layers of development documentation to components.
