Terminology

ATDD (Acceptance Test-Driven Development): a software design methodology based on designing acceptance tests (at the unit testing level) and developing the minimum code necessary for those tests to pass. It is related to BDD and TDD. Note that “acceptance” is meant here as acceptance by the developers, not acceptance by the users.
BDD (Behavior-Driven Development): an agile development method combining the principles and techniques of TDD with languages (e.g. Gherkin) used to define behavior and expected results. BDD is considered an effective practice when the problem to be solved is complex.
BI (Business Intelligence): the term encompassing the strategies and technologies used by companies for data analysis and business information management. In general, this involves analyzing large volumes of data, structured or not, in order to interpret the data, identify opportunities and implement effective strategies based on these analyses.
BIT (Built-In Test): an internal component test (hardware or software) allowing the component to ensure its proper technical or functional operation.
BVA (Boundary Value Analysis): a testing technique focusing on the analysis of the boundary values (valid and invalid) of ordered equivalence partitions.
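As an illustrative sketch of two-value BVA (the 18–65 age partition below is invented): for an ordered partition, the technique selects the valid boundary values plus the invalid values just outside them.

```python
# Hypothetical helper illustrating two-value boundary value analysis (BVA)
# for an ordered numeric equivalence partition [low, high].

def boundary_values(low, high):
    """Return the test values for a valid partition [low, high]:
    the two valid boundaries plus the invalid values just outside them."""
    return [low - 1, low, high, high + 1]

# Example: an input field accepting ages 18 to 65 inclusive.
print(boundary_values(18, 65))  # [17, 18, 65, 66]
```

Three-value BVA would add the values just inside each boundary (here 19 and 64) to the same list.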
CAF (Capacité à Faire, French for “capacity to do”): team bandwidth, the ability of a team to perform a piece of work or effort.
CBIT (Continuous Built-In Test): an internal component test (hardware or software) that allows the component to continuously ensure its proper technical or functional operation. Such a test allows the component to warn a supervisor in the event of a drop in performance or other malfunction.
CCB (Change Control Board): a change management committee. This may concern requirements but, more generally, configuration changes of the systems under its control.
CCP (Conformance Creation Plan): a plan defining how the requirements will be demonstrated as compliant, via a combination of Inspections, Analyses, Demonstrations (software tests) and Tests in operational environments (acceptance tests and pilot phases). The CCP is a stage in the design milestones, close to the start of design, and should be included in the PDR milestone.
CD (Continuous Delivery): a software engineering approach where teams deliver software in short cycles, ensuring that software can be delivered reliably at any time, without doing so manually. This approach tends to reduce cost, time and risk by enabling incremental application updates. A simple and repeatable process is important for continuous delivery.
CD (Continuous Deployment): a software engineering approach where features are delivered frequently through automated deployments. In an environment where data-driven microservices provide the functionality, and where microservices may have multiple instances, CD consists of instantiating a new version of a microservice and disposing of an old version.
CDR (Critical Design Review): a design milestone review, held after the PDR, intended to allow design of the subsystem or system to start with the least possible risk.
Certification: a formal attestation or confirmation of certain characteristics through an organization’s certification process.
CI/CD: combined Continuous Integration and Continuous Delivery practices, driving automation in building, testing and deploying applications. DevOps practices involve continuous development, continuous testing, continuous integration, continuous deployment and continuous monitoring of software applications throughout their life cycle. The CI/CD pipeline underpins DevOps operations.
COTS (Commercial Off The Shelf): an existing commercial product that can be purchased “as is”. This applies to both software and hardware products.
Coverage: the degree to which a set of requirements or risks is covered by test cases and by executions of those test cases, in order to demonstrate that the requirements or risks are correctly verified and validated without omitting any aspect.
CRI (Coverage of Risks): a metric for measuring risk coverage levels according to criticality.
CTT (Classification Tree Testing): a testing method based on classification trees.
DCOT (Data COmbination Testing): a test method based on the combination of nominal data and their use. It is often necessary to have a diagram of the data architecture in order to identify when and where the data is used in the application or system.
DCYT (Data CYcle Testing): a testing technique focusing on the life cycle of data (creation, referencing, use, destruction) within subsystems and systems, requiring an understanding of the data architecture of those subsystems and systems.
DDE (Defect Detection Efficiency): a metric measuring the efficiency of processes in detecting defects.
Debugging: the process of finding, analyzing and eliminating the causes of failures in software, mainly carried out by designers and developers.
DOD (Definition of Done): a checklist defining when a task can be considered completely finished (which should include updating documentation, providing test results, etc.).
DOM (DOMain testing): a test technique combining the equivalence partitioning and boundary value testing techniques, applied to adjacent ordered equivalence partitions (usually numerical values). The term “domain” is to be understood here as a subset of variables processed by the system under test for which it is possible to determine whether their values lie within an interval bounded by limit values.
DOR (Definition of Ready): a checklist defining everything necessary for a task to be able to run without having to wait for information not yet available from other activities.
DTT (Decision Table Testing): a test technique that limits the number of test combinations to be executed when combinations of business rules must be processed.
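The idea behind DTT can be sketched as follows; the two business rules and the `discount()` function are invented for illustration, and each column of the decision table yields exactly one test case:

```python
# A minimal decision table testing sketch: every combination of the two
# conditions (rules) maps to one expected action. The rules are invented.
import itertools

def discount(member, coupon):
    # System under test: 10% for members, plus 5% with a coupon (invented).
    return (10 if member else 0) + (5 if coupon else 0)

# Decision table: (condition values) -> expected action (discount in %).
table = {
    (True, True): 15,
    (True, False): 10,
    (False, True): 5,
    (False, False): 0,
}

# One test case per rule of the table.
for member, coupon in itertools.product([True, False], repeat=2):
    assert discount(member, coupon) == table[(member, coupon)]
print("all", len(table), "rules pass")
```

With real business rules, infeasible condition combinations can be dropped from the table, which is precisely how the technique limits the number of tests.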
E2E (End to End): tests that carry out a functional process from the beginning to the end of that process. This type of test can run on a single system as well as interact with several interconnected systems.
EAI (Enterprise Application Integration): middleware that integrates systems and applications across a company. EAIs connect applications in a way that simplifies and automates business processes as much as possible, while avoiding significant changes to the enterprise applications. This allows data to be integrated between various systems, eliminates dependencies on a single vendor and provides a common interface for users.
ETL (Extract, Transform, Load): a generic procedure for copying data from one or more sources to one or more destination systems. ETL software packages include the conversion of data from one format to another and their transfer.
Fault injection (also fault seeding or defect injection): a test technique aimed at verifying that the defined test processes (around automation tools) are capable of identifying the types of anomalies expected. This technique consists of creating known anomalies and checking whether they are detected by the processes and tools implemented.
GQM (Goal Question Metric): a system measurement approach that defines:
– a conceptual or managerial level (the Goal);
– an operational level (a set of Questions) to focus on a specific characteristic;
– a quantitative level (the Metrics), based on models and associated with each question.
IBIT (Initial Built-In Test): an internal component test (hardware or software) run at the component’s initial start-up to ensure its proper technical or functional operation.
Inspection: a formal review of a work product (deliverable) to identify issues, using defined roles and metrics to improve the review process.
MTBF (Mean Time Between Failures): the arithmetic mean of the time between two system failures. This measurement depends on the definition of a system failure, usually when the system is “out of order”. The higher the MTBF, the longer the system operates before failing.
MTTR (Mean Time to Recover): the average time for the system to recover and resume normal operation.
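MTBF and MTTR are commonly combined into the classic steady-state availability formula A = MTBF / (MTBF + MTTR); the figures below are purely illustrative.

```python
# Steady-state availability from MTBF and MTTR (illustrative figures).

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system failing on average every 500 h and taking 2 h to recover:
print(round(availability(500, 2), 4))  # 0.996
```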
MVP (Minimum Viable Product): the version of a product with just enough features for customers to use it.
NRT (Non-Regression Tests): functional tests aimed at ensuring the absence of regressions or side effects between a previous version and the current version.
ODC (Orthogonal Defect Classification): an orthogonal classification of defects. This methodology, developed by IBM in the 1980s, associates defect typologies with design phases/processes and measures the presence of these defect typologies in order to improve design processes.
PBI (Product (or Project) Backlog Item): in the Agile Scrum world, an element (requirement, user story or use case to be developed, or anomaly to be corrected) that describes what is required of a product (or project); PBIs are generally sorted in descending order of priority.
PBIT (Power-on Built-In Test): a test carried out during the start-up of a subsystem or system, making it possible to display the state of that subsystem or system.
PBR (Perspective-Based Reviews): a review technique in which reviewers examine a work product from the perspectives of different stakeholders (e.g. user, designer, tester).
PCT (Process Cycle Test): a test method intended to ensure that the administrative actions implemented obtain the expected results.
PDR (Preliminary Design Review): a design milestone where the main orientations are defined by the design teams and validated by the client teams.
RCA (Root Cause Analysis): an analysis of the processes in order to identify the initial (root) cause of a defect.
Requirement: a condition or capability required of a component, product or system to solve a problem or achieve an objective, which that component, product or system must hold or possess in order to satisfy a contract, standard, specification or other formally or informally imposed document. Requirements can be formalized in various documents (e.g. user stories, use cases, features) or remain informal (not formalized).
Review: an evaluation of the state of a product or project to identify deviations from the expected results and recommend improvements. Examples: management review, informal review, technical review, inspection and walkthrough.
Root cause: the cause of an anomaly; see also RCA.
RUN: the testing activity or phase, after SETUP, that actually executes the test.
SETUP: the testing activity or phase that prepares the data and environment necessary for the execution of a test, which will take place in the RUN phase.
SLA (Service Level Agreement): identifies the level of service (e.g. response time, availability, etc.) expected from the system or component. SLAs are often associated with penalties and contractual clauses between suppliers and users.
SMART: Specific, Measurable, Achievable, Realistic and Traceable (the last sometimes read as Time-bound), applying to goals or requirements.
TDD (Test-Driven Development): a software development methodology based on the automation of low-level unit tests. It starts with designing a test that cannot yet pass (since the code does not exist), followed by designing the minimum code for the test to pass.
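A minimal sketch of the TDD cycle (the `add` function and its test are invented): the failing test comes first, then the minimum code that makes it pass.

```python
# TDD cycle sketch: the test is written before the implementation.

# Step 1 (red): the test exists first; calling it now would raise a
# NameError because `add` has not been written yet.
def test_add():
    assert add(2, 3) == 5

# Step 2 (green): the minimum implementation that makes the test pass.
def add(a, b):
    return a + b

# Step 3: the test now passes (a refactoring step would follow).
test_add()
print("test passed")
```

In practice each red/green/refactor iteration is kept very small, so the unit test suite grows alongside the code.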
TDR (Technical Debt Ratio): a metric to measure technical debt.
TEARDOWN: a test activity or phase that removes test data created during testing, to allow test reruns in the future.
Technical review: a formal peer review of a work product by a team of technically qualified people who examine the deliverable and identify deviations from specifications and standards.
Test: the process, present in all life cycle activities, static and/or dynamic, of planning and evaluating software products and related products to determine whether they meet the requirements, to demonstrate that they are fit for purpose and to detect anomalies. This is sometimes known as V&V (Verification and Validation).
Test case: a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular test objective or condition, such as executing a particular path of a program or verifying compliance with a specific requirement.
Test condition: something (a behavior or a combination of conditions) that may be interesting or useful to test.
Test objective: a reason or purpose for designing and performing a test.
Validate: demonstrate, by providing objective evidence, that a system can be used by users for their specific tasks [ISO29119-1].
Validation: a process to ensure that specifications are correct and complete. The system life cycle process can use software specifications and derived requirements for system validation [DO178B].
Verification: an evaluation of the products of a process in order to guarantee their accuracy and consistency with respect to the input data and the rules applied to the process [DO178B].
Verify: demonstrate, by providing objective evidence, that specific requirements have been met [ISO29119-1].