Model-Based Test Cases Reuse and Optimization
Mohamed Mussa; Ferhat Khendek Electrical and Computer Engineering, Concordia University, Montréal, QC, Canada
Abstract
Several test generation techniques have been proposed in the literature. These techniques target specific levels of testing in isolation; because the levels are not related to each other, opportunities to avoid redundancy and to enable reuse and optimization are missed. In this chapter, we look into connecting the different levels of testing. We propose a model-based testing framework that enables reusability and optimization across different levels of testing. Test cases at one level are reused to generate test cases for subsequent levels of testing. Furthermore, test cases at one level are optimized by relating them to test cases of preceding testing levels and removed if they are found redundant.
Keywords
Model-based testing; Component testing; Integration testing; Test cases reuse; Acceptance test optimization
1 Introduction
Testing aims at enhancing the quality of software products. Software goes through different levels of testing; the main ones are unit, component, integration, system, and acceptance testing. For each level, engineers plan the test, design test cases, exercise the test cases on the target implementation, and evaluate the results. Many test approaches have been developed over the last three decades. They usually target specific levels of testing. The lack of clear and systematic connections between the software testing levels is a noteworthy problem in software testing [1,2].
The investigation of this issue is the main objective of this chapter, and we propose a model-based setting to tackle the problem. Model-driven engineering (MDE) [3] is gaining in maturity and popularity, and the introduction of model-based testing (MBT) [4,5] has been an important advance in software testing. Several MBT approaches covering a wide spectrum of modeling languages and software domains, such as embedded systems and telecommunications [6], have been proposed. The Unified Modeling Language (UML) [7] is now widely accepted. Recently, the Object Management Group (OMG) [8] standardized a UML Testing Profile (UTP) [9]. The profile introduces test concepts into UML models in order to create precise UML test models. The literature shows a growing interest in UTP-based approaches [1,10–15]. However, the focus is still on MBT approaches for specific levels of testing [10–18].
In this chapter, we propose a model-based software testing framework to connect testing levels and enable reusability and optimization across them [19–22]. The framework is based on UTP and is composed of two approaches, one for test generation and one for test optimization. The test generation approach generates test models for a target testing level by reusing test models from the previous level. We elaborate and discuss the specific case of generating integration test models from component test models. For this purpose, we investigate the merging of test cases; while there has been a lot of research on merging architectural models, little has been done on merging behavioral models [23–27]. The second approach optimizes test models by relating them to test models that have already been executed in the preceding testing levels. It aims at reducing test execution time without compromising quality. For this purpose, we develop a model comparison process specific to test cases. We elaborate and discuss the optimization of acceptance test models by relating them to integration test models. Our framework has been implemented, and we have experimented with several case studies. In this chapter, we illustrate our approaches with a library management system case study and discuss the results.
The rest of this chapter is structured as follows. Section 2 presents an overview of our MBT framework. We discuss the test generation approach in Section 3 and the test optimization approach in Section 4. In Section 5, we discuss a case study and its results. We review related work in Section 6 before concluding in Section 7.
2 Overall MBT Framework
In this section, we introduce our overall MBT framework for linking testing levels and enabling reusability and optimization. We also provide formal definitions for some concepts used throughout the chapter.
2.1 Overall Framework
The framework enables reusability across testing levels during test design and enhances test execution through the optimization of test models. To conduct our research in rigorous settings, we use sequence diagrams, which have been formally investigated in [28–30], to model and design our test behaviors. Although UTP is used as the language for the framework, it can be replaced by any other language supporting test model description. Fig. 1 shows our overall MBT framework. The framework consists of two approaches: a test generation approach and a test optimization approach. The generation approach links
- • the component-level testing to the integration-level testing and
- • the component-level testing to the system-level testing.
Component testing is black-box testing; tests are exercised on components through their interfaces. These interfaces can be internal, to communicate with other system components, or external, to communicate with the system environment, as shown in Fig. 2A. Hence, the same interface can be exercised by several test cases from different component test models, each test case taking a different perspective on the same interface, as shown in Fig. 2B. Therefore, we can generate integration test cases from component test cases that examine internal interfaces, and system test cases from component test cases that examine external interfaces.
In this chapter, we discuss the generation of integration test models from component test models. In our work, a component is defined as a self-coherent piece of software that provides one or more services and can interact with other components. In this framework, we also assume that component test models are generated from the design models using existing approaches such as [10–15].
As shown in Fig. 1, the framework enables also the optimization of test models by mapping them to the previously executed test models. The optimization approach links
- • the integration-level testing to the system-level testing,
- • the integration-level testing to the acceptance-level testing, and
- • the system-level testing to the acceptance-level testing.
Our framework is based on the idea of overlapping test cases. Test models are composed of a set of test cases. Test cases capture the test behavior that is exercised on the target implementation; this behavior in general reflects the expected behavior of the implementation under test. We believe that the “collective” behavior of all test cases at any testing level captures the system behavior. In practice, some research activities migrate system behavior across different development stages using test cases, since test cases are finite and precise compared to the system design models [31]. Therefore, we can intuitively conclude that test cases from different testing levels that examine the same portion of the system behavior are redundant if they meet the same test requirements of the subsequent testing level. We propose a test optimization approach that optimizes acceptance test cases by relating them to system and/or integration test cases, and optimizes system test cases by relating them to integration test cases. In this chapter (Section 4), we discuss the optimization of acceptance test cases by avoiding redundancy with integration test cases.
We assume component test cases have the following characteristics:
- • they are complete and cover all component interfaces,
- • each test case for a component covers at least one service provided by the component, and
- • there is consistency between the component test models since they describe different components of the same system. The names of the components, interfaces, and messages are used in a consistent manner in test models.
In the proposed approaches, we will compare test models. Comparing two test models that have been built by different engineers with different views is a challenging task. It requires identifying the similarities and differences among the elements of the two models and reconciling inconsistencies between them. In the discipline of model comparison, there are two methods: three-way merging and two-way merging [32]. In addition to the merged models, three-way merging requires the base model or the change logs. It compares each model to the base model to identify that model's changes with respect to the base; these changes can be classified into three categories: added, deleted, and/or modified. Based on this information, the method merges the models to generate a new base model. Compared to three-way merging, two-way merging is harder since it does not rely on any information besides the merged models. However, the two methods share the same assumption: that the merged models evolved from the same base model. Researchers build their approaches around certain model features, such as universally unique identifiers (UUIDs), to calculate the similarities and differences between the elements of their models. While the identification of similarities and differences between the models and the detection of conflicts can be automated, reconciling conflicts requires user interaction [32–34]. In the testing domain, component test models are usually generated independently from the corresponding design specifications. Hence, the assumption that the merged models evolved from the same source is not applicable in this domain. However, we can benefit from the characteristics of our domain since we are not developing a general merging approach. Test cases describe partial behavior of the system under test (SUT).
They actually represent a partial view of the SUT that is the focus of the test designer. Thus, different test cases may describe the same system behavior from different angles. We focus on such test cases to build our integration approach. Our approach follows two-way merging. However, we do not assume that the test models evolved from the same source. We assume that test cases overlap since they describe the same system from different angles. Therefore, test models share elements of the system that can be identified through their names and attributes.
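To make the name-based comparison concrete, the following Python sketch (our own illustration, not the chapter's implementation) identifies shared elements of two test models by name and attributes, flagging name clashes with differing attributes as conflicts for user reconciliation:

```python
def match_elements(model_a, model_b):
    """Identify shared elements of two test models by name and attributes.

    Each model is assumed to be a mapping from element name to a dict of
    attributes (an illustrative representation, not the UTP metamodel).
    Elements with the same name but differing attributes are reported as
    conflicts, to be reconciled by the user.
    """
    shared, conflicts = {}, {}
    for name in model_a.keys() & model_b.keys():
        if model_a[name] == model_b[name]:
            shared[name] = model_a[name]
        else:
            conflicts[name] = (model_a[name], model_b[name])
    return shared, conflicts
```

Unlike UUID-based comparison, the same name in both models is here taken as evidence of the same system element, in line with the consistency assumption on component test models.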
2.2 Some Definitions
We define here some concepts used throughout this chapter.
We categorize events into three categories: message events, time events, and miscellaneous events. Message events, the sending event and the receiving event, represent the two ends of messages exchanged between two instances referred to as the sender and the receiver, respectively. In this chapter, messages are instances of an execution trace. Hence, they are unique throughout a single system execution. Time events represent events related to timers. Each timer is associated with one instance. We classify the rest of event types, such as instance termination and UTP verdict, into the third category. Notice that the association between events and instances is part of the event definition.
We use the test model specified in Fig. 3 to illustrate our definitions. The test model is composed of a test package, p, which represents the test architecture, and two test cases, t1 and t2, which represent the test behavior. To distinguish between the sending and receiving events of the same message, we suffix the message name with the first letter of the corresponding action. We represent this test model, M, as follows:
- M = (P, T), with
- P = (TC, ∅, {CUT}),
- T = {t1, t2},
- t1 = ({tc,cut}, {m1s, m2r, m3s, m4r, ver, m1r, m2s, m3r, m4s}, {(m1s,m2r),(m2r,m3s), (m3s,m4r),(m4r,ver),(m2s,m3r),(m3r,m4s),(m1s,m1r),(m2s,m2r),(m3s,m3r),(m4s,m4r),(m1s,m3s),(m2r,m4r),(m2r,m3r),(m3s,ver),(m2s,m4s),(m3r,m4r),(m2s,m3s), (m3s,m4s),(m4s,ver),(m1s,m4r),(m1s,m3r),(m2r,ver),(m2r,m4s),(m2s,m4r), (m3r,ver),(m1s,ver), (m1s,m4s),(m2s,ver)})
- tc = (“tc”, TestContext),
- cut = (“cut”, SUT),
- m1s = (send, “m1s”, tc, m1, cut),
- m2r = (receive, “m2r”, tc, m2, cut),
- m3s = (send, “m3s”, tc, m3, cut),
- m4r = (receive, “m4r”, tc, m4, cut),
- ver = (UTPverdict, “ver”, “pass”, tc),
- m1r = (receive, “m1r”, cut, m1, tc),
- m2s = (send, “m2s”, cut, m2, tc),
- m3r = (receive, “m3r”, cut, m3, tc),
- m4s = (send, “m4s”, cut, m4, tc).
- t2 = ({tc,cut}, {m5s, m6r, m7r, ver, m5r, m6s, m7s}, {(m5s,m6r),(m5s,m7r),(m6r,ver), (m7r,ver),(m5r,m7s),(m5s,m5r),(m6s,m6r),(m7s,m7r),(m5s,ver),(m5r,m7r),(m5s,m7s), (m6s,ver),(m7s,ver),(m5r,ver)}),
- tc = (“tc”, TestContext),
- cut = (“cut”, SUT),
- m5s = (send, “m5s”, tc, m5, cut),
- m6r = (receive, “m6r”, tc, m6, cut),
- m7r = (receive, “m7r”, tc, m7, cut),
- ver = (UTPverdict, “ver”, “pass”, tc),
- m5r = (receive, “m5r”, cut, m5, tc),
- m6s = (send, “m6s”, cut, m6, tc),
- m7s = (send, “m7s”, cut, m7, tc).
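The tuple notation above can be transcribed directly into code. The following Python sketch (our own rendering of the definitions, with illustrative field names) encodes test case t1 down to its first message exchange:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instance:
    name: str        # e.g., "tc" or "cut"
    stereotype: str  # e.g., "TestContext" or "SUT"

@dataclass(frozen=True)
class MessageEvent:
    action: str   # "send" or "receive"
    name: str     # event name, e.g., "m1s"
    owner: str    # instance the event belongs to
    message: str  # message name, e.g., "m1"
    peer: str     # the instance at the other end of the message

@dataclass
class TestCase:
    instances: frozenset  # first component of the triple
    events: frozenset     # event names, second component
    order: frozenset      # pairs (e1, e2) of the order relation

tc = Instance("tc", "TestContext")
cut = Instance("cut", "SUT")
m1s = MessageEvent("send", "m1s", "tc", "m1", "cut")
m1r = MessageEvent("receive", "m1r", "cut", "m1", "tc")
t1 = TestCase(frozenset({tc, cut}),
              frozenset({"m1s", "m1r"}),
              frozenset({("m1s", "m1r")}))
```

Note that the association between each event and its owning instance is part of the event definition, as stated above.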
3 Integration Test Generation
By definition integration-level testing puts emphasis on the interactions between the involved components. Hence, integration test cases can be generated from the component test cases that capture such interactions. These interactions can be direct between components or indirect through mediators. However, not all component test cases capture such interactions. Therefore, we need to analyze the available component test cases to identify and select the ones that capture interactions between the components.
The integration test generation approach supports incremental integration strategies: bottom-up, top-down, or ad hoc. With such strategies, system integration is a recursive process that integrates components one by one until the complete system is reached. Our test generation approach supports this recursive process. Component test models are integrated incrementally to generate the integration test model for the current iteration, as shown in Fig. 4. The generated test model is then integrated with the component test model of the next integrated component to generate the next integration test model, and so on. In the presence of complex mediators (test stubs) that are underspecified in the test behavior, a configuration model should be provided to reveal the behavior of the mediators, i.e., to relate mediators’ outputs to the corresponding inputs specified in the test behavior.
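The recursive integration can be seen as a fold over the ordered component test models. In this sketch, `generate_integration_model` is a hypothetical stand-in for the four-process approach described next; for illustration it simply unions test-case sets:

```python
from functools import reduce

def generate_integration_model(previous_model, component_model):
    # Placeholder for the actual generation approach (Fig. 5); for this
    # sketch, a model is just a set of test-case identifiers.
    return previous_model | component_model

def integrate_incrementally(component_models):
    """Fold the generation step over the integration order: the model
    produced in one iteration is the left operand of the next."""
    return reduce(generate_integration_model, component_models)
```

For instance, `integrate_incrementally([mA, mB, mC])` computes ((mA + mB) + mC), matching the iteration scheme of Fig. 4.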
The approach is composed of four processes as shown in Fig. 5. The first two processes analyze the given test models to detect interactions between the integrated system components. The last two processes generate and optimize the output model. We elaborate more on these processes in the following subsections.
3.1 Identification Process
As mentioned earlier, we adopted two-way merging. In order to generate integration test cases from component test cases, we have to inspect the component test cases and select the ones that contain integration test scenarios. To inspect such test cases, we need to recognize the identities of the specified test objects. In this identification process, we aim at locating the declaration of one of the integrated components in the test model of the other integrated component, or the existence of a shared test object that is specified in both test models. Test objects can be classified into three kinds: test control, implementation under test (IUT), and test stub. The IUT can be the SUT, a component under test (CUT), or any fragment of software under test. Using UTP stereotypes, the approach can easily recognize the test objects specified in the input test models, as shown in Table 1.
Table 1
| UTP Stereotype | Test Objects Tagged |
|---|---|
| «TestContext» | Test controls |
| «SUT» | System/component under test |
| «TestComponent» | Test stubs, test environment |
However, the identification is not always straightforward. With the exception of the CUT, test objects can emulate the behavior of more than one system component and/or the system environment. The most common pattern for test cases is composed of two test objects, the IUT and the test control. In this pattern, the test control emulates the test environment in addition to controlling the test case; in other words, it embeds the behavior of any system component and/or environment required to realize the test execution. Hence, the approach has to investigate the behavior of test objects stereotyped «TestContext» or «TestComponent» to reveal the identity of CUTs or shared test objects that may be embedded within them. To achieve that, our approach maps the behavior of the test objects of one test model to the behavior of known test objects in the other test model. However, UTP stereotypes can be applied only to the test architecture; up to UTP version 1.2, the behavior part has been left out of the UTP metamodel [10]. We have to rely on the UML specification to reveal the relations between the UML elements in the test architecture, using UML class diagrams, and the UML elements in the test behavior, using UML sequence diagrams. Furthermore, there are two exceptions. First, there is no comparison between two test controls since both of them are unknown. Second, there is no comparison with test objects that are specified in both test models.
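The stereotype lookup of Table 1 is straightforward to mechanize; the fallback case marks exactly the test objects whose behavior must be compared, as just discussed. A minimal sketch (the role labels are ours):

```python
STEREOTYPE_ROLE = {
    "TestContext": "test control",
    "SUT": "implementation under test",
    "TestComponent": "test stub / environment",
}

def classify(test_objects):
    """Map each test object to its role based on its applied UTP stereotype.

    `test_objects` maps object name -> stereotype name; objects whose role
    cannot be read off the test architecture fall back to "unknown" and
    require behavioral comparison instead.
    """
    return {name: STEREOTYPE_ROLE.get(stereotype, "unknown")
            for name, stereotype in test_objects.items()}
```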
3.2 Test Case Selection
Based on the results of the identification process, the selection process analyzes the test cases in both test models and selects the ones that capture interactions between the integrated components. We investigated two patterns for such test cases. The first pattern comprises individual test cases while the second pattern comprises two test cases, one from each test model.
In the first pattern, we look for an individual test case that specifies both of the integrated components: one component is specified as the CUT, and the other is specified as a test stub or embedded in the behavior of a test object. In other words, we look for test cases that emulate the system component of the other test model. Furthermore, there must be an interaction between the integrated components, captured by the selected test case with at least one exchanged message. We select such test cases to generate integration test cases, and we refer to this pattern as a complete integration test scenario. Fig. 6A illustrates such a pattern, where the other component, Comp2, is specified as a test stub. The figure shows two component test cases, one from each test model. By examining each test case individually, we cannot conclude that either one captures an integration test case. However, by mapping the two test cases using the information gathered from the identification process, we can make several observations. First, the two test cases capture the same test scenario. Second, the component of the second model, Comp2, is represented in the first model's test case as a test stub; hence, component Comp2 is emulated by the test case of the first model. Third, there is an interaction between the integrated components, Comp1 and Comp2, through the exchange of messages m2 and m3. Therefore, we conclude that the test case of the first test model captures a complete integration test scenario. Furthermore, we can observe that the test control TC2 emulates the behavior of the component Comp1. Hence, we can reclaim this behavior, initiate a new instance for Comp1, and select this test case as a complete integration test scenario too. One of the generated test cases will be removed later by the redundancy checking process.
In the second pattern, we investigate the existence of integration test scenarios that are split across two component test cases. Each part of such a scenario is captured by one of the test cases of the two test models. The scenario must represent an interaction among the integrated components. This interaction can be direct, or indirect through other test objects; these test objects can be other system components that have not been integrated yet, or the system environment, as in client/server applications. Fig. 6B illustrates this pattern. In this example, the integration is applied on components Comp1 and Comp3. There is one shared test object, which is explicitly specified in both test cases. In addition, the test object Comp4 is explicitly specified by an instance in one test case and implicitly specified in the other, partially emulated by the test control TC1. The next step in our process is to examine the existence of an interaction between the integrated components with at least one exchanged message. The two test cases are selected to generate an integration test case if such an interaction is detected.
3.2.1 Interaction Detection
In order to detect interactions between the integrated components, we build the event dependency tree (EDT) as shown in Fig. 7. The EDT represents the order relation between the events of the involved test cases; each node represents an event. As a naming convention, the event name is composed of the message name followed by the first letter of the action name, send or receive. The approach builds the EDT from the given test cases in two or three steps, depending on the selection pattern. In the first step, it creates an EDT for each instance lifeline. Next, it merges the EDTs of the same test case by linking the nodes of the corresponding sending/receiving events of the same message. The process proceeds to the third step only in the case of the second selection pattern: it merges the two EDTs of the involved test cases by matching the shared events among the two test cases. Event matching, depending on the event type, is done according to Definition 5.
At the same time, the process takes into account the information gathered during the identification process to match instances that are syntactically different but where one emulates the other, e.g., TC1 and Comp4. The process examines two characteristics of the EDT. The first characteristic is the overlap between the EDTs of the two test cases in the final EDT. This characteristic is related to the second selection pattern and is evaluated during the third step, based on whether shared events exist. Fig. 7 shows the EDT of the two test cases in Fig. 6B. The EDT of the first test case, surrounded by a dotted rectangle, overlaps completely with the EDT of the second test case, i.e., all of the events of the first test case are shared events. The second characteristic is the existence of interactions between the two CUTs. This characteristic is checked by:
- 1. locating a node that represents a sending event of one of the integrated components, then
- 2. searching the branches of such a node to locate a node that represents a receiving event of the other integrated components.
The process repeats these steps until a sending event and a receiving event located on the same path are found. From Fig. 7, there are two traces that satisfy this characteristic: (m2s, m4r) and (m5s, m7r). Therefore, the two component test cases are selected to generate integration test cases.
The process selects the involved test cases if the two characteristics are satisfied. Otherwise, it proceeds by examining other test cases from the given test models. The approach stops the current test integration generation if it does not select test cases from the given test models.
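The two-step search over the EDT can be sketched as follows. The EDT is modeled as a successor map over event names, and each event carries its owning instance and action (a simplification of the event definition in Section 2.2); the check looks for a receiving event of the other integrated component reachable from a sending event of the first:

```python
def reaches_receive(edt, start, events, receiver):
    """Depth-first search along the order relation from `start`, looking
    for a receiving event owned by `receiver` on some path."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        owner, action = events[node]
        if node != start and owner == receiver and action == "receive":
            return True
        stack.extend(edt.get(node, ()))
    return False

def components_interact(edt, events, comp_a, comp_b):
    """True if some sending event of comp_a leads to a receiving event of
    comp_b on the same path, directly or through mediators."""
    return any(
        reaches_receive(edt, event, events, comp_b)
        for event, (owner, action) in events.items()
        if owner == comp_a and action == "send")
```

On a hypothetical fragment m2s → m2r → m4s → m4r, with m2 sent by the first integrated component and m4 received by the second, the check detects the indirect interaction through the mediator that relays m2 into m4.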
3.3 Test Behavior Generation
In our approach, we first generate the test behavior and then construct the test architecture. The process generates integration test cases corresponding to the two patterns of test cases selected by the previous process.
In the first pattern, test objects of the selected test cases represent the integrated component of the other test model. These test objects can be test stubs or test controls. Furthermore, their instances can represent the integrated component exclusively, or carry additional test behavior to emulate other entities and/or provide test control. Hence, we have two different scenarios to handle in this pattern. In the first scenario, where the instance of the test object represents the integrated component exclusively, the process generates an integration test case by relating the instance to the integrated component. To illustrate this scenario, consider the example in Fig. 8, where the first integration test case is generated from the component test case of the first test model in Fig. 6A. In the second scenario, where the instance of the test object represents the integrated component only partially, the process generates an integration test case by creating a new instance that represents the integrated component and relocating the corresponding events to it. To illustrate this scenario, the second integration test case in Fig. 8 is generated from the component test case of the second test model in Fig. 6A. As one can notice, the two integration test cases are identical. This is because the two component test cases capture the same test scenario; this redundancy is managed in the next step.
In the second pattern, pairs of test cases, one from each test model, capture an integration test scenario. The two test cases in each pair share test behavior and test objects. The process merges each pair of test cases to build integration test cases. During the merging, we have to align and merge the shared test behavior of identical instances, which are specified in both test cases.
The process creates a new instance for each integration test case to represent the integration test control, and its behavior will be the sum of the behavior of the given test controls. At the same time, we have to maintain the specification of both test cases; e.g., if one test case specifies n instances of a test object and the other test case specifies m instances of the same test object, then the approach merges min(n, m) instances that define shared behavior. The merging operator is defined as follows:
Fig. 9 shows the integration test case generated from the merging of the two component test cases in Fig. 6B. At the end of the generation of the test behavior, the redundancy checking process removes duplicated test cases before the generation of the test architecture.
3.4 Checking for Redundancy
The generated test behavior may include redundant test cases, which may be produced in two situations. First, the same test scenario is specified in the two given test models, as shown in Fig. 6A. The second situation occurs when a test case is selected by both selection patterns. In this case, in addition to the integration test case generated from the merging, the approach generates another integration test case. However, the latter is identical to, or part of, the first generated integration test case; hence, it should be removed from the generated test model. This case can be explained with the test cases in Fig. 6B. The two test cases contain a shared test object, Comp2, with shared behavior. The approach merges the two test cases to generate an integration test case as shown in Fig. 9. On the other hand, the test control of the second test case, TC2, emulates the behavior of the CUT Comp1. Hence, the approach generates an integration test case by adding a new instance for Comp1 and relocating the corresponding events. The generated test case is similar to the one in Fig. 9, so the second test case should be removed since it is redundant.
To remove redundancy among the generated integration test cases, we map the test cases against each other. We define test case inclusion as follows.
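A plausible set-based reading of inclusion, using the triple representation of Section 2.2, is sketched below. This is our illustrative approximation, not the chapter's formal definition: test case u is included in v when v covers u's instances and events and preserves u's order relation.

```python
def included_in(u, v):
    """u and v are triples (instances, events, order) of sets; u is
    included in v when every component of u is a subset of v's."""
    return all(part_u <= part_v for part_u, part_v in zip(u, v))

def remove_redundant(test_cases):
    """Drop every generated test case that is identical to, or part of,
    another one; of two identical test cases the first is kept."""
    kept = []
    for t in test_cases:
        if any(included_in(t, k) for k in kept):
            continue  # t is redundant with an already kept test case
        # t may in turn subsume previously kept test cases
        kept = [k for k in kept if not included_in(k, t)] + [t]
    return kept
```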
3.5 Test Architecture Generation
After generating the test behavior, we build the test architecture. The integration test architecture is created from the specification of the generated integration test behavior. The given test architectures of the component test models are used to relate test objects to their external models, if any. We use the UTP test package in this chapter. Table 2 summarizes the important mappings used to generate the test architecture from the test behavior. The generation process traverses the test cases: it goes through the elements of each test case and creates the equivalent elements in the test architecture, building internal references between the elements of the test behavior and the corresponding elements of the test architecture. After that, the process compares the generated test objects, UML classes, to their corresponding test objects in the given component test cases. In case a test object has a reference to an external model, the process updates the corresponding generated test object with the same reference. The most important test object is the SUT, which is always externally referenced. Finally, the process adds a reference to the UTP to enable its stereotypes in the generated test model. Fig. 10 shows the generated test architecture for the generated test behavior in Fig. 9.
3.6 Some Properties of the Integration Test Generation Approach
During the development of the generation approach, we investigated the impact of the integration strategy on our approach. System components are integrated using different integration strategies, some of which are well known, such as top-down, bottom-up, big-bang, and ad hoc. The overall generated test behavior for the same set of system components must be equivalent regardless of the applied integration strategy. Hence, we have investigated two properties of the generation approach: commutativity and associativity. More details are given in Appendix A.
Furthermore, we have investigated the carrying over of test information from one integration iteration to subsequent ones. Usually, there is a single component test model for each system component; it holds all test information regarding the component, with typically one or more test cases for each targeted function. These test cases exercise the system component through its different interfaces. For each system integration, we need a different set of test cases that capture test information related to the interfaces between the currently integrated components. Accordingly, integration test cases capture test information regarding the currently integrated components and neglect test information related to interfaces with system components that have not yet been integrated. Therefore, we need to carry over test information of component test cases that is not captured by the generated integration test cases, to be used in subsequent test integrations. We use the example in Fig. 11 to illustrate this point. The system is composed of four components, which are integrated according to the illustrated integration strategy. Usually, there is a component test model for each component that covers the corresponding component functionality through its interfaces; e.g., the component test model of component A captures test information related to the interfaces ab and ad. The integration goes through three iterations: (A + B), ((A + B) + C), and (((A + B) + C) + D). In the first iteration, the approach analyzes the two test models A and B and uses test information related to the interface ab to generate the integration test model AB. In the second iteration, the approach analyzes the test models AB and C and uses test information related to the interface bc to generate the integration test model ABC.
In the last iteration, the approach analyzes the test models ABC and D and uses test information related to the interfaces dc and ad to generate the integration test model ABCD. Here, we may encounter some issues during the second and third iterations. Let us take the second iteration to explain them. The integration test model AB captures test information related only to the interface ab, while the component test model C captures test information related to the interfaces bc and dc. Test information of component test model B related to the interface bc is probably ignored by the approach during the first iteration, unless some test cases capture test information for both interfaces, ab and bc. The same applies to the test information related to the interface ad; the approach will probably ignore it during the first iteration. Hence, the generated integration test model AB is probably missing some test information related to interfaces bc and ad. When the approach tries to generate the integration test model in the second iteration, it probably cannot identify and locate any shared test behavior between the two test models, AB and C, and will generate nothing. Hence, we need to save test information regarding interfaces ad and bc during the first iteration to be used in the subsequent iterations.
We have investigated two techniques, as shown in Fig. 12, to carry test information of component test models to subsequent integration iterations: selective and cumulative integration. The selective technique carries the component test models along with the generated integration test model to the subsequent integration iterations. In each integration iteration, the approach is applied several times to generate the corresponding integration test model. First, it uses the former integration test model and the component test model of the currently integrated system component to generate the integration test model for the current iteration. Next, it uses the carried component test models of previously integrated components and the component test model of the currently integrated component to generate additional test cases. The generated integration test model and the component test models of the integrated components, including the currently integrated component, are carried to the subsequent integration iteration. In this technique, we carry individual component test models throughout the integration-level testing.
In the cumulative technique, we build a global model by merging the given component test models. In each integration iteration, we merge the component test model of the currently integrated system component with the global model and generate the integration test model by selecting test cases from the global model that capture interactions between the integrated components. In this technique, we have a single reference to carry throughout the integration-level testing: the global model. However, during our investigations, we found that the cumulative technique may produce invalid test behavior. Therefore, we discarded it and used only the selective technique.
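The selective technique can be sketched as follows. This is a minimal illustration, not the tool's actual data model: a test model is assumed to be a set of (name, interfaces) pairs, and the hypothetical generate() function stands in for the full generation approach by selecting test cases that exercise the interfaces between the integrated parts. All names (generate, tA, ab, ...) are illustrative.

```python
def generate(model_a, model_b, shared_ifaces):
    """Keep test cases of either model that touch the shared interfaces."""
    return {(name, ifs) for name, ifs in model_a | model_b
            if ifs & shared_ifaces}

# Component test models for three components A, B, C (cf. Fig. 11).
A = {("tA", frozenset({"ab"}))}
B = {("tB1", frozenset({"ab"})), ("tB2", frozenset({"bc"}))}
C = {("tC", frozenset({"bc"}))}

# Iteration 1: A + B over interface ab; tB2 (interface bc) is dropped.
AB = generate(A, B, {"ab"})

# Iteration 2, selective: besides AB + C, reuse the carried component
# model B, so the test information about bc lost in iteration 1 is recovered.
ABC = generate(AB, C, {"bc"}) | generate(B, C, {"bc"})
```

Without the carried model B, the second iteration would only see AB, which holds no test information about the interface bc, and the information in tB2 would be lost.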
4 Acceptance Test Optimization
The approach maps test cases of the acceptance level to test cases of the integration level. The mapping technique is based on the comparison of the involved test cases. We consider that part of these test cases target the same system functionalities since they describe the same system from different perspectives. We aim to reduce the acceptance test execution time by reducing the number of acceptance test cases. This can be achieved by eliminating acceptance test cases that have already been exercised on the system during integration-level testing. However, one needs to be careful as integration test cases are mainly applied on subsystems. Usually, they emulate some of the system components that have not yet been integrated. Hence, they cannot substitute acceptance test cases that aim at testing the whole system. There are two situations where integration test cases are suitable substitutes for acceptance and system test cases. The first situation includes test cases applied in the last iteration of the integration-level testing. These test cases are exercised during the integration of the last component into the subsystem to build a complete system; therefore, they are applied on a complete system. The second situation includes integration test models applied on subsystems that completely fulfill the requirements of some of the system functionalities. Hence, test cases of such test models that examine these functionalities are actually applied on complete subsystems. In other words, these test cases do not emulate system components. Therefore, we need to examine the given integration test cases in order to select the ones that can be mapped to the acceptance test cases. The approach is composed of two processes, the selection process and the mapping process, as shown in Fig. 13. The approach is described in terms of acceptance test models, but it is applicable to system test model optimization as well.
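The two processes can be sketched as a small pipeline. The test-case representation used here (a set of observable events plus the set of components a test case emulates) is an assumption for illustration only; the hypothetical select() and optimize() functions stand in for the selection and mapping processes of Fig. 13.

```python
def select(integration_models):
    """Test cases of the last integration model qualify directly; for
    earlier models, keep only test cases that emulate no system component."""
    *earlier, last = integration_models
    selected = list(last)
    for model in earlier:
        selected.extend(tc for tc in model if not tc["emulated"])
    return selected

def optimize(acceptance_model, integration_models):
    """Drop acceptance test cases whose events are covered by a selected
    integration test case (which may carry extra internal interactions)."""
    covered = [frozenset(tc["events"]) for tc in select(integration_models)]
    return [tc for tc in acceptance_model
            if not any(frozenset(tc["events"]) <= c for c in covered)]
```

In this sketch, subset inclusion over event sets plays the role of the mapping comparison; the actual mapping relation is refined in Section 4.2.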
4.1 Integration Test Case Selection
The integration test models should not contain any emulation of system components in order to qualify for comparison against the acceptance test model. We have to examine the given integration test models for the use of test stubs of system components. Test stubs may be specified in some test cases and not in others within the same test model. Hence, our examination is performed at the level of test cases rather than test models. Test cases of the last integration test model qualify to be mapped to the acceptance test cases; hence, we select them directly without further examination. For the rest of the integration test models, we compare the behavior of their test stubs and test controls to the behavior of the CUTs of the subsequent integration test models, as shown in Fig. 14. More specifically, the approach compares the behavior of the test stubs and test controls of each test case in an integration test model to the behavior of the integrated components of each test case in the subsequent integration test models.
The selection process selects test cases that do not include test stubs of system components in their specifications. The selection criterion is given formally in Definition 9.
The selection process stops the comparison as soon as the condition is no longer satisfied, i.e., when it returns false. Accordingly, the corresponding test case is excluded from the selection when the selection condition evaluates to false.
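The formal selection condition of Definition 9 is not reproduced here; the following sketch only illustrates the idea behind Fig. 14: a test case qualifies when the behavior of each of its test stubs reappears as real CUT behavior in some test case of a subsequent integration test model. The representation (frozensets of stub and CUT events) and all names are assumptions for illustration.

```python
def stub_covered(stub_events, later_models):
    """True if some later test case realizes these events on an actual CUT."""
    return any(stub_events <= tc["cut_events"]
               for model in later_models for tc in model)

def qualifies(test_case, later_models):
    """all() short-circuits, mirroring the early stop described above:
    the test case is rejected as soon as one stub is not covered."""
    return all(stub_covered(s, later_models) for s in test_case["stubs"])
```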
The results of the selection process depend on the integration order since the usage of test stubs of system components depends on the integration order. We may not require any test stubs when we choose the right integration order. There is a lot of research being done currently on the selection of the right integration order [35–37].
4.2 Mapping Acceptance Test Cases to Integration Test Cases
The mapping process compares the acceptance test cases against the selected integration test cases. The process removes acceptance test cases from the test model if they are included in the selected integration test cases. However, acceptance-level testing has a different perspective of the system than integration-level testing. In acceptance-level testing, we see the system as a single block and examine it through its external interfaces, while in integration-level testing, we see fragments of the system and examine them through their external interfaces as well as through the internal interfaces of the currently integrated component. Consequently, the test cases differ with respect to the test objects described at each testing level. Acceptance test cases require at least two test objects, test control and SUT, while integration test cases require at least three test objects: test control, CUT, and subsystem.
Furthermore, we have to take into account that the events specified on a lifeline of a test object in an acceptance test case may be distributed over several lifelines in the mapped integration test case as shown in Fig. 15. The behavior of the two test objects, TCa and Sys, in the acceptance test case is distributed over three test objects, TCi, CUT, and SbSys, in the integration test case. Moreover, integration test cases may have extra behaviors that reflect internal interactions between the integrated component and the subsystem. In other words, we should not expect the acceptance test case to be a complete fragment/block within the integration test case.
The test case inclusion as specified by Fig. 15 is used to map test cases of the same test model. It cannot be used in this process because it examines the instances, which, as mentioned earlier, are fundamentally different in this mapping. Nor can it be used to compare integration test cases from different integration iterations. We therefore derive a new inclusion relation that does not depend on the instances of the test cases.
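One way such an instance-independent inclusion could work is sketched below: both test cases are projected onto their ordered message labels (lifeline information dropped), and the acceptance trace must appear as a subsequence of the integration trace, which tolerates the extra internal interactions mentioned above. The pair representation (lifeline, message) is an assumption for illustration, not the relation defined in the chapter.

```python
def trace(test_case):
    """Ordered message labels, with lifeline information discarded."""
    return [msg for _lifeline, msg in test_case]

def included(acc_tc, int_tc):
    """Subsequence test: consume the integration trace left to right, so
    extra internal messages in between are tolerated."""
    it = iter(trace(int_tc))
    return all(msg in it for msg in trace(acc_tc))
```

For example, an acceptance test case whose Sys lifeline sends borrow then confirm is included in an integration test case where the same messages are distributed over CUT and SbSys with an internal checkStock message in between.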
5 A Case Study: A Library Management System
To illustrate our framework and partially demonstrate its effectiveness, we built a prototype tool and ran several case studies. In this chapter, we present the library management system case study and briefly discuss the results. We considered a library management system that is composed of four components and provides users with the main library services. These services are covered by test cases designed to build the component test models as well as the acceptance test model. Fig. 16 shows the system architecture and some of the test models. In this case study, we apply our generation approach on the component test models to generate integration test models. We ran the prototype tool twice, using two different integration orders, to demonstrate the properties of the test generation approach. Next, we use the prototype tool to map the generated integration test models to the acceptance test model in order to reduce the acceptance test model.
The tool integrated the four component test models through three iterations. It generated three test models for each integration order with, of course, different sets of test cases, as shown in Table 3. This is similar to what we have experienced in other case studies during this research. The tool generated the same number of test cases for both integration orders: seven test cases. Furthermore, the generated test cases cover all of the specified system services. Two test cases were repeated in the second integration order since they emulated a system component in the second iteration.
Table 3

| Iteration | Integrated Components | Generated Test Cases (First Integration Order) | Generated Test Cases (Second Integration Order) |
| --- | --- | --- | --- |
| 1 | 2 | 2 | 2 |
| 2 | 3 | 3 | 2 |
| 3 | 4 | 2 | 5 |
| Total | | 7 | 7 + 2 |
The optimization approach removed all of the acceptance test cases, as shown in Table 4. In both integration orders, seven integration test cases that do not emulate system components were selected. All acceptance test cases were removed since they matched (were included in) the selected integration test cases. Therefore, there is no need to execute the given acceptance test model during the acceptance-level testing for this particular case study, as its test cases have already been exercised during integration testing.
6 Related Work
To the best of our knowledge, systematic reuse of test models to generate next level test models has not been covered in MBT [38]. On the other hand, different techniques, such as test coverage [1,38,39], have been proposed to minimize the number of tests. However, the scope of such techniques is the reduction of the number of tests within the same level of testing.
The work of Le [13] is the only research work closely related to ours. The author proposes a composition approach based on UML 1.x collaboration diagrams. The test model is built manually and is composed of two roles/players: the component under test role and the tester role. The tester role controls and executes the test suite and simulates all necessary stubs and drivers. The author demonstrated the reusability of the tester role from component-level testing to integration-level testing through the introduction of adaptors between the component test models. This approach does not address the synchronization between events of the test behavior. The test case selection is also unclear, since not all component test cases are suitable for integration-level testing.
There are many research activities on model merging, especially in the domain of version control systems (VCS) [40]. These approaches are based on the assumption that the input models have evolved from the same base model [16,17], and some approaches even require the existence of the base model [16,17]. They are not applicable in the testing domain since test models are usually built by different engineers with different views. Model comparison approaches use different calculation methods to identify similarities and differences between models [41,42]. In our approach, we use two methods for comparing model elements: name-based matching and feature-based matching. While not all UML model elements have names, practical studies show the effectiveness of this method [42].
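The two comparison methods can be sketched as follows. The element representation (an optional name plus a feature dictionary) and the similarity threshold are assumptions for illustration only, not the calculation used by our tool.

```python
def name_match(e1, e2):
    """Name-based matching; inapplicable when an element is unnamed."""
    return e1.get("name") is not None and e1.get("name") == e2.get("name")

def feature_match(e1, e2, threshold=0.75):
    """Feature-based matching: fraction of features with equal values."""
    keys = set(e1["features"]) | set(e2["features"])
    if not keys:
        return False
    same = sum(e1["features"].get(k) == e2["features"].get(k) for k in keys)
    return same / len(keys) >= threshold

def match(e1, e2):
    """Try names first; fall back to features for unnamed elements."""
    return name_match(e1, e2) or feature_match(e1, e2)
```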
Hélouët et al. [25,44] propose a merging approach for message sequence charts (MSCs) [15]. The approach merges all scenarios to build the global behavior of the system and covers both basic MSCs (bMSCs) and high-level MSCs (HMSCs). These investigations focused more on the theoretical aspects and decidability-related issues. Inline operators, similar to UML combined fragments, are not covered since they can be substituted by HMSCs; we, in contrast, support UML combined fragments. Their approach uses different composition operators, sequential, alternative, parallel, and iteration, that are specified in HMSCs, whereas we only use the merge operator. More importantly, we are dealing with finite behaviors where merging and comparison can be done.
7 Conclusion
In this chapter, we proposed an MBT framework that relates different software testing levels and enables automation, reusability, and optimization. Two approaches have been concretely proposed within this framework: test generation and test optimization. Both approaches assume that component test cases are well formed and cover all component interfaces and services. Test models are specified using UTP, which enables their systematic transformation into test code that can be exercised on the IUT using well-known test execution environments, such as JUnit and TTCN-3 [9]. The usage of standard notations enhances collaboration and certainly helps bridge the gap between development and testing activities.
The proposed framework enables reusability across the software testing levels. Test models are systematically generated from preceding test models. We discussed in detail the generation of integration test models from component test models and defined a test case merging operator to integrate component test cases that have shared behavior.
The proposed framework also enables systematic test optimization across the software testing levels. Test models are related to preceding test models to remove the ones that have already been exercised. Test optimization reduces the size of the test models, shortens test execution time, and reduces the cost of software testing. We discussed an approach that optimizes acceptance test models by relating them to the integration test models. This approach is also applicable to system test models.
We built a prototype tool and experimented with several case studies. In this chapter, we reported on the library management system case study and showed how the acceptance test model can be reduced because its test cases have been covered during integration testing. However, further validation with larger and industrial case studies is required to demonstrate the applicability and the efficiency of our framework.
MBT is a maturing field of research and practice. It is gaining in popularity in several domains, including safety-critical domains such as avionics and automotive. MBT enables abstraction, reuse, and automation, which are much needed to improve the quality of complex software systems. It relieves testers of routine tasks such as test case generation, coverage evaluation, and transformations. However, its complete adoption by practitioners depends on the availability of industrial-strength tools, especially for the next generation of cyber-physical and Internet of Things-based systems, which will be more complex than current software systems.
Acknowledgments
This work has been partially supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. We would like to thank Dr. Reinhard Gotzhein for comments and feedback on earlier versions of this work.
Appendix A: Properties of the Integration Test Generation Approach
System integration may take different strategies: top-down, bottom-up, ad hoc, and big-bang, and different sequences/orders to integrate the system components. The generated test behavior for the same set of system components must be equivalent regardless of the adopted integration strategy and order. The intermediate results, at a given step, may not be equivalent since they integrate different sets of components.
Test cases are equivalent when they specify the same behavior. We define the equivalence between two test cases, t1 and t2, as follows.
The generated test cases, from different integration orders, are equivalent if and only if our approach has two properties: commutativity and associativity. The merging operation (Definition 7) uses the union operator and two special functions, f() and g(). We need to investigate the commutativity and the associativity of our merging operation.
A.1 System Specification
Systems are composed of a set of components. Each component has internal and/or external interfaces. Internal interfaces are used to communicate among the system components. External interfaces are used to communicate with the system environment. The general system architecture can be described as shown in Fig. A.1. A system with three components is adequate to investigate the commutative and associative properties.
To simplify our investigation, we assume test cases consist of two instances only, CUT and test control. The test control represents the behavior of the test environment in addition to controlling the test execution. The test environment represents the system environment as well as system components that are not yet realized during the test execution. We also assume, for simplicity, that each component has one component test case.
The system is composed of three components, A, B, and C, and each component has one component test case: t1, t2, and t3, respectively. We assume there is an interaction between these components, and the test cases capture these interactions. The events of each component are organized into several sets to represent the corresponding component interfaces. Accordingly, sets and relations for each test case are split into several subsets to indicate such organization. The specification for each component test case is given as follows:
- t1 = (I1, E1, R1)
- I1 = {tc1, a}
- E1 = e11 U e12 U e13, where
- e11 a set of events specified only in t1
- e12 a set of events specified in both t1 and t2
- e13 a set of events specified in both t1 and t3
- R1 = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133, where
- R111 ⊆ e11 x e11
- R112 ⊆ e11 x e12
- R113 ⊆ e11 x e13
- R121 ⊆ e12 x e11
- R122 ⊆ e12 x e12
- R123 ⊆ e12 x e13
- R131 ⊆ e13 x e11
- R132 ⊆ e13 x e12
- R133 ⊆ e13 x e13
- t2 = (I2, E2, R2)
- I2 = {tc2, b}
- E2 = e21 U e22 U e23, where
- e21 a set of events specified in both t2 and t1
- e22 a set of events specified only in t2
- e23 a set of events specified in both t2 and t3
- R2 = R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233, where
- R211 ⊆ e21 x e21
- R212 ⊆ e21 x e22
- R213 ⊆ e21 x e23
- R221 ⊆ e22 x e21
- R222 ⊆ e22 x e22
- R223 ⊆ e22 x e23
- R231 ⊆ e23 x e21
- R232 ⊆ e23 x e22
- R233 ⊆ e23 x e23
- t3 = (I3, E3, R3)
- I3 = {tc3, c}
- E3 = e31 U e32 U e33, where
- e31 a set of events specified in both t3 and t1
- e32 a set of events specified in both t3 and t2
- e33 a set of events specified only in t3
- R3 = R311 U R312 U R313 U R321 U R322 U R323 U R331 U R332 U R333, where
- R311 ⊆ e31 x e31
- R312 ⊆ e31 x e32
- R313 ⊆ e31 x e33
- R321 ⊆ e32 x e31
- R322 ⊆ e32 x e32
- R323 ⊆ e32 x e33
- R331 ⊆ e33 x e31
- R332 ⊆ e33 x e32
- R333 ⊆ e33 x e33
Notice that
- e12 = e21
- e13 = e31
- e23 = e32
- R122 = R211
- R133 = R311
- R233 = R322
Note that if there is no interaction between two components, then their corresponding sets and relations are empty. For example, if there is no interaction between A and C, then
- e13 = {},
- e31 = {},
- R113 = {},
- R123 = {},
- R131 = {},
- R132 = {},
- R133 = {},
- R311 = {},
- R312 = {},
- R313 = {},
- R321 = {} and
- R331 = {}
The approach creates the test control for the generated test model and builds its behavior by merging the behavior of the test controls of the given test models, which we call tci.
A.2 Commutativity
To demonstrate the commutativity of our approach for any two components, say A and B, we should demonstrate that the integration of their component test cases, t1 and t2, respectively, generates equivalent behaviors independently of the integration order, (A + B) or (B + A). That means the merge of the two test cases must be equivalent in both orders:

- merge(t1, t2) ≡ merge(t2, t1) (A.1)
Using Definitions 3 and 7, we get
- (g(I1) U g(I2), f(E1) U f(E2), f(R1) U f(R2)) = (g(I2) U g(I1), f(E2) U f(E1), f(R2) U f(R1))
Hence, to validate Eq. (A.1), we need to show that

- g(I1) U g(I2) = g(I2) U g(I1) (A.2)
- f(E1) U f(E2) = f(E2) U f(E1) (A.3)
- f(R1) U f(R2) = f(R2) U f(R1) (A.4)
Let us evaluate the left side of Eq. (A.2) first by substituting the values of I1 and I2 and using our definition of equivalence (Definition A.1).
- g(I1) U g(I2) = g({tc1, a}) U g({tc2, b})
Then, we apply the g() function:
- g(I1) U g(I2) = {tci, a} U {tci, b}
Then, we apply the union operator:
- g(I1) U g(I2) = {tci, a, b}
Next, we perform the same sequence on the right side of Eq. (A.2)
- g(I2) U g(I1) = g({tc2, b}) U g({tc1, a})
- = {tci, b} U {tci, a}
- = {tci, b, a}
The two sides are equivalent. Thus, we say Eq. (A.2) is true. We take the same evaluation approach with Eq. (A.3). First, we evaluate the left side of Eq. (A.3).
- f(E1) U f(E2) = f(e11 U e12 U e13) U f(e21 U e22 U e23)
Since e12 = e21, the f() function replaces e21 with e12
- f(E1) U f(E2) = e11 U e12 U e13 U e12 U e22 U e23
- = e11 U e12 U e13 U e22 U e23
Then, we evaluate the right side of Eq. (A.3)
- f(E2) U f(E1) = f(e21 U e22 U e23) U f(e11 U e12 U e13)
Since e12 = e21, the f() function replaces e21 with e12
- f(E2) U f(E1) = e12 U e22 U e23 U e11 U e12 U e13
- = e12 U e22 U e23 U e11 U e13
Hence, the two sides are equivalent, which proves that Eq. (A.3) is true as well. The same evaluation approach is applied to Eq. (A.4). We take the left side of the equation first
- f(R1) U f(R2) = f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233)
Since R122 = R211, the f() function replaces R211 with R122
- f(R1) U f(R2) = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233
- = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233
The next step is to evaluate the right side of Eq. (A.4)
- f(R2) U f(R1) = f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133)
- = R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133
- = R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R111 U R112 U R113 U R121 U R123 U R131 U R132 U R133
The results of both sides of Eq. (A.4) are equivalent. Since Eqs. (A.2), (A.3), and (A.4) hold, Eq. (A.1) holds as well. Hence, the commutativity property of the integration approach is proven.
A.3 Associativity
To demonstrate the associativity of the integration approach for any three components, A, B, and C, we need to demonstrate that:

- merge(t1, merge(t2, t3)) ≡ merge(merge(t1, t2), t3) (A.5)
Using Definitions 3 and 7, we can refactor Eq. (A.5) as follows:

- g(I1) U (g(I2) U g(I3)) = (g(I1) U g(I2)) U g(I3) (A.6)
- f(E1) U (f(E2) U f(E3)) = (f(E1) U f(E2)) U f(E3) (A.7)
- f(R1) U (f(R2) U f(R3)) = (f(R1) U f(R2)) U f(R3) (A.8)
Hence, we have to prove that Eqs. (A.6), (A.7), and (A.8) are satisfied. Let us start by examining Eq. (A.6). First, we evaluate the left side of the equation.
- g(I1) U (g(I2) U g(I3)) = g({tc1, a}) U (g({tc2, b}) U g({tc3, c}))
Then, we apply g()
- = {tci, a} U ({tci, b} U {tci, c})
- = {tci, a} U {tci, b, c}
- = {tci, a, b, c}
Then, we take the right side of Eq. (A.6)
- (g(I1) U g(I2)) U g(I3) = (g({tc1, a}) U g({tc2, b})) U g({tc3, c})
- = ({tci, a} U {tci, b}) U {tci, c}
- = {tci, a, b} U {tci, c}
- = {tci, a, b, c}
The two sides are equal. Thus, we can say Eq. (A.6) is true. We use the same evaluation approach for Eq. (A.7). First, we evaluate the left side of Eq. (A.7).
- f(E1) U (f(E2) U f(E3)) = f(e11 U e12 U e13) U (f(e21 U e22 U e23) U f(e31 U e32 U e33))
Then, we apply f(), which replaces the following sets
- e12 = e21,
- e13 = e31, and
- e23 = e32.
- f(E1) U (f(E2) U f(E3)) = (e11 U e12 U e13) U ((e12 U e22 U e23) U (e13 U e23 U e33))
- = (e11 U e12 U e13) U (e12 U e22 U e23 U e13 U e33)
- = e11 U e12 U e13 U e22 U e23 U e33.
Then, we evaluate the right side of Eq. (A.7).
- (f(E1) U f(E2)) U f(E3) = (f(e11 U e12 U e13) U f(e21 U e22 U e23)) U f(e31 U e32 U e33)
- = ((e11 U e12 U e13) U (e12 U e22 U e23)) U (e13 U e23 U e33)
- = (e11 U e12 U e13 U e22 U e23) U (e13 U e23 U e33)
- = e11 U e12 U e13 U e22 U e23 U e33.
Therefore, the two sides are equal, and that proves that Eq. (A.7) is satisfied. The same evaluation approach is used for Eq. (A.8). We take the left side of the equation first.
- f(R1) U (f(R2) U f(R3)) = f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U (f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U f(R311 U R312 U R313 U R321 U R322 U R323 U R331 U R332 U R333))
Then, we apply f(), which replaces the following relations
- R122 = R211,
- R133 = R311, and
- R233 = R322.
- f(R1) U (f(R2) U f(R3)) = (R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U ((R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U (R133 U R312 U R313 U R321 U R233 U R323 U R331 U R332 U R333))
- = (R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U (R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R133 U R312 U R313 U R321 U R323 U R331 U R332 U R333)
- = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R312 U R313 U R321 U R323 U R331 U R332 U R333.
The next step is to evaluate the right side of Eq. (A.8).
- (f(R1) U f(R2)) U f(R3) = (f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233)) U f(R311 U R312 U R313 U R321 U R322 U R323 U R331 U R332 U R333).
Then, we apply f()
- = ((R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U (R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233)) U (R133 U R312 U R313 U R321 U R233 U R323 U R331 U R332 U R333)
- = (R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U (R133 U R312 U R313 U R321 U R233 U R323 U R331 U R332 U R333)
- = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R312 U R313 U R321 U R323 U R331 U R332 U R333.
The results of both sides of Eq. (A.8) are equal. Since Eqs. (A.6), (A.7), and (A.8) are satisfied, Eq. (A.5) holds. Hence, the associativity of the integration approach is proven.
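Both properties can also be spot-checked executably, under the simplifying assumptions of this appendix: shared events and relations are represented by the same objects (e12 = e21, etc.), so f() reduces to the identity and only g() (renaming every test control to the common tci) does real work. The set contents below are illustrative.

```python
TEST_CONTROLS = {"tc1", "tc2", "tc3"}

def g(instances):
    """Rename all test-control instances to the merged control tci."""
    return frozenset("tci" if i in TEST_CONTROLS else i for i in instances)

def merge(t, u):
    """Merge over the (I, E, R) tuples used above; f() is the identity here."""
    (I1, E1, R1), (I2, E2, R2) = t, u
    return (g(I1) | g(I2), E1 | E2, R1 | R2)

# Shared event sets between each pair of component test cases.
e12, e13, e23 = frozenset({"m12"}), frozenset({"m13"}), frozenset({"m23"})
t1 = (frozenset({"tc1", "a"}), frozenset({"m11"}) | e12 | e13, frozenset())
t2 = (frozenset({"tc2", "b"}), frozenset({"m22"}) | e12 | e23, frozenset())
t3 = (frozenset({"tc3", "c"}), frozenset({"m33"}) | e13 | e23, frozenset())

assert merge(t1, t2) == merge(t2, t1)                        # commutativity
assert merge(t1, merge(t2, t3)) == merge(merge(t1, t2), t3)  # associativity
```

Both properties follow directly from the commutativity and associativity of set union once shared elements coincide, which is exactly what the derivations above establish.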