Chapter Two

Model-Based Test Cases Reuse and Optimization

Mohamed Mussa; Ferhat Khendek    Electrical and Computer Engineering, Concordia University, Montréal, QC, Canada

Abstract

Several test generation techniques have been proposed in the literature. These techniques typically target specific levels of testing separately, without relating them to each other, which prevents the avoidance of redundancy and limits reuse and optimization. In this chapter, we look into connecting the different levels of testing. We propose a model-based testing framework that enables reusability and optimization across testing levels. Test cases at one level are reused to generate test cases for subsequent levels of testing. Furthermore, test cases at one level are optimized by relating them to test cases of preceding testing levels and are removed if they are found redundant.

Keywords

Model-based testing; Component testing; Integration testing; Test cases reuse; Acceptance test optimization

1 Introduction

Testing aims at enhancing the quality of software products. Software goes through different levels of testing; the main ones are unit, component, integration, system, and acceptance testing. For each level, engineers plan the test, design test cases, exercise the test cases on the target implementation, and evaluate the results. Different test approaches have been developed over the last three decades. They usually target specific levels of testing. The lack of clear and systematic connections between the software testing levels is a noteworthy problem in software testing [1,2].

The investigation of this issue is the main objective of this chapter. We propose a model-based setting to tackle this problem. Model-driven engineering (MDE) [3] is gaining in maturity and popularity. The introduction of model-based testing (MBT) [4,5] has been an important advance in software testing. Several MBT approaches covering a wide spectrum of modeling languages and software domains, such as embedded systems and telecommunications [6], have been proposed. The Unified Modeling Language (UML) [7] is now widely accepted. Recently, the Object Management Group (OMG) [8] standardized the UML Testing Profile (UTP) [9]. The profile enables the expression of test concepts in UML models in order to create precise UML test models. The literature shows a growing interest in UTP-based approaches [1,10–15]. However, the focus is still on MBT approaches for specific levels of testing [10–18].

In this chapter, we propose a model-based software testing framework to connect testing levels and enable reusability and optimization across the different testing levels [19–22]. The framework is based on UTP. It is composed of two approaches, one for test generation and one for test optimization. The test generation approach generates test models for a target testing level by reusing test models from the previous level. We elaborate on and discuss the specific case of the generation of integration test models from component test models. For this purpose, we investigate the merging of test cases. While there has been a lot of research on merging architectural models, only a little has been done on merging behavioral models [23–27]. The second approach optimizes test models by relating them to test models that have already been executed in the preceding testing levels. It aims at reducing test execution time without compromising quality. For this purpose, we develop a model comparison process specific to test cases. We elaborate on and discuss the optimization of acceptance test models by relating them to integration test models. Our framework has been implemented, and we have experimented with several case studies. In this chapter, we illustrate our approaches with a library management system case study and discuss the results.

The rest of this chapter is structured as follows. Section 2 presents an overview of our MBT framework. We discuss the test generation approach in Section 3 and the test optimization approach in Section 4. In Section 5, we discuss a case study and its results. We review related work in Section 6 before concluding in Section 7.

2 Overall MBT Framework

In this section, we introduce our overall MBT framework for linking testing levels and enabling reusability and optimization. We also provide formal definitions for some concepts used throughout the chapter.

2.1 Overall Framework

The framework enables reusability across testing levels during test design and enhances test execution through the optimization of test models. To conduct our research in rigorous settings, we use sequence diagrams, which have been formally investigated in [28–30], to model and design our test behaviors. Although UTP is used as the language for the framework, it can be replaced by any other language supporting test model description. Fig. 1 shows our overall MBT framework. The framework consists of two approaches: a test generation approach and a test optimization approach. The generation approach links

  •  the component-level testing to the integration-level testing and
  •  the component-level testing to the system-level testing.
Fig. 1
Fig. 1 Overall framework.

Component testing is black-box testing; tests are exercised on components through their interfaces. These interfaces can be internal, to communicate with other system components, or external, to communicate with the system environment, as shown in Fig. 2A. Hence, the same interface can be exercised by several test cases from different component test models. Each test case takes a different perspective on the same interface, as shown in Fig. 2B. Therefore, we can generate integration test cases from component test cases that examine internal interfaces and generate system test cases from component test cases that examine external interfaces.

Fig. 2
Fig. 2 Component interfaces: (A) interfaces of system components and (B) different views of the same interface.

In this chapter, we discuss the generation of integration test models from component test models. In our work, a component is defined as a self-coherent piece of software that provides one or more services and can interact with other components. In this framework, we also assume that component test models are generated from the design models using existing approaches such as [10–15].

As shown in Fig. 1, the framework enables also the optimization of test models by mapping them to the previously executed test models. The optimization approach links

  •  the integration-level testing to the system-level testing,
  •  the integration-level testing to the acceptance-level testing, and
  •  the system-level testing to the acceptance-level testing.

Our framework is based on the idea of overlapping test cases. Test models are composed of a set of test cases. Test cases capture the test behavior that is exercised on the target implementation. Test behavior in general reflects the expected behavior of the implementation under test. We believe that the “collective” behavior of all test cases at any testing level captures the system behavior. In practice, some research activities migrate system behavior across different development stages using test cases, since test cases are finite and precise compared to the system design models [31]. Therefore, we can intuitively conclude that test cases from different testing levels that examine the same portion of the system behavior are redundant if they meet the same test requirements of the subsequent testing level. We propose a test optimization approach that optimizes acceptance test cases by relating them to system and/or integration test cases, and optimizes system test cases by relating them to integration test cases. In this chapter (Section 4), we discuss the optimization of acceptance test cases by avoiding redundancy with integration test cases.

We assume component test cases have the following characteristics:

  •  they are complete and cover all component interfaces,
  •  each test case for a component covers at least one service provided by the component, and
  •  there is consistency between the component test models since they describe different components of the same system. The names of the components, interfaces, and messages are used in a consistent manner in test models.

In the proposed approaches, we will compare test models. Comparing two test models that have been built by different engineers with different views is a challenging task. It requires identifying the similarities and differences among the elements of the two test models and reconciling inconsistencies between them. In the discipline of model comparison, there are two methods: three-way merging and two-way merging [32]. In addition to the merged models, three-way merging requires the existence of the base model or of change logs. It compares each model to the base model to identify the changes of that model with respect to the base model. These changes can be classified into three categories: added, deleted, and/or modified. Based on this information, the method merges the models to generate a new base model. Compared to three-way merging, two-way merging is harder since it does not rely on any additional information beside the merged models. However, the two methods share the same assumption: that the merged models have evolved from the same base model. Researchers build their approaches around certain model features, such as universally unique identifiers (UUIDs), to calculate the similarities and differences between the elements of their models. While the process of identifying similarities and differences between the models and detecting conflicts can be automated, reconciling conflicts requires user interaction [32–34].

In the testing domain, component test models are usually generated independently from the corresponding design specifications. Hence, the assumption that the merged models evolved from the same source does not hold in this domain. However, we can benefit from the characteristics of our domain since we are not developing a general merging approach. Test cases describe partial behavior of the system under test (SUT). They actually represent a partial view of the SUT that is the focus of the test designer. Thus, different test cases may describe the same system behavior from different angles. We focus on such test cases to build our integration approach. Our approach follows two-way merging. However, we do not assume that the test models evolved from the same source. We assume that test cases overlap since they describe the same system from different angles. Therefore, test models share elements of the system that can be identified through their names and attributes.

2.2 Some Definitions

We define here some concepts used throughout this chapter.

Definition 1

(Test Model)

A test model is expressed as a tuple M = (P, T), where

  •  P is the test package and
  •  T is a set of test cases.

Definition 2

(Test Package)

A test package is expressed as a tuple P = (tcn, tcm, sut), where

  •  tcn is the test control,
  •  tcm is a set of test components required to realize the test execution (test stubs), and
  •  sut is a set of components under test.

Definition 3

(Test Case)

A test case is expressed as a tuple t = (I, E, R), where

  •  I is a set of instances,
  •  E is a set of events (defined further in Definition 4), and
  •  R ⊆ (E × E) is a partial order reflecting the transitive closure of the order relation between events on the same axis and the sending and receiving events of the same message.

We categorize events into three categories: message events, time events, and miscellaneous events. Message events, the sending event and the receiving event, represent the two ends of messages exchanged between two instances referred to as the sender and the receiver, respectively. In this chapter, messages are instances of an execution trace. Hence, they are unique throughout a single system execution. Time events represent events related to timers. Each timer is associated with one instance. We classify the remaining event types, such as instance termination and UTP verdict, into the third category. Notice that the association between events and instances is part of the event definition.

Definition 4

(Event)

  1. A message event Emsg is a tuple (ty, nm, owner, msg, oIns), where
    a. ty ∈ {send, receive},
    b. nm is the event name,
    c. owner is the instance the event belongs to, owner = (nm, st), where
      i. nm is the instance name and
      ii. st is the UTP stereotype of the instance,
    d. msg is the message the event is related to,
    e. oIns is the other instance related to msg, oIns = (nm, st), where
      i. nm is the instance name and
      ii. st is the UTP stereotype of the instance.
  2. A time-related event Etime is a tuple (ty, nm, tm, owner, pd), where
    a. ty ∈ {timeOutMessage, startTimerAction, stopTimerAction, readTimerAction, timerRunningAction},
    b. nm is the event name,
    c. tm is the timer name,
    d. owner is the instance the event belongs to, owner = (nm, st), where
      i. nm is the instance name and
      ii. st is the UTP stereotype of the instance,
    e. pd is the timer value.
  3. A miscellaneous event Emisc is a tuple (ty, nm, v, owner), where
    a. ty ∈ {Action, Terminate, UTPverdict},
    b. nm is the event name,
    c. v is the value associated with the event (this value can be pass, fail, inconclusive, or error in case ty = UTPverdict),
    d. owner is the instance the event belongs to, owner = (nm, st), where
      i. nm is the instance name and
      ii. st is the UTP stereotype of the instance.

We use the test model specified in Fig. 3 to illustrate our definitions. The test model is composed of a test package, P, which represents the test architecture, and two test cases, t1 and t2, which represent the test behavior. To distinguish between the sending and receiving events of the same message, we suffix the message name with the first letter of the corresponding action. We represent this test model, M, as follows:

  • M = (P, T), with
  • P = (TC, ∅, {CUT}),
  • T = {t1, t2},
  • t1 = ({tc,cut}, {m1s, m2r, m3s, m4r, ver, m1r, m2s, m3r, m4s}, {(m1s,m2r),(m2r,m3s), (m3s,m4r),(m4r,ver),(m2s,m3r),(m3r,m4s),(m1s,m1r),(m2s,m2r),(m3s,m3r),(m4s,m4r),(m1s,m3s),(m2r,m4r),(m2r,m3r),(m3s,ver),(m2s,m4s),(m3r,m4r),(m2s,m3s), (m3s,m4s),(m4s,ver),(m1s,m4r),(m1s,m3r),(m2r,ver),(m2r,m4s),(m2s,m4r), (m3r,ver),(m1s,ver), (m1s,m4s),(m2s,ver)})
    • tc = (“tc”, TestContext),
    • cut = (“cut”, SUT),
    • m1s = (send, “m1s”, tc, m1, cut),
    • m2r = (receive, “m2r”, tc, m2, cut),
    • m3s = (send, “m3s”, tc, m3, cut),
    • m4r = (receive, “m4r”, tc, m4, cut),
    • ver = (UTPverdict, “ver”, “pass”, tc),
    • m1r = (receive, “m1r”, cut, m1, tc),
    • m2s = (send, “m2s”, cut, m2, tc),
    • m3r = (receive, “m3r”, cut, m3, tc),
    • m4s = (send, “m4s”, cut, m4, tc).
  • t2 = ({tc,cut}, {m5s, m6r, m7r, ver, m5r, m6s, m7s}, {(m5s,m6r),(m5s,m7r),(m6r,ver), (m7r,ver),(m5r,m7s),(m5s,m5r),(m6s,m6r),(m7s,m7r),(m5s,ver),(m5r,m7r),(m5s,m7s), (m6s,ver),(m7s,ver),(m5r,ver)}),
    • tc = (“tc”, TestContext),
    • cut = (“cut”, SUT),
    • m5s = (send, “m5s”, tc, m5, cut),
    • m6r = (receive, “m6r”, tc, m6, cut),
    • m7r = (receive, “m7r”, tc, m7, cut),
    • ver = (UTPverdict, “ver”, “pass”, tc),
    • m5r = (receive, “m5r”, cut, m5, tc),
    • m6s = (send, “m6s”, cut, m6, tc),
    • m7s = (send, “m7s”, cut, m7, tc).
Fig. 3
Fig. 3 Example of test model (M).
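To make these definitions concrete, the following sketch shows one possible Python encoding of Definitions 1–4 and of a small fragment of test case t1 above. The class and field names are illustrative assumptions on our part, not part of the chapter's tooling.

```python
# Illustrative Python encoding of Definitions 1-4 (names and types are assumptions).
from dataclasses import dataclass
from typing import Set, Tuple

@dataclass(frozen=True)
class Instance:
    nm: str   # instance name
    st: str   # UTP stereotype: "TestContext", "SUT", "TestComponent", ...

@dataclass(frozen=True)
class MessageEvent:
    ty: str           # "send" or "receive"
    nm: str           # event name
    owner: Instance   # instance the event belongs to
    msg: str          # message the event is related to
    oIns: Instance    # the other instance related to msg

@dataclass(frozen=True)
class TimeEvent:
    ty: str           # e.g. "startTimerAction"
    nm: str           # event name
    tm: str           # timer name
    owner: Instance
    pd: float         # timer value

@dataclass(frozen=True)
class MiscEvent:
    ty: str           # "Action", "Terminate" or "UTPverdict"
    nm: str           # event name
    v: str            # associated value, e.g. "pass" for a UTPverdict
    owner: Instance

@dataclass
class TestCase:
    I: Set[Instance]                 # instances
    E: Set[object]                   # events
    R: Set[Tuple[object, object]]    # partial order over events

@dataclass
class TestModel:
    P: tuple                         # test package (tcn, tcm, sut)
    T: list                          # test cases

# A small fragment of test case t1 encoded with these classes.
tc = Instance("tc", "TestContext")
cut = Instance("cut", "SUT")
m1s = MessageEvent("send", "m1s", tc, "m1", cut)
m1r = MessageEvent("receive", "m1r", cut, "m1", tc)
t1_fragment = TestCase(I={tc, cut}, E={m1s, m1r}, R={(m1s, m1r)})
```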

3 Integration Test Generation

By definition, integration-level testing puts the emphasis on the interactions between the involved components. Hence, integration test cases can be generated from the component test cases that capture such interactions. These interactions can be direct between components or indirect through mediators. However, not all component test cases capture such interactions. Therefore, we need to analyze the available component test cases to identify and select the ones that capture interactions between the components.

The integration test generation approach supports incremental integration strategies such as bottom-up, top-down, or ad hoc. With such strategies, system integration is a recursive process that integrates components one by one until reaching the complete system. Our test generation approach supports this recursive process. Component test models are integrated incrementally to generate the integration test model for the current iteration, as shown in Fig. 4. Eventually, the generated test model will be integrated with the component test model of the next integrated component to generate the next integration test model, and so on. In the presence of complex mediators (test stubs) that are underspecified in the test behavior, a configuration model should be provided in order to reveal the behavior of the mediators, i.e., to relate the mediators' outputs to their corresponding inputs as specified in the test behavior.

Fig. 4
Fig. 4 Overall integration test generation approach.

The approach is composed of four processes as shown in Fig. 5. The first two processes analyze the given test models to detect interactions between the integrated system components. The last two processes generate and optimize the output model. We elaborate more on these processes in the following subsections.

Fig. 5
Fig. 5 Different processes (steps) of the test generation approach.

3.1 Identification Process

As mentioned earlier, we adopted two-way merging. In order to generate integration test cases from component test cases, we have to inspect the component test cases and select the ones that contain integration test scenarios. In order to inspect such test cases, we need to recognize the identities of the specified test objects. In this identification process, we aim at locating the declaration of one of the integrated components in the test model of the other integrated component, or the existence of a shared test object that is specified in both test models. Test objects can be classified into three kinds: test control, implementation under test (IUT), and test stub. The IUT can be an SUT, a component under test (CUT), or any fragment of software under test. Using UTP stereotypes, the approach can easily recognize the test objects specified in the input test models, as shown in Table 1.

Table 1

UTP Stereotypes for Identifying Test Objects
UTP Stereotype | Test Objects Tagged to
«TestContext» | Test controls
«SUT» | System/Component under test
«TestComponent» | Test stubs, test environment

However, the identification is not always straightforward. With the exception of the CUT, test objects can emulate the behavior of more than one system component and/or the system environment. The most commonly used pattern for test cases is composed of two test objects, the IUT and the test control. In this pattern, the test control emulates the test environment in addition to controlling the test case. In other words, the test control embeds the behavior of any system component and/or environment that is required to realize the test execution. Hence, the approach has to investigate the behavior of test objects stereotyped «TestContext» or «TestComponent» to reveal the identity of the CUTs or shared test objects that may be embedded within these test objects. In order to achieve this, our approach maps the behavior of the test objects of one test model to the behavior of known test objects in the other test model. However, UTP stereotypes can be applied only to the test architecture. Up to UTP version 1.2, the behavior part has been left out of the UTP metamodel [10]. We have to rely on the UML specification to reveal the relations between the UML elements in the test architecture, expressed using UML class diagrams, and the UML elements in the test behavior, expressed using UML sequence diagrams. Furthermore, we have two exceptions. First, there is no comparison between two test controls since both of them are unknown. Second, there is no comparison with test objects that are specified in both test models.
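As a rough illustration of this identification step, the sketch below checks, from the stereotype information of Table 1, whether the component under test of one model is declared as a test stub in the other model, or whether both models declare a shared test component. The dictionary layout and the names (Comp1, Comp2, TC1, TC2) are assumptions for illustration only, not the framework's implementation.

```python
def identify(pkg_a, pkg_b):
    """Each package is a dict such as
    {"sut": {"Comp1"}, "stubs": {"Comp2"}, "control": "TC1"} (names are assumed)."""
    return {
        # CUT of one model declared as a test stub («TestComponent») of the other
        "a_emulated_in_b": pkg_a["sut"] & pkg_b["stubs"],
        "b_emulated_in_a": pkg_b["sut"] & pkg_a["stubs"],
        # test objects explicitly specified in both test models
        "shared_stubs": pkg_a["stubs"] & pkg_b["stubs"],
    }

pkg1 = {"sut": {"Comp1"}, "stubs": {"Comp2"}, "control": "TC1"}
pkg2 = {"sut": {"Comp2"}, "stubs": set(), "control": "TC2"}
print(identify(pkg1, pkg2))   # Comp2 is emulated as a stub in the first model
```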

3.2 Test Case Selection

Based on the results of the identification process, the selection process analyzes the test cases in both test models and selects the ones that capture interactions between the integrated components. We investigated two patterns for such test cases. The first pattern comprises individual test cases while the second pattern comprises two test cases, one from each test model.

In the first pattern, we look for an individual test case that specifies both of the integrated components. One component is specified as CUT and the other is specified as a test stub or embedded in the behavior of a test object. In other words, we look for test cases that emulate the system component of the other test model. Furthermore, there must be an interaction between the integrated components captured by the selected test case with at least one exchanged message. In this case, we select such test cases to generate integration test cases, and we refer to such pattern as a complete integration test scenario. Fig. 6A illustrates such a pattern, where the other component Comp2 is specified as a test stub. The figure shows two component test cases, one from each test model. By examining each test case individually, we cannot conclude that any one of them captures an integration test case. However, by mapping the two test cases using the information gathered from the identification process, we can make certain observations. The first observation is that the two test cases capture the same test scenario. The second observation is that the component of the second model, Comp2, is represented in the first model test case as a test stub. Hence, we conclude that component Comp2 is emulated by the test case of the first model. The third observation is that there is an interaction between the integrated components, Comp1 and Comp2, by exchanging messages m2 and m3. Therefore, we conclude that the test case of the first test model captures a complete integration test scenario. Furthermore, we can observe that the test control TC2 emulates the behavior of the component Comp1. Hence, we can reclaim this behavior and initiate a new instance for Comp1 and select the test case as a complete integration test scenario too. However, one of the generated test cases will be removed later by the redundancy checking process.

Fig. 6
Fig. 6 Patterns of integration test scenarios: (A) Pattern 1: complete integration test scenario and (B) Pattern 2: complement integration test scenario.

In the second pattern, we investigate the existence of integration test scenarios that are split across two component test cases. Each part of such a scenario is captured by one of the test cases of the two test models. The scenario must represent an interaction among the integrated components. This interaction can be direct or indirect through other test objects. These test objects can be other system components that have not been integrated yet or the system environment, as in client/server applications. Fig. 6B illustrates this pattern. In this example, the integration is applied on components Comp1 and Comp3. There is one shared test object, which is explicitly specified in both test cases. In addition, test object Comp4 is explicitly specified by an instance in one test case and implicitly specified in the other test case as partially emulated by the test control TC1. The next step in our process is to examine the existence of an interaction between the integrated components with at least one exchanged message. The two test cases are selected to generate an integration test case if an interaction is detected between the integrated components.

3.2.1 Interaction Detection

In order to detect interactions between the integrated components, we build the event dependency tree (EDT) as shown in Fig. 7.

Fig. 7
Fig. 7 Event dependency tree (EDT).

The EDT represents the order relation between the events of the involved test cases. Each node represents an event. As a naming convention, the event name is composed of the message name followed by the first letter of the action name, send or receive. We build the EDT from the given test cases. The approach builds the EDT in two or three steps, depending on the selection pattern. In the first step, it creates an EDT for each instance lifeline. Next, it merges the EDTs of the same test case by linking the nodes of the corresponding sending/receiving events of the same message. The process proceeds to the third step only in the case of the second selection pattern. In the third step, the process merges the two EDTs of the involved test cases by matching the shared events of the two test cases. Event matching, depending on the event type, is done according to Definition 5.
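A simplified sketch of the first two construction steps is shown below, assuming events are given as (name, lifeline, message, action) tuples in trace order; the data layout is an assumption, not the chapter's implementation.

```python
from collections import defaultdict

def build_edt(events):
    """events: ordered list of (event_name, lifeline, message, action) tuples,
    action in {"s", "r"}; returns the dependency tree as adjacency lists."""
    edt = defaultdict(list)
    last_on_lifeline = {}   # step 1: previous event on each instance lifeline
    send_of = {}            # step 2: sending event of each message
    for name, lifeline, message, action in events:
        if lifeline in last_on_lifeline:
            edt[last_on_lifeline[lifeline]].append(name)
        last_on_lifeline[lifeline] = name
        if action == "s":
            send_of[message] = name
        elif message in send_of:
            edt[send_of[message]].append(name)
    return edt

# Hypothetical trace of two lifelines exchanging messages m1 and m2
trace = [("m1s", "TC1", "m1", "s"), ("m1r", "Comp1", "m1", "r"),
         ("m2s", "Comp1", "m2", "s"), ("m2r", "TC1", "m2", "r")]
print(dict(build_edt(trace)))
```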

Definition 5

(Event Matching)

Let e1 and e2 be two events of the same kind from two different instances, then e1 and e2 match (denoted e1 = e2) if and only if:

  1. Matchmsg(e1, e2) = {e1 ∈ Emsg, e2 ∈ Emsg | (e1.ty = e2.ty) ∧ (e1.msg = e2.msg) ∧ ((e1.nm = e2.nm) ∨ (((e1.owner.nm = e2.owner.nm) ∨ (e1.owner.st ≠ SUT) ∨ (e2.owner.st ≠ SUT)) ∧ ((e1.oIns.nm = e2.oIns.nm) ∨ (e1.oIns.st ≠ SUT) ∨ (e2.oIns.st ≠ SUT))))}, or
  2. Matchtime(e1, e2) = {e1 ∈ Etime, e2 ∈ Etime | (e1.ty = e2.ty) ∧ (e1.tm = e2.tm) ∧ (e1.pd = e2.pd) ∧ ((e1.nm = e2.nm) ∨ (e1.owner.nm = e2.owner.nm) ∨ (e1.owner.st ≠ SUT) ∨ (e2.owner.st ≠ SUT))}, or
  3. Matchmisc(e1, e2) = {e1 ∈ Emisc, e2 ∈ Emisc | (e1.ty = e2.ty) ∧ (e1.v = e2.v) ∧ ((e1.nm = e2.nm) ∨ (e1.owner.nm = e2.owner.nm) ∨ (e1.owner.st ≠ SUT) ∨ (e2.owner.st ≠ SUT))}.
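The predicates below are a hedged transcription of the message and miscellaneous cases of Definition 5, following the reading reconstructed above; the Python field names follow the earlier dataclass sketch and are assumptions.

```python
def match_msg(e1, e2):
    """Message-event matching per the reconstructed Definition 5, item 1."""
    same_owner = (e1.owner.nm == e2.owner.nm or
                  e1.owner.st != "SUT" or e2.owner.st != "SUT")
    same_other = (e1.oIns.nm == e2.oIns.nm or
                  e1.oIns.st != "SUT" or e2.oIns.st != "SUT")
    return (e1.ty == e2.ty and e1.msg == e2.msg and
            (e1.nm == e2.nm or (same_owner and same_other)))

def match_misc(e1, e2):
    """Miscellaneous-event matching per Definition 5, item 3."""
    return (e1.ty == e2.ty and e1.v == e2.v and
            (e1.nm == e2.nm or e1.owner.nm == e2.owner.nm or
             e1.owner.st != "SUT" or e2.owner.st != "SUT"))
```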

At the same time, the process takes into account the information gathered during the identification process to match instances that are syntactically different but where one emulates the other, e.g., TC1 and Comp4. The process examines two characteristics of the EDT. The first characteristic is the existence of an overlap between the EDTs of the two test cases in the final EDT. This characteristic is related to the second selection pattern and is evaluated during the third step based on whether shared events exist. Fig. 7 shows the EDT of the two test cases in Fig. 6B. The EDT of the first test case, surrounded by a dotted rectangle, overlaps completely with the EDT of the second test case, i.e., all of the events of the first test case are shared events. The second characteristic is the existence of interactions between the two CUTs. This characteristic is checked by:

  1. locating a node that represents a sending event of one of the integrated components, then
  2. searching the branches of such a node to locate a node that represents a receiving event of the other integrated component.

The process repeats these steps until a sending event and a receiving event located on the same path are found. In Fig. 7, there are two traces that satisfy this characteristic: (m2s, m4r) and (m5s, m7r). Therefore, the two component test cases are selected to generate integration test cases.
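A possible implementation of this check is sketched below: it walks the EDT from each sending event owned by one integrated component and looks for a reachable receiving event owned by the other, relying on the s/r suffix convention introduced earlier. The adjacency-list and ownership structures are assumptions for illustration.

```python
def detects_interaction(edt, owner_of, comp_a, comp_b):
    """edt: adjacency lists {event: [successor events]}; owner_of: {event: component};
    True if a sending event of one integrated component can reach a receiving event
    of the other along the tree (event names end in 's' or 'r' by convention)."""
    def reaches_receive_of(start, target):
        stack, seen = list(edt.get(start, [])), set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if owner_of.get(node) == target and node.endswith("r"):
                return True
            stack.extend(edt.get(node, []))
        return False

    for event, owner in owner_of.items():
        if event.endswith("s") and owner in (comp_a, comp_b):
            other = comp_b if owner == comp_a else comp_a
            if reaches_receive_of(event, other):
                return True
    return False
```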

The process selects the involved test cases if the two characteristics are satisfied. Otherwise, it proceeds by examining other test cases from the given test models. The approach stops the current integration test generation if it does not select any test cases from the given test models.

3.3 Test Behavior Generation

In this chapter, we first generate the test behavior, then we construct the test architecture. The process generates integration test cases corresponding to the two selection patterns of test cases that have been selected by the previous process.

In the first pattern, test objects of the selected test cases represent the integrated component of the other test model. These test objects can be test stubs or test controls. Furthermore, their instances can represent exclusively the integrated component or can have additional test behavior to emulate other entities and/or provide test control. Hence, we have two different scenarios to handle in this pattern. In the first scenario, where the instance of the test object represents exclusively the integrated component, the process generates an integration test case by relating the instance to the integrated component. To illustrate this scenario, let us consider the example in Fig. 8 where the first integration test case is generated from the component test case of the first test model in Fig. 6A. In the second scenario, where the instance of the test object partially represents the integrated component, the process generates an integration test case by creating a new instance that represents the integrated component and relocating the corresponding events to it. To illustrate this scenario, the second integration test case in Fig. 8 is generated from the component test case of the second test model in Fig. 6A. As one can notice, the two integration test cases are identical. This is because the two component test cases capture the same test scenario. This redundancy is managed in the next step.

Fig. 8
Fig. 8 Generated integration test cases: (A) integration test case 1 and (B) integration test case 2.

In the second pattern, pairs of test cases, one from each test model, capture an integration test scenario. The two test cases in each pair share test behavior and test objects. The process merges each pair of test cases to build integration test cases. During the merging, we have to align and merge the shared test behavior of identical instances, which are specified in both test cases.

Definition 6

(Shared Events)

Let E1 and E2 be two sets of events of two different test cases. The set of shared events, se, is defined as follows:

  • se = {(e1,e2): e1 ∈ E1 and e2 ∈ E2 | e1 = e2}

The process creates a new instance for each integration test case to represent the integration test control, and its behavior will be the sum of the behavior of the given test controls. At the same time, we have to maintain the specification of both test cases; e.g., if one test case specifies n instances of a test object and the other test case specifies m instances of the same test object, then the approach merges min(n, m) instances that define shared behavior. The merging operator is defined as follows:

Definition 7

(Merging Test Cases)

Let t1 = (I1, E1, R1) and t2 = (I2, E2, R2) be two test cases and se12 be the corresponding set of shared events. The generated integration test case is defined as follows:

  • t12 = t1 + t2
    • = (g(I1) U g(I2), f(E1) U f(E2), f(R1) U f(R2))

where

  • g(I): {i: i ∈ I and ∀ i if i.st = TestContext, then i = tci}.
  • The function transforms component test controls to integration test control.
  • f(E): {e: e ∈ E and if (e1, e2) ∈ se12 and e = e1 then e = e2}.
  • The function replaces the first element of a pair in the shared events with the second element to eliminate the duplication of identical events. In other words, it relocates emulated events to their corresponding test objects.
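The sketch below illustrates Definition 7 on a reduced representation in which instances are (name, stereotype) pairs and events are plain identifiers; se12 lists the shared-event pairs obtained per Definition 6. It is an illustrative reading of the operator under these assumptions, not the framework's implementation.

```python
def merge(t1, t2, se12, tci=("tci", "TestContext")):
    """t1, t2: (instances, events, relation); se12: list of (e1, e2) shared-event pairs."""
    I1, E1, R1 = t1
    I2, E2, R2 = t2
    replace = dict(se12)                 # e1 -> e2 for shared events (Definition 6)

    def g(instances):
        # component test controls are collapsed into the integration test control tci
        return {tci if st == "TestContext" else (nm, st) for nm, st in instances}

    def f_events(events):
        # a shared event of t1 is replaced by its counterpart in t2
        return {replace.get(e, e) for e in events}

    def f_relation(relation):
        return {(replace.get(a, a), replace.get(b, b)) for a, b in relation}

    return (g(I1) | g(I2), f_events(E1) | f_events(E2), f_relation(R1) | f_relation(R2))
```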

Fig. 9 shows the integration test case generated from the merging of the two component test cases in Fig. 6B. At the end of the generation of the test behavior, the redundancy checking process removes duplicated test cases before the generation of the test architecture.

Fig. 9
Fig. 9 Generated integration test case 3.

3.4 Checking for Redundancy

The generated test behavior may include redundant test cases, which may be produced in two situations. First, the same test scenario is specified in the two given test models, as shown in Fig. 6A. The second situation arises when a test case is selected by both selection patterns. In this case, in addition to the integration test case generated by the merging, the approach generates another integration test case. However, the latter is identical to, or part of, the first generated integration test case. Hence, it should be removed from the generated test model. This case can be explained with the test cases in Fig. 6B. The two test cases contain a shared test object Comp2 with shared behavior. The approach merges the two test cases to generate an integration test case as shown in Fig. 9. On the other hand, the test control of the second test case, TC2, emulates the behavior of the CUT Comp1. Hence, the approach generates an integration test case by adding a new instance for Comp1 and relocating the corresponding events. The generated test case is similar to the one in Fig. 9. The second test case should be removed since it is redundant.

To remove redundancy among the generated integration test cases, we map the test cases against each other. We define test case inclusion as follows.

Definition 8

(Integration Test Case Inclusion)

Let T1 = (I1, E1, R1) be an integration test case and T2 = (I2, E2, R2) be another integration test case, then T1 ⊆ T2 if and only if the following conditions are satisfied:

  1. I1 ⊆ I2
  2. E1 ⊆ E2
  3. R1 ⊆ R2
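Under the same reduced representation (test cases as triples of sets), the redundancy check of this step can be sketched as follows; kept test cases are those not included, per Definition 8, in any other retained test case. This is an assumption-level sketch rather than the tool's actual code.

```python
def included(t1, t2):
    """Definition 8: t1 is included in t2 when its instances, events, and
    order relation are all subsets of those of t2."""
    I1, E1, R1 = t1
    I2, E2, R2 = t2
    return I1 <= I2 and E1 <= E2 and R1 <= R2

def remove_redundant(test_cases):
    kept = []
    for t in test_cases:
        if any(included(t, k) for k in kept):
            continue                                             # t is redundant
        kept = [k for k in kept if not included(k, t)] + [t]     # drop cases t subsumes
        # identical test cases: only the first occurrence is retained
    return kept
```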

3.5 Test Architecture Generation

After generating the test behavior, we build the test architecture. The integration test architecture is created from the specification of the generated integration test behavior. The given test architectures of the component test models are used to relate test objects to their external models, if any. We use the UTP test package in this chapter. Table 2 summarizes the important mappings to generate the test architecture from the test behavior. The generation process traverses the test cases. It goes through the elements of each test case and creates the equivalent elements in the test architecture. Internal references between the elements of the test behavior and the corresponding elements of the test architecture are built. After that, the process compares the generated test objects, UML classes, to their corresponding test objects in the given component test cases. Where a test object has a reference to an external model, the process updates the corresponding generated test object with the same reference. The most important test object is the SUT, which is always externally referenced. Finally, the process adds a reference to the UTP to enable its stereotypes in the generated test model. Fig. 10 shows the generated test architecture for the generated test behavior in Fig. 9.

Table 2

Mapping Test Behavior to Test Structure
Test Behavior | Test Architecture
UML lifeline | UML class
UML message | UML association
UML sequence diagram | UML operation
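The mapping of Table 2 can be sketched as a simple traversal that emits one operation per sequence diagram, one class per lifeline, and one association per exchanged message; the tuple layout used for test cases here is an assumption for illustration.

```python
def generate_architecture(test_cases):
    """test_cases: iterable of (diagram_name, lifelines, messages) where messages
    are (sender, receiver, message_name) triples; layout assumed for illustration."""
    classes, associations, operations = set(), set(), set()
    for diagram_name, lifelines, messages in test_cases:
        operations.add(diagram_name)             # UML sequence diagram -> UML operation
        classes.update(lifelines)                # UML lifeline -> UML class
        for sender, receiver, _msg in messages:  # UML message -> UML association
            associations.add((sender, receiver))
    return {"classes": classes, "associations": associations, "operations": operations}

print(generate_architecture([("tc3", {"TC", "Comp1", "Comp3"},
                              [("TC", "Comp1", "m1"), ("Comp1", "Comp3", "m2")])]))
```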
Fig. 10
Fig. 10 Generated test architecture.

3.6 Some Properties of the Integration Test Generation Approach

During the development of the generation approach, we investigated the impact of the integration strategy on our approach. System components are integrated using different integration strategies, some of which are well known, such as top-down, bottom-up, big-bang, and ad hoc. The overall generated test behavior for the same set of system components must be equivalent regardless of the applied integration strategy. Hence, we have investigated two properties of the generation approach: commutativity and associativity. More details are given in Appendix A.

Furthermore, we have investigated the saving of test information from one integration iteration to the subsequent ones. Usually, there is a single component test model for each system component. The component test model holds all test information regarding the component. There is typically one test case or more for each targeted function. These test cases exercise the system component through its different interfaces. For each system integration, we need a different set of test cases that capture test information related to the interfaces between the currently integrated components. Accordingly, integration test cases capture test information regarding the currently integrated components and neglect test information related to interfaces with system components that have not yet been integrated. Therefore, we need to carry over test information of component test cases that is not captured by the generated integration test cases so that it can be used in subsequent test integrations.

We use the example in Fig. 11 to illustrate this point. The system is composed of four components, which are integrated according to the illustrated integration strategy. Usually, there is a component test model for each component that covers the corresponding component functionality through its interfaces; e.g., the component test model of component A captures test information related to the interfaces ab and ad. The integration goes through three iterations: (A + B), ((A + B) + C), and (((A + B) + C) + D). In the first iteration, the approach analyzes the two test models A and B and uses test information related to the interface ab to generate the integration test model AB. In the second iteration, the approach analyzes the test models AB and C and uses test information related to the interface bc to generate the integration test model ABC. In the last iteration, the approach analyzes the test models ABC and D and uses test information related to the interfaces dc and ad to generate the integration test model ABCD. Here, we may encounter some issues during the second and third iterations. Let us take the second iteration to explain them. The integration test model AB captures test information related only to the interface ab, while the component test model C captures test information related to the interfaces bc and dc. Test information of component test model B related to the interface bc was probably ignored by the approach during the first iteration, unless some test cases capture test information for both interfaces, ab and bc. The same applies to the test information related to the interface ad; the approach will probably ignore this information during the first iteration. Hence, the generated integration test model AB is probably missing some test information related to interfaces bc and ad. When the approach tries to generate the integration test model in the second iteration, it probably cannot identify and locate any shared test behavior between the two test models, AB and C, and will generate nothing. Hence, we need to save test information regarding interfaces ad and bc during the first iteration so that it can be used in the subsequent iterations.

Fig. 11
Fig. 11 Example of integration strategy.

We have investigated two techniques, as shown in Fig. 12, to carry test information of component test models to subsequent integration iterations: selective and cumulative integration. The selective technique carries the component test models along with the generated integration test model to the subsequent integration iterations. In each integration iteration, the approach is applied several times to generate the corresponding integration test model. First, it uses the former integration test model and the component test model of the currently integrated system component to generate the integration test model for the current iteration. Next, it uses the carried-on component test models of previously integrated components and the component test model of the currently integrated component to generate additional test cases. The generated integration test model and the component test model of the integrated components, including the currently integrated component, are carried to the subsequent integration iteration. In this technique, we carry on individual component test models throughout the integration-level testing.

Fig. 12
Fig. 12 Cumulative vs selective integration.

In the cumulative technique, we build a global model by merging the given component test models. In each integration iteration, we merge the component test model of the currently integrated system component with the global model and generate the integration test model for the iteration by selecting, from the global model, the test cases that capture interactions between the integrated components. In this technique, we have a single reference to carry throughout the integration-level testing, namely the global model. However, during our investigations, we found that the cumulative technique may produce invalid test behavior. Therefore, we discarded it and used only the selective technique.

4 Acceptance Test Optimization

The approach maps test cases of the acceptance level to test cases of the integration level. The mapping technique is based on the comparison of the involved test cases. We consider that some of these test cases target the same system functionalities since they describe the same system from different perspectives. We aim to reduce the acceptance test execution time by reducing the number of acceptance test cases. This can be achieved by eliminating acceptance test cases that have already been exercised on the system during integration-level testing. However, one needs to be careful as integration test cases are mainly applied on subsystems. Usually, they emulate some of the system components that have not yet been integrated. Hence, they cannot substitute for acceptance test cases that aim at testing the whole system. There are two situations where integration test cases are suitable to substitute for acceptance and system test cases. The first situation includes test cases applied in the last iteration of the integration-level testing. These test cases are exercised during the integration of the last component with the subsystem to build the complete system. Therefore, they are applied on a complete system. The second situation includes integration test models applied on subsystems that completely fulfill the requirements of some of the system functionalities. Hence, test cases of such test models that examine these functionalities are actually applied on complete subsystems. In other words, these test cases do not emulate system components. Therefore, we need to examine the given integration test cases in order to select the ones that can be mapped to the acceptance test cases. The approach is composed of two processes: the selection process and the mapping process, as shown in Fig. 13. The approach is described in terms of acceptance test models, but it is applicable to system test model optimization as well.

Fig. 13
Fig. 13 Acceptance test optimization approach.

4.1 Integration Test Case Selection

The integration test models should not contain any emulation of system components in order to qualify for comparison against the acceptance test model. We have to examine the given integration test models for the use of test stubs of system components. Test stubs may be specified in some test cases and not in other test cases of the same test model. Hence, our examination is at the level of test cases instead of the level of test models. Test cases of the last integration test model qualify to be mapped to the acceptance test cases. Hence, we select them directly without further examination. For the rest of the integration test models, we compare the behavior of their test stubs and test controls to the behavior of the CUTs of the subsequent integration test models, as shown in Fig. 14. More specifically, the approach compares the behavior of the test stubs and test controls of each test case in an integration test model to the behavior of the integrated components of each test case in the subsequent integration test models.

Fig. 14
Fig. 14 The selection process for acceptance test optimization.

The selection process selects test cases that do not include test stubs of system components in their specifications. The selection criterion is given formally in Definition 9.

Definition 9

(Selection Condition)

Let Tkh = (Ikh, Ekh, Rkh) be the integration test case h at the integration iteration k and Tij = (Iij, Eij, Rij) be the integration test case j at integration iteration i, where i > k, then Tkh does not emulate the system component of Tij, if and only if the following condition is satisfied:

Selkh = ∀ ej ∈ Eij, ∀ eh ∈ Ekh : (ej = eh) ⇒ (ej.owner.st ≠ SUT)

The selection process stops the comparison as soon as the condition is no longer satisfied, i.e., as soon as it evaluates to false. In that case, the corresponding test case is excluded from the selection.
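Using the reconstructed reading of Definition 9, the selection condition can be sketched as follows: an earlier test case is rejected as soon as one of its stub or test-control events matches an event owned by a component under test of a later integration test case. The event objects and the match predicate are assumed to follow the earlier sketches.

```python
def qualifies(earlier_events, later_events, match):
    """Reconstructed reading of Definition 9: the earlier test case T_kh qualifies
    only if none of its events matches an event owned by a CUT of a later test case."""
    for eh in earlier_events:              # events of T_kh (test stubs / test control)
        for ej in later_events:            # events of T_ij with i > k
            if match(ej, eh) and ej.owner.st == "SUT":
                return False               # T_kh emulates a component integrated later
    return True
```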

The results of the selection process depend on the integration order since the usage of test stubs of system components depends on the integration order. We may not require any test stubs when we choose the right integration order. There is currently a lot of research on the selection of the right integration order [35–37].

4.2 Mapping Acceptance Test Cases to Integration Test Cases

The mapping process compares the acceptance test cases against the selected integration test cases. The process removes acceptance test cases from the test model if they are included in the selected integration test cases. However, the acceptance-level testing has a different perspective of the system than the integration-level testing. In acceptance-level testing, we see the system as one block and examine it through its external interfaces, while in integration-level testing, we see fragments of the system and examine them through their external interfaces as well as through the internal interfaces of the currently integrated component. Consequently, the test cases are different with respect to the test objects described at each testing level. Acceptance test cases require at least two test objects: the test control and the SUT, while integration test cases require at least three test objects: the test control, the CUT, and the subsystem.

Furthermore, we have to take into account that the events specified on a lifeline of a test object in an acceptance test case may be distributed over several lifelines in the mapped integration test case as shown in Fig. 15. The behavior of the two test objects, TCa and Sys, in the acceptance test case is distributed over three test objects, TCi, CUT, and SbSys, in the integration test case. Moreover, integration test cases may have extra behaviors that reflect internal interactions between the integrated component and the subsystem. In other words, we should not expect the acceptance test case to be a complete fragment/block within the integration test case.

Fig. 15
Fig. 15 Distributed events.

The test case inclusion defined in Definition 8 is used to compare test cases of the same test model. It cannot be used in this process because it examines the instances and, as mentioned earlier, the instances at these two levels are fundamentally different. It also cannot be used to compare integration test cases from different integration iterations. We therefore derive a new inclusion relation that does not depend on the instances of the test cases.

Definition 10

(Test Case Inclusion)

Let Ta = (Ia, Ea, Ra) be an acceptance test case and Ti = (Ii, Ei, Ri) be an integration test case, then the acceptance test case is included in the integration test case if and only if the following conditions are satisfied:

  1. Ea ⊆ Ei
  2. Ra ⊆ Ri
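A minimal sketch of the mapping step follows, assuming acceptance and integration test cases have been reduced to sets of comparable, instance-independent event keys and ordering pairs, as required by Definition 10; the representation is an assumption for illustration.

```python
def optimize_acceptance(acceptance_tcs, selected_integration_tcs):
    """Each test case is reduced to (events, relation), where both are sets of
    instance-independent event keys; Definition 10 inclusion decides removal."""
    def covered(acc, integ):
        Ea, Ra = acc
        Ei, Ri = integ
        return Ea <= Ei and Ra <= Ri

    return [acc for acc in acceptance_tcs
            if not any(covered(acc, integ) for integ in selected_integration_tcs)]
```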

5 A Case Study: A Library Management System

To illustrate our framework and partially demonstrate its effectiveness, we built a prototype tool and ran several case studies. In this chapter, we present the library management system case study and briefly discuss the results. We considered a library management system composed of four components that provide users with the main library services. These services are covered by test cases designed to build the component test models as well as the acceptance test model. Fig. 16 shows the system architecture and some of the test models. In this case study, we apply our generation approach on the component test models to generate integration test models. We ran the prototype tool twice using two different integration orders to demonstrate the properties of the test generation approach. Next, we use the prototype tool to map the generated integration test models to the acceptance test model to reduce the acceptance test model.

Fig. 16
Fig. 16 A library management system.

The tool integrated four component test models through three iterations. It generated three test models for both integration orders with, of course, different sets of test cases as shown in Table 3. This is similar to what we have experienced in other case studies during this research. The tool generated the same number of test cases for both integration orders: seven test cases. Furthermore, the generated test cases cover all of the specified system services. Two test cases were repeated in the second integration order since they emulated a system component in the second iteration.

Table 3

Integration Test Generation Results for the Library Management System
Iteration | Integrated Components | First Integration Order: Generated Test Cases | Second Integration Order: Generated Test Cases
1 | 2 | 2 | 2
2 | 3 | 3 | 2
3 | 4 | 2 | 5
Total | | 7 | 7 + 2


The optimization approach removed all of the acceptance test cases, as shown in Table 4. In both integration orders, seven integration test cases that do not emulate system components were selected. All of the acceptance test cases were removed since they matched (i.e., were included in) the selected integration test cases. Therefore, there is no need to execute the given acceptance test model during the acceptance-level testing for this particular case study, as its test cases have already been exercised during integration testing.

Table 4

Acceptance Test Optimization Results for the Library Management System
 | First Integration Order (# Test Cases) | Second Integration Order (# Test Cases)
Integration test models | 7 | 9
Selected test cases | 7 | 7
Acceptance test model | 7 | 7
Excluded test cases | 7 | 7
Optimized acceptance test model | 0 | 0

6 Related Work

To the best of our knowledge, systematic reuse of test models to generate next level test models has not been covered in MBT [38]. On the other hand, different techniques, such as test coverage [1,38,39], have been proposed to minimize the number of tests. However, the scope of such techniques is the reduction of the number of tests within the same level of testing.

The work of Le [13] is the only research work closely related to ours. The author proposes a composition approach based on UML 1.x collaboration diagrams. The test model is built manually and is composed of two roles/players: the component under test role and the tester role. The tester role controls and performs the test suite and simulates all necessary stubs and drivers. The author demonstrated the reusability of the tester role from component-level testing to integration-level testing through the introduction of adaptors between the component test models. This approach does not address the synchronization between events of the test behavior. The test case selection is not clear, since not all component test cases are suitable for integration-level testing.

There are a lot of research activities on model merging, especially in the domain of version control systems (VCS) [40]. These approaches are based on the assumption that the input models have evolved from the same base model [16,17], and some approaches even require the existence of the base model [16,17]. These approaches are not applicable in the testing domain since test models are usually built by different engineers with different views. Model comparison approaches use different calculation methods to identify similarities and differences between models [41,42]. In our approach, we use two methods for comparing model elements: name-based matching and feature-based matching. While not all UML model elements have names, practical studies show the effectiveness of this method [42].

Hélouët et al. [25,44] propose a merging approach for message sequence charts (MSCs) [15]. The approach merges all scenarios to build the global behavior of the system. It covers both basic MSCs (bMSCs) and high-level MSCs (HMSCs). These investigations focused more on the theoretical aspects and decidability-related issues. Inline operators, similar to UML combined fragments, are not covered since they can be substituted by HMSCs; we support UML combined fragments. The approach uses different composition operators, sequential, alternative, parallel, and iteration, that are specified in HMSCs, whereas we only use the merge operator. More importantly, we are dealing with finite behaviors where merging and comparison can be done.

7 Conclusion

In this chapter, we proposed an MBT framework that relates and links the different software testing levels and enables automation, reusability, and optimization. Two approaches have been concretely proposed in this framework: test generation and test optimization. Both approaches assume component test cases are well formed and cover all component interfaces and services. Test models are specified using UTP, which enables their systematic transformation into test code that can be exercised on the IUT using well-known test execution environments, such as JUnit and TTCN-3 [9]. Usage of standard notations enhances collaboration and certainly helps bridge the gap between the development and testing activities.

The proposed framework enables reusability across the software testing levels. Test models are systematically generated from preceding test models. We discussed in detail the generation of integration test models from component test models. We defined a test case merging operator to integrate component test cases that have shared behavior.

The proposed framework also enables systematic test optimization across the software testing levels. Test models are related to preceding test models to remove the ones that have already been exercised. Test optimization reduces the size of the test models, shortens test execution time, and reduces the cost of software testing. We discussed an approach that optimizes acceptance test models by relating them to the integration test models. This approach is also applicable to system test models.

We built a prototype tool and experimented with several case studies. In this chapter, we reported on the library management system case study and showed how the acceptance test model can be reduced because its test cases have been covered during integration testing. However, further validation is required with larger and industrial case studies to demonstrate the applicability and the efficiency of our framework.

MBT is a maturing field of research and practice. It is gaining in popularity in several domains, including safety-critical domains like avionics and automotive. MBT enables abstraction, reuse, and automation, which are much needed to improve the quality of complex software systems. It relieves testers of routine tasks such as test case generation, coverage evaluation, transformations, etc. However, its complete adoption by practitioners depends on the availability of industrial-strength tools, especially for the next generation of cyber-physical and Internet of Things-based systems, which will be more complex than current software systems.

Acknowledgments

This work has been partially supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. We would like to thank Dr. Reinhard Gotzhein for his comments and feedback on earlier versions of this work.

Appendix A: Properties of the Integration Test Generation Approach

System integration may take different strategies: top-down, bottom-up, ad hoc, and big-bang, and different sequences/orders to integrate the system components. The generated test behavior for the same set of system components must be equivalent regardless of the adopted integration strategy and order. The intermediate results, at a given step, may not be equivalent since they integrate different sets of components.

Test cases are equivalent when they specify the same behavior. We define the equivalence between two test cases, t1 and t2 as follows.

Definition A.1

(Test Case Equivalence)

Let t1 = (I1, E1, R1) and t2 = (I2, E2, R2) be two test cases, then t1 is equivalent to t2 if and only if the following three conditions are satisfied:

  1. I1 = I2
  2. E1 = E2
  3. R1 = R2.

The generated test cases, from different integration orders, are equivalent if and only if our approach has two properties: commutativity and associativity. The merging operation (Definition 7) uses the union operator and two special functions, f() and g(). We need to investigate the commutativity and the associativity of our merging operation.

A.1 System Specification

Systems are composed of a set of components. Each component has internal and/or external interfaces. Internal interfaces are used to communicate among the system components. External interfaces are used to communicate with the system environment. The general system architecture can be described as shown in Fig. A.1. A system with three components is adequate to investigate the commutative and associative properties.

Fig. A.1 General system architecture.

To simplify our investigation, we assume that test cases consist of only two instances: the component under test (CUT) and the test control. The test control represents the behavior of the test environment in addition to controlling the test execution. The test environment represents the system environment as well as the system components that are not yet realized at the time of test execution. We also assume, for simplicity, that each component has one component test case.

The system is composed of three components, A, B, and C, and each component has one component test case: t1, t2, and t3, respectively. We assume there are interactions between these components and that the test cases capture these interactions. The events of each component are organized into several sets to represent the corresponding component interfaces. Accordingly, the sets and relations of each test case are split into several subsets to reflect this organization (a concrete, illustrative instantiation in code is given at the end of this section). The specification for each component test case is given as follows:

  • t1 = (I1, E1, R1)
    • I1 = {tc1, a}
    • E1 = e11 U e12 U e13, where
      • e11 is the set of events specified only in t1
      • e12 is the set of events specified in both t1 and t2
      • e13 is the set of events specified in both t1 and t3
    • R1 = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133, where
      • R111 ⊆ e11 x e11
      • R112 ⊆ e11 x e12
      • R113 ⊆ e11 x e13
      • R121 ⊆ e12 x e11
      • R122 ⊆ e12 x e12
      • R123 ⊆ e12 x e13
      • R131 ⊆ e13 x e11
      • R132 ⊆ e13 x e12
      • R133 ⊆ e13 x e13
  • t2 = (I2, E2, R2)
    • I2 = {tc2, b}
    • E2 = e21 U e22 U e23, where
      • e21 is the set of events specified in both t2 and t1
      • e22 is the set of events specified only in t2
      • e23 is the set of events specified in both t2 and t3
    • R2 = R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233, where
      • R211 ⊆ e21 x e21
      • R212 ⊆ e21 x e22
      • R213 ⊆ e21 x e23
      • R221 ⊆ e22 x e21
      • R222 ⊆ e22 x e22
      • R223 ⊆ e22 x e23
      • R231 ⊆ e23 x e21
      • R232 ⊆ e23 x e22
      • R233 ⊆ e23 x e23
  • t3 = (I3, E3, R3)
    • I3 = {tc3, c}
    • E3 = e31 U e32 U e33, where
      • e31 is the set of events specified in both t3 and t1
      • e32 is the set of events specified in both t3 and t2
      • e33 is the set of events specified only in t3
    • R3 = R311 U R312 U R313 U R321 U R322 U R323 U R331 U R332 U R333, where
      • R311 ⊆ e31 x e31
      • R312 ⊆ e31 x e32
      • R313 ⊆ e31 x e33
      • R321 ⊆ e32 x e31
      • R322 ⊆ e32 x e32
      • R323 ⊆ e32 x e33
      • R331 ⊆ e33 x e31
      • R332 ⊆ e33 x e32
      • R333 ⊆ e33 x e33

Notice that

  • e12 = e21
  • e13 = e31
  • e23 = e32
  • R122 = R211
  • R133 = R311
  • R233 = R322

Note that if there is no interaction between two components, then their corresponding event sets and relations will be empty. For example, if there is no interaction between A and C, then

  • e13 = {},
  • e31 = {},
  • R113 = {},
  • R123 = {},
  • R131 = {},
  • R132 = {},
  • R133 = {},
  • R311 = {},
  • R312 = {},
  • R313 = {},
  • R321 = {} and
  • R331 = {}

The approach creates a test control instance, which we call tci, for the generated test model and builds its behavior by merging the behaviors of the test controls of the given test models.
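As a concrete illustration of this setup, the sketch below instantiates the three component test cases using the TestCase structure introduced earlier. The event names, the interaction pattern (A interacts with B and C, and B interacts with C), and the relations are invented for illustration only; because the shared events are written here with identical identifiers, the alias map used by f() stays empty.

    # Symbolic events; all names are invented for illustration.
    e11 = {"a_init"}                    # only in t1
    e12 = {"a_req_b", "b_ack_a"}        # shared by t1 and t2 (e12 = e21)
    e13 = {"a_req_c"}                   # shared by t1 and t3 (e13 = e31)
    e22 = {"b_init"}                    # only in t2
    e23 = {"b_req_c"}                   # shared by t2 and t3 (e23 = e32)
    e33 = {"c_init"}                    # only in t3

    t1 = TestCase(I=frozenset({"tc1", "a"}),
                  E=frozenset(e11 | e12 | e13),
                  R=frozenset({("a_init", "a_req_b"), ("b_ack_a", "a_req_c")}))
    t2 = TestCase(I=frozenset({"tc2", "b"}),
                  E=frozenset(e12 | e22 | e23),
                  R=frozenset({("a_req_b", "b_init"), ("b_init", "b_req_c")}))
    t3 = TestCase(I=frozenset({"tc3", "c"}),
                  E=frozenset(e13 | e23 | e33),
                  R=frozenset({("a_req_c", "c_init"), ("b_req_c", "c_init")}))

    # Shared events already carry the same identifiers, so no renaming is needed;
    # with distinct identifiers the map would send each event of e21, e31, e32
    # to its counterpart in e12, e13, e23.
    alias = {}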

A.2 Commutativity

To demonstrate the commutativity of our approach for any two components, say A and B, we must show that the integration of their component test cases, t1 and t2, respectively, generates equivalent behavior independently of the integration order, (A + B) or (B + A). That is,

t1 + t2 = t2 + t1  (A.1)

Using Definitions 3 and 7, we get

  • (g(I1) U g(I2), f(E1) U f(E2), f(R1) U f(R2)) = (g(I2) U g(I1), f(E2) U f(E1), f(R2) U f(R1))

Hence, to validate Eq. (A.1), we need to show that

g(I1) U g(I2) = g(I2) U g(I1)  (A.2)

f(E1) U f(E2) = f(E2) U f(E1)  (A.3)

f(R1) U f(R2) = f(R2) U f(R1)  (A.4)

Let us evaluate the left side of Eq. (A.2) first by substituting the values of I1 and I2 and using our definition of equivalence (Definition A.1).

  • g(I1) U g(I2) = g({tc1, a}) U g({tc2, b})

Then, we apply the g() function:

  • g(I1) U g(I2) = {tci, a} U {tci, b}

Then, we apply the union operator:

  • g(I1) U g(I2) = {tci, a, b}

Next, we perform the same sequence on the right side of Eq. (A.2)

  • g(I2) U g(I1) = g({tc2, b}) U g({tc1, a})
    • = {tci, b} U {tci, a}
    • = {tci, b, a}

The two sides are equal. Thus, Eq. (A.2) is true. We take the same evaluation approach with Eq. (A.3). First, we evaluate the left side of Eq. (A.3).

  • f(E1) U f(E2) = f(e11 U e12 U e13) U f(e21 U e22 U e23)

Since e12 = e21, the f() function replaces e21 with e12

  • f(E1) U f(E2) = e11 U e12 U e13 U e12 U e22 U e23
    • = e11 U e12 U e13 U e22 U e23

Then, we evaluate the right side of Eq. (A.3)

  • f(E2) U f(E1) = f(e21 U e22 U e23) U f(e11 U e12 U e13)

Since e12 = e21, the f() function replaces e21 with e12

  • f(E2) U f(E1) = e12 U e22 U e23 U e11 U e12 U e13
    • = e12 U e22 U e23 U e11 U e13

Hence, the two sides are equal, which proves that Eq. (A.3) is true as well. The same evaluation approach is applied to Eq. (A.4). We take the left side of the equation first.

  • f(R1) U f(R2) = f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233)

Since R122 = R211, the f() function replaces R211 with R122

  • f(R1) U f(R2) = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233
    • = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233

The next step is to evaluate the right side of Eq. (A.4)

  • f(R2) U f(R1) = f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133)
    • = R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133
    • = R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R111 U R112 U R113 U R121 U R123 U R131 U R132 U R133

The results of both sides of Eq. (A.4) are equal. Since Eqs. (A.2), (A.3), and (A.4) hold, Eq. (A.1) holds as well. Hence, the commutativity of the integration approach is proven.
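Using the illustrative sketches above, the commutativity argument can also be checked mechanically on the example, since set union is itself commutative. This is only a sanity check of the sketch, not a replacement for the derivation above.

    # A + B versus B + A (Eq. A.1), checked on the illustrative example.
    left = merge(t1, t2, alias)
    right = merge(t2, t1, alias)
    assert equivalent(left, right)

    # The component equations hold as well, e.g., Eq. (A.2) for the instances;
    # Eqs. (A.3) and (A.4) follow in the same way for the events and relations.
    assert g(t1.I) | g(t2.I) == g(t2.I) | g(t1.I)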

A.3 Associativity

To demonstrate the associativity of the integration approach for any three components, A, B, and C, we need to demonstrate that:

t1 + (t2 + t3) = (t1 + t2) + t3  (A.5)

Using Definitions 3 and 7, we can rewrite Eq. (A.5) as follows:

g(I1) U (g(I2) U g(I3)) = (g(I1) U g(I2)) U g(I3)  (A.6)

f(E1) U (f(E2) U f(E3)) = (f(E1) U f(E2)) U f(E3)  (A.7)

f(R1) U (f(R2) U f(R3)) = (f(R1) U f(R2)) U f(R3)  (A.8)

Hence, we have to prove that Eqs. (A.6), (A.7), and (A.8) are satisfied. Let us start by examining Eq. (A.6). First, we evaluate the left side of the equation.

  • g(I1) U (g(I2) U g(I3)) = g({tc1, a}) U (g({tc2, b}) U g({tc3, c}))

Then, we apply g()

  • = {tci, a} U ({tci, b} U {tci, c})
  • = {tci, a} U {tci, b, c}
  • = {tci, a, b, c}

Then, we take the right side of Eq. (A.6)

  • (g(I1) U g(I2)) U g(I3) = (g({tc1, a}) U g({tc2, b})) U g({tc3, c})
    • = ({tci, a} U {tci, b}) U {tci, c}
    • = {tci, a, b} U {tci, c}
    • = {tci, a, b, c}

The two sides are equal. Thus, we can say Eq. (A.6) is true. We use the same evaluation approach for Eq. (A.7). First, we evaluate the left side of Eq. (A.7).

  • f(E1) U (f(E2) U f(E3)) = f(e11 U e12 U e13) U (f(e21 U e22 U e23) U f(e31 U e32 U e33))

Then, we apply f(), which replaces the following sets

  • e12 = e21,
  • e13 = e31, and
  • e23 = e32.
  • f(E1) U (f(E2) U f(E3)) = (e11 U e12 U e13) U ((e12 U e22 U e23) U (e13 U e23 U e33))
    • = (e11 U e12 U e13) U (e12 U e22 U e23 U e13 U e33)
    • = e11 U e12 U e13 U e22 U e23 U e33.

Then, we evaluate the right side of Eq. (A.7).

  • (f(E1) U f(E2)) U f(E3) = (f(e11 U e12 U e13) U f(e21 U e22 U e23)) U f(e31 U e32 U e33)
    • = ((e11 U e12 U e13) U (e12 U e22 U e23)) U (e13 U e23 U e33)
    • = (e11 U e12 U e13 U e22 U e23) U (e13 U e23 U e33)
    • = e11 U e12 U e13 U e22 U e23 U e33.

Therefore, the two sides are equal, and that proves that Eq. (A.7) is satisfied. The same evaluation approach is used for Eq. (A.8). We take the left side of the equation first.

  • f(R1) U (f(R2) U f(R3)) = f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U (f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U f(R311 U R312 U R313 U R321 U R322 U R323 U R331 U R332 U R333))

Then, we apply f(), which replaces the following relations

  • R122 = R211,
  • R133 = R311, and
  • R233 = R322.
  • f(R1) U (f(R2) U f(R3)) = (R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U ((R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U (R133 U R312 U R313 U R321 U R233 U R323 U R331 U R332 U R333))
    • = (R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U (R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R133 U R312 U R313 U R321 U R323 U R331 U R332 U R333)
    • = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R312 U R313 U R321 U R323 U R331 U R332 U R333.

The next step is to evaluate the right side of Eq. (A.8).

  • (f(R1) U f(R2)) U f(R3) = (f(R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U f(R211 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233)) U f(R311 U R312 U R313 U R321 U R322 U R323 U R331 U R332 U R333).

Then, we apply f()

  • = ((R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133) U (R122 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233)) U (R133 U R312 U R313 U R321 U R233 U R323 U R331 U R332 U R333)
  • = (R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233) U (R133 U R312 U R313 U R321 U R233 U R323 U R331 U R332 U R333)
  • = R111 U R112 U R113 U R121 U R122 U R123 U R131 U R132 U R133 U R212 U R213 U R221 U R222 U R223 U R231 U R232 U R233 U R312 U R313 U R321 U R323 U R331 U R332 U R333.

The results of both sides of Eq. (A.8) are equal. Since Eqs. (A.6), (A.7), and (A.8) are satisfied, Eq. (A.5) holds. Hence, the associativity of the integration approach is proven.
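As with commutativity, the associativity argument can be checked mechanically on the illustrative example, this time relying on the associativity of set union; again, this only exercises the sketch introduced earlier, not the framework's actual merging operator.

    # A + (B + C) versus (A + B) + C (Eq. A.5), checked on the illustrative example.
    left = merge(t1, merge(t2, t3, alias), alias)
    right = merge(merge(t1, t2, alias), t3, alias)
    assert equivalent(left, right)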

References

[1] Bertolino A. Software testing research: achievements, challenges, dreams. In: 2007 Future of Software Engineering. Washington, DC: IEEE Computer Society; 2007:85–103.

[2] Grossmann J., Fey I., Krupp A., Conrad M., Wewetzer C., Mueller W. TestML-A test exchange language for model-based testing of embedded software. In: Broy M., Krüger I., Meisinger M., eds. Model-Driven Development of Reliable Automotive Services. Berlin/Heidelberg: Springer; 2008:98–117.

[3] Hutchinson J., Whittle J., Rouncefield M., Kristoffersen S. In: Empirical assessment of MDE in industry. Proceeding of the 33rd International Conference on Software Engineering; New York, NY: ACM; 2011:471–480.

[4] Utting M., Pretschner A., Legeard B. A taxonomy of model-based testing approaches. Softw. Test. Verif. Rel. 2012;22:297–312.

[5] Ulrich A. In: Introducing model-based testing techniques in industrial projects. Software Engineering (Workshops); 2007. Available at http://subs.emis.de/LNI/Proceedings/Proceedings106/gi-proc-106-002.pdf last accessed 2015.

[6] Dias-Neto A.C., Travassos G.H. In: Evaluation of model-based testing techniques selection approaches: an external replication. 3rd International Symposium on Empirical Software Engineering and Measurement, ESEM; 2009:269–278.

[7] OMG: Unified Modeling Language. Available at http://www.uml.org. 2014.

[8] OMG: Object Management Group. Available at http://www.omg.org. 2014.

[9] OMG: UML Testing Profile (UTP), Version 1.2 (formal/2013-04-03). Available at http://www.omg.org/spec/UTP/1.2. 2013.

[10] Iyenghar P., Pulvermueller E., Westerkamp C. In: Towards model-based test automation for embedded systems using UML and UTP. 2011 IEEE 16th Conference on Emerging Technologies Factory Automation (ETFA); IEEE; 2011:1–9.

[11] Lamancha B.P., Mateo P.R., de Guzmán I.R., Usaola M.P., Velthius M.P. In: Automated model-based testing using the UML testing profile and QVT. Proceedings of the 6th International Workshop on Model-Driven Engineering, Verification and Validation; New York, NY: ACM; 2009:6:1–6:10.

[12] Krishnan P., Pari-Salas P. Model-based testing and the UML testing profile. In: Semantics and Algebraic Specification. Berlin/Heidelberg: Springer; 2009:315–328.

[13] Le H. A collaboration-based testing model for composite components. In: 2011 IEEE 2nd International Conference on Software Engineering and Service Science (ICSESS); Beijing, China: Institute of Electrical and Electronics Engineers (IEEE); 2011:610–613.

[14] Liang D., Xu K. In: Test-driven component integration with UML 2.0 testing and monitoring profile. 7th International Conference on Quality Software, QSIC 2007; Washington, DC: IEEE Computer Society; 2007:32–39.

[15] Chen W., Ying Q., Xue Y., Zhao C. Software testing process automation based on UTP—a case study. In: Li M., Boehm B., Osterweil L., eds. Unifying the Software Process Spectrum. Berlin/Heidelberg: Springer; 2006:222–234.

[16] Iyenghar P. In: Test framework generation For model-based testing in embedded systems. 2011 37th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA); IEEE; 2011:267–274.

[17] Baker P., Jervis C. Testing UML2.0 models using TTCN-3 and the UML2.0 testing profile. In: Gaudin E., Najm E., Reed R., eds. SDL 2007: Design for Dependable Systems. Berlin/Heidelberg: Springer; 2007:86–100.

[18] Busch M., Chaparadza R., Dai Z.R., Hoffmann A., Lacmene L., Ngwangwen T., Ndem G.C., Ogawa H., Serbanescu D., Schieferdecker I., Zander-Nowicka J. In: Model transformers for test generation from system models. Proceedings of Conquest 2006, 10th International Conference on Quality Engineering in Software Technology, September; Hanser Verlag; 2006:1–16.

[19] Mussa M., Khendek F. Towards a model based approach for integration testing. In: Ober I., Ober I., eds. SDL 2011: Integrating System and Software Modeling. Berlin/Heidelberg: Springer; 106–121. LNCS. 2012;vol. 7083.

[20] Mussa M., Khendek F. Identification and selection of interaction test scenarios for integration testing. In: Haugen Ø., Reed R., Gotzhein R., eds. SAM 2012: System Analysis and Modeling: Theory and Practice. Berlin/Heidelberg: Springer; 16–33. LNCS. 2013;vol. 7744.

[21] Mussa M., Khendek F. In: Merging test models. 18th International Conference on Engineering of Complex Computer Systems (ICECCS); IEEE; 2013:167–170.

[22] Mussa M., Khendek F. Acceptance test optimization. In: Amyot D., Fonseca i Casas P., Mussbacher G., eds. System Analysis and Modeling: Models and Reusability, SAM 2014. Springer; 158–173. LNCS. 2014;vol. 8769.

[23] Fortsch S., Westfechtel B. In: Differencing and merging of software diagrams: state of the art and challenges. ICSOFT 2007—International Conference on Software and Data Technologies; INSTICC Press; 2007:90–99.

[24] Nejati S., Sabetzadeh M., Chechik M., Easterbrook S., Zave P. In: Matching and merging of statecharts specifications. 29th International Conference on Software Engineering, ICSE 2007; IEEE Computer Society; 2007:54–64.

[25] Hélouët L., Hénin T., Chevrier C. Automating scenario merging. In: Gotzhein R., Reed R., eds. System Analysis and Modeling: Language Profiles. Berlin/Heidelberg: Springer; 64–81. LNCS. 2006;vol. 4320.

[26] Mens T. A state-of-the-art survey on software merging. IEEE Trans. Softw. Eng. 2002;28:449–462 IEEE.

[27] Khendek F., Bochmann G.V. Merging behavior specifications. J. Formal Methods Syst. Des. 1995;6:259–293.

[28] Lund M., Stølen K. A fully general operational semantics for UML 2.0 sequence diagrams with potential and mandatory choice. In: Misra J., Nipkow T., Sekerinski E., eds. FM 2006: Formal Methods. Berlin/Heidelberg: Springer; 2006:380–395.

[29] Li X., Liu Z., Jifeng H. In: A formal semantics of UML sequence diagram. Proceedings of Software Engineering Conference, Australian; 2004:168–177.

[30] ITU-T Recommendation: Z.120, Message Sequence Charts (MSC), Geneva, Switzerland, 1999.

[31] Aichernig B.K., Lorber F., Tiran S. In: Integrating model-based testing and analysis tools via test case exchange. 2012 Sixth International Symposium on Theoretical Aspects of Software Engineering (TASE); IEEE; 2012:119–126.

[32] Mens T. A state-of-the-art survey on software merging. IEEE Trans. Softw. Eng. 2002;28:449–462.

[33] Stephan M., Cordy J.R. In: A survey of model comparison approaches and applications. 1st International Conference on Model-Driven Engineering and Software Development, MODELSWARD; 2013:265–277.

[34] Fortsch S., Westfechtel B. In: Differencing and merging of software diagrams: state of the art and challenges. ICSOFT 2007—International Conference on Software and Data Technologies; 2007:90–99.

[35] Wang Z., Li B., Wang L., Li Q. In: A brief survey on automatic integration test order generation. In SEKE 2011—Proceedings of the 23rd International Conference on Software Engineering and Knowledge Engineering, July 7, 2011–July 9; Miami, FL: Knowledge Systems Institute Graduate School; 2011:254–257.

[36] Abdurazik A., Offutt J. Using coupling-based weights for the class integration and test order problem. Comput. J. 2009;52:557–570.

[37] Briand L.C., Labiche Y., Wang Y. An investigation of graph-based class integration test order strategies. IEEE Trans. Softw. Eng. 2003;29:594–607.

[38] Ammann P., Offutt J. Introduction to Software Testing. New York: Cambridge University Press; 2008.

[39] Shirole M., Kumar R. UML behavioral model based test case generation: a survey. SIGSOFT Softw. Eng. Notes. 2013;38:1–13.

[40] Jingyue L., Slyngstad O.P.N., Torchiano M., Morisio M., Bunse C. A state-of-the-practice survey of risk management in development with off-the-shelf software components. IEEE Trans. Softw. Eng. 2008;34:271–286.

[41] Budhija N., Ahuja S.P. In: Review of software reusability. International Conference on Computer Science and Information Technology (ICCSIT'2011), Pattaya; 2011:113–115.

[42] Babu G.N.K.S., Srivatsa D.S.K. Analysis and measures of software reusability. Int. J. Rev. Comput. 2009;1:41–46. Available at http://www.ijric.org/volumes/Vol1/5Vol1.pdf last accessed 2014.

[43] Biggerstaff T.J., Perlis A.J. Software Reusability. New York, NY: ACM Press; 1989.

[44] Klein J., Caillaud B., Hélouët L. In: Merging scenarios. Proceedings of the Ninth International Workshop on Formal Methods for Industrial Critical Systems (FMICS 2004), June 25–June 27. Electr. Notes Theor. Comput. Sci. 133; Amsterdam, The Netherlands: Elsevier; 2005:193–215.


Mohamed Mussa received a PhD in Electrical and Computer Engineering from Concordia University in 2015. He worked on a model-based framework for test cases reuse and optimization. He obtained a Master's degree from the same university in 2000. Mohamed has worked as a software designer/developer for several years with several institutions. Mohamed is interested in model-based software engineering and testing.


Ferhat Khendek received his PhD from the University of Montreal, Canada. He is a full professor in the Department of Electrical and Computer Engineering of Concordia University, where he has also held, since 2011, the NSERC/Ericsson Senior Industrial Research Chair in Model Based Management, a major collaboration between Ericsson and Concordia University. Ferhat Khendek has published more than 200 conference/journal papers. He is a co-inventor of six granted patents and ten patents currently under review. Ferhat Khendek's research interests are in model-based software engineering and management, formal methods, validation and testing, and service engineering and architectures.
