In this chapter, the essence of the dynamic adaptation capability is examined from the perspective of requirements modeling. Dynamic adaptation is shown to be a metalevel capability concerned with the variability of the concrete application. Variability and the response to it are modeled as conformance relationships among the application models, i.e., the goal, environment, and basic system models. Three kinds of conformance are differentiated to capture dynamic adaptation: goal-oriented adaptation, stimulus/response adaptation, and fine-grained adaptation. Feature models are used as the uniform representation of these models, and a new rule-based representation is then used to express the conformance relationships.
Keywords
Conformance-based adaptation logics; Dynamic adaptation; View-based rule language
Human society increasingly depends on software-intensive systems, as is evidenced by the many systems deployed in banks, airports, companies, etc., and by daily applications operating on distributed and mobile devices such as phones and personal digital assistants. In such systems, the software interacts intensively with other software, systems, devices, and sensors, and with people. Increasingly beneficial and inspiring application areas include cyber-physical systems, mobile computing, ambient intelligence, and ubiquitous computing. One significant feature of such systems is that the software needs to control and adapt the system's behaviors continuously according to the interaction or execution environments as well as the users' goals or needs, which may change frequently and dynamically (Cheng et al., 2009a). This means that change is a critical feature to be dealt with in modern software applications, because they are subject to uncertain interaction and execution environments. Therefore, runtime changes cannot be ignored during the development process. Software adaptability is becoming ever more important (Yang et al., 2014).
It is widely recognized that a self-adaptive system is one that is able to reconfigure itself autonomously or adjust its behavior at runtime in response to its perception of the environment and of the system itself, while fulfilling its specified goals. The development of such a system requires modeling the capability that deals with business function concerns as well as the capability that deals with adaptation concerns. From the perspective of requirements engineering, the former relates to fulfilling business needs by implementing business logic, and the latter relates to satisfying adaptation needs by realizing behavior adjustment or system reconfiguration. The task of modeling system adaptation is to capture and define the adaptation logic, i.e., how to fulfill the adaptation needs.
Based on environment modeling-based requirements engineering, this chapter presents an approach to capturing and modeling system self-adaptability. This approach explicitly uses the requirements goal model (Req), the environment model (Env), and the system configuration model (Spec). Taking the perspective that adaptation is the pursuit of conformance among the three, i.e., Spec, Env ⊨ Req, it allows Spec to be reconfigured at runtime given that both Req and Env may change. The adaptation logic is represented by the conformance relationships among the three elements at a coarse-grained level and by a system configuration binding adjustment at a fine-grained level. Furthermore, a view-based rule language, νRule, is devised and defined to specify the adaptation logic. The main features include: (1) the requirements goal settings, the environment elements, and the system configurations are explicitly and separately identified and specified; and (2) the view-based rule language allows modeling of the adaptation logic on two levels, one for significant changes (which may lead to a big adjustment) and the other for insignificant changes (which ask only for system fine-tuning). In this way, concerns about system adaptability can be well separated. This helps the model construction of such systems, and the constructed models can support the evolution of the system model at runtime.
12.1. Dynamic Adaptation Mechanisms
The traditional way to realize system adaptation is to use built-in adaptation, e.g., programming an adaptable application that interweaves the adaptation logic with the application logic. For example, the system may predefine several application logics, e.g., implemented as application code, each of which includes constraints on the environment or on user behaviors. Violation of the current constraints triggers adaptation in a prespecified way: the system migrates to another application logic and thereby adapts its behavior when environment conditions change.
However, in many other cases, because of the uncertainty of the dynamic environment and the changeability of user goals, fixedly interweaving the adaptation logic and the application logic at design time might not be possible, because it is difficult to foresee the dynamic changes. Dynamic adaptation (Morin et al., 2009) is becoming an important feature for such systems. Dynamically adaptive systems are assumed to be capable of managing themselves according to the goals that must be satisfied at runtime, in response to changes in the environment and in themselves. Some proposed dynamically adaptive systems are able to operate in highly dynamic sociotechnical ecosystems in which requirements, models, and contexts change at runtime, such as autonomous cars and home robotic systems (Cheng et al., 2009a). In such systems, not only the system's behaviors but also the adaptation logic may change at runtime.
Obviously, the development of dynamically adaptive systems is more involved than the development of systems that need not adapt dynamically. More effective techniques are needed to help developers order their thoughts. Again, "separation of concerns" is used here as a fundamental principle to control complexity. Explicit separation between the application logic and the adaptation logic becomes necessary for the development of dynamically adaptive systems, to allow the dynamic adaptation logic to be updated at runtime (Tamura et al., 2013). This means that the adaptation logic should be modeled and specified separately, so that it can be created and changed independently even after the system has been deployed.
There are several dynamic adaptation mechanisms in the literature. Each has its own particular concern and modeling approach. This section briefly presents some of the representative approaches.
12.1.1. Rule-Based Dynamic Adaptation
The rule-based approach is feasible for determining which actions should be performed in reaction to monitored changes in the interaction or execution environment (Lanese et al., 2010). It has the advantages of the elegance and readability of each individual rule, the efficiency of the planning process, and the ease of rule modification. It provides a mechanism for programming dynamically adaptable systems, in which simple adaptation rules specify the adaptation logic, i.e., the particular action that should be performed in reaction to detected environmental changes.
Event condition action (ECA) is widely adopted by rule-based adaptation mechanisms. Such a rule traditionally consists of three parts:
• event part: specifies the precondition that triggers the invocation of the rule
• condition part: specifies the logical test. Its satisfaction means that the action needs to be carried out
• action part: specifies the actions to be carried out
Such a rule is understood as: when the event becomes true, if the condition is satisfied, then take the action. That is, the event serves as the trigger and is normally fired as a result of some monitoring operation; the condition serves as a guard that can be satisfied or evaluated to true; and the action contains an operation sequence to perform in response to the trigger.
The mechanism of rule-based dynamic adaptation is obvious. Fig. 12.1 gives the framework. A simple example has been included along with the components. This application is adaptable by deploying different numbers (from 1 to 10) of servers, and it needs to adapt in reaction to the response time. When the response time is longer than a threshold, the system needs to take appropriate action, e.g., increasing the number of servers if any are available.
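The ECA mechanism and the server-scaling example above can be sketched as follows. This is a minimal illustration; the `EcaRule` encoding, the monitored state values, and the 2.0-second threshold are our own assumptions rather than part of any particular framework:

```python
# Minimal sketch of an ECA (event-condition-action) rule engine, using the
# server-scaling example from the text. Names and values are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EcaRule:
    event: Callable[[Dict], bool]      # trigger: fired by a monitoring operation
    condition: Callable[[Dict], bool]  # guard: logical test on monitored state
    action: Callable[[Dict], None]     # operation sequence to perform

def run_rules(rules: List[EcaRule], state: Dict) -> None:
    # When an event becomes true and its condition holds, take the action.
    for rule in rules:
        if rule.event(state) and rule.condition(state):
            rule.action(state)

def add_server(state: Dict) -> None:
    state["servers"] += 1

scale_up = EcaRule(
    event=lambda s: s["response_time"] > 2.0,   # response time above threshold
    condition=lambda s: s["servers"] < 10,      # servers still available (max 10)
    action=add_server,
)

state = {"response_time": 3.5, "servers": 4}
run_rules([scale_up], state)
# state["servers"] is now 5
```

Note that the rule itself says nothing about the user's goal; this limitation is discussed next.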
Dynamic adaptation asks for the system to make online decisions about how to enable system adaptation not only to react to environmental changes but also to satisfy the user's goal. That means that the system should take into account at least four aspects when performing online decision making: (1) monitored environment parameters, (2) conditions about the monitored environment parameters, (3) actions that need to be taken if the adaptation is required, and (4) the user's goal that needs to be satisfied. The last point is important for dynamically adaptive systems. Ignoring this point may result in the degradation of satisfaction of the user's goal. For example, the strategy of selecting adaptation rules is expected to rely on the system's goal.
However, ECA rules are predesigned and lack associations with users' goals. Such an adaptation mechanism is thus unable to support the online evaluation of the satisfaction of users' goals, and its performance will degrade when users' goals change. For example, suppose a user becomes more concerned with energy consumption than with response time. Rather than adding one more server, he or she might prefer to extend the response time or limit the number of online users to save energy.
Other drawbacks of the rule-based mechanism are recognized as potential threats to the trustworthiness of dynamically adaptive systems, e.g., possible runtime conflicts, because any individual rule in a rule-based mechanism is global, and efficient conflict resolution across the whole rule base is not easy.
12.1.2. Goal-Oriented Adaptation Mechanism
A goal-oriented approach is popular in the requirements engineering community. Many existing efforts in modeling system adaptability are along the lines of the goal-oriented approach (Cheng et al., 2009b). Representative work on modeling system adaptivity focuses on explicitly factoring uncertainty into the requirements analysis process. Concretely, in the iterative process for requirements elaboration, top-level goals are decomposed, and a conceptual domain model identifying important physical elements of the system and their relationships is then built. Any uncertainties identified during the process result in adaptation factors. A representative example is the requirements description language RELAX (Whittle et al., 2009).
RELAX is a structured natural language for describing requirements in goal-orientation style. Its purpose is to support the explicit expression of uncertainty in requirements. The basic idea is that the system may wish to relax noncritical requirements temporarily in an emergency situation to ensure the satisfaction of critical requirements. By designing a set of operators, RELAX enables requirements engineers to identify requirements explicitly that should never change (invariants), as well as requirements that a system could temporarily relax under certain conditions by supporting uncertainty.
RELAX defines a set of operators to enable requirements engineers to identify requirements explicitly, including modal, temporal, and ordinal operators. The contribution of RELAX is in the operators that support uncertainty by using the phrase “as possible” to relax the constraints, e.g., “as early (late) as possible,” “as close as possible to [frequency],” “as close as possible to [quantity],” “as many (few) as possible,” etc.
Another important part of RELAX is that it indicates what uncertainty factors warrant a relaxation of these requirements, thereby requiring adaptive behavior. This information is specified using the MON (monitor), ENV (environment), REL (relationship), and DEP (dependency) keywords, in which the ENV section defines a set of observable environment parameters; the MON section defines a set of monitors that can be used by the system to perceive the environment; the REL section defines the relationship between the environment parameters and monitors; and the DEP section identifies the dependencies between the (relaxed and invariant) requirements. The following is an example of RELAX requirements representations, which shows the meaning of each slot of the RELAX requirements representation frame (Whittle et al., 2009):
R1: The synchronization process SHALL be initiated AS EARLY AS POSSIBLE AFTER Alice enters the room and AS CLOSE AS POSSIBLE TO 30min intervals thereafter.
ENV: location of Alice
synchronization interval
MON: motion sensors
network sensors
REL: motion sensors provide location of Alice
network sensors provide synchronization interval
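One possible machine-readable encoding of requirement R1 above is sketched below. The `RelaxRequirement` structure and its field names are our own illustrative assumptions; RELAX itself is a structured natural language, not a data format:

```python
# Hypothetical encoding of the RELAX requirement R1 with its ENV/MON/REL/DEP
# slots as a Python data structure. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RelaxRequirement:
    rid: str
    text: str                                          # requirement with RELAX operators
    env: List[str] = field(default_factory=list)       # ENV: observable parameters
    mon: List[str] = field(default_factory=list)       # MON: monitors
    rel: Dict[str, str] = field(default_factory=dict)  # REL: monitor -> parameter
    dep: List[str] = field(default_factory=list)       # DEP: dependent requirements

r1 = RelaxRequirement(
    rid="R1",
    text=("The synchronization process SHALL be initiated AS EARLY AS POSSIBLE "
          "AFTER Alice enters the room and AS CLOSE AS POSSIBLE TO 30min "
          "intervals thereafter."),
    env=["location of Alice", "synchronization interval"],
    mon=["motion sensors", "network sensors"],
    rel={"motion sensors": "location of Alice",
         "network sensors": "synchronization interval"},
)

# Sanity check: every ENV parameter is covered by some monitor via REL
assert set(r1.rel.values()) == set(r1.env)
```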
Other examples of extended goal orientation include “adaptive requirements” (Qureshi and Perini, 2009) and “awareness requirements” (Souza et al., 2011) frameworks. The former captures requirements in goal models and links them to an ontological representation of the environmental context, to capture alternatives together with their monitoring specifications and evaluation criteria. The latter is a kind of metaspecification of other requirements in a goal model, which captures the uncertainty of the success and failure of other goals. In addition, the system always chooses the configuration with a higher goal satisfaction.
The principle of goal orientation is thus used to capture and model system adaptivity. Generally, this line of work views system adaptivity as an optimization problem. Adaptivity modeling reduces dynamic adaptation to an optimization process and leaves the system the task of reasoning about the actions required to achieve high-level goals. The intuition is that the purpose of system adaptivity is to keep satisfying goals in a dynamically changing situation, to be capable of dealing with uncertainty, and to keep making optimal adaptation decisions even when unforeseen conditions occur.
12.1.3. Control Loop–Based System Model
Another line of current attempts comes from "autonomic computing," a term first coined by IBM in 2001 (Kephart and Chess, 2003). The company envisioned computing systems that manage their own maintenance and optimization duties independently. The properties of self-management are:
• self-configuration: A system is told what to accomplish, not how to do it
• self-optimization: Resources are used in an optimal manner
• self-healing: Fault tolerance is an important aspect of the system
• self-protection: Protection on two fronts, malicious users and unknowing users
Here, the core mechanism is the autonomic control loop, which includes Monitor, Analyze, Plan, Execute, and Knowledge (MAPE-K). Fig. 12.2 shows the MAPE-K reference architecture. Essentially, the Monitor is in charge of collecting detailed environment data and aggregating, correlating, and filtering these details until it determines a symptom that needs to be analyzed. Sensors, probes, gauges, and so on feed information to the Monitor. The system then analyzes the data and reasons about the symptoms provided by the Monitor. When the Analyzer determines that a change is needed, the system invokes the Plan function to create a procedure of actions that is used to enact the desired adaptation. Finally, the Executor changes the system's behavior through effectors, based on the actions recommended by the Plan.
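The MAPE-K loop described above can be sketched schematically as follows. The class and method names, the symptom encoding, and the threshold value are illustrative assumptions; real analysis and planning logic is application-specific:

```python
# Schematic MAPE-K loop: Monitor -> Analyze -> Plan -> Execute over shared
# Knowledge. All names and values are illustrative.
class MapeK:
    def __init__(self, knowledge):
        self.knowledge = knowledge        # K: shared adaptation knowledge

    def monitor(self, sensor_data):
        # Aggregate/filter raw sensor data into a symptom, if any.
        if sensor_data["response_time"] > self.knowledge["threshold"]:
            return {"symptom": "slow_response", "data": sensor_data}
        return None

    def analyze(self, symptom):
        # Decide whether adaptation is required.
        return symptom is not None

    def plan(self, symptom):
        # Produce a procedure of actions enacting the desired adaptation.
        return [("add_server", 1)]

    def execute(self, plan, system):
        # Apply the planned actions through effectors.
        for action, amount in plan:
            if action == "add_server":
                system["servers"] += amount

    def loop_once(self, sensor_data, system):
        symptom = self.monitor(sensor_data)
        if self.analyze(symptom):
            self.execute(self.plan(symptom), system)

system = {"servers": 2}
MapeK({"threshold": 2.0}).loop_once({"response_time": 3.1}, system)
# system["servers"] is now 3
```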
There is a great amount of effort to develop self-adaptive systems using the MAPE-K framework. However, from the viewpoint of requirements engineering, there are still many challenges. Among others, the most difficult challenge is that there is a lack of tools that can help developers acquire and represent high-level specifications, e.g., goals, constraints, or utility functions, and more importantly, that can map the high-level specification onto lower-level actions.
Furthermore, all autonomic function units of a system need to upgrade or evolve themselves from time to time, to deal with unforeseen situations. This requires being able to update the adaptation logic (Knowledge), build modifiable runtime models, and map the updated models onto the reconfigurable application function units of the system.
12.2. Modeling Dynamic Adaptation Capability
Going back to the environment modeling-based approach, the main task of system capability modeling is to derive the system specification in terms of the system environment models and the system goal models. As we can see from Figs. 12.1 and 12.2, a common unit in both architectures is the application system with adaptable behavior. This is, in fact, the system implementing the business logic; we call it the basic or core system. For a dynamically adaptive system, apart from the capabilities of its basic system, functions such as sensing, deciding, planning, and reconfiguring fall within the scope of the adaptation capability.
However, different from other capabilities, the dynamic adaptation capability has three special concerns:
1. It relies on an adaptation mechanism, which implements the adaptation logic. The adaptation mechanism is responsible for dynamically adapting the basic system, which implements the particular application logic.
2. The adaptation mechanism needs to be aware of changes in the interactive environment as well as changes to users' goals, so that it can make a decision about whether the system needs to adapt and decide how to adapt the basic system's behavior, according to the adaptation logic, to better satisfy users' goals.
3. To allow the adaptation logic to be easily updated, it is better for adaptation logic to be explicitly defined separately. In this way, the adaptation logic can also evolve at runtime even if the basic system evolves at runtime.
This section proposes a new perspective for capturing the adaptation logic; it then explores the capability of the adaptation mechanism. We propose that the adaptation mechanism is a metalevel mechanism that is responsible for managing the runtime models of the basic system, reasoning about them based on the sensed environment state, and controlling the basic system's behavior according to the adaptation logic to better satisfy the user's goals. Before presenting the principles of this perspective, we first illustrate the architecture, shown in Fig. 12.3.
This figure clearly shows the main components of a dynamically adaptive mechanism. The two kernels are the Basic System and the Adaptation Mechanism. The former implements the application logic and the latter implements the adaptation logic. There is a family of application logics, which makes the basic system adaptable. The main functions of the adaptation mechanism are sensing the environment, knowing the user's requirements, reasoning to make decisions based on the adaptation logic, and actuating the reconfiguration of the basic system. There is also a family of environment contexts and a family of the user's requirements settings. The former represents how many distinguishable situations the basic system can work in, and the latter captures users' different desires when they use the basic system.
In the following subsections, we will explain the main principles of this architecture.
12.2.1. Conformance Among Req, Env, and Spec as Dynamic Adaptation Logic
As we mentioned before, because it is different from the application logic, the dynamic adaptation logic is a kind of metalevel requirement. It takes the objective-level capabilities, i.e., the functions implementing the business logic, as its basis, and it decides and controls the scheduling and execution of the application logic according to the environment context and the requirements setting.
The following analytical structure, which relates the system specification (Spec) to its goal requirements (Req) and the environment (or surroundings) of the system (Env), is well recognized:

Spec, Env ⊨ Req

which means that if a system realizing Spec is installed in, and interacts with, the environment described by the properties in Env, the environment will exhibit the required properties given in Req, i.e., satisfy the requirements. This formula depicts a kind of conformance relationship between the requirements setting, the environment context, and the system. If the relationship holds, we say the system Spec is competent for the requirements setting Req in the environment context Env. For a normal system development problem, the system specification Spec is the final artifact of the requirements engineering process.
However, when dynamic adaptation is necessary, we assume that both the requirements setting and the environment context may change from time to time. When such changes happen, does the relationship that held before still hold afterward? Concretely, at time t1, the system realizing Spec is competent for Req1 in Env1. If, during the period between t1 and t2, Req1 becomes Req2 and Env1 becomes Env2, is Spec still competent for Req2 in Env2? The answer could be "Maybe not."
Hence, the purpose of dynamic adaptation is to face the changing requirements setting and environment context and continuously preserve the conformance relationship among the current requirements setting, the current environment context, and the system. This requires the system to be adapted accordingly, as smoothly and quickly as possible at runtime, when violation of the conformance is detected. All three elements have to reside in the runtime system.
Fig. 12.4 uses a sequence diagram to show the behavior of the three elements with changes to the relationship at runtime. The three actions annotated in the lifeline of the system are the tasks of adapting the basic system, whereas the changes to the environment context and requirements setting are out of the system's control.
If we use the terminology of the Problem Frame approach (Jackson, 2001), the task of developing a dynamic adaptive system can be stated as:
There is a basic system whose solution is to be controlled, in accordance with the environment context, as well as the requirements setting, and the problem is to build a system that will detect the changes in both the environment context and the requirements setting, to decide a suitable solution for its behavior and impose the control accordingly.
To simplify the explanation, in this chapter we use feature models to represent the families of requirements settings, environment contexts, and basic system configurations. Furthermore, to unify the representation, we also use feature models to represent the functionality of the dynamic adaptation mechanism. Then, recalling the feature models and the family of feature models, we let Req = {req1, req2, …} be the family of requirements settings, Env = {env1, env2, …} be the family of environment contexts, and Spec = {spec1, spec2, …} be the family of basic system configurations.
The conformance relationship among Req, Env, and Spec can be instantiated as a set of associations:

(reqj, envi, speck)

such that given reqj and envi, when the system that realizes speck is deployed in envi, their collaboration can meet reqj. That means speck is a solution fitted to the problem of reqj in envi. This can be read as:

speck, envi ⊨ reqj
With such a conformance relationship as the adaptation logic, the high-level features of the adaptation mechanism may contain: (1) detecting and deciding the current environment context envi; (2) detecting a (potential) violation of the requirements setting reqj; and (3) reconfiguring and deploying the basic system speck.
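The conformance relationship as a set of associations can be sketched as a simple lookup table, keyed by the current requirements setting and environment context. The configuration names below are illustrative assumptions:

```python
# Sketch of the conformance relationship as a set of (req, env, spec)
# associations: given a requirements setting and an environment context,
# select a conforming basic system configuration. Names are illustrative.
CONFORMANCE = {
    ("req_comfort", "env_day"):   "spec_full_service",
    ("req_comfort", "env_night"): "spec_quiet_mode",
    ("req_economy", "env_day"):   "spec_low_power",
}

def select_spec(req: str, env: str) -> str:
    """Return a basic-system configuration conforming to (req, env)."""
    try:
        return CONFORMANCE[(req, env)]
    except KeyError:
        raise LookupError(f"no conforming configuration for ({req}, {env})")

assert select_spec("req_comfort", "env_night") == "spec_quiet_mode"
```

Detecting the current environment context and a requirements violation then reduces to recomputing the key and redeploying the selected configuration.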
12.2.2. Structuring the Environment
In many cases, we do not need to detect all environment features to decide whether to change. Some important features can deliver a hint that implies the necessity of dynamic adaptation. We include another factor, the situation (sit), to capture such hints. Each significant situation consists of a subset of environment features. For any situation sit, there is at least one environment configuration env such that sit ⊆ env. Hence, a situation in fact groups environment configurations into an environment configuration class.
Situation is meaningful in decision making about the adaptation. For example, “everybody is sleeping” is a situation that represents “everybody has been in bed.” When in this situation, the system, e.g., the heating/cooling system, normally switches to “sleeping mode” without taking into account other environmental features.
In fact, situation is a concept to which philosophers and logicians have paid much attention. The earlier formal notion of situation was introduced as a way to give a more realistic formal semantics to speech acts than was previously available. In contrast to a "world," which determines the value of every proposition, a situation corresponds to the limited part of reality that we perceive, reason about, and live in. With limited information, a situation can provide answers to some, but not all, questions about the world. One advantage of including situations is that the sensing cost decreases, because normally only part of the environmental parameters needs to be detected to decide a situation. This is important when the number of environmental entities is large. The other advantage is the fit with human recognition; i.e., in many cases, to identify a situation, some feature constraints are mandatory and need to be met strictly, whereas others need much less care.
Correspondingly, a similar concept, the "behavior pattern" (bp in brief), can be defined as a partially assigned basic system configuration. It assigns only part of the system's features and leaves the others free; i.e., for any bp, there is a valid system configuration spec such that bp ⊆ spec.
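Both a situation and a behavior pattern are partial assignments over a full configuration, so membership checking is a simple agreement test. A minimal sketch, with illustrative feature names, using the "everybody is sleeping" situation from above:

```python
# A situation (and likewise a behavior pattern) as a partial assignment:
# it constrains some features and leaves the rest free. Names illustrative.
def matches(partial: dict, config: dict) -> bool:
    """True if a full configuration agrees with a partial assignment."""
    return all(config.get(f) == v for f, v in partial.items())

# Situation "everybody is sleeping": only these features are mandatory.
sleeping = {"everybody_in_bed": True, "lights": "off"}

env_config = {"everybody_in_bed": True, "lights": "off",
              "outdoor_temp": 12, "windows": "closed"}

assert matches(sleeping, env_config)  # env_config belongs to this situation class
```

The same `matches` check applies a behavior pattern to a basic system configuration, which is what makes the coarse-grained conformance rules below cheap to evaluate.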
Then, to enable a dynamic decision about system adaptation, three categories of conformance relationship can be identified:
• situation → goal setting:
The first is a relation between a situation and a goal configuration. It captures the phenomenon that users have different desires in different situations.
• goal setting: situation → behavior pattern:
The second is a relation between situations and behavior patterns for a given goal setting; i.e., under a particular goal setting, the basic system behaves in different patterns when situated in different situations.
• goal setting: environment features → basic system features:
The third is fine-grained conformance compared with the second. It is conformance between individual environment features (a situation, by contrast, is a group of environment features) and individual basic system features (a behavior pattern, by contrast, is a group of basic system features).
The sequence diagrams shown in Fig. 12.5 specify the interactions between the components related to the adaptation mechanism. The important idea is to maintain a system configuration (which can be realized as the runtime system) that synchronizes with the environment configuration and the goal configuration by following the conformance relationships. The environment configuration is updated when changes are perceived in the interactive context of the runtime system. The goal configuration is updated when the user changes his or her preference, or when the user has prescribed a goal preference update for the software in a certain situation. The configuration of the basic system may then need to be adjusted accordingly.
12.2.3. Capability Model for Adaptation Mechanism
To develop the adaptation mechanism, we need to decide its capability requirements. The adaptation mechanism needs to possess three aspects of capability:
Capability One: Situation Awareness: A formal definition of the term "situation awareness" is given in Endsley and Jones (2012). It says that situation awareness is "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future." This obviously implies three subcapabilities:
• environment perception: The first subcapability is the perception of environment entities, i.e., collecting necessary information. What information needs to be perceived (and how to perceive it) is among the concerns at this layer. It is important to ensure that the information needed to estimate how the environment will behave in the very near future can be obtained.
• situation prediction: The second subcapability is comprehending the pieces of perceived information and their combination, i.e., synthesizing the perceived information to derive meanings about the environment and then suggest the situation. The meanings of the perceived information need to be derived in terms of the user's goals.
• goal-setting perception or prediction: The third subcapability is perceiving changes to the goal requirements or predicting future ones. This alerts the system to make the necessary response or to act proactively.
Normally, conducting the perception–comprehension–projection process should take the users' goals and the priority of the goals into account (this is known as the goal-driven process). The goals serve as the guideline for what information is perceived and how to perceive and interpret it. There is also the converse case: when information that is independent of the currently pursued goals occurs to a significant degree, the system needs to reprioritize its goals (this is known as the data-driven process).
Capability Two: Decision Making: After predicting the status in the near future, decisions should be made regarding whether changes need to be made and how to change to match the estimated future status of the environment. Achieving this implies two subcapabilities:
• goal violation detection: The adaptation mechanism should know the current goal setting so that it can determine online whether the goal setting has been satisfied to a certain degree. If the satisfaction degree of the goal setting is lower than a certain threshold, the goal setting is violated. This is the capability of obtaining the satisfaction degree of the system's goals.
• basic system configuration selection: If the adaptation mechanism decides to make a change, it should know the basic system variations to allow the change. This is the capability of inferring and selecting the optimal variation to match the current or future status of the environment and/or goal setting.
Capability Three: Performance of Actions: After the new configuration of the basic system has been generated, it needs to be deployed by activating the selected components. The effect of the activation is to enable the controllable environment entities to make transitions and finally stay in the states designated in the configuration. Then the users' goal setting can be satisfied as expected. This implies the following subcapability:
• basic system redeployment: After the adaptation mechanism selects an optimal basic system configuration for responding to the coming status of the environment and/or goal settings, the adaptation mechanism should switch the basic system from the current configuration to the selected one and redeploy the new configuration. This switching process should be as smooth as possible.
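The decision-making subcapabilities can be sketched as a threshold test followed by a selection over candidate configurations. The satisfaction values, threshold, and configuration names below are illustrative assumptions; in practice, satisfaction degrees come from goal reasoning:

```python
# Sketch of decision making: detect goal violation against a threshold, then
# select the variation with the highest estimated goal satisfaction.
def goal_violated(satisfaction: float, threshold: float = 0.7) -> bool:
    """The goal setting is violated when its satisfaction degree drops below threshold."""
    return satisfaction < threshold

def select_configuration(candidates: dict) -> str:
    """Pick the basic system variation with the highest estimated satisfaction."""
    return max(candidates, key=candidates.get)

current_satisfaction = 0.55
candidates = {"spec_a": 0.62, "spec_b": 0.91, "spec_c": 0.78}

if goal_violated(current_satisfaction):
    chosen = select_configuration(candidates)
# chosen == "spec_b"
```

The chosen configuration is then handed to the redeployment step, which should switch the basic system over as smoothly as possible.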
Based on this analysis, the metalevel model of the adaptation mechanism accommodates the three modeling elements, i.e., the environment model, the goal model, and the basic system model. The components of the mechanism and the relations among these components are shown in Fig. 12.6.
Of course, if the feature model is of the metalevel, the concrete capabilities should be identified when modeling a particular application by answering questions such as: What environment parameters need to be perceived? Which situations are significant? How will the goal violation be detected? How will the basic system migrate? These are based on the adaptation logic that the application needs.
12.3. Expression of Conformance-Based Dynamic Adaptation
This section presents a rule-based language to represent the conformance-based dynamic adaptation. We show how to identify the necessary capabilities for realizing the required dynamic adaptation. Some criteria are given to evaluate the rule-based specification.
12.3.1. νRule: Syntax and Semantics
To represent the conformance relationships that enable dynamic adaptation, we use a new rule-based language, the view-based rule language νRule (Zhao et al., 2016). In this language, each rule contains three parts: the view, the condition, and the expectation. The view part represents the guard of the rule, the condition part captures the detected changes, and the expectation part expresses the expected features that the system needs to enable. Formally, a νRule is formed as:
• View: a predefined view, i.e., a set of observable features representing the invariant of the rule. These features have to be preserved when the rule is used
• Conditions: a conjunction of conditions, in which each condition is an observable feature. Whenever the conjunction of conditions holds, the rule can be activated
• Expectations: a set of expected features. Whenever the rule is activated, the system should take actions to enable each of these features
With this formulation, a νRule can be read as: given the view, the feature combination of the conditions asks for the features in the expectations. Or: given the view, when the features in the conditions are detected to be bound, the features in the expectations need to be enabled (bound). The formal structure of a νRule is defined as follows:
View-based rule ::= View ":" Conditions "→" Expectations
View ::= { Feature-binding }
Conditions ::= Condition | Condition "∧" Conditions
Condition ::= Feature-binding
Expectations ::= Expectation | Expectation "," Expectations
Expectation ::= Feature-binding
Feature-binding ::= No binding | Logic value-binding | Enum value-binding | Requirement value-binding
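As an illustration, this rule structure can be sketched as a small data model. The sketch below is ours, not part of the νRule definition: the class and field names are illustrative, and feature bindings are simplified to a plain mapping from feature names to bound values.

```python
from dataclasses import dataclass

# A feature binding maps a feature name to its bound value
# (True/False for logic bindings, a string for enum bindings, etc.).
Bindings = dict


@dataclass
class NuRule:
    """Sketch of a view-based rule: View : Conditions -> Expectations."""
    view: Bindings          # invariant feature bindings (the guard)
    conditions: Bindings    # detected changes that trigger the rule
    expectations: Bindings  # features the system should enable

    def is_activated(self, observed: Bindings) -> bool:
        """The rule fires when the view holds and every condition is bound."""
        guard_holds = all(observed.get(f) == v for f, v in self.view.items())
        conds_hold = all(observed.get(f) == v for f, v in self.conditions.items())
        return guard_holds and conds_hold

    def apply(self, config: Bindings) -> Bindings:
        """Enable the expected features, leaving other bindings untouched."""
        new_config = dict(config)
        new_config.update(self.expectations)
        return new_config
```

The `view` acts as the invariant: `apply` only rewrites bindings named in `expectations`, so features mentioned only in the view keep their values.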
Using νRule as the specification language to express adaptation logics has advantages because νRule has some checkable correctness properties. Let R be a finite set of νRules that specifies the adaptation logic. The checkable properties include:
• (invariant of rule). A νRule is invariant if the observable features of its view are preserved after adaptation. That is, let f be a feature in the view; for each effect on an environment domain imposed by a basic system feature in the expectations, the binding of f is the same before and after the effect takes place.
• (rule stability). Let r be a correct view-based rule, c a basic system configuration, and c′ = r(c) the new basic system configuration resulting from executing r on c. If r(c′) = c′, we call rule r a stable rule.
• (order independence). Let r1 and r2 be two different νRules, and c a basic system configuration. Rules r1 and r2 are said to be order-independent if the execution result of r1 followed by r2 on c is the same as that of r2 followed by r1, i.e., r2(r1(c)) = r1(r2(c)). If two rules are order-independent, their effects on a configuration either do not overlap or are the same.
• (confluence). A set R of νRules is confluent if the execution result of all rules in R is always the same regardless of the order in which they are applied.
• (well-behavedness). Let R be a set of νRules. If the following two conditions are satisfied: (1) each rule in R is stable; (2) the rules in R are pairwise order-independent, then R is confluent and stable.
12.3.2. Conformance Relationships by νRules
According to the previous discussion, the conformance relationships capturing the dynamic adaptation logic can be expressed as the following three categories of νRules:
• νRule for “if situation, then goal setting”: representing that the users may have different goal settings in different situations; i.e., the users' goal setting may change at runtime when the system is in different situations. This is a Type I rule.
From now on, we use “smart home,” i.e., the new generation of “home automation” that adapts its behavior in response to changes, as the running example of dynamically adaptive home automation. When “nobody is home” (the situation), “security” needs to be at “the highest level” and “energy consumption” needs to be “as low as possible,” whereas we do not care about performance or comfort. This can be expressed as a Type I νRule.
• νRule for “goal setting: if situation, then behavior pattern of basic system”: representing the stimulus–response pattern (i.e., the shortcut pattern) under a certain goal setting. This means that a situation asks for a system behavior pattern under a certain goal setting. This is a Type II rule.
For example, in “smart home” applications, users often choose “comfort” as the main concern (a certain goal setting). When “everybody goes to sleep” (a situation), users normally expect the system to maintain the “sleeping mode” (the behavior pattern). This can be expressed as a Type II νRule.
• νRule for “goal setting: if fine-grained environment features, then fine-grained basic system features”: representing that a set of environment features asks for a set of system features, given a particular goal setting in a particular situation. It captures the sensing or decision-making pattern under certain goal settings and in certain situations. We call it a Type III rule.
For example, in the “smart home” application, when users choose “comfort” as the main concern, they normally expect the system to “set the air conditioner to heating mode” (the basic system feature) when it is “snowy” (the environment feature). This can be expressed as a Type III νRule.
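The three smart-home rules can be written down schematically. The feature names below are illustrative placeholders rather than the chapter's actual feature model, and the dictionary layout is simply one plausible encoding of the view/conditions/expectations parts:

```python
# Type I: if situation, then goal setting (no goal setting in the view).
type1 = {"view": {},
         "conditions": {"nobody_home": True},
         "expectations": {"security": "highest", "energy": "minimal"}}

# Type II: under a goal setting (the view), a situation asks for a
# behavior pattern of the basic system.
type2 = {"view": {"goal": "comfort"},
         "conditions": {"everybody_asleep": True},
         "expectations": {"behavior_pattern": "sleeping_mode"}}

# Type III: under a goal setting, a fine-grained environment feature
# asks for a fine-grained basic system feature.
type3 = {"view": {"goal": "comfort"},
         "conditions": {"weather": "snowy"},
         "expectations": {"air_conditioner": "heating_mode"}}
```

Note the progression: Type I rules have an empty view and expect goal features; Types II and III carry the goal setting in the view and expect system features at coarser or finer grain.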
12.3.3. Function Identification According to νRules-Based Adaptation Logic
With the set of user-defined νRules, we can identify the operational-level features of the adaptation mechanism. In terms of the metalevel capability model, the elaborative capabilities can be derived step by step, as follows:
• situation awareness:
This feature may have all of the sensing features as its subfeatures:
• environment perception:
These features enable the following sensing functions. They also answer how many sensors the application needs, based on the adaptation logic. The sensors can be hardware devices, such as those in sensor networks, or soft sensors realized by software; both are the same in the sense that they obtain the value or state of an environment domain. Concretely, we have:
- For any νRule of Type I, each environment feature in its conditions needs a functional feature in the adaptation mechanism to perceive its current value or state
- For any νRule of Type II, each environment feature in its conditions needs a functional feature in the adaptation mechanism to perceive its current value or state
- For any νRule of Type III, each environment feature in its conditions needs a functional feature in the adaptation mechanism to perceive its current value or state
Aggregating these features results in a set of environment perception features. These features are the subfeatures of “environment perception” in the adaptation mechanism.
• goal-setting perception:
For any νRule of Types II and III, each goal feature in its view needs a sensor (often a soft sensor) in the adaptation mechanism to perceive the goal setting, e.g., the condition, the optimization objective, etc. These goal sensors are aggregated to form the subfeatures of “goal-setting perception” in the adaptation mechanism. In many applications, users want to choose the goal setting at runtime; in this case, the goal sensors may offer an interface that allows users to set the goal setting at runtime.
• situation determination:
For any νRule of Types I, II, and III, each situation needs a decider in the adaptation mechanism to decide whether the basic system is situated in that particular situation when changes to the environment features are perceived. All of the situation deciders together form the set of subfeatures of “situation determination” in the adaptation mechanism.
• decision making:
We assume for a dynamically adaptive system that:
At any time point, with the current goal setting and situation, when the basic system runs according to its specification, the goal setting is satisfied.
Then, when newly detected environment features lead to a new situation, it is necessary to decide whether the basic system needs to change its behavior to keep the goal setting satisfied. To do this, the following two subfeatures can be defined:
• goal-violation detection: This feature asks the adaptation mechanism to realize the following functions:
- If the situation is unchanged but some environment features have changed, check whether the current goal setting is still satisfied. If the relationship has been violated, select the set of all Type III νRules that can be activated by the changed environment features, and send out a notification about the violation together with these rules. (This is a fine-grained change.)
- If the situation has changed, select the set of all Type I νRules that can be activated by the new situation. In terms of these rules, select a goal setting (otherwise, keep the current goal setting). (This is a goal change.)
Check whether the goal setting is still satisfied:
If this relationship has been violated, select the set of all Type II νRules that can be activated by the new situation and goal setting, and send out a notification about the violation together with these rules. (This is a coarse-grained change.)
• Basic system configuration selection:
When receiving the violation notification together with the set of activated νRules, this feature finds a system configuration such that the goal setting holds with the best satisfaction (in terms of the optimization objectives associated with the goals in the goal setting), taking the system features in the expectation parts of the activated νRules as the feature constraints.
When more than one configuration is suitable, the best should be selected according to the degree to which it satisfies the goal setting, as well as the smoothness of switching from the current configuration to the new one. Finally, the profile of this configuration needs to be generated for redeployment.
• Basic system redeployment:
Redeploying the basic system means switching from the old configuration to the newly selected one. This includes all of the activities that make the system available for use under the new configuration according to the generated profile. Detailed deployment activities are beyond the scope of this chapter.
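As a rough illustration of the feature-identification and configuration-selection steps described above, the following sketch assumes a simple dictionary representation of rules and configurations. The field names, weights, and scoring functions are our assumptions, not the chapter's formal definitions:

```python
def identify_sensing_features(rules):
    """Aggregate, over a rule set, the subfeatures of 'environment
    perception', 'goal-setting perception', and 'situation determination'.
    Each rule is a dict with 'type' ('I', 'II', or 'III'), 'view' and
    'conditions' (feature-binding dicts), and an optional 'situation'."""
    env_sensors, goal_sensors, situations = set(), set(), set()
    for rule in rules:
        # Every environment feature in a condition needs a perception feature.
        env_sensors.update(rule["conditions"])
        # Goal features in the view of Type II/III rules need goal sensors.
        if rule["type"] in ("II", "III"):
            goal_sensors.update(rule["view"])
        # Every situation mentioned by a rule needs a situation decider.
        if rule.get("situation"):
            situations.add(rule["situation"])
    return env_sensors, goal_sensors, situations


def select_configuration(candidates, current, goal_score,
                         w_goal=0.7, w_smooth=0.3):
    """Pick the candidate configuration with the best weighted combination
    of goal satisfaction (goal_score maps a configuration to [0, 1]) and
    switching smoothness (the fraction of unchanged feature bindings)."""
    def smoothness(cfg):
        shared = set(cfg) & set(current)
        if not shared:
            return 0.0
        return sum(cfg[f] == current[f] for f in shared) / len(shared)

    return max(candidates,
               key=lambda c: w_goal * goal_score(c) + w_smooth * smoothness(c))
```

The weighting favors goal satisfaction over smoothness here; in practice the trade-off would be application-specific.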
We use the “home automation” system as an example to show the relationship between the basic system and the dynamically adaptive system. Fig. 12.7 gives a fragment of the feature model of a “home automation” system; it is a basic system. When it is enhanced to be dynamically adaptive, it is necessary to embed it into an adaptation mechanism.
Fig. 12.8 shows a fragment of the feature model of “Smart Home.” It extends the feature model of “Home Automation” in Fig. 12.7 by including the features for the “adaptation mechanism” as well as all of the subfeatures related to this specific application. These subfeatures are identified under the guidance of the adaptation logics specified by νRules, based on the smart home environment feature model (Fig. 6.2) and the goal feature model (Fig. 6.4).
After constructing the feature model of the adaptation mechanism, we may need to evaluate it; a set of criteria can be adopted for this purpose. The basic criteria may include correct sensor interpretation, correct adaptation initiation, correct adaptation planning, consistent interaction between adaptation logic and application logic, consistent adaptation execution, and correct actuator actions.
In addition, in terms of the three feature models, i.e., the environment feature model, the goal feature model, and the basic system feature model, we can let N_E, N_G, and N_S be the numbers of the environment feature configurations, the goal feature configurations, and the basic system feature configurations, respectively. Also, for a νRule-based specification R, we can let N_E(R), N_G(R), and N_S(R) be the numbers of environment feature configurations, goal configurations, and basic system configurations implied by R. Three measurements can then be defined to measure the coverage of the specification of the adaptation mechanism: the environment configuration coverage is N_E(R)/N_E; the goal configuration coverage is N_G(R)/N_G; and the basic system configuration coverage is N_S(R)/N_S.
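Once the configuration counts are available, the three coverage measures are simple ratios. The sketch below uses hypothetical counts purely for illustration:

```python
def specification_coverage(implied, total):
    """Coverage of a rule-based specification R: for each kind of model
    ('environment', 'goal', 'system'), the ratio of the number of
    configurations implied by R to the total number of configurations."""
    return {kind: implied[kind] / total[kind] for kind in total}
```

A coverage well below 1 for any of the three models signals parts of the variability space that the adaptation logic never addresses.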
12.4. Summary
This chapter analyzed the essence of the dynamic adaptation capability from the perspective of requirements modeling. We showed that dynamic adaptation is a kind of metalevel capability concerned with the variability of the concrete application. We modeled the variability and the response to that variability as different conformance relationships among the application models, i.e., the goal model, the environment model, and the basic system model. Moreover, we differentiated three kinds of conformance to capture three kinds of dynamic adaptation, i.e., goal adaptation, stimulus–response adaptation, and fine-grained adaptation. We used the feature model as the unique representation of these models and then invented a new rule-based representation, νRule, to represent the conformance relationships.
The advantages of such an approach are that:
• It makes the capability modeling of the dynamically adaptive system more systematic by explicitly stratifying the dynamically adaptive system into two layers, i.e., the basic system layer and the adaptation mechanism layer. This clearly captures the architecture of dynamically adaptive systems that explicitly include interactions with the three online models. In this way, the concerns about developing dynamically adaptive systems have been separated with clear boundaries, so that complexity can be controlled effectively.
• A metalevel feature model has been presented for the adaptation mechanism that can serve as a reference model for any dynamically adaptive system. Moreover, strategies have been given to identify the operation-level features of the adaptation mechanism. This serves as a guideline to feature identification.
• It is in the line of rule-based approaches but extends ECA rules by assigning conformance-based semantics to the rule elements. It also adds a new part to the ECA rule, the view part, to represent invariance during adaptation, addressing the lack of dynamic strategy adjustment in plain ECA rules. It uses a formal language, νRule, based on bidirectional transformation. This implies that the specification of dynamically adaptive systems may be an executable specification based on the BX engine.1 This is an important feature for enabling simulation of the dynamically adaptive system before design and implementation.
There are other topics worth investigating:
• The requirements of dynamic adaptation might need to be evolvable. Requirements may evolve because users want to strengthen, weaken, or change their requirements for dynamic adaptation when certain conditions apply at runtime. Such evolving requirements have an important role in the lifetime of a system, in that the users define possible changes to requirements along with the conditions under which these changes apply.
• Capability measurement and comparison are important for a sound method. This includes defining criteria for measurement and using those criteria to compare different capability specifications.
• The approach can be enhanced by including online learning, which allows systems to learn new adaptation strategies automatically from historical data. This is important for coping with unanticipated changes at runtime. The learning capabilities come into play after the system is put into use and can be enabled by pattern recognition or data-mining algorithms.