Chapter 5

Overload Management Through Selective Data Dropping

5.1. Introduction

During system and network overload periods, excessive delays or even data losses may occur. To maintain the quality of control of an NCS, overload of the implementation system (both the computer and the network) must be handled correctly. As seen in the previous chapters, a common approach to this overload problem is to dynamically change the sampling period of the control loops. In this chapter, as an alternative to explicit sampling period adjustment, we present an indirect sampling period adjustment approach based on selectively dropping sampled data according to the (m, k)-firm model [HAM 94]. The interest of this alternative lies in its easy implementation, at the price of a coarser adjustment, since only multiples of the basic sampling period can be exploited. Upon overload detection, the basic idea is to selectively drop some samples according to the (m, k)-firm model so as to avoid long runs of consecutive drops. As a consequence, the shared network and processor are less loaded. However, control stability and performance must still be maintained at an acceptable level. This can be achieved by keeping the set of control tasks sharing a processor, or the set of messages sharing the network bandwidth, schedulable under their (m, k)-firm constraints.

In this chapter, we first give a sufficient condition for scheduling a set of control tasks under (m, k)-firm constraints (section 5.2). Then, through several examples, we present methods for determining the value of k that still maintains the stability of a given control loop (section 5.3), and the values of m and of the control gains that minimize a total cost function, either when the control task configuration changes (section 5.4) or when, in addition, the plant states change (section 5.5). The problem of control loop stability under on-line parameter switching is also discussed in section 5.5.1.

To better illustrate this approach, let us first present the generic system architecture. Then, we identify the problem to solve and introduce the necessary notation.

5.1.1. System architecture

We consider a global control architecture that integrates a set of plants, each one being controlled by a discrete controller. We are interested in the deployment of the control application onto limited resources; for example, all the numerical controllers share the same processors, or all the plant states sampled by sensors are transmitted to the controllers through the same communication architecture. Moreover, we suppose that, according to the global state of the plants, a supervisor chooses the current working mode of the global system. In particular, it can stop the control of a plant, start the control of a plant or modify the control strategy of a plant (control law, sampling period, etc.). The consequence of a transition between working modes is a modification of the set of active tasks/messages:

– some of them are stopped,

– new tasks are activated or new messages are transmitted,

– the characteristics of tasks may be modified, for example, their Worst Case Execution Time (WCET); the characteristics of messages may be modified, for example, their size, or the sampling period may be changed by the use of a new control strategy.

This means that the scheduling of the messages on a network or of the tasks on a processor has to be redefined each time the supervisor modifies the global control mode. The schedulability analysis of a set of tasks or messages subject to hard real-time constraints (all the instances have to meet their deadlines) can lead to over-provisioning of the resources, and this oversizing can be worse for an architecture that implements several working modes, as mentioned above. Moreover, the quantitative relationship between control performance and the scheduling of the activities is not well known; the identification of the scheduling parameters therefore generally relies on experiments and/or simulations [SIM 05] that are not exhaustive and so cannot be generalized. Consequently, a feasible schedule obtained by relaxing the timing constraints, as proposed in the classical solutions, does not systematically lead to optimal performance of the control application.

Therefore, we propose a new scheduling architecture for handling such configurations, together with an adaptive technique that adjusts on-line, on the one hand, the (m, k)-firm constraints of the activities (data transmissions and/or tasks implementing the control laws) and, on the other hand, the parameters of the control laws. In this way, the global performance of the application is kept at an optimized level and schedulability under the (m, k)-firm constraints is guaranteed.

In the case study that will illustrate the adaptive approach, we consider plants that correspond to harmonic oscillators (for example, cart systems, a pendulum or an inverted pendulum). Furthermore, the solution is detailed for processor sharing; we will therefore deal with the scheduling of tasks rather than messages.

The scheduling architecture is illustrated in Figure 5.1. The system to control is composed of a set of plants (Plant 1, Plant 2, …, Plant n) that are assumed to be independent. Each plant is controlled by a controller (Controller 1, Controller 2, …, Controller n). In this example, the controllers are deployed as a set of n tasks (τ1, τ2, …, τn) on the same processor. A task handler is dedicated to the identification of the optimal configuration of the (mi, ki)-firm constraints of each task τi running on the processor. The system is observed by a supervisor, assumed to be implemented on another processor. The role of the supervisor is to observe the state of the plants and to decide, at each instant, which plants have to be controlled and which control laws have to be applied. Each time the supervisor modifies the configuration of the system, it sends the task handler the list of active controllers as well as their new characteristics, namely the activation period and the worst-case execution time of the control law. Of course, the schedulability of this task set must be guaranteed in the sense of the (mi, ki)-firm constraints.
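
As an illustration of this interface, the reconfiguration message sent by the supervisor to the task handler can be sketched as a small record; the following Python structure is hypothetical (the field names are not taken from the chapter), and it simply gathers the characteristics that the text says are transmitted, namely the activation period and the worst-case execution time of each active control law.

from dataclasses import dataclass

@dataclass
class ControllerSpec:
    """Characteristics of one active control task, as announced by the supervisor."""
    task_id: int      # index i of the task tau_i
    period: float     # basic activation period h_i (seconds)
    wcet: float       # worst-case execution time C_i (seconds)

@dataclass
class ModeChange:
    """Reconfiguration message: the controllers active in the new working mode."""
    active_controllers: list[ControllerSpec]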

Figure 5.1. The global scheduling architecture


5.1.2. Problem statement

The global problem of finding, at each instant, an optimized schedule of the control tasks can be divided into several sub-problems.

– Firstly, we have to ensure, at the very least, the stability of each controlled system. This is achieved by specifying, for each task τi sharing the processor, the highest value of ki in its constraint (mi, ki) such that stability is ensured under a (1, ki)-firm constraint.

– A second problem concerns the optimization issue. Two main questions have to be answered in this context: what is the cost function? which parameters are to be tuned? For the first question, we will use the Linear Quadratic Regulator (LQR) cost function classically applied in the control community. The second question is answered by the determination of the value of the parameter mi in the (mi, ki) constraint of each task τi.

Solutions to these different problems will be presented in the remaining part of this chapter. But, first of all, let us give more details in the following section on how to schedule tasks under (m, k)-firm constraint.

5.2. Scheduling under (m, k)-firm constraint

The (m, k)-firm model was first proposed by Hamdaoui and Ramanathan in order to precisely characterize the timing constraints required by certain kinds of applications [HAM 94]. It allows the specification of the guarantee level required by real-time applications that tolerate deadline misses for some instances of their tasks or messages. More specifically, in this chapter a task or a message τi is said to have (mi, ki)-firm deadlines if at least mi out of any ki consecutive instances must meet their deadlines; the schedulability analysis of a set of tasks or messages subject to such (m, k)-firm constraints must then furnish a deterministic guarantee. Note that mi = ki = 1 means that each instance of τi is constrained by a firm deadline. Initially, this technique was introduced to deal with overload in real-time message stream handling, but it was soon seen that it could be used in real-time control applications [RAM 99]. Two problems must be solved for such constraint specifications:

– scheduling policy: how to schedule a set of tasks or messages subject to (m, k)-firm constraints;

– schedulability analysis: for a given scheduling policy, how to guarantee that the constraints are satisfied.

The main solutions to these problems are briefly described in the following.
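
Before describing these solutions, the constraint itself can be made concrete with a short, purely illustrative Python helper (not part of the chapter's development) that checks whether a recorded sequence of deadline outcomes satisfies an (m, k)-firm constraint by examining every window of k consecutive instances.

def satisfies_mk_firm(outcomes, m, k):
    """outcomes: sequence of booleans, True when the instance met its deadline.
    The (m, k)-firm constraint requires at least m hits in any k consecutive
    instances; every complete window of the recorded history is checked."""
    return all(sum(outcomes[i:i + k]) >= m
               for i in range(len(outcomes) - k + 1))

# Example: under a (2, 3)-firm constraint, two consecutive misses are a violation.
print(satisfies_mk_firm([True, False, True, True, False, True], 2, 3))   # True
print(satisfies_mk_firm([True, False, False, True, True, True], 2, 3))   # False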

Two classes of scheduling policies have been developed to take the (m, k)-firm constraints into account: dynamic scheduling and static scheduling.

In the following, we consider a set of tasks or message streams τi, each of them being defined by:

– Ci, the largest time needed by an instance of τi to complete its execution (for a task) or its transmission (for a message);

– Di, the relative deadline of τi, supposed to be the same for each instance;

– when τi is periodic or sporadic, its period or minimum inter-arrival time hi;

– mi and ki, the parameters of the (m, k)-firm constraint imposed on τi;

– the (mi, ki)-pattern Πi, defined as the sequence Πi(0), Πi(1), …, Πi(ki − 1), where Πi(j) = 1 indicates that the (j + n·ki)th (n ≥ 0) instance of τi has to meet its deadline and Πi(j) = 0 indicates that it is not mandatory for the (j + n·ki)th instance of τi to meet its deadline; consequently, the sum of the Πi(j), for j = 0, …, ki − 1, is equal to mi.
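
For illustration only, these parameters can be gathered in a small record; the Python sketch below is hypothetical (the chapter defines no such data structure) and simply restates the notation.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class MKTask:
    """Notation of section 5.2 for one task or message stream tau_i."""
    C: float                   # worst-case execution (or transmission) time C_i
    D: float                   # relative deadline D_i
    h: float                   # period or minimum inter-arrival time h_i
    m: int                     # at least m_i deadlines met ...
    k: int                     # ... out of any k_i consecutive instances
    pattern: Tuple[int, ...]   # (m_i, k_i)-pattern Pi_i: k_i values in {0, 1}, summing to m_i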

5.2.1. Dynamic scheduling policy under (m,k)-firm constraints

In [HAM 95], the Distance Based Priority (DBP) algorithm is proposed for non-pre-emptive tasks. It implements a dynamic scheduling policy based, at each scheduling point (e.g. arrival instant or departure instant of a task instance), on the distance of each task or message to a failure state. A failure of τi is defined as the situation where more than ki − mi instances among the last ki ones have missed their deadline. The highest priority is given to the task that has the smallest distance to its failure state; a FIFO strategy is applied when several tasks have the same distance. The schedulability analysis proposed in [HAM 95], based on a Markov chain, provides a probabilistic guarantee. Further studies have improved this approach; in particular, [LI 06] specifies a sufficient condition for a set of periodic or sporadic messages scheduled using DBP and earliest deadline first (EDF).

5.2.2. Static scheduling policy under (m,k)-firm constraints and schedulability issue

While these dynamic policies provide a good quality of service for a set of streams, they appear quite costly in the context of control applications sharing processors or network resources. To deal with such applications, [RAM 99] considered a system composed of several control applications. The controllers are implemented as pre-emptive tasks running on two processors. When one processor fails, all the tasks that were allocated to it migrate to the other one, leading to a possible overload situation. In this case, a static scheduling based on (m, k)-firm constraints is proposed. The mandatory instances of each task are scheduled according to the fixed priority of the task, while the lowest priority is given to the optional instances. Given a couple (mi, ki) for each τi, the instance number a (a ≥ 0) is mandatory if a = ⌊⌈a·mi/ki⌉·ki/mi⌋. This classification has been improved in [QUA 00] by a global approach that determines on-line the mandatory instances of all the tasks τi subject to a (mi, ki)-firm constraint. As this problem has been proved NP-hard, a heuristic is proposed. Furthermore, [QUA 00] provides, on the one hand, an algorithm for evaluating an upper bound on the response time of each τi for a "deeply-red" pattern (a "deeply-red" pattern is such that Πi(a) = 1 if 0 ≤ a < mi and Πi(a) = 0 otherwise) and, on the other hand, demonstrates that if, for each task τi, the response time is less than the deadline for the (mi, ki) "deeply-red" pattern, then each τi is schedulable for any (mi, ki) pattern.
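
As an aside (this sketch is not taken from [RAM 99] or [QUA 00] themselves), the two classifications mentioned above can be generated as follows, assuming that the mandatory test of the fixed pattern is the classical rule a = ⌊⌈a·mi/ki⌉·ki/mi⌋; the "deeply-red" pattern simply makes the first mi instances of each window mandatory.

def ramanathan_pattern(m, k):
    """Fixed (m, k)-pattern: instance a is assumed mandatory iff
    a == floor(ceil(a * m / k) * k / m); integer arithmetic avoids rounding issues."""
    return [1 if a == ((a * m + k - 1) // k) * k // m else 0 for a in range(k)]

def deeply_red_pattern(m, k):
    """'Deeply-red' pattern: the first m instances of each window are mandatory."""
    return [1 if a < m else 0 for a in range(k)]

print(ramanathan_pattern(3, 8))   # [1, 0, 1, 0, 0, 1, 0, 0]
print(deeply_red_pattern(3, 8))   # [1, 1, 1, 0, 0, 0, 0, 0]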

5.2.3. Static scheduling under (m, k)-constraints and mechanical words

In [JIA 05], a new method using the properties of mechanical words is developed for the schedulability analysis of a set of non-pre-emptive tasks under (mi, ki)-firm constraints. First, it is proved that the static patterns defined in the literature can be characterized in the form of mechanical words, leading to a largely simplified schedulability assessment. Then, having identified the drawbacks of these patterns, a new way of determining the (mi, ki)-pattern of each task τi, based on a cellular line, is proposed. Intensive simulations show that this technique enlarges the schedulable region. Considering α = mi/ki, if α is rational, then the mechanical word whose slope is α is ki-periodic; the classification of the instances as mandatory or optional is therefore based on formula (5.1):

(5.1) images
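
Since formula (5.1) is not reproduced above, the sketch below uses the standard definition of the lower mechanical word of slope α = m/k, w(a) = ⌊(a + 1)α⌋ − ⌊aα⌋, as a plausible stand-in; for a rational slope it is indeed k-periodic and contains exactly m ones per period.

def mechanical_word_pattern(m, k):
    """One period of the lower mechanical word of slope m/k:
    w(a) = floor((a + 1) * m / k) - floor(a * m / k), a = 0, ..., k - 1.
    The result is a k-periodic 0/1 pattern with exactly m ones."""
    return [(a + 1) * m // k - a * m // k for a in range(k)]

print(mechanical_word_pattern(3, 8))        # [0, 0, 1, 0, 0, 1, 0, 1]
print(sum(mechanical_word_pattern(3, 8)))   # 3 mandatory instances per window of 8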

The upper bound on the response time of a non-pre-emptive task τi subject to a (mi, ki) constraint is obtained, assuming that the sequence is convergent, as the limit of the sequence defined by the classical recurrent equation (5.2):

(5.2) images

where i > j means that the priority of the task τi is lower than the priority of task τj.

If this bound is not greater than the deadline for each mandatory instance of each task, then the system is proved to be schedulable.

Let us now consider a set of pre-emptive tasks scheduled using a fixed-priority strategy defined by the rate monotonic algorithm (the larger the period, the lower the priority); in [RAM 99] and [JIA 05], it is proved that the length of the interference interval suffered by a task τi due to a higher priority task τj can be limited to the basic period hi (the basic period is the interval between two instances of a task subject to a hard real-time constraint, that is, a (k, k)-firm one). In this case, we first consider a set of intervals associated with the task τi and defined in (5.3). hj is the basic period of a task τj, and i > j means hj < hi (i.e. the priority of the task τi is lower than the priority of task τj):

(5.3) images

Then, the response time of a task may be evaluated by equation (5.4):

(5.4) images

If this response time is not greater than the deadline for each task, then the (m, k)-firm constraint can be met by all the tasks.

This condition is sufficient in general cases. It is necessary and sufficient if the tasks are synchronous (i.e. the first release time of all the tasks starts at the same time, often at t = 0).

5.2.4. Sufficient condition for schedulability assessment under (m,k)-pattern defined by a mechanical word

The computation of the sequence Wi for each task τi, as presented in the recurrence relation (5.4), is non-deterministic and difficult, if not impossible, to carry out on-line. Therefore, we propose below a sufficient condition that ensures the schedulability of a set of tasks under (m, k)-firm constraints. We consider a task set (τ1, τ2, …, τn); these tasks are periodic and their periods, called "basic periods" in the following, are, respectively, h1, h2, …, hn. We assume here that the tasks are scheduled according to a pre-emptive fixed-priority policy based on the rate monotonic algorithm (the larger the basic period, the lower the priority) and that the (m, k)-pattern of each task is defined by a mechanical word through equation (5.1).

The (mi, ki)-firm constraint of each task τi is satisfied if

(5.5) images

where images

In fact, images gives the workload generated, before instant hi, by all the tasks whose priority is higher than that of τi (remember that we are dealing with the special case where h1 < h2 < ··· < hn). So it is clear that the deadline is met for τi if the total workload images can be completed within [0, hi].

Note that the above schedulability condition is sufficient and necessary when the basic period of a task is a multiple of the basic period of all the tasks with higher priority. In the other cases, it degenerates to a sufficient condition.
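
Although equation (5.5) itself is not reproduced above, the idea it expresses can be sketched numerically: for each task, add Ci to the execution demand of the mandatory instances of every higher-priority task released in [0, hi), and compare the total with hi. The Python sketch below implements that reading; the exact counting term is an assumption and should be checked against (5.5). Periods and execution times are given here in milliseconds.

from math import ceil

def mandatory_count(pattern, n_releases):
    """Number of mandatory instances among the first n_releases of a task whose
    0/1 (m, k)-pattern 'pattern' is repeated periodically."""
    k = len(pattern)
    full, rest = divmod(n_releases, k)
    return full * sum(pattern) + sum(pattern[:rest])

def sufficient_test(tasks):
    """tasks: list of dicts {'C': wcet, 'h': basic period, 'pattern': 0/1 list},
    sorted by increasing basic period (rate monotonic priorities). Returns True if,
    for every task i, C_i plus the workload of the mandatory higher-priority
    instances released in [0, h_i) fits within h_i."""
    ok = True
    for i, ti in enumerate(tasks):
        load = ti['C']
        for tj in tasks[:i]:
            releases = ceil(ti['h'] / tj['h'])      # releases of tau_j in [0, h_i)
            load += mandatory_count(tj['pattern'], releases) * tj['C']
        ok = ok and (load <= ti['h'])
    return ok

tasks = [
    {'C': 2, 'h': 10, 'pattern': [1, 1]},           # (2, 2)-firm: all instances mandatory
    {'C': 4, 'h': 15, 'pattern': [1, 0, 1, 0]},     # (2, 4)-firm
    {'C': 6, 'h': 30, 'pattern': [1, 0, 0]},        # (1, 3)-firm
]
print(sufficient_test(tasks))   # True under this assumed reading of (5.5)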

5.2.5. Systematic dropping policy in control applications

This chapter introduces an approach based on the (m, k)-firm model in order to schedule a set of tasks or messages sharing a common resource. The proposed scheduling principles can be seen as a particular case of the period adjustment technique. More specifically, we consider that the period of the activities sharing a resource (tasks implementing the control law or samples transmitted by a sensor over a network) can only be chosen among a limited number of multiples of the basic sampling period. For example, if the basic period for sampling the plant state is hi and if the (mi, ki)-pattern Πi of an activity τi (task or message) is Πi = [10100110], then the actual period will be hi, 2hi or 3hi. In fact, in this chapter, we will consider that each optional instance of τi is systematically dropped.

As introduced in [SET 96] and improved in [EKE 00], the determination of the (mi, ki)-pattern Πi of each activity τi amounts to an optimization problem whose cost function is an indicator of the system performance. In the following, we propose a co-design approach dealing with both aspects: the scheduling parameters and the control parameters. In short, the technique presented in the following sections aims to determine an optimal configuration (or, more exactly, a sub-optimal one in practice) of mi, ki and Πi for each activity τi, and of the gain γi of the controller implemented by the task.

5.3. Stability analysis of a multidimensional system

The purpose of this section is to present how to guarantee at least the stability of the control systems. In fact, we are interested in systems whose structure is presented in Figure 5.2.

5.3.1. Generic model

The plant is defined as a linear continuous-time system whose evolution is modeled by equation (5.6):

(5.6) images

where x, of dimension p, is the state vector of the system and u, of dimension q, is the output of the controller. The parameters A and B are two matrices whose dimensions are, respectively, (p, p) and (p, q). νc is a white noise whose covariance is Rc dt. The constant matrix Rc has dimension (p, p).

The plant state is sampled periodically; the sampling period is noted h. The jth sampled plant state vector, transmitted at time jh and noted xj in the following, is consumed by the linear discrete controller defined in equation (5.7):

(5.7) images

We assume that this controller is implemented as a task. As the purpose is to share the processor among several controller tasks, and therefore to decrease the processor bandwidth consumed by each of them in case of an overload situation, several of its instances are rejected according to an (m, k)-firm model, under the constraint that the stability of the system has to be ensured.

Each time an instance is executed, it produces a command uj. This command is held until a new command is produced. The use of a Zero-Order Hold (ZOH) and the periodicity of the systematic instance dropping defined by the (m, k)-pattern yield a discrete-time behavior of the system modeled by equation (5.8):

Figure 5.2. Control system architecture


(5.8) images

where Φj and Γj are given by the zero-order-hold discretization of (A, B) over an interval of length fj h, that is,

Φj = exp(A fj h),   Γj = ∫0^{fj h} exp(A s) ds · B

The value of fj is the distance, in terms of the number of basic sampling periods, between two updates of the command. For example, if, for a given controller task, the (m, k)-pattern is Π = [1001000], we get f0 = 3 and f1 = 4.
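
The fj sequence can be read directly off the pattern; the short helper below (illustrative only) reproduces the example above.

def inter_update_distances(pattern):
    """Distances f_j, in numbers of basic periods, between consecutive mandatory
    instances of a periodic 0/1 (m, k)-pattern (wrapping around the period)."""
    idx = [a for a, bit in enumerate(pattern) if bit == 1]
    k = len(pattern)
    return [(idx[(j + 1) % len(idx)] - idx[j]) % k or k for j in range(len(idx))]

print(inter_update_distances([1, 0, 0, 1, 0, 0, 0]))   # [3, 4], as in the text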

vj is a discrete white noise with a zero mean and

(5.9) images

As written above, the task instances are periodically rejected according to a given (m, k)-pattern Π; the system defined in (5.8) is therefore periodic with period m, meaning also that Φj+m = Φj and Γj+m = Γj.

5.3.2. Example of multidimensional system

As an example, we propose to study the control of a cart that can move in one direction guided by a rail. The purpose of the control strategy is to drive the cart to a given position. We suppose that the friction parameters are negligible.

An initial reference position is defined and the position of the cart, d, is measured with respect to this reference; ḋ denotes the speed of the cart. Therefore, the state vector of the plant is xT = [d, ḋ]. The parameters p and q of the generic model are, respectively, equal to 2 and 1. The simplified model of the plant is given by the following equation (5.10):

(5.10) images

An identification of the system furnished the value of the parameters k1 and k2:

images

The continuous model of the system according to the state space representation is given by the generic equation (5.6). In this example, images and υc = 0. The output of the cart is periodically sampled (period h). The closed-loop model is given by equation (5.11):

(5.11) images

with images and images

The command elaborated by the controller is

(5.12) images

where L is the gain, L = [kc kd], and xref is the targeted reference, defined as xrefT = [dref 0] (dref is the reference position on the rail where the cart has to stop).

5.3.2.1. Sampling period definition

The basic sampling period hbasic is determined according to the empirical "rule of thumb" formulated in [ÅST 97]: 0.2 < ω0 h < 0.6, where ω0 is the natural frequency of the system (ω0 = 20 for the cart considered in this example). The period hbasic therefore has to be chosen in [0.01, 0.03]. A study of this system shows that its performance in terms of rise time and overshoot is optimal for hbasic = 0.01 s, so we will use this value as the basic sampling period.

5.3.2.2. Controller parameters

We note kc,0.01 and kd,0.01 the parameters of the controller obtained for a basic sampling period hbasic = 0.01 s. Their values are evaluated by solving the Linear Quadratic Regulator (LQR) problem proposed in [ÅST 97]. The final results are kc,0.01 = 121 and kd,0.01 = 6.5.

5.3.3. Stability condition

The configuration of a dropping policy based on the (m, k)-firm model requires identifying the values of k and m and the (m, k)-pattern. The first problem to solve concerns the stability of the system, and the question is which parameter of the constraint is critical for ensuring this property. This identification relies on the following intuitive idea: if a system is stable for a given (1, k)-firm policy, it will be stable for any dropping policy based on an (m, k)-firm constraint with the same k. Therefore, we propose to evaluate, for each task, kmax, the greatest value of k such that the system is guaranteed to be stable for each constraint (m, k) with k ≤ kmax and m ≤ k.

The analysis of equation (5.8) shows that, on the one hand, a system subject to an (m, k)-firm constraint can be considered as a system whose sampling period varies according to a regular sequence specified by the (m, k)-pattern and, on the other hand, a system subject to a (1, k)-firm constraint is equivalent to a system controlled with a sampling period equal to k times the basic period. Therefore, the determination of kmax is equivalent to the determination of the maximal value of h that ensures the stability of the system.
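
This search can also be carried out numerically, as a complement to the analytical Jury conditions used below: discretize the model for h = k·hbasic, close the loop with the fixed gain, and keep the largest k for which the spectral radius of the closed-loop matrix remains below one. The sketch below follows that idea under explicit assumptions: the A and B matrices are a plain double-integrator stand-in for the cart (the identified parameters k1 and k2 are not reproduced in the text), the gain is that of section 5.3.2.2, and numpy/scipy are an implementation choice, not the chapter's.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator stand-in
B = np.array([[0.0], [1.0]])
L = np.array([[121.0, 6.5]])             # [kc, kd] obtained for h_basic = 0.01 s
h_basic = 0.01

def closed_loop(h):
    """ZOH discretization of (A, B) over h, closed with u = -L x (x_ref = 0)."""
    n = A.shape[0]
    M = expm(np.block([[A, B], [np.zeros((1, n + 1))]]) * h)   # Van Loan block trick
    Phi, Gamma = M[:n, :n], M[:n, n:]
    return Phi - Gamma @ L

def k_max(limit=100):
    """Largest k such that the loop sampled at k * h_basic remains stable."""
    k = 0
    while k < limit and max(abs(np.linalg.eigvals(closed_loop((k + 1) * h_basic)))) < 1.0:
        k += 1
    return k

print(k_max())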

In particular, for the example proposed in section 5.3.2, let us note Ψ the closed-loop state matrix, given by

images

By applying the Jury criterion [FRE 63], the following three conditions provide a necessary and sufficient condition for the stability of the system:

images

where images; Ψ(i, j) denotes the element in row i and column j of the matrix Ψ; a1 and a2 are expressed in terms of the controller parameters (kc, kd) and the sampling period h. This defines a domain of admissible tuples (kc, kd, h).

In practice, the determination of the greatest admissible value of the sampling period starts by fixing kd = kd,0.01, which is the value obtained for kd when h = hbasic = 0.01 s (the basic period). Then we study kc as a function of h under the following constraints deduced from Jury's conditions.

So, we need to analyze the function that expresses the maximal value of h for each value of kc, with kd being fixed at the value kd,0.01. In fact, we study the function kcMax(h), which is evaluated by

images

where

images

and we obtain the stability region in the (h, kc) space. This region, for the example proposed in section 5.3.2, is identified by the gray color in Figure 5.3. It represents, for each value of kc, all the admissible values of h. For example, for kc = kc,0.01, the maximal value of h ensuring stability, hmax, is given by the abscissa of the point P, and h can be chosen in the interval [0.01, hmax].

As mentioned before, the period adjustment based on an (m, k)-firm model is equivalent to a regular sequence of time intervals between two consecutive samples, specified by the (m, k)-pattern; each time interval is a multiple of the basic period. Therefore, for the cart system, kmax is determined by kmax = ⌊hmax/hbasic⌋, and the maximal sampling period ensuring stability, corresponding to a (1, kmax) constraint, is equal to kmax hbasic.

5.4. Optimized control and scheduling co-design

Once the stability of the system is ensured, a further step is needed to deal with the optimization issue. To do this, we first define a cost function used for determining an optimal control (section 5.4.1); then, we formulate the global optimization problem for a set of closed loops whose control laws are implemented by tasks sharing one processor (section 5.4.2). Finally, the proposed approach is illustrated by a case study in section 5.4.3.

The optimization approach relies on two phases:

– The first phase is done off-line. For each task τi, a value of ki ensuring the stability of the system is fixed according to the method described in section 5.3.3. For each value mi,j such that 1 ≤ mi,j ≤ ki, an (mi,j, ki)-pattern Πi,j is defined based on the mechanical words technique (see section 5.2). Then, for each pattern Πi,j, the cost function of the system is evaluated. Such a function is proposed in section 5.4.1.

Figure 5.3. Stability region evaluated on the case study presented in section 5.3.2. kc 1, kc 2, kc Lim and kc Max are functions of the sampling period h. The stability region for the cart system is identified by the gray color


– The second phase is carried out by the task handler, as illustrated in Figure 5.1. It concerns the global optimization of the system: for each working mode, the task handler has to configure the (m, k)-firm constraints and the scheduling parameters of each task implementing the control law of a system and sharing the same processor.

5.4.1. Optimal control and individual cost function

An indicator of the closed-loop performance is given, among others, by the Linear Quadratic (LQ) cost function, which provides a form of "cumulative cost" over an infinite horizon. Applied to a system described by equation (5.6), it is defined by the following formula:

images

where Q and R are two matrices that respectively weigh the state and the input of the plant.

For a task τi that implements the ith controller, given ki such that stability is ensured, images is the cost function to minimize. We consider all the possible numbers of mandatory instances mi (1 ≤ mi ≤ ki), assuming that the corresponding patterns Πi are defined using the mechanical words approach; for each possible value of mi and Πi, we determine the optimal control law, in fact the gain of the controller, that minimizes the cost function. As the intervals between two mandatory instances vary over time according to the pattern, the optimal control law is described by a sequence of gains Li,p for p = 1, 2, …, mi. For example, let us consider ki = 10 and mi = 3; the (mi, ki)-pattern Πi is equal to [1001001000], and we need to determine three values of the gain, Li,1, Li,2 and Li,3, to be applied by the control law when producing the command to the actuator at, respectively, the first, second and third mandatory instances.

We consider the discrete time-varying model of the plant presented in (5.8). We denote by hi the basic period of the ith plant controller.

The following discrete form of images is evaluated for each possible value of images and therefore for the corresponding sequences images:

(5.13) images

where Hi is the time horizon of the ith plant, under the condition images is the plant state measured at the pth sample and υi, p the corresponding command sent to the actuator, images, images with Ric, the covariance of υi, and images and images. The optimal control law that minimizes the cost function (5.13) is given by [ÅST 97] as

(5.14) images

where

(5.15) images

Si, p is obtained from the recurrent equation

(5.16) images

Taking into account the periodicity of the pattern, images and images. Consequently, the solution of equation (5.16) is also periodic with period mi,j when calculated over a sufficiently long time horizon [BIT 91], i.e. images. Then, the gain of the controller Li,p is designed using the steady-state solution of the Riccati equation (5.15), and this solution is also periodic:

images

With this computation, it is possible, for a given (mi,j, ki)-pattern, to determine the best sequence of gains Li,p, which partially compensates for the task instance dropouts.
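
Numerically, the backward Riccati sweep over one period of the pattern can be sketched as follows. This is an illustrative discrete-time periodic LQR computation under assumptions already flagged above (double-integrator stand-in for the plant, ZOH discretization, noise neglected, weights chosen arbitrarily); it is not a transcription of equations (5.14)-(5.16).

import numpy as np
from scipy.linalg import expm

def discretize(A, B, h):
    """ZOH discretization of (A, B) over an interval of length h."""
    n, m = A.shape[0], B.shape[1]
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * h)
    return M[:n, :n], M[:n, n:]

def periodic_lqr_gains(A, B, Q, R, h_basic, pattern, sweeps=200):
    """Gain sequence L_{i,p}, one gain per mandatory instance of the pattern, obtained
    by iterating the finite-horizon Riccati recursion backwards over the periodically
    repeated sequence of inter-update intervals f_j * h_basic."""
    idx = [a for a, bit in enumerate(pattern) if bit]
    k = len(pattern)
    f = [(idx[(j + 1) % len(idx)] - idx[j]) % k or k for j in range(len(idx))]
    models = [discretize(A, B, fj * h_basic) for fj in f]
    S = Q.copy()
    for _ in range(sweeps):                      # iterate until the periodic steady state
        gains = []
        for Phi, Gamma in reversed(models):      # backward sweep over one period
            K = np.linalg.solve(R + Gamma.T @ S @ Gamma, Gamma.T @ S @ Phi)
            S = Phi.T @ S @ (Phi - Gamma @ K) + Q
            gains.append(K)
    return list(reversed(gains))

# Pattern of the example above: k_i = 10, m_i = 3, Pi_i = [1001001000].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])                          # arbitrary illustrative weights
R = np.array([[0.0006]])
for p, L in enumerate(periodic_lqr_gains(A, B, Q, R, 0.01, [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]), 1):
    print(f"L_{p} =", L)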

When time goes to infinity (Hi → ∞), the influence of the initial condition decreases and, because Si,l = Si,l+mi,j, equation (5.13) may be written as

(5.17) images

5.4.2. Global optimization

Let us now consider the problem introduced at the beginning of section 5.4. As mentioned before, the problem is the global optimization of a set of controllers deployed as a set of tasks sharing the same processor. In the previous section (5.4.1), we showed, for a given (m, k)-firm constraint and a given (m, k)-pattern, how to determine the sequence of controller gains that compensates for the task instance dropouts between two mandatory instances; this set of gains is identified in order to minimize a cost function that represents a cumulative cost and is derived from the LQ function. Using this evaluation, realized off-line, each task τi that may be activated in one of the possible working modes is characterized by several attributes:

–its basic period hi and its priority Pi,

–the execution time of the task Ci,

–the parameter ki that ensures the stability of the system under a period kihi,

– the number ni of values mi,j for which a systematic dropping of the task instances can be done following an (mi,j, ki)-firm constraint;

– for each such value mi,j:

    - the value of mi,j,

    - the (mi,j, ki)-pattern Πi,j; we recall that it is defined thanks to the mechanical words,

    - the list of gains to apply at the successive mandatory instances of the task: Li,1, Li,2, …, Li,mi,j,

    - Ji,j, the value of the cost function obtained by using these gains repeatedly on each interval [p ki, (p + 1) ki] for p ≥ 0.

This information is used on-line for the resolution of the global optimization problem. As soon as a new working mode is defined, the task handler knows which plants need to be controlled and by which controllers; it then has to choose, for each active controller, and therefore for each corresponding task τi, the value of the parameter mi to be applied in order to minimize a global cost function. We denote by n the number of tasks activated in one working mode.

Let us consider a set of tasks τi, 1 ≤ i ≤ n, described by the parameters given above; the global optimization problem consists in determining (si,1, si,2, …, si,ni) for each task τi that minimizes

(5.18) images

with images

Under the schedulability constraint that has to be met by all the tasks τi, i = 1, …, n; this constraint is equivalent to the one formulated in equation (5.5):

(5.19) images

This optimization problem can be seen as a Multiple-Choice Multiple-Dimension Knapsack Problem (MMKP) [MAR 90], which has been proved to be NP-hard. Therefore, solving this problem on-line requires a heuristic algorithm that provides a tight sub-optimal solution. In the following, we apply a slightly modified version of the computationally cheaper algorithm HEU proposed in [KHA 02]. For our optimization problem, the proposed algorithm is given in Algorithm 5.1.

5.4.3. Case study

In this section, we apply the method presented above to a case study. Let us consider four cart systems, cart1, cart2, cart3 and cart4, similar to the one presented in section 5.3.2. The control of these cart systems can be active or not depending on the working mode chosen by the supervisor (see Figure 5.1).


Algorithm 5.1: Modified computationally cheaper heuristic


1) First find a feasible solution, that is, select mi,j for each τi while satisfying the constraints given in (5.19);

for this purpose, the algorithm HEU is modified by initially setting the value of mi,j of each task τi to 1 (if the solution is infeasible in this case, no other solution will be feasible).

2) Then iteratively improve the solution by replacing, for each τi, the current value of mi,j by another value corresponding to a better performance,

while keeping the constraints (5.19) satisfied;

if no such replacement can be found, the algorithm tries an iterative improvement of the solution which:

a) first replaces mi,j for one task τi which is not schedulable with its current value of mi,j;

b) and then replaces the value of mi,j for all tasks images by a value providing a worse performance;

the original algorithm HEU tries to find, after the first step, a better solution requiring less resource consumption; such a solution does not exist in our model, and this property allows us to remove an unprofitable search procedure from HEU.

3) The iteration finishes when no other feasible solution can be found.
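
A compact, simplified illustration of this modified heuristic is sketched below in Python (it is greedy and much cruder than HEU [KHA 02]): start from mi = 1 for every task, then repeatedly apply the single change of one mi that most improves the global cost while the schedulability test stays satisfied. The cost table and the schedulability test are placeholders to be supplied from the off-line phase and from section 5.2.4.

def greedy_mk_selection(tasks, cost, schedulable):
    """tasks: list of task identifiers; cost[(i, m)]: off-line cost of running task i
    under an (m, k_i)-firm constraint; schedulable(assignment): sufficient
    schedulability test (e.g. the condition of section 5.2.4).
    Returns a dict task -> m, or None if even m_i = 1 for all i is not schedulable."""
    assignment = {i: 1 for i in tasks}                 # step 1: most relaxed dropping
    if not schedulable(assignment):
        return None
    improved = True
    while improved:                                    # steps 2 and 3: iterative improvement
        improved = False
        best = None
        for i in tasks:
            for m in [m for (t, m) in cost if t == i and m != assignment[i]]:
                candidate = {**assignment, i: m}
                gain = cost[(i, assignment[i])] - cost[(i, m)]
                if gain > 0 and schedulable(candidate):
                    if best is None or gain > best[0]:
                        best = (gain, i, m)
        if best is not None:
            _, i, m = best
            assignment[i] = m
            improved = True
    return assignment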


The tasks implementing each controller share the same processor.

A Matlab/Simulink model of the system is specified. The scheduling policy is implemented using the TrueTime toolbox [CER 03]. The system is then analyzed by tracing an indicator of the control performance during the simulation of this model running a given scenario. This indicator is given, for each controlled cart system, by function (5.20), which is evaluated at each simulation step:

(5.20) images

where images and Ri = 0.00006 for each carti, i = 1, 2, 3, 4.

Furthermore, we can observe the state of each task instance during the simulation (running, pre-empted, not activated).

5.4.3.1. Plants and controllers

The generic continuous model of the cart system, carti for 1 ≤ i ≤ 4 (see equation 5.6) is given by

(5.21) images

where Mi is the mass of carti:

images

The controller of each carti is noted as controlleri, the task which implements the controlleri is τi, and its basic period is hi:

images

5.4.3.2. Scheduling parameters

The tasks τi are scheduled according to a pre-emptive fixed-priority policy under an implicit deadline constraint for their mandatory instances (Di = hi). The priorities of the tasks are defined by their basic periods (the larger the period, the lower the priority). So, in any working mode, the following relation holds between the task priorities:

images

Let us now fix the parameters of the (m, k)-firm constraint of each task. The value of the parameter ki is defined in order to ensure the stability of the system under a period equal to ki hi. In this case study, we identified the following values of ki:

images

We consider, in these experiments, that the four tasks have the same execution time:

images

and that the execution time of the task handler is Cth = 2.5 ms. Furthermore, its priority is the highest in the system.

5.4.3.3. Optimal controller

The controller of carti is defined by ui = Li xi, where the gain Li is evaluated for each interval between two consecutive mandatory instances according to the (mi, ki)-firm strategy used for this plant. As detailed in section 5.4.1, the value of the gain is calculated in order to optimize the control performance over this interval. The cost function to minimize in this case study is the discrete form of

(5.22) images

where images and Ri = 0.00006 for each carti, i = 1, 2, 3, 4.

5.4.3.4. Simulation scenario

The system is observed along a scenario that introduces three working modes and two working mode switches:

–at time t = 0s, the first two cart systems (cart1 and cart2) are to be controlled; therefore, only two tasks are running: τ 1 and τ2 implementing, respectively, the Controller1 and Controller2; the set of tasks to be scheduled is {τ1, τ2};

–at time t = 1s, the fourth cart system (cart4) has to be controlled; so, τ4 implementing the Controller4 is activated; the set of tasks to be scheduled is {τ1, τ2, τ4};

–at time t = 2s, the third cart system (cart3) has to be controlled; so, τ3 implementing the Controller3 is activated; the set of tasks to be scheduled is {τ1, τ2, τ3, τ4}.

Two pre-emptive-fixed priority scheduling policies were modeled:

– Hard real-time constraints. We consider that all the task instances are mandatory; in this case, the gain of each controller is constant and determined in order to optimize the cost function (5.22) for the basic sampling period of each system.

– Adaptive system. In this case, we implement a systematic dropping of the non-mandatory instances according to the (m, k)-firm constraints specified for each task; the gain of the controller is adapted to each inter-sample interval length in order to optimize the performance of each system; the value of k is constant for a given system in each working mode, while the value of m is evaluated for each system at the beginning of each working mode in order to optimize the global cost function proposed in (5.18) under the schedulability condition (5.19).

5.4.3.5. Simulation results for hard real-time constraints

The control performance of each system as well as the evolution of the task states are illustrated in Figures 5.4-5.6.

– In the interval [0, 1[, two tasks are periodically activated: τ1, with period h1 = 0.007 s, and τ2, with period h2 = 0.0085 s; all the instances of both tasks meet their deadlines.

– At time t1 = 1 s, the supervisor decides to include the control of cart4, leading to the activation of the task τ4; its period is h4 = 0.0115 s, so it is activated successively at 1 s, 1.0015 s, 1.013 s, 1.0245 s, etc.; we can observe in Figure 5.4 that no instance of τ4 meets its deadline; the starting times of several instances are delayed and all the instances complete after their deadline; as the priorities of τ1 and τ2 are higher than that of τ4, the activation of τ4 has no impact on the scheduling of the two other tasks, which always meet their deadlines;

Figure 5.4. Task evolution under the hard real-time constraints strategy. The detailed time interval is [0.98, 1.07] and includes a working mode switch at time t1 = 1 s: before t1, only two tasks are activated (τ1 and τ2); at time t1, the plant cart4 has to be controlled, leading to the activation of the task τ4


– At time t2 = 2 s, the supervisor includes the control of cart3, leading to the activation of the task τ3 at times 2 s, 2.0115 s, 2.023 s, etc.; we can observe in Figure 5.5 that no instance of this new task meets its deadline; furthermore, the interference of the three tasks of higher priority, τ1, τ2 and τ3, makes the processor unavailable for the task τ4, so that all the instances of this task fail to run.

An analysis of Figure 5.6 shows how the performance of each system, as evaluated by function (5.20), varies. We can note that, for cart4, the performance is acceptable from t = 1 s up to t = 2 s, i.e. before the activation of task τ3. Then the performance of cart4 diverges. This is due to the interference of tasks τ1, τ2 and τ3, whose priorities are higher than that of τ4; as mentioned before, τ4 never runs in this working mode. Finally, the performance of cart3 is always acceptable even though the instances of the corresponding control task τ3 never meet their deadline.

5.4.3.6. Simulation results for (m, k)-firm constraints

The control performance of each system as well as the evolution of the task states are illustrated in Figures 5.7-5.9.

Figure 5.5. Task evolution under the hard real-time constraints strategy. The detailed time interval is [1.98, 2.07] and includes a working mode switch at time t2 = 2 s: before t2, three tasks are activated (τ1, τ2 and τ4); at time t2, cart3 has to be controlled, leading to the activation of the task τ3


Figure 5.6. Control performance under hard real time constraints strategy


Figure 5.7. Task evolution under the (m, k)-firm strategy. The detailed time interval is [0.98, 1.07] and includes a working mode switch at time t1 = 1 s: before t1, only two tasks are activated (τ1 and τ2); at time t1, cart4 has to be controlled, leading to the activation of task τ4


– In the interval [0, 1[, two tasks are activated periodically: τ1, with period h1 = 0.007 s, and τ2, with period h2 = 0.0085 s; these tasks are schedulable under hard real-time constraints, as seen in section 5.4.3.5; moreover, in this working mode, the global cost function is optimized for m1 = k1 and m2 = k2, so all the instances of τ1 and τ2 are mandatory.

– At time t1 = 1 s, the supervisor decides to include the control of cart4, leading to the activation of task τ4; in Figure 5.7, at this time, the task handler is activated; as its priority is the highest in the system, it completes at time t = t1 + Cth = 1.0025 s; at time t1, the optimization of the global cost function (5.18), realized by the task handler, furnishes the values m1 = 5 and m2 = 4, providing the rules of the dropping policy: (5, 5)-firm for τ1 and (4, 8)-firm for τ2 under the (m2, k2)-pattern Π2 = [10101010]; as k4 = 1, all the instances of τ4 are mandatory; Figure 5.7 shows that all the mandatory instances of τ1 and τ2 meet their deadline, while no instance of τ4 does; nevertheless, all its instances run to completion, with delayed starting and completion times.

Figure 5.8. Task evolution under the (m, k)-firm strategy. The detailed time interval is [1.98, 2.07] and includes a working mode switch at time t2 = 2 s: before t2, three tasks are activated (τ1, τ2 and τ4); at time t2, cart3 has to be controlled, leading to the activation of the task τ3


– At time t2 = 2 s, the supervisor includes the control of cart3, so a new working mode integrating the former tasks τ1, τ2 and τ4 and the new task τ3 is started; the task handler has to redefine the optimal configuration of the task activation rules; in this case, function (5.18) is minimized for the following (mi, ki)-firm constraints:

images

We can observe in Figure 5.8 that the first two instances of τ4 and the first instance of τ3, activated at the beginning of this new working mode, do not meet their deadlines because of a transient overload due to the execution of the task handler; after that point, all the mandatory instances of the four tasks meet their deadlines.

Figure 5.9. Control performance under (m, k)-firm strategy


Figure 5.9 illustrates the evolution of the cost function, provided by formula (5.20). For each cart system, the performance becomes quickly stable and remains constant in each working mode.

The optimization of the global cost function (5.18) under the schedulability constraints (5.19), for a set of control tasks sharing the same processor, allows an optimized dropping policy to be applied, selectively dropping well-identified instances of each task; the consequence is a greater availability of the processor and the possibility for each task to run its mandatory instances to completion before their deadline (the relative deadline being the basic period of each task). Furthermore, the controller itself is optimized and an adaptive gain suited to the (mi, ki)-pattern is defined.

5.5. Plant-state-triggered control and scheduling adaptation and optimization

Let us now consider, more precisely, the different situations in which the plants to be controlled may be found:

– the plant is not controlled; this situation occurs when the plant does not exist for the overall system (plant controlled only during certain time interval, plant deactivated because its output is out of a given domain);

– it is controlled and its state is a steady state;

– it is controlled, but when this control is just activated, its state is a transient state.

We have to take into account the last two cases and, in particular, we must adapt the cost functions in order to deal with both situations. For this purpose, we propose taking into consideration, depending on the situation of the plant, an infinite-horizon or a finite-horizon cost function in order to find the optimal configuration of each mi and of each controller gain Li.

5.5.1. Closed-loop stability of switching systems

A change in the value of mi, for a given ki, produces a sampling period variation, and we therefore consider a Discrete-Time Switched System (DTSS) description. To adapt the control law parameters to this variation, we use the design process presented in section 5.4.1. But, as shown in [SCH 02], controllers designed with optimal LQ techniques may suffer from instability under certain switching sequences, i.e. when the sampling period changes. Because of this undesirable result, [SCH 02] adopts a linear matrix inequality (LMI) framework to design stable optimal controllers. We propose using an LMI framework to find a common quadratic Lyapunov function (CQLF); the asymptotic stability is then guaranteed for any (mi, ki)-firm sequence for each plant, proving the stability of the control.

Firstly, for a fixed control task τi and a fixed ki, we consider each possible value of the number of mandatory instances, mi. For each of these, we compute the corresponding mi controller parameters images by using equation (5.15).

Secondly, we consider, for the control task τi, the set of open-loop discrete time models images and evaluate the ki periods, taking into account the possible interruptions in a planned sequence at any time.

By using the elements in both sets Li, mi and Θ i, we can establish a new set of mi.ki closed-loop models (5.7) without noise, images, where l varies between 1 and images and p between 0 and images.

In order to prove the stability of the DTSS [LIB 99], for ki and each possible value of mi, we need to find a CQLF for the set of matrices images, where n = 1, …, ki and p = 0, …, mi − 1. We can then define a set of inequalities:

(5.23) images

Note that the existence of a CQLF is only a sufficient condition. In order to solve the problem and find the matrix P = PT > 0, we use the LMI toolbox of Matlab.
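
For readers without Matlab, the same feasibility problem can be posed with a generic convex optimization package; the sketch below uses cvxpy (an implementation choice, not the chapter's) to search for a single P = P^T > 0 such that A^T P A − P < 0 for every closed-loop matrix A of the collected set; the two matrices of the usage example are arbitrary.

import numpy as np
import cvxpy as cp

def find_cqlf(closed_loop_matrices, margin=1e-6):
    """Search for a common quadratic Lyapunov function V(x) = x^T P x:
    P > 0 and A^T P A - P < 0 for every discrete-time closed-loop matrix A.
    Returns P as a numpy array if the LMIs are feasible, None otherwise."""
    n = closed_loop_matrices[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> margin * np.eye(n)]
    for A in closed_loop_matrices:
        constraints.append(A.T @ P @ A - P << -margin * np.eye(n))
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return P.value if problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE) else None

A1 = np.array([[0.9, 0.1], [0.0, 0.8]])    # arbitrary stable matrices, for illustration
A2 = np.array([[0.85, 0.0], [0.2, 0.7]])
P = find_cqlf([A1, A2])
print("CQLF found:" if P is not None else "No CQLF found", P)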

5.5.2. On-line plant state detection

As mentioned in the previous section, three plant situations are identified: non-activated plant, (near-)steady plant state and transient plant state. Reaching or leaving the first situation modifies the number n of control tasks. The dead-band approach presented in [OTA 02] is used to distinguish the steady and transient states of a plant. Each controlled plant has a state which asymptotically tracks the reference r and which is supervised by the supervision task. Let yi be the supervised output of plant i; yi can be a subset of the plant state vector xi defined in section 5.3.1. For instance, taking the previously studied cart system with x = [position speed], if we want the position to follow the reference position r, the position is the variable of interest and we can define y = [1 0]·x. The following condition on two successive samples (the nth and (n + 1)th ones) is set up:

images

where th is a threshold used to prevent false detections due to noise, δ is a parameter to be adjusted for each plant, and hi is the detection period implemented by the supervision task for plant i. If the condition is verified, then the plant is considered to be in a steady state; otherwise, it is in a transient state. The advantage of this plant state detection mechanism is that it depends on the actual evolution of the plant and detects, in the same way, reference changes and/or non-modeled perturbations.
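
The detection rule can be sketched as follows; since the inequality itself is not reproduced above, the test used here, which simply compares the variation of the supervised output between two successive detection samples against a dead-band threshold, is an assumed reading of the condition of [OTA 02].

def is_steady(y_prev, y_curr, threshold):
    """Assumed dead-band test: the plant is declared (near-)steady when the variation
    of the supervised output between two successive detection samples stays within a
    dead-band; the threshold would be derived from th, delta and the detection period h_i."""
    return abs(y_curr - y_prev) <= threshold

# Example: cart position observed by the supervision task every h_i = 0.02 s.
print(is_steady(0.500, 0.501, threshold=0.002))   # True  -> steady state
print(is_steady(0.500, 0.540, threshold=0.002))   # False -> transient state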

5.5.3. Global optimization of control tasks taking into account the plant state

As in section 5.4, we deal with several working modes where, in each mode, a number of control tasks have to share one processor. In this section, we take into account the fact that a working mode is identified, on the one hand, by the set of plants to be controlled, that is, a given set of control tasks and, on the other hand, by the situation in which each plant is found. This situation corresponds to the two above-mentioned cases: the plant is in a transient state (due to a new reference or to noise) or in a steady state. Intuitively, the performance constraints that have to be applied to the control are not the same for these two situations. Therefore, we have to consider a more complex cost function than the one presented in section 5.4.

We suppose that the value ki of each τi has been carefully chosen and is constant during the execution of the application. The value of mi has to be chosen in [1..ki] on-line by the task handler. We denote by n the number of tasks activated in one working mode. For each control task τi, each possible value of mi is associated with two values, Gi,mi and images, corresponding to the control performance in, respectively, a transient situation and a steady one. Supposing that a lower value of Gi,mi or images represents a better control performance, when the situation of the plant states changes, the aim of the task handler is to find, for each τi, a value of mi such that the sum over the tasks of Gi,mi or images (according to the situation into which each plant falls) is minimized subject to the task schedulability condition (5.19). This is formally formulated as the following optimization problem.

Consider a set of tasks τi, 1 ≤ i ≤ n, described by hi, their basic period (considered as their relative deadline, i.e. Di = hi), Ci, their worst-case execution time, ki, the parameter of their (m, k)-constraint, and ni, the number of possible values for the parameter m in the (m, k) constraint, together with the list of these values; the global optimization problem consists of determining images for each task τi that minimizes

images

with images and such that condition (5.19) is verified.

F and I indicate the current situation: in transient state, F = 0 and I=1 while in the steady state F = 1 and I = 0.

The values of Gi,j and images have to reflect the performance offered by each solution. We identified two ways of defining this performance indicator.

– In the first case, Gi,j is defined on a finite horizon by formula (5.13), while images is evaluated on an infinite horizon using (5.17):

(5.24) images

We have to note that the optimization problem is to minimize the overall cost of the application. However, the sub-systems with lower costs may suffer from greater control performance degradation due to a low value of mi. That is to say, the task handler maintains the value of each mi as high as possible for the sub-systems with greater costs by reducing the value of mi for the sub-systems with lower costs.

– The second solution considers a cost that represents the performance degradation:

(5.25) images

where Ji(x) and images are defined, respectively, by functions (5.13) and (5.17). In this case, the control performance criterion avoids the problem raised by the first solution: the control performance degradation of each sub-system is treated equally. On the other hand, the overall cost of the application may not be optimal. So, the choice of the control performance representation should be made according to the application requirements.

In both cases, the time horizon Hi of the finite-horizon cost function is an important design parameter, which directly affects the overall control performance and needs to be carefully chosen. We propose choosing Hi as the settling time (defined approximately as three times the rise time). Moreover, the off-line computation of (5.25) is done by considering typical values of the references and by neglecting noise.

Table 5.1. Parameters of the plants whose controllers share the same processor and scheduled according to their transient and steady state


As in the solution presented in section 5.4.2, the optimization problem is an instance of the MMKP. To solve it on-line, we again propose using the heuristic algorithm (HEU) presented in [KHA 02].

5.5.4. Case study

In this section, we illustrate the scheduling approach presented above by studying the control of four plants. Plant1 (resp. Plant2, Plant3, Plant4) corresponds to a harmonic oscillator (resp. a cart system, a pendulum, an inverted pendulum). The controller of Planti is denoted by Controlleri. Each plant is modeled by the differential equation (5.6), whose parameters are given in Table 5.1. The controller of Planti is defined by equation (5.7).

The rise time specifications of the plants are, respectively, 0.2 s, 0.2 s, 0.3 s and 0.5 s. The basic sampling periods are then derived from these rise time specifications: h1 = 0.02 s for Plant1, h2 = 0.02 s for Plant2, h3 = 0.03 s for Plant3 and h4 = 0.05 s for Plant4.

We suppose that the first state variable of the vector x of each plant is the variable supervised by the supervision component; in other words, the controller tries to keep it tracking the plant state reference asymptotically. The step response target for the cart (Plant2) is an over-damped response, while for the other plants it is under-damped, with a damping coefficient greater than 0.6 (overshoot < 10%). The gain of each controller is computed for each possible value of mi,j according to the approach proposed in section 5.4.1. The design weights, which allow the above-mentioned rise time and overshoot specifications to be satisfied, are presented in Table 5.2.

Table 5.2. Weights used for each performance evaluation of each controller sharing the same processor and scheduled according to the transient or steady state of the controlled plant


Using the LMI control toolbox of Matlab/Simulink, the set of inequalities (for the overall set of discrete plant and controller parameters) admits a CQLF, guaranteeing stability. To allow fast changes between different (mi,j, ki)-firm constraints when mi,j is adjusted, the controller parameters are calculated off-line and stored in a table.

As in section 5.4, the control task τi which implements Controlleri is assigned a rate monotonic priority (the task with the largest period has the lowest priority); since the lowest priority task has no influence on the execution of the other tasks, no instance classification is applied to τ4: in other words, it is executed under a (k, k)-firm constraint. Using the approach proposed in section 5.3.3, the values of ki are set to, respectively, k1 = 6, k2 = 5, k3 = 5 and k4 = 1. The value of mi for the task τi may vary within [1..ki]. The worst-case execution time of each task is C1 = C2 = C3 = C4 = 9 ms.

The optimal costs, as well as the corresponding gain sequences, associated with the different possible (mi, ki)-firm constraints of each task τi are evaluated off-line and stored in order to be used on-line by the task handler. The Matlab/Simulink model is then simulated. The deployment characteristics of the global system, in short the specific scheduling policy, are modeled using the TrueTime toolbox [CER 03].

5.5.4.1. Simulation scenario

The evolution of the scenario is illustrated in Figure 5.10.

Figure 5.10. Activation, reference (Refi for Planti) and output (xi for Planti) evolution of the four plants whose controllers share the same processor


Table 5.3. Values of mi as they are selected by the task handler at each working mode switch; these values are evaluated for each task activated


It shows, for each Planti, whether it is controlled or not and, when it is, the reference that is applied and the output of the plant, respectively noted Refi and xi in the figure. Table 5.3 provides, at each significant instant, the value of mi selected by the task handler for each current task. An instant is significant if it leads to a switch between two working modes: a new plant has to be controlled, a plant is no longer activated, a plant goes from steady state to transient state, or a plant reaches its steady state.

These instants are identified on-line by the supervisor.

The sequence of significant instants in the proposed scenario is described below.

–Just before the observation of the system, we suppose that only Plant1 and Plant3 are controlled, while the other plants are not activated and not controlled. These two plants are in a steady state.

In this case, the system is schedulable under (k1, k1) and (k3, k3) constraints for τ1 and τ3.

–At time t0, Plant4 is activated and, therefore, the set of tasks to schedule is {τ1, τ3, τ4}; no reference is applied to this new plant. The three plants are in a steady state.

The task handler has to find the optimal configuration of mi for this set of tasks and to deduce the corresponding sequence of values of Li, the gain of each controller. For this purpose, it uses the infinite-horizon cost of each plant (a simple sketch of this search step is given after the scenario description). The result is m1 = 4 and m3 = 5 (note that m4 is always equal to 1).

–A transient state is detected by the supervisor at time t1 for Plant1 (occurrence of a new reference) and for Plant4.

The task handler looks for an optimal configuration of the (mi, ki) constraints by taking into account the finite-horizon costs for Plant1 and Plant4 and the infinite-horizon cost for Plant3. This results in m1 = 6 and m3 = 2.

–Plant2 is activated at time t2 and the supervisor identifies that it is in a steady state, while Plant1 and Plant4 are still in a transient state.

The values of mi defined by the task handler at this time are, respectively, m1 = 3, m2 = 1 and m3 = 2.

–The supervisor detects a steady state for Plant1 at time t3.

The new values evaluated for mi are m1 = 2, m2 = 1 and m3 = 5.

–At time t4, the values of mi have to be modified as the supervisor detects a transient state for Plant2.

Therefore, m1 = 2, m2 = 1 and m3 = 5.

–Plant3 enters a transient state at t5.

The adjustment of the values of mi results in m1 = 2, m2 = 1 and m3 = 5.

–Plant2 goes from transient to steady state at t6.

This leads the task handler to readjust the mi parameters, using the infinite-horizon costs for τ1 and τ2 and the finite-horizon cost for τ3: m1 = 2, m2 = 1, m3 = 5.

–At t7, an inadmissible perturbation affects Plant3, whose output leaves its admissible range; consequently, this plant is deactivated, reducing the number of tasks to three. At the same time, Plant4 enters a transient state.

The task handler can therefore increase the values of mi for the tasks τ1 and τ2: m1 = 6 and m2 = 1.

–At the end of the scenario, at time t8, a transient state is detected by the supervisor for Plant2.

The last values for the mi parameters that are chosen by the task handler are m1 = 2 and m2 = 5. We recall that during the simulation, parameter m4 is always equal to 1 (hard real-time constraint).
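
As announced above, a simple way to picture the task handler's search step is an enumeration over the admissible values of mi for the active tasks, keeping only the schedulable assignments and selecting the one with the smallest total cost. The sketch below assumes a stored cost table (finite- or infinite-horizon, depending on the plant state) and a schedulability test; both names are hypothetical, and brute force is used where a knapsack-style formulation could be employed instead.

    from itertools import product

    def best_assignment(active_tasks, k, cost, is_schedulable):
        # active_tasks: names of the currently active tasks; k[name]: k_i;
        # cost[name][m]: stored cost for that choice of m_i;
        # is_schedulable(assignment): stands for the (m, k)-firm test of section 5.2.
        best, best_cost = None, float("inf")
        for choice in product(*(range(1, k[name] + 1) for name in active_tasks)):
            assignment = dict(zip(active_tasks, choice))
            if not is_schedulable(assignment):
                continue
            total = sum(cost[name][m] for name, m in assignment.items())
            if total < best_cost:
                best, best_cost = assignment, total
        return best, best_cost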

5.5.4.2. Observed performance

In order to analyze how the performance degradation due to the (m, k)-firm policy is controlled, we evaluate the LQ cost, given by formula (5.20), for each system over the simulation time; let Ji denote this value for Planti. On the other hand, over the same simulation time and under the same simulation setup, we compute the nominal performance of each plant given by the same formula when each system is controlled by a control task running on a separate processor; in this case, the applied constraint is a (k, k)-firm one and both the sampling period and the activation period of the control task are equal to the basic period of the controller. Let Ji* denote the value obtained for Planti. Table 5.4 presents an example of the resulting performance degradation.

Of course, these results depend on the simulation setup; they are only reported here to show that, with the proposed technique, the performance degradation is kept as small as possible in each situation, subject to task schedulability.

Table 5.4. Performance degradation of the four plants

ch5-tab5.4.gif

Plant4 suffered the lowest cost degradation because this system has to respect a hard real-time constraint (m4 = k4 = 1). Plant2 suffered the largest cost degradation, due to the Plant2 performance indicator, which leads to the reduction of m2 as soon as the other plants require the processor.
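
One natural way to read Table 5.4 (an assumption, since the exact metric of the table is not reproduced here) is as the relative increase of the LQ cost Ji with respect to the nominal cost Ji*:

    def relative_degradation(J_i, J_i_star):
        # Relative increase of the LQ cost of Plant_i under the (m, k)-firm
        # policy with respect to its nominal, dedicated-processor cost.
        return (J_i - J_i_star) / J_i_star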

5.5.4.3. Summary

Through this case study, we can see that the proposed on-line adaptation mechanism can effectively control the degradation of the system performance during plant state transients and system overload. In fact, given the current states of the controlled plants, the proposed approach allows us to derive an (m, k)-firm constraint for each control task and the corresponding optimal control gain, while still meeting the (m, k)-firm schedulability condition of the whole set of control tasks. Moreover, Table 5.4 shows that, compared to the ideal case where the processor has infinite capacity, our approach does not induce much performance degradation. Notice that, for the simulated scenario, if we do not allow sample dropping (i.e. if all tasks are considered under a hard real-time constraint), the processor becomes overloaded at time t = 0.5 s. As the tasks τ3 and τ4 have lower priorities, they are no longer executed by the processor and this leads to the instability of Plant3 and Plant4. Figure 5.11 shows that, starting from t = 0.5 s where the higher priority task τ2 is released, the processor can no longer execute task τ3 correctly and never executes task τ4. As an example, Figure 5.12 clearly shows that Plant3 is not stable.

5.6. Conclusions

Computing and networking resource sharing is a common trend in NCS for achieving cost-effective solutions. However, an overload situation may occur, caused either by dynamic application configuration changes or by implementation system performance variations.

For dealing with system overload situations, this chapter proposed an approach based on selectively dropping some samples while still guaranteeing the (m, k)-firm schedulability of the control tasks sharing the same processor. By adjusting both the acceptable (m, k) values and the control gains, we have shown through case studies that the performance degradation is efficiently controlled. The key point is an algorithm that finds on-line the optimal values of m and the control gains that minimize a global control performance cost function.

Figure 5.11. Simulation trace of task execution without selective sample dropping

ch5-fig5.11.gif

Figure 5.12. Instability of Plant3

ch5-fig5.12.gif

This approach has been further extended to also take into account the transient plant state, in which the plant needs more resources to be controlled than in its steady state. The stability of this closed loop with dynamic parameter switching has been discussed, and we showed that asymptotic stability is guaranteed if a CQLF can be found. This extension makes the resource utilization even more efficient: the shared resource is dynamically allocated to the plant that needs it (when it is in its transient phase), rather than to the plants that are in their steady state and need less control effort.

As indicated in the introduction of this chapter, the presented approach can be considered as an alternative to direct sampling period adaptation for dealing with system overload situations. Using such an approach, the resulting system is more robust, as it can automatically adapt to certain system overload situations.

5.7. Bibliography

[ÅST 97] ÅSTRÖM K., AND WITTENMARK B., Computer-Controlled Systems, Information and System Sciences Series, Prentice Hall, 3rd edition, 1997.

[BIT 91] BITTANTI S., COLANERI P., AND DE NICOLAO G., The periodic Riccati equation, The Riccati Equation, p. 127–162, Springer-Verlag, Berlin, 1991.

[CER 03] CERVIN A., HENRIKSSON D., LINCOLN B., EKER J., BERNHARDSSON B., AND ÅRZÉN K.-E., How does control timing affect performance?, IEEE Control Systems Magazine, vol. 23, p. 16–30, June 2003.

[EKE 00] EKER J., HAGANDER P., AND ÅRZÉN K.-E., A feedback scheduler for real-time controller tasks, Control Engineering Practice, vol. 8, p. 1369–1378, 2000.

[FRE 63] FREEMAN H., Discrete-Time Systems, John Wiley, New York, 1963.

[HAM 94] HAMDAOUI M., AND RAMANATHAN P., A service policy for real-time customers with (m,k)-firm deadlines, Fault-Tolerant Computing Symposium, Austin, USA, p. 196–205, April 1994.

[HAM 95] HAMDAOUI M., AND RAMANATHAN P., A dynamic priority assignment technique for streams with (m,k)-firm deadlines, IEEE Transactions on Computers, p. 1443–1451, December 1995.

[JIA 05] JIA N., HYON H., AND SONG Y., Ordonnancement sous contraintes (m,k)-firm et combinatoire des mots, 13th International Conference on Real-Time Systems, Paris, France, April 2005.

[KHA 02] KHAN P., LI K., MANNING E., AND AKBAR M., Solving the knapsack problem for adaptive multimedia system, Studia Informatica, vol. 2, p. 161–182, 2002.

[LI 06] LI J., SONG Y.-Q., AND SIMONOT-LION F., Providing real-time applications with graceful degradation of QoS and fault tolerance according to (m,k)-firm model, IEEE Transactions on Industrial Informatics, vol. 2, p. 112–119, 2006.

[LIB 99] LIBERZON D., AND MORSE S., Basic problems in stability and design of switched systems, Control Systems Magazine, vol. 19, p. 59–70, October 1999.

[MAR 90] MARTELLO S., AND TOTH P., Knapsack Problems: Algorithms and Computer Implementations, John Wiley, New York, 1990.

[OTA 02] OTANEZ P., MOYNE J., AND TILBURY D., Using deadbands to reduce communication in networked control systems, American Control Conference, Anchorage, USA, May 2002.

[QUA 00] QUAN G., AND HU X., Enhanced fixed-priority scheduling with (m,k)-firm guarantee, 21st IEEE Real-Time Systems Symposium, Orlando, Florida, USA, p. 79–88, November 2000.

[RAM 99] RAMANATHAN P., Overload management in real-time control applications using (m,k)-firm guarantee, IEEE Transactions on Parallel and Distributed Systems, vol. 10, p. 549–559, June 1999.

[SCH 02] SCHINKEL M., CHEN W.-H., AND RANTZER A., Optimal control for systems with varying sampling rate, American Control Conference, Anchorage, USA, May 2002.

[SET 96] SETO D., LEHOCZKY J. P., SHA L., AND SHIN K. G., On task schedulability in real-time control systems, 17th IEEE Real-Time Systems Symposium, Washington, DC, USA, p. 13–21, December 1996.

[SIM 05] SIMON D., AND BENATTAR F., Design of real-time periodic control systems through synchronisation and fixed priorities, International Journal of Systems Science, vol. 36, p. 57–76, 2005.


1 Chapter written by Flavia FELICIONI, Ning JIA, Françoise SIMONOT-LION and Ye-Qiong SONG.
