Chapter 4

Plant-state-based Feedback Scheduling

4.1. Overview

Computational resource limitations are a constant challenge in embedded systems development: economic constraints require the desired functionalities to be delivered at the lowest possible cost. These limitations call for a more efficient use of the available resources. In this context, integrated control and scheduling methodologies have been proposed in order to allow a more flexible and efficient utilization of the computational resources [ÅRZ 00].

The problem of optimal sampling period selection, subject to schedulability constraints, was first introduced in [SET 96]. Considering a bubble control system benchmark, the relationship between the control cost (corresponding to a step response) and the sampling periods was approximated using convex exponential functions. Using the Karush–Kuhn–Tucker (KKT) first-order optimality conditions, analytic expressions of the optimal off-line sampling periods were established. The problem of the joint optimization of control and off-line scheduling has been studied in [REH 04; LIN 02; BEN 06c].

The idea of feedback scheduling was introduced in [EKE 00; LU 02]. First approaches in feedback scheduling considered feedback from resource utilization (for example, task execution times) in order to optimize the control performance [EKE 00; CER 02], or to minimize a deadline miss ratio in soft real-time systems [LU 02]. Naturally, the on-line adjustment of sampling periods calls for an optimal sampling period assignment. The approaches in [EKE 00; CER 02] used a method similar to [SET 96] in order to find analytic expressions of the optimal sampling periods, under cost approximation assumptions (linear or quadratic approximation of the cost as a function of the sampling period). The experimental evaluation of the feedback scheduling concept was undertaken in [SIM 05]. The issue of guaranteeing the stability and performance of the controlled systems, when their sampling periods are varied on-line (by a feedback scheduler, for example), was addressed in [ROB 07a], using the H∞ approach for linear parameter varying systems.

Later, it was pointed out that the optimal sampling frequencies also depend on the actual state of the controlled system [MAR 02; MAR 04], and not only on off-line considerations. The problem of optimal integrated control and scheduling was formalized (using a hybrid system approach) and solved in [BEN 06b]. Heuristics for integrated control and non-pre-emptive scheduling were proposed, in particular the OPP [BEN 06b] and RPP [BEN 06c] algorithms as well as the relaxed dynamic programming-based scheduling strategy [CER 06]. A common point of these heuristics is that the scheduling decisions (which task to execute or message to send) are determined on-line through the comparison of a finite number of quadratic functions of the extended state (actual state extended by previous controls). These quadratic cost functions are pre-computed off-line based on the intrinsic characteristics of the controlled systems. Another common point is that concurrency was modeled in a finely grained way. Related approaches were proposed in [DAČ 07], where scheduling decisions are based on the discrepancies between the current and the most recently transmitted values of the nodes’ signals. These latter results may be applied to the problem of dynamic scheduling of CAN networks.

Other approaches relied on the notion of periodic tasks and used task periods as the scheduling adaptation variable. [SHI 99] considers adaptive scheduling for a set of controllers. With each controller is associated a cost function modeling the quality of control (QoC) as a function of its period. In response to processor failures or to variations in the computing duties assigned to processors, heuristics assign control tasks to processors and adapt the control periods to optimize the global QoC.

In [HEN 05], the problem of the optimal sampling period selection of a set of LQG controllers, based on knowledge of the plant states, was studied. It has been shown that the optimal solution to this problem is too complicated to be computed on-line. The optimal LQG cost, as a function of the sampling period, was depicted for some selected numerical examples. Explicit formulas, relating the optimal sampling periods to the plant state, were derived in the case of the minimum variance control of first-order plants. The issue of the choice of the feedback scheduler period was also studied. The same setting was considered in [CAS 06]. The on-line sampling period assignment was based on a look-up table, which was constructed off-line, for predefined values of the sampling periods. A heuristic procedure, allowing the construction of this look-up table, was also proposed.

Other approaches to state-based resource allocation were proposed, as in [TAB 07] and [LEM 07]. Although these approaches do not aim to optimize a global cost function, their objective is to allocate the computational resources in order to achieve other control objectives, such as asymptotic stability [TAB 07] or a specified L2 attenuation level [LEM 07].

4.2. Adaptive scheduling and varying sampling robust control

A variable sampling rate appears to be a decisive actuator in scheduling and CPU load control. Although it is quite conservative, the LPV/H∞-based design developed in section 2.4 guarantees plant stability and performance level, whatever the speed of variation of the control period inside its predefined range. Hence, the control task periods of such controllers can be adapted on-line by an external loop (the feedback scheduler) on the basis of resource allocation and global quality of service (QoS), with no further problems concerning the process control stability. Hence a quite simple scheduling controller can be used, e.g. a simple re-scaling as proposed in [CER 03], or an elastic scheduler as in [BUT 00].

Besides the flexibility and robustness provided by adaptive scheduling, a full benefit would come from taking the controlled process state directly into account in the scheduling loop. It has been shown in [EKE 00] that, even for simple cases, the full theoretical solution based on optimal control is too complex to be implemented in real time.

However, it is possible to sketch effective solutions suited for specific case studies, as depicted in Figure 4.1 taken from [ROB 07b].

A computing resource is shared between several process controllers. The computing power distribution between the process controllers is adapted on-line by a feedback scheduler. However, conversely to the robot controller in section 1.4.2.3, the load allocation ratio between the control components is no longer constant and defined at design time. It is made dependent on the measure of the QoC, to give advantage to the controller with the higher control error.

Figure 4.1. Integrated control and scheduling loops


4.2.1. Extended elastic tasks controller

The approach relies on a modified elastic scheduler algorithm [BUT 00], whose original objective is to distribute the CPU utilization between n tasks by acting on their periods, under the following constraints:

– the overall CPU load is smaller than the reference Ud,

– the period of a control task is bounded: hmin,i ≤ hi ≤ hmax,i,

– the CPU load distribution is balanced thanks to weights ki.

The on-line scheduling adaptation reacts to changes in the overall load reference Ud, to variations in a task’s execution time or to variations in a weight ki. The scheduling is updated thanks to an iterative algorithm described in [BUT 00], where the CPU utilization Ud is shared in proportion to the weights ki, while accounting for the period bounds. For example, Figure 4.2 depicts the temporal behavior of three tasks with initial weights ki = 1, i = 1,…, 3. The horizontal axis represents the CPU use shared between the three tasks, along several load and weight configurations. In a first step, decreasing Ud induces a decrease of the individual loads in equal proportion. In a second step, increasing the weight k3 from 1 to 2 increases the CPU time allocated to task 3 while equally slackening the CPU power allocation of tasks 1 and 2. Increasing again k3 from 2 to 3 cannot compress the CPU allocation of task 1 any further, since it has already reached its lower bound; therefore only task 2’s CPU allocation is reduced.

Indeed, the task set’s behavior mimics that of a chain of springs with overall length Ud, where every spring has a stiffness ki and a length Ui bounded between Umin,i and Umax,i. Recalling that the computing load Ui and the period hi of a control task are linked by Ui = ci/hi, where ci is the execution time of an instance of task i, allows the actual task periods to be computed.
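As an illustration of this proportional sharing with bounds, a minimal Python sketch is given below. The function name and the iterative clipping strategy are illustrative assumptions; the original elastic compression algorithm is detailed in [BUT 00].

```python
def share_cpu(U_d, k, U_min, U_max):
    """Share the CPU budget U_d between n tasks in proportion to the weights k,
    while respecting the per-task load bounds (sketch of the behavior described
    in the text; see [BUT 00] for the original elastic algorithm)."""
    n = len(k)
    U = [None] * n
    free = set(range(n))              # tasks not yet clamped at a bound
    budget = U_d
    while free:
        ksum = sum(k[i] for i in free)
        # tentative proportional share for the still-free tasks
        tentative = {i: budget * k[i] / ksum for i in free}
        clamped = {i for i in free
                   if tentative[i] < U_min[i] or tentative[i] > U_max[i]}
        if not clamped:
            for i in free:
                U[i] = tentative[i]
            break
        for i in clamped:             # clamp at the violated bound and redistribute
            U[i] = min(max(tentative[i], U_min[i]), U_max[i])
            budget -= U[i]
            free.remove(i)
    return U

# the task periods then follow from Ui = ci / hi:  h[i] = c[i] / U[i]
```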

Figure 4.2. Example of elastic tasks scheduling


To adapt the scheduling algorithm and the task periods w.r.t. the actual performance of the process controllers, it is now necessary to enhance the elastic tasks algorithm to make the weights ki depend on the measured QoC, as in the structure of Figure 4.1. The main problem consists of measuring an adequate image of the control performance and finding a function linking it with ki.

In this first approach, the QoC is measured via the mean square tracking error, evaluated over a time window equal to the feedback scheduler period. To keep the scheduling cost low, this period is chosen larger than the process control periods.

The ki weights are functions of the QoC measurements and are handled by the Mi components in Figure 4.1; in the simplest case they can be plain static gains. The choice of the Mi gains must provide identical weights ki for controllers with similar performance. Moreover, the QoC measures must be normalized to properly balance the CPU allocation between process controllers with different dynamics.
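The sketch below gives one possible static-gain mapping from the measured QoC to the elastic weights; the normalization factors, the floor value and the function name are illustrative choices standing in for the Mi blocks of Figure 4.1, not the book's exact implementation.

```python
import numpy as np

def qoc_to_weights(mse_errors, scales, k_floor=0.1):
    """Map the mean-square tracking errors measured over the last scheduler
    window to elastic weights ki.  `scales` normalizes plants with different
    dynamics so that their QoC can be compared fairly."""
    qoc = np.asarray(mse_errors, float) / np.asarray(scales, float)
    k = np.maximum(qoc, k_floor)      # never starve a well-behaving loop
    return k / k.sum()                # relative weights fed to the elastic scheduler
```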

The adaptation of the ki weights as functions of the control performance is a feedback loop, whose stability and dynamics must be investigated. The relationship between the ki weights and the corresponding QoC should be analyzed, but it appears to be very complex as it involves the behaviors of both the elastic scheduler and the process controllers. The scheduler is a nonlinear system where the task periods are functions of the CPU load reference, the ki weights, the period bounds and the varying execution times of the control tasks. The relationship between the control intervals and the corresponding control performance is also difficult to quantify, as it depends on the process itself, on the controller and on exogenous signals. Note that, even for a constant control period, the quadratic tracking errors vary with exogenous signals and disturbances.

In consequence, it is still out of the scope of the present analysis to rigorously choose the feedback scheduler gains. However, these gains may be empirically chosen as in the following example, studied in simulation using again TrueTime to take into account the coupling between continuous process control and real-time scheduling.

4.2.2. Case study

The case study consists of the control of two pendulums sharing a common computing resource: one is the “T” pendulum already used in the example of section 2.4.4, the other one is a straight (stable) pendulum. Each pendulum is controlled by an LPV/H∞ controller designed as in section 2.4, so that the stability of the position control loops is guaranteed whatever the variations of the control intervals, provided that they stay inside the bounds used for the synthesis. The allowed control intervals are chosen according to the desired closed-loop bandwidth and the capabilities of the process; they are [1,3] ms for the T pendulum and [4,12] ms for the stable one. As in the case detailed in section 2.4.4, Taylor’s expansion is truncated at order 2, leading to reduced polytopes with three vertices and to a simple convex combination of three state-feedback elementary controllers at run time.

The pendulum controllers are implemented as real-time tasks running under the control of a pre-emptive, fixed-priority RTOS. The control performance of each pendulum is measured by the mean quadratic tracking error over a feedback scheduler’s period (5 s). The pendulums are driven by a sinusoidal reference. At t = 12 s, a disturbing task with intermediate priority appears. Noise is injected on the stable pendulum measure at time t = 20 s. Simulation results are plotted in Figure 4.3 for the scheduling parameters and in Figure 4.4 for the plant outputs, quadratic tracking errors and computing resource share between the controllers. This latter plot is equal to 1 when all the CPU is allocated to the T pendulum and 0 when it is assigned only to the stable pendulum.

Figure 4.3. Scheduling behavior


Figure 4.4. Pendulums behavior


The appearance of the disturbing task at t = 12 s induces a CPU overload which is rapidly canceled by the feedback scheduler, in one scheduling interval, as there is no filtering in this elastic scheduler. At this time, the CPU share is mainly allocated to the T pendulum, so that it is weakly disturbed by the added task, while the second pendulum is subject to a larger increase of its control interval. The noise added to the second pendulum at t = 20 s increases the corresponding tracking error, therefore inducing a re-allocation of CPU power in favor of the second pendulum. This is made at the cost of increasing the control period of the first pendulum, which in turn increases its performance index, thus claiming additional computing power (at t = 36 s).

The approach is simple to implement and, even if only tested in simulation up to now, has shown significant performance improvements compared with simpler (i.e. control-quality-unaware) resource allocation. However, the behavior and performance of the overall control system depend on numerous parameters, such as the period of the feedback scheduler, normalization factors between the plants, optional filters, etc.

A rational setting of these parameters needs a better understanding of the coupling between the control performance and the scheduling parameters. As said before, analyzing the stability of this feedback scheduling loop, gathering the complex dynamics of the plants and of the control system, remains to be done. It requires an adequate modeling of the relationships between the control quality and the scheduling parameters and seems to be out of reach in the general case. However, some restrictive assumptions on the plant model (e.g. linearity) and on the control algorithms (e.g. LQ control) may lead to tractable solutions, as shown in the following sections.

4.3. MPC-based integrated control and scheduling

Model predictive control (MPC) has received increased industrial acceptance during recent years, mainly because of its ability to handle constraints explicitly and the natural way in which it can be applied to multi-variable processes [GAR 89]. MPC is based on an iterative, finite horizon optimization of a plant model. At time t the current plant state is sampled, and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a receding horizon in the future: [t, t + T]. Only the first step of the control strategy is implemented, then the plant state is sampled again, and the calculations are repeated starting from the new current state, yielding a new control and a new predicted state path.
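The receding-horizon principle described above reduces to a very simple loop, sketched below in Python; the three callables are placeholders for a concrete plant interface and numerical solver, and the timing primitive is only a crude stand-in for a real-time timer.

```python
import time

def receding_horizon_loop(sample_state, optimize, apply_first_move, Ts, N):
    """Bare-bones receding-horizon loop: sample the state, optimize an input
    sequence over N steps, apply only its first element and start again one
    sampling period later."""
    while True:
        x = sample_state()
        u_sequence = optimize(x, N)       # cost-minimizing strategy over [t, t + N*Ts]
        apply_first_move(u_sequence[0])   # only the first move is implemented
        time.sleep(Ts)
```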

The computational requirements of MPC, where typically a quadratic optimization problem is solved on-line in every sample, have previously prohibited its application in areas where fast sampling is required. Therefore, MPC has traditionally only been applied to slow processes, mainly in the chemical industry. However, the advent of faster computers and the development of more efficient optimization algorithms, e.g. [CAN 01], have led to applications of MPC to processes governed by faster dynamics. However, much still remains to be done to develop efficient real-time implementations of MPC.

The execution of an MPC controller is based on two main parameters: the sampling period and the receding horizon over which the optimization is computed. From a temporal point of view, MPC controllers are characterized by large execution times, but also by large variations of these durations from sample to sample. Hence, the large variations in execution time of MPC tasks make a real-time design based on worst-case bounds very conservative and give an unnecessarily long sampling period. As usual, the robustness of closed-loop control can be exploited, so that more flexible implementation schemes are expected to provide a better use of the execution resources and make MPC applicable to a larger scope than is currently the case.

Feedback scheduling has been, for example, applied to MPC in [HEN 02; HEN 06]. The method uses feedback information from the optimization algorithm to find when to terminate the current iterative optimization. The goal is to find the best trade-off between performance increase due to numerous iterations and the degradation due to very long computations and induced latencies.

Joint control and scheduling may combine both control laws, e.g. based on the MPC concept, working together with an existing scheduling policy to manage the network QoS. Dynamic feedback scheduling policies combined with predictive control have been proposed in [ZHA 08] to cope with network induced delays in the control loops, where the scheduling policy, e.g. Rate Monotonic and dynamic feedback policies, allows us to constrain the delay upper bound. [MIL 08] presents a model predictive controller with an implementation scheme based on a queuing-selecting method and an estimator to compensate for data losses when data packets are dropped. The associated feedback scheduler is designed to minimize the traffic over the network due to the measurement flow needed by the controller.

Control and scheduling co-design, combining the MPC approach within the framework of resource-constrained systems, has been successfully developed in [BEN 06a; BEN 06b; BEN 09]. A summary of this work is given in the following sections.

4.3.1. Resource constrained systems

In this section, an abstract view of a distributed embedded control system operating under communication constraints is presented. This abstract view is described by the class of computer-controlled systems introduced by Hristu in [HRI 99]. This class allows us to model, in a finely grained and abstract way, the impact of resource limitations on the behavior of the controlled system. In the following, we will rather use the term resource-constrained systems to refer to this class of systems. In [BEN 06a], it has been shown that a resource-constrained system may be modeled in the mixed logical dynamical (MLD) framework, a modeling framework for hybrid systems introduced by Bemporad and Morari in [BEM 99]. A summary of these results is given in the following. Consider the continuous-time LTI plant described by

(4.1) ẋc(t) = Ac xc(t) + Bc uc(t)

(4.2) yc(t) = Cc xc(t)

where xc(t), uc(t) and yc(t) represent, respectively, the state (of dimension n), the command input (of dimension m), and the output (of dimension p). The plant is controlled by a discrete-time controller, with sampling period Ts. The plant (4.2) and the controller are connected through a limited bandwidth communication bus. At each sampling instant kTs, the bus can carry at most br measures and bw control commands, with br ≤ p and bw ≤ m. The input to the plant is preceded by a zero-order holder, which maintains the last received control commands constant until new control values are received. Let u(k) be the input of the zero-order holder at instant kTs; its output is then given by

(4.3) uc(t) = u(k), for kTs ≤ t < (k + 1)Ts

Let x(k) = xc(kTs) and y(k) = yc(kTs) be respectively the sampled values of the state and the output. A discrete-time representation of the plant (4.2) at the sampling period Ts is given by

(4.4) x(k + 1) = A x(k) + B u(k)

(4.5) y(k) = C x(k)

where A = exp(Ac Ts), B = (∫₀^Ts exp(Ac s) ds) Bc and C = Cc.

It is assumed, throughout this section, that the pairs (A, B) and (A, C) are, respectively, reachable and observable. These assumptions are systematically satisfied if the pair (Ac, Bc) is reachable, the pair (Ac, Cc) is observable, and the sampling period Ts is non-pathological. A sampling period is said to be pathological if it causes the loss, for the sampled-data model, of the reachability and observability properties that were verified by the continuous model before its discretization. In [KAL 63], Kalman et al. proved that the set of pathological sampling periods is countable, and depends only on the eigenvalues of the state matrix Ac. Consequently, in order to avoid the loss of reachability and observability that may be caused by the sampling, it is sufficient to choose Ts outside this set.
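For completeness, a short Python sketch of the ZOH discretization of (4.1)–(4.2) and of a numerical check of Kalman's pathological-sampling condition is given below; the function names and the tolerance handling are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(Ac, Bc, Cc, Ts):
    """ZOH discretization: A = exp(Ac*Ts), B = (integral over [0, Ts] of
    exp(Ac*s) ds) * Bc, C = Cc, computed with the usual block matrix
    exponential trick."""
    n, m = Bc.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = Ac, Bc
    Phi = expm(M * Ts)
    return Phi[:n, :n], Phi[:n, n:], Cc

def is_pathological(Ac, Ts, tol=1e-9):
    """Kalman's condition: Ts is pathological when two distinct eigenvalues of
    Ac share the same real part and their imaginary parts differ by a non-zero
    integer multiple of 2*pi/Ts."""
    lam = np.linalg.eigvals(Ac)
    for i in range(len(lam)):
        for j in range(i + 1, len(lam)):
            d = lam[i] - lam[j]
            if abs(d.real) < tol and abs(d.imag) > tol:
                ratio = d.imag * Ts / (2.0 * np.pi)
                if round(ratio) != 0 and abs(ratio - round(ratio)) < tol:
                    return True
    return False
```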

Communication constraints may be formally described by introducing two vectors of Booleans, σ(k) ∈ {0, 1}^p and δ(k) ∈ {0, 1}^m, defined for each sampling instant k.

DEFINITION.– The vector σ(k) defined by

σ(k) = [σ1(k) … σp(k)]T, with σi(k) = 1 if the measure yi(k) is transmitted to the controller at instant k and σi(k) = 0 otherwise,

is called sensors-to-controller scheduling vector at instant k.

DEFINITION.– The vector δ(k) defined by

δ(k) = [δ1(k) … δm(k)]T, with δi(k) = 1 if the control input ui is updated at instant k and δi(k) = 0 otherwise,

is called controller-to-actuators scheduling vector at instant k.

The vector σ(k) indicates the measures that the controller may read at instant k. In a similar way, δ(k) indicates the control inputs of the plant that the controller may update at instant k. The introduction of the scheduling vectors allows the communication constraints to be modeled in a simple way. The limitations that affect the transmission of the measures to the controller may be described by the following inequality:

(4.6) σ1(k) + σ2(k) + … + σp(k) ≤ br

In a similar way, the limitations concerning the sending of the control commands to the actuators may be modeled by

(4.7) δ1(k) + δ2(k) + … + δm(k) ≤ bw

The last received control inputs (through the communication bus) are kept constant. Consequently, if a control input is not updated at the kth sampling period, then it is maintained constant. This assertion may be modeled by the logic formula

(4.8) [δi(k) = 0] ⟹ [ui(k) = ui(k − 1)], for i = 1, …, m

The plant, the analog-to-digital and digital-to-analog converters, the communication bus, and the controller are schematically depicted in Figure 4.5. In this figure, η(k) represents the vector of partial measurements that the controller receives (through the communication bus) at the sampling period k. In a similar way, υ(k) represents the vector of partial control commands that the controller may send to the actuators (through the limited bandwidth communication bus) at the sampling period k. Blocks D/A and A/D, respectively, represent the digital-to-analog and analog-to-digital converters. The controller may also assign the values of the sensors-to-controller scheduling vector σ(k) as well as the controller-to-actuators scheduling vector δ(k).

Knowing υ(k) and relation (4.8), u(k) is given by

(4.9) ui(k) = δi(k) υi(k) + (1 − δi(k)) ui(k − 1), for i = 1, …, m

Figure 4.5. Schematic representation of a resource-constrained system


In the same way, the input η(k) to the controller is defined by

(4.10) ηi(k) = σi(k) yi(k), for i = 1, …, p

Equations (4.5), (4.6), (4.7), (4.9), and (4.10) describe a model where the dynamics and performance of the plant are tightly coupled with the assignment of the communication resources. In the particular case where br = p, bw = m, σ(k) = 1p,1 and δ(k) = 1m,1 for all k, this model coincides with the classical model of a sampled-data system. The presence of the communication bus moves the classical frontier between “the plant” and “the controller”. In fact, for sampled-data systems, this frontier lies at the digital-to-analog and analog-to-digital converters. In the considered model, this frontier moves to the communication bus interface. The resource-constrained system is defined as the entity constituted by the sampled-data model of the plant and the communication bus. The formal definition of a resource-constrained system is given thereafter.
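A minimal simulation step of this model is sketched below; it assumes the reconstructions of (4.9) and (4.10) given above, in particular that a non-transmitted measure is simply read as zero by the controller.

```python
import numpy as np

def rcs_step(A, B, C, x, u_prev, v, delta, sigma, b_r, b_w):
    """One sampling period of a resource-constrained system: the scheduling
    vectors select which control values reach the actuators and which measures
    reach the controller, subject to the bandwidth bounds (4.6)-(4.7)."""
    assert sigma.sum() <= b_r and delta.sum() <= b_w
    u = delta * v + (1 - delta) * u_prev      # non-updated inputs are held (4.9)
    x_next = A @ x + B @ u
    y = C @ x
    eta = sigma * y                           # only transmitted measures are seen (4.10)
    return x_next, u, eta
```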

DEFINITION.– A resource-constrained system is a mixed logical dynamical system having three inputs: the command input υ(k), the scheduling vector of the sensors-to-controller link σ(k) and the scheduling vector of the controller-to-actuators link δ(k). It has one output denoted η(k). Its mathematical model is defined by:

– a recurrent equation (4.5) describing the sampled dynamics of the plant,

– inequality constraints (4.6) and (4.7) expressing the limitations of the communication medium,

– logic formulas (4.9) describing the mapping of the computed controller outputs υ(k) to the plant inputs u(k), knowing the scheduling decisions δ(k),

– logic formulas (4.10) describing the mapping of the sampled plant outputs y(k) to the controller’s inputs η(k), knowing the scheduling decisions σ(k).

The particularity of a resource-constrained system, compared to a sampled-data system, is that at each sampling period, it is important to determine:

– the measures that should be acquired (it is only possible to acquire at most br measures, defined by the scheduling function σ(k)),

– the control commands that should be applied (it is only possible to apply at most bw control commands, defined by the scheduling function δ(k)),

– the value of the applied control commands.

4.3.2. Optimal integrated control and scheduling of resource constrained systems

This section considers a resource-constrained system where the full state vector x(k) is available to the controller at each sampling period. It is shown how the model predictive control approach may be seen as an algorithmic solution allowing the optimal values of the control signals and the communication scheduling of resource-constrained systems to be computed on-line, at the same time.

Using MPC, an optimal control problem is solved on-line at each sampling period Ts. It aims at finding the optimal control values sequence

images

and the optimal communication sequence images, which are solutions of the following optimization problem:

(4.11) images

The solution of this problem is based on the prediction of the future evolution of the system over a horizon of N sampling periods. This predicted evolution is calculated from the model of the plant, knowing the current state x(k) of the system. The predicted state variables represent the predicted values of the system states x(k + h), h = 1, …, N. The corresponding control and communication sequences are called virtual sequences (virtual control sequence and virtual communication sequence), because they are based on the predicted evolution of the system. The resolution of this problem aims at finding the optimal virtual control sequence and the optimal virtual communication sequence that minimize a quadratic cost function over a finite horizon of N sampling periods. Assuming that the optimal virtual sequences exist, the actual control commands are obtained by setting

(4.12) images

and

(4.13) images

and disregarding the remaining elements

images

At the next sampling period (step k+1), the whole optimization procedure is repeated, based on x(k + 1).
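To make the joint optimization concrete, the sketch below solves it by brute force in the special case bw = 1 (a single control value transmitted per period) and full state feedback: every length-N actuator schedule is enumerated and, for each schedule, the transmitted values are obtained in closed form from the resulting unconstrained quadratic program. This is exponential in N and only illustrates the principle of problem (4.11); it is not the OPP/RPP heuristics themselves, and the cost weights Q and R are generic assumptions.

```python
import itertools
import numpy as np

def mpc_control_and_schedule(A, B, Q, R, x0, u_prev, N):
    """Return (actuator index to update now, value to send), minimizing a
    finite-horizon quadratic cost over all actuator schedules with bw = 1."""
    n, m = B.shape
    best_cost, best_first = np.inf, None
    for seq in itertools.product(range(m), repeat=N):
        # Applied input at step h:  u_h = u_base[h] + S[h] @ z,
        # where z[t] is the value sent at step t (hold logic of eq. (4.9)).
        u_base, S = [], []
        ub, Sh = u_prev.astype(float), np.zeros((m, N))
        for h, j in enumerate(seq):
            ub, Sh = ub.copy(), Sh.copy()
            ub[j] = 0.0
            Sh[j, :] = 0.0
            Sh[j, h] = 1.0
            u_base.append(ub)
            S.append(Sh)
        # Predicted state:  x_{h+1} = a + G @ z  (affine in the decision vector z)
        a, G = x0.astype(float), np.zeros((n, N))
        H, f, c = np.zeros((N, N)), np.zeros(N), 0.0
        for h in range(N):
            H += S[h].T @ R @ S[h]                  # input cost of step h
            f += S[h].T @ R @ u_base[h]
            c += u_base[h] @ R @ u_base[h]
            a, G = A @ a + B @ u_base[h], A @ G + B @ S[h]
            H += G.T @ Q @ G                        # state cost of step h+1
            f += G.T @ Q @ a
            c += a @ Q @ a
        z = np.linalg.solve(H, -f)                  # minimizer of z'Hz + 2f'z + c
        cost = z @ H @ z + 2.0 * f @ z + c
        if cost < best_cost:
            best_cost, best_first = cost, (seq[0], z[0])
    return best_first
```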

The optimality of the model predictive controller may be proved if an infinite horizon cost function is used and if the prediction horizon N is chosen infinite. At each time step k, and for any extended state, the model predictive controller over an infinite horizon computes the optimal solutions υ*(k) and δ*(k) that minimize the cost function, subject to the communication constraints. Its optimality directly results from the Bellman optimality principle, which states:

DEFINITION.– An optimal policy has the property that whatever the initial state and the initial decision are, the remaining decisions must constitute an optimal policy with respect to the state resulting from the first decision.

In many practical situations, it is sufficient to choose the prediction horizon N sufficiently larger than the response time of the system to obtain a performance that is close to optimal. This is possible when the virtual sequences of the optimal control commands, which are computed at each sampling period, converge exponentially to zero as the horizon increases. The obtained finite horizon solution then approximates the optimal infinite horizon solution.

However, the on-line solving of the optimization problem required by the MPC approach is very costly. For that reason, an on-line scheduling algorithm, called OPP, was proposed in [BEN 06a; BEN 06b]. While being based on a pre-computed optimal off-line schedule, OPP makes it possible to allocate the communication resources on-line, based on the state of the controlled dynamical systems. It was shown that, under mild conditions, OPP ensures the asymptotic stability of the controlled systems and improves, in all situations, the control performance compared to the basic static scheduling. Furthermore, under these conditions, the determination of the OPP control and scheduling decisions amounts to comparing a limited number of quadratic functions of the state.
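The on-line part of such a scheme therefore reduces to a few quadratic-form evaluations, as in the skeleton below; the construction of the pre-computed matrices Pj is the off-line part of OPP/RPP and is not reproduced here, so this is only a structural sketch.

```python
import numpy as np

def opp_like_decision(x_extended, P_list):
    """Select, among the candidate scheduling patterns whose pre-computed cost
    matrices are listed in P_list, the one minimizing the quadratic cost of the
    current extended state (actual state extended by previous controls)."""
    costs = [x_extended @ P @ x_extended for P in P_list]
    return int(np.argmin(costs))
```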

4.4. A convex optimization approach to feedback scheduling

4.4.1. Problem formulation

Consider a collection of N continuous-time LTI systems {Si}1≤i≤N. Each system Si is described by the state space representation

(4.14) ẋi(t) = Ai xi(t) + Bi ui(t)

where xi(t) and ui(t) denote the state and the control input of system Si. An infinite horizon continuous-time cost functional Ji, defined by

(4.15) Ji = ∫₀^∞ ( xiT(t) Qi xi(t) + uiT(t) Ri ui(t) ) dt

Figure 4.6. Integrated plant-state and execution-time feedback scheduling


is associated with Si, and represents the design specifications of its ideal controller. It is assumed that Qi and Ri are positive definite matrices of appropriate dimensions and that the pair (Ai, Bi) is reachable. Each system Si is controlled by a control task τi, characterized by a period hi and an execution time Ci. These two parameters may be time varying. The N control tasks {τi}1≤i≤N are executed on the same processor. A global cost functional J(x1, …, xN, u1, …, uN), defined by

(4.16) J = ω1 J1 + ω2 J2 + … + ωN JN

is associated with the entire system, allowing the evaluation of its global performance. The constants {ωi}1≤i≤N are weighting factors, representing the relative importance of each control loop.

The main objective of this section is to design a feedback scheduler that assigns the task periods {hi}1≤i≤N so as to optimize the global control performance (defined by J), subject to processor utilization constraints (defined by Usp), based on both the task execution times {Ci}1≤i≤N and the plant state measurements {xi}1≤i≤N, as shown in Figure 4.6.

Remark. Usp represents the desired processor utilization of the tasks whose periods are controlled by the feedback scheduler. It may be chosen by the designer in order to cope with the presence of other tasks, whose processor utilization is not controlled by the feedback scheduler. In practice, even in situations where all the tasks are controlled by the feedback scheduler, choosing Usp less than the schedulable utilization bound provides a “utilization margin” and avoids overruns that may result from the variations of the task execution times.

4.4.2. Cost function definition and approximation

4.4.2.1. Cost function definition

For a given fixed sampling period hi of system Si, assume that there exists an optimal sampled-data controller, defined by a state-feedback control gain, which minimizes the cost functional (4.15), subject to the plant model (4.14) and to the zero-order hold constraints

(4.17) images

The expression of this optimal gain may be found in control textbooks, for example [ÅST 97]; its computation requires the resolution of an algebraic Riccati equation (ARE).

Let ti(k) be the kth instant where the control input ui is updated and

(4.18) images

An interesting property of optimal LQ sampled-data control is that the cost functional (4.18) may be characterized by a unique positive definite matrix Si(hi) of size ni × ni, which is the solution of the associated ARE. This property considerably simplifies the computation of the cost function (4.18) when the optimal sampled-data control is used: instead of simulating the evolution of the sampled-data system (4.14), (4.17) and using equation (4.18) for the cost computation, it suffices to use the formula

images

In the following, Sc denotes the solution of the ARE associated with the problem of finding the optimal continuous-time controller, which minimizes the cost functional (4.15) subject to the plant dynamics (4.14). The QoC measure associated with each system will be the difference between the optimal sampled-data cost and the optimal continuous-time cost:

images

and similarly

images
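A possible numerical route to these quantities is sketched below: the plant and the continuous cost are discretized over one period h (the standard "lqrd"-type computation using a block matrix exponential), the discrete ARE with cross term gives S(h), and the continuous ARE gives Sc. Function names and the use of SciPy solvers are implementation choices, not the book's code.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are, solve_continuous_are

def sampled_lq_cost_matrix(Ac, Bc, Q, R, h):
    """S(h) such that the optimal sampled-data LQ cost from x0 is x0' S(h) x0."""
    n, m = Bc.shape
    # ZOH discretization of the dynamics
    M = np.block([[Ac, Bc], [np.zeros((m, n + m))]])
    Phi = expm(M * h)
    A, B = Phi[:n, :n], Phi[:n, n:]
    # discretization of the continuous quadratic cost over one period
    QQ = np.block([[Q, np.zeros((n, m))], [np.zeros((m, n)), R]])
    L = np.block([[-M.T, QQ], [np.zeros((n + m, n + m)), M]])
    PhiL = expm(L * h)
    W = PhiL[n + m:, n + m:].T @ PhiL[:n + m, n + m:]   # int_0^h F(s)' QQ F(s) ds
    Q1, Q12, Q2 = W[:n, :n], W[:n, n:], W[n:, n:]
    Q1, Q2 = (Q1 + Q1.T) / 2, (Q2 + Q2.T) / 2           # symmetrize round-off
    return solve_discrete_are(A, B, Q1, Q2, s=Q12)

def qoc_measure(Ac, Bc, Q, R, h, x0):
    """Difference between the sampled-data and continuous optimal costs."""
    Sc = solve_continuous_are(Ac, Bc, Q, R)
    return x0 @ (sampled_lq_cost_matrix(Ac, Bc, Q, R, h) - Sc) @ x0
```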

Figure 4.7. X4 quadrotor: cost coefficients vs. sampling period


4.4.2.2. Introductory example: quadrotor attitude control

Consider the linearized model of the attitude of the quadrotor, which was described in section 2.3.7 of Chapter 2.

The blue “+” marks in Figure 4.7 represent the values of the different coefficients of the matrix (S(h) − Sc) as a function of the sampling period. It is easy to see that the coefficients of (S(h) − Sc) may be approximated by parabolic functions of h. The green curve in Figure 4.7 represents the mean square best-fitting parabola of (S(h) − Sc). Using a basic linear least-squares method, S(h) − Sc was approximated as

images

where

images
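The fitting step itself is a plain linear least-squares problem, as in the short sketch below; the polynomial degree and whether constant and linear terms are kept are modeling choices, so this is only one plausible reading of the fit behind Figure 4.7.

```python
import numpy as np

def fit_parabolic_cost(h_samples, S_samples, Sc):
    """Fit each coefficient of S(h) - Sc by a degree-2 polynomial in h.
    S_samples is a list of Riccati solutions computed at the periods in
    h_samples (e.g. with sampled_lq_cost_matrix above); gamma[d] holds the
    matrix of coefficients multiplying h**d."""
    h = np.asarray(h_samples, float)
    D = np.stack([np.asarray(S) - Sc for S in S_samples])   # shape (K, n, n)
    V = np.vander(h, 3, increasing=True)                    # columns: 1, h, h^2
    gamma, *_ = np.linalg.lstsq(V, D.reshape(len(h), -1), rcond=None)
    return gamma.reshape(3, *Sc.shape)
```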

This parabolic evolution of the cost was observed in two other benchmarks: a linearized model of an unstable pendulum (for 0 ≤ h ≤ 100 ms) and a 14th-order car active suspension system [BEN 06b] (for 0 ≤ h ≤ 15 ms), as illustrated in [BEN 08].

These examples illustrate that, in many situations, it is possible to approximate the relationship between the solutions of the Riccati equation and the sampling period using polynomial interpolations, over a defined range of sampling periods. In the remainder of this section, it is assumed that, over this range of sampling periods:

(4.19) images

Note that although only this approximation is considered here, the obtained results may be easily generalized to other polynomial approximations.

It is worth remarking that

– the choice of the admissible range of sampling periods depends on the quality and the validity of approximating the true values of the Riccati matrix coefficients using parabolic functions;

– based on Riccati equation solution approximations, analytic expressions of the control gains as a function of the sampling period may easily be deduced;

– the relationship between the cost function and the sampling frequencies may become more complicated, and even non-convex, when the frequencies are decreased to near the Nyquist rate, as illustrated in [EKE 00].

4.4.3. Optimal sampling period selection

4.4.3.1. Problem formulation

Let fi = 1/hi be the sampling frequency (corresponding to the sampling period hi). Assume that Ci is constant (the following subsection shows how time variations of Ci may be handled). The optimal sampling frequency selection problem may be formulated as follows:

(4.20) images

where images.

Remark. In the optimization problem (4.20), the values of the sampling frequencies are implicitly upper-bounded by the processor utilization constraint. Furthermore, it is also straightforward to add upper-bound constraints on the sampling frequencies (i.e. constraints of the form fi ≤ fmax,i or, equivalently, hi ≥ hmin,i). However, these additional constraints slightly increase the complexity of the problem resolution.

4.4.3.2. Problem solving

Problem (4.20) has a convex objective function and affine inequality constraints. Consequently, if the feasibility region is non-empty, its optimal solution exists and may be computed analytically using the Karush–Kuhn–Tucker (KKT) conditions [BOY 04]. The analysis of the different cases of the KKT conditions leads to the following algorithm (Algorithm 4.1) for the computation of the optimal sampling frequencies:


Algorithm 4.1: Optimal sampling frequencies computation


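A minimal sketch of the core closed-form step of such an algorithm is given below, assuming (consistently with the parabolic cost approximation above) that the expected cost of loop i decreases as βi/fi² with a state-dependent weight βi ≥ 0, and that only the utilization constraint is active. This captures the structure of the KKT solution, not the full case analysis (frequency bounds, βi = 0, etc.) of Algorithm 4.1; see [BEN 08] for the complete treatment.

```python
import numpy as np

def optimal_frequencies(beta, C, U_sp):
    """Closed-form frequency assignment minimizing sum(beta_i / f_i**2)
    subject to sum(C_i * f_i) <= U_sp (KKT conditions with no active
    frequency bounds)."""
    beta, C = np.asarray(beta, float), np.asarray(C, float)
    a = np.cbrt(beta / C)              # f_i proportional to (beta_i / C_i)^(1/3)
    return U_sp * a / np.sum(C * a)    # scaled so that sum(C_i f_i) = U_sp

# example: two loops with equal execution times, loop 1 currently "needs" more
# f = optimal_frequencies(beta=[4.0, 1.0], C=[0.01, 0.01], U_sp=0.8)
```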

Algorithm 4.1 results from application of KKT conditions (see [BEN 08] for a complete proof).

4.4.3.3. Feedback-scheduling algorithm deployment

The feedback scheduler is executed as a periodic task, with period hfbs. The choice of this period is a trade-off between the complexity of the feedback scheduler and the performance improvements it brings, as illustrated in [HEN 05].

Task execution times may be estimated on-line and smoothed using a first-order filter

(4.21) images

where λ is a forgetting factor, and Ĉi(khfbs) and Ci(khfbs) are, respectively, the estimated and the measured execution times at instant khfbs.

In practice, using algorithm 4.1, the optimal sampling frequency of a plant tends to be reduced to zero as the plant approaches the equilibrium. This has the drawback of reducing the disturbance rejection abilities of that plant. Another drawback is that, when all the plants approach the equilibrium, the coefficients βi tend to approach zero, and the optimal sampling frequency assignment may result in an undetermined form 0/0. Fortunately, all these issues may be solved if a constant term representing “a prediction of the cost of future disturbances” is added to the cost functions (4.15). This amounts to replacing

images

by

images

where the added term is a constant coefficient, chosen off-line according to the future disturbances that a given plant may be subjected to. These coefficients may be chosen by trial and error, until the best behavior is obtained. A small value increases the sensitivity of the optimal sampling period with respect to the state values; a larger value reduces this state sensitivity.

Remark. The expression of this term may be explicitly computed if a linear quadratic Gaussian formulation and a finite optimization horizon are adopted in the optimal sampling frequency assignment problem (instead of the deterministic infinite horizon formulation adopted in this section).

4.4.4. Application to the attitude control of a quadrotor

In this section, the proposed feedback scheduling approach is applied to the attitude control of the quadrotor. As illustrated in Chapter 2, the roll, pitch and yaw control loops of the attitude controller are independent. In fact, the linearized model of the quadrotor (2.35) is made of three independent second-order sub-systems. For this reason, each loop may be implemented by an independent control task. The attitude controller thus contains three control tasks τϕ, τθ, and τψ (respectively the roll, pitch and yaw control tasks). Task execution times are equal to Cϕ = Cθ = Cψ = C = 10 ms. Their respective periods are denoted by hϕ, hθ, and hψ. The desired utilization of these three tasks is Usp = 80%. The task periods hϕ, hθ, and hψ that may be assigned by the feedback-scheduling algorithm verify

images

where images. Constants images and images are equal to 10−8. The period of the feedback scheduler is hfbs = 100 ms.

In these simulations, the attitude of the quadrotor has to follow the set points depicted in Figure 4.8. From t = 0 s to t = 20 s, the roll and pitch angle set points are sine signals with respective amplitudes 17° and 5° and periods 10 s and 20 s. From t = 20 s, the roll and pitch set points are zero. The yaw angle set point is zero. From t = 20 s, a yaw torque disturbance, consisting of a band-limited white noise with period 10^−3 s and noise power 2 × 10^−4, is applied to the quadrotor. Note that in this simulation example, the task execution times were assumed to be constant. Although the control design was based on the quadrotor linearized model, the simulations were applied to the nonlinear models (2.33) and (2.34).

Figure 4.8. Set points of the attitude controller


The quadrotor Euler angles, as well as the sampling periods of their associated tasks assigned by the feedback scheduler, are depicted in Figure 4.9. This figure illustrates how the feedback-scheduling algorithm reduces the sampling period of the control task that has the greatest need for computing resources, in order to improve the global control performance. From t = 0 s to t = 20 s, the sampling period of the yaw control task is set close to the maximal allowed value (100 ms). Roll and pitch control task period reductions are correlated with the rate of change of the corresponding roll and pitch angles. In fact, the roll control task period is minimal when the roll angle crosses zero. The pitch control task period is augmented when the pitch angle reaches its minimum or maximum values. The same observation holds for the roll angle. Since the roll set point has the greatest amplitude and rate of variation, the feedback scheduler assigns the largest share of the computational resources to the roll control task.

From instant t = 20 s to instant t = 30 s, the specified set points for the three Euler angles are zero. Since images, and no torque disturbance is applied to the quadrotor, the assigned sampling periods converge smoothly to images.

Figure 4.9. Feedback scheduling of the attitude controller


At instant t = 30 s, when the yaw torque disturbance starts to be applied, the feedback scheduler smoothly reduces the period of the yaw control task to its minimum value (16.6 ms), and augments the periods of the roll and pitch tasks to the maximum allowed value (100 ms), in order to achieve a better rejection of the yaw torque disturbance.

4.5. Control and real-time scheduling co-design via an LPV approach

The feasibility of a feedback scheduler managing the real-time parameters of a robot arm controller has been shown in section 1.4.2.3. In this preliminary example, the nonlinear nature of the process prevents finding simple loss functions relating the sampling periods of the controller’s tasks to the tracking performance. A very rough model has been extracted from simulations, providing a static relative CPU allocation between three compensation tasks. These loss functions are globally evaluated over a whole trajectory. The resulting feedback scheduler efficiently controls the overall CPU load, but only from the measured task CPU loads. However, even for a given trajectory, the contribution of each task to the tracking performance of the controller is likely to vary along the trajectory. For example, the disturbances due to Coriolis and centrifugal forces increase with the velocity, and the CPU share allocated to compute the corresponding compensation action should be increased at high speed to better cancel this disturbance.

The following sections summarize a first approach to taking the nonlinear plant’s state into account to dynamically adapt the scheduling parameters of the robot arm controller; this approach is exposed in more detail in [SEN 08].

Figure 4.10. Feedback-scheduling block diagram; control scheme for CPU resources


4.5.1. An LPV feedback scheduler sensitive to the plant’s closed-loop performance

Feedback scheduling is a dynamic approach allowing a better usage of the computing resources, in particular when the workload changes (e.g. due to the activation of a newly admitted task). The CPU activity is controlled according to the resource availability by adjusting the scheduling parameters (e.g. the control intervals) of the plant control tasks, as recalled in section 1.4. However, as the goal of the controllers is the achievement of a requested performance level, the use of computing resources should also be linked to the dynamic behavior of the plant(s) to be controlled. The main result given in this section consists in deriving a new feedback-scheduling controller which depends on the plant trajectory, in view of an “optimal” resource sharing. It is designed in the LPV/H∞ framework for polytopic systems.

Following previous results in [SIM 05] and section 1.4.2.3, the feedback scheduler, as illustrated in Figure 4.10, is a dynamic system between the control task frequencies and the processor utilization. As far as the adaptation of the control tasks is concerned, the load of the other tasks is seen as an output disturbance.

The CPU utilization is assumed to be measured or estimated, and the scheduling is here limited to periodic (or, more exactly, recurrent) tasks. In this case, the processor load induced by a task is defined by U = c/h, where c and h are the execution time and period of the task. Hence, as in [CER 02], the processor load induced by a task is estimated at each period hs of the scheduling controller as

(4.22) images

where h is the sampling period currently assigned to the plant control task (i.e. at each sampling instant khs), and images is the mean of its measured job execution times. λ is a forgetting factor used to smooth the measure (here λ = 0.3).

For an n-task control system, note that, as in [SIM 03], if the execution times are constant, then the relationship U = c1 f1 + … + cn fn, where fi = 1/hi is the frequency of task i, is a linear function of the frequencies (which is not the case if it is expressed as a function of the task periods). Therefore, using (4.22), the estimated CPU load is given as

(4.23) images

However, in practice, the execution time of the control tasks may vary according to the run-time environment, e.g. the actual processor speed. As proposed in [SIM 05], a “normalized” linear model of task i (i.e. independent of its execution time) is used for the scheduling controller synthesis: the execution time is omitted from the model and is compensated on-line by gain scheduling, as shown below:

(4.24) images
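The measurement side of this loop is summarized by the sketch below, in the spirit of (4.22)–(4.23); the exact filtering convention for the forgetting factor and the function name are assumptions made for illustration.

```python
import numpy as np

def estimate_cpu_load(c_hat_prev, c_measured, h_current, lam=0.3):
    """Per-task load estimate fed back to the scheduling controller: the
    measured job execution times are low-pass filtered and divided by the
    currently assigned periods (U_i = c_i / h_i)."""
    c_hat = lam * np.asarray(c_hat_prev) + (1 - lam) * np.asarray(c_measured)
    U_tasks = c_hat / np.asarray(h_current)
    return c_hat, U_tasks, U_tasks.sum()   # smoothed times, per-task loads, total load
```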

Also, as explained above, the use of computing resources is chosen to depend on the plant trajectory. Hence, the computing resource control scheme is illustrated in Figure 4.10, for simplicity in the case of a two control-tasks system.

In Figure 4.10, the interval of admissible frequencies is limited by the “saturation” block; α represents a set of real parameters {α1, α2, …, αn} dedicated to the set of control tasks {U1, U2, …, Un}. These parameters are used to make the resource sharing vary according to the plant trajectory. In the two control-tasks case, where U = U1 + U2, it is required that

(4.25) U1 = α U

(4.26) U2 = (1 − α) U

with α being a varying parameter. This makes the control scheme flexible enough to distribute the use of computing resources on-line between the different control tasks. The values of the time-varying parameters {α1, α2, …, αn} can be chosen in many different ways, e.g. from the on-line computation of optimal cost functions or from a dependency on the control effort. This is illustrated in detail in section 4.5.2 for the robot-arm control example.
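As a static illustration of the resource-sharing target expressed by (4.25)–(4.26), the short sketch below splits a total utilization between tasks according to time-varying shares and converts the result to periods; in the chapter this distribution is achieved dynamically through the LPV controller K(α) and the M(α) matrix, so this is only the steady-state picture.

```python
def distribute_load(U_total, alphas, c_hat):
    """Give task i the share U_i = alpha_i * U_total and deduce its period
    from U_i = c_i / h_i using the smoothed execution times; the periods
    should then be clipped to each controller's admissible range."""
    assert abs(sum(alphas) - 1.0) < 1e-9
    U = [a * U_total for a in alphas]
    h = [c / u for c, u in zip(c_hat, U)]
    return U, h
```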

Here, the design of the controller K(α) is done using the H∞ control approach for LPV systems. The H∞ control scheme used to synthesize the controller K(α) is given in Figure 4.11.

In Figure 4.11, G′ is the model of the scheduler, the output of which is the vector of all task loads. To get the sum of all the task loads as in (4.24), C′ = [1 … 1] is used. The H transfer function represents the dynamic behavior of the sensor which measures the load of the other tasks; it may be a simple first-order filter. The template We specifies the performance requirements on the load-tracking error. It is chosen in the continuous-time domain as

(4.27) images

Figure 4.11. An LPV/H∞ controller for CPU resources


with images to obtain a closed-loop settling time of 300 ms, a static error less than 1 % and a good robustness margin.

The resource distribution is done through the M(α) matrix defined below. Note that for an n-task system

(4.28) images

(4.29) images

where α1 + α2 + … + αn = 1. Then

(4.30) images

(4.31) images

(4.32) images

(4.33) images

(4.34) images

Then to ensure the on-line distribution of the computing resources M is chosen as

(4.35) images

(4.36) images

Using [APK 95], the LPV controller K(α) is obtained through the solution of the H∞ control problem for polytopic systems, which consists in solving two LMIs. The design of K(α) can then be done directly in the discrete-time domain, or in the continuous-time domain and then discretized. For this example, K(α) has been synthesized in the continuous-time domain using the H∞ control approach for polytopic systems, as described in detail in [SEN 08].

By solving the H∞ problem for the LPV system using the YALMIP interface and the SeDuMi solver [STU 99; LOF 04], one obtains γopt = 1.8885 and a controller of order 7.

4.5.2. Application to a robot-arm control

The seven-degrees-of-freedom Mitsubishi PA10 robot arm already used in section 1.4.2.3 is considered again. The problem under consideration is the tracking of a desired trajectory for the position of the end effector. Using the Lagrange formalism, the following model can be obtained:

(4.37) images

where q stands for the positions of the joints, M is the inertia matrix, Gra is the gravity forces vector and C gathers Coriolis, centrifugal and friction forces.

The structure of the ideal linearizing controller includes a compensation of the gravity, Coriolis/centrifugal effect, and inertia variations as well as a proportional-derivative (PD) controller for the tracking and stabilization problem, of the form

(4.38) images

leading to a linear closed-loop tracking error dynamics, where qd and q̇d stand for the reference trajectory positions and velocities.

As in section 1.4.2.3, the controller is split into five tasks, i.e. a specific task is considered for the PD control, for the trajectory generation and for each of the gravity, inertia and Coriolis compensations, which are implemented as a multi-rate controller. In this feedback scheduling scheme, only the periods of the compensation tasks are adapted, as they are time consuming compared with the PD task while being less critical for stability.
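A minimal sketch of such a linearizing (computed-torque) controller is given below. M, C_vec and Gra are callables returning the model terms of (4.37); their signatures and the gain structure are illustrative assumptions, and in the multi-rate implementation each compensation term is actually refreshed by its own task at its own period.

```python
def computed_torque(q, dq, q_d, dq_d, ddq_d, M, C_vec, Gra, Kp, Kd):
    """Inertia, Coriolis/centrifugal and gravity compensation wrapped around a
    PD tracking law, yielding a linear tracking error dynamics."""
    v = ddq_d + Kd @ (dq_d - dq) + Kp @ (q_d - q)   # PD action + acceleration feedforward
    return M(q) @ v + C_vec(q, dq) + Gra(q)         # joint torque command
```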

4.5.2.1. Performance evaluation of the control tasks in view of optimal resource distribution

In order to associate the use of computing resources with the robot trajectory, the contribution of each of the three control tasks to the closed-loop system performances has been evaluated as a function of its execution period.

The methodology is the following. Assuming a nominal sampling period of 1 ms for each task, the period of each compensation control task is changed, and new simulations are performed during which the following cost is computed:

(4.39) images

where Pref is the desired position of the end tip in the operational space, computed from qd using the geometric model. Pc is the position obtained when all the control tasks run with the minimal sampling period of 1 ms. Finally, images is the position obtained when the sampling period of one of the compensation tasks is increased from 1 to 30 ms.

Simulations are performed for a particular robot trajectory, defined by the reference vector qd applied to all the robot joints. Here qd goes from π/2 to −π/2. Figure 4.12 shows the evolution of the cost function J for the three compensation control tasks.

4.5.2.1.1. Discussion

It is difficult to infer general relations between the execution periods of the compensation tasks and the trajectory tracking performance; nevertheless, a natural interpretation can be proposed. First, the gravity compensation is very sensitive to an increase of its sampling period at the end of the trajectory, as the cost increases in the second part of the trajectory (the first part of the graph, as the trajectory goes from π/2 to −π/2). It is thus desirable to increase the CPU resources available to this task linearly with the trajectory position. The situation is almost opposite for the inertia compensation. Finally, even if some variations can be observed, a constant CPU allocation for the Coriolis compensation task is adequate all along the trajectory.

Then the distribution of the control task periods is chosen as

(4.40) images

where αC = 0.25, αI = 1 − αG, and αG is linked to the plant trajectory by

(4.41) images

where [αmin; αMax] = [0.1; 0.65], qini is the initial position, and qend is the final trajectory position.
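One plausible reading of (4.41) is a linear interpolation of αG along the trajectory, as sketched below; the clipping and the use of a single representative joint position are illustrative assumptions.

```python
def trajectory_alphas(q, q_ini, q_end, a_min=0.1, a_max=0.65, alpha_C=0.25):
    """Trajectory-dependent resource shares of the robot case study: alpha_G
    varies linearly with the position between a_min and a_max (eq. (4.41)),
    alpha_C is constant and alpha_I is deduced as in (4.40)."""
    s = (q - q_ini) / (q_end - q_ini)      # progress along the trajectory, 0 -> 1
    s = min(max(s, 0.0), 1.0)
    alpha_G = a_min + s * (a_max - a_min)
    alpha_I = 1.0 - alpha_G
    return alpha_G, alpha_C, alpha_I
```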

4.5.2.2. Simulation with TrueTime

TrueTime is a free toolbox for MATLAB/Simulink aimed at easing the simulation of the temporal behavior of multi-tasking real-time systems executing controller tasks [OHL 07]. In this application, the period of the feedback scheduler has been fixed to 30 ms, so as to be larger than the robot control task periods, whose limits have been set to the range from 1 ms to 30 ms.

In the experiment depicted in Figure 4.14, the desired CPU usage is initially set to 50% of the maximum usage. The upper plots show the task periods and CPU usage. The PD loop period is fixed at 1 ms and the trajectory generator at 5 ms.

Figure 4.12. Cost variation due to varying sampling for the gravity, Coriolis, and inertia compensation tasks


As seen in Figure 4.14(a), the loads of the compensation tasks (gravity, Coriolis, and inertia) vary on-line as expected according to the parameter αI (see Figure 4.15(a)). The corresponding evolution of the task periods is shown in Figure 4.14(b). Moreover, in Figure 4.15(b), the adaptive LPV case (α varying) is compared with the constant case (α = 0.375). It can be seen that the LPV case leads to a smaller cost, which emphasizes the practical interest of the proposed approach.

4.5.2.3. Feasibility and possible extensions

Note that, as explained in [SIM 05] and depicted in Figure 1.10, the scheduling feedback loop can be easily implemented on top of an off-the-shelf real-time operating system (e.g. POSIX compliant) in the form of an additional real-time periodic task, i.e. a control module whose function is specified and encoded by the control designer. The inputs are the measured execution times of the control tasks, and the set point is a desired global computing load. The outputs are the sampling intervals of the gravity, Coriolis, and inertia control tasks, which are triggered by programmable timers provided by the operating system, as illustrated in section 7.4.1.

Thanks to the use of a hierarchical control structure, the given results may also be integrated with existing methods for the design of varying sampled controllers, as in [TAN 02] or using the LPV approach of section 2.4, which makes this integrated approach somewhat generic.

4.6. Summary

In this chapter, a few co-design approaches integrating both control and implementation constraints have been presented. Even when the controlled plants are linear, theoretically optimal solutions are too computing intensive to be real-time compliant. Therefore, tractable solutions are designed under restrictive assumptions, leading to operational solutions with limited generality. For example, section 4.2 gives an integrated scheme combining varying-sampling controllers for linear systems and elastic task models, leading to an effective and implementable solution for which no stability proof has been provided up to now. Conversely, the MPC-based approach of section 4.3, using hybrid modeling, is able to cope with nonlinear plants, state and actuator constraints, and limitations in CPU power and networking bandwidth, working on a slotted timescale. As optimal solutions are too complex to be used in real time, only sub-optimal controllers can be implemented. Other examples assume, in section 4.4, the convexity of all the cost functions associating the controllers’ performance with their execution periods, or are based, in section 4.5, on cost functions measured on a specific trajectory of a given nonlinear plant. Note also that Chapter 5 describes a control/scheduling co-design approach assuming LQ controllers associated with (m, k)-firm scheduling policies.

Figure 4.13. Positions and control torques


Figure 4.14. TrueTime real-time parameters: (a) loads and (b) periods


Figure 4.15. (a) Variation of αI and (b) total cost


Indeed, control and scheduling co-design over networks handles complex and heterogeneous systems, so that it is unlikely that a fully general, operational and unique theoretical framework will emerge soon. Conversely, well-chosen case studies are expected to bring effective solutions for some classes of systems and for some specific problems.

4.7. Bibliography

[APK 95] APKARIAN P., AND GAHINET P., A convex characterization of gain-scheduled H∞ controllers, IEEE Transactions on Automatic Control, vol. 40, p. 853–864, May 1995.

[ÅRZ 00] ÅRZÉN K.-E., CERVIN A., EKER J., AND SHA L., An introduction to control and scheduling co-design, 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000.

[ÅST 97] ÅSTRÖM K. J., WITTENMARK B., Computer-Controlled Systems, Information and System Sciences Series, Prentice Hall, Englewood Cliffs, NJ, 3rd edition, 1997.

[BEM 99] BEMPORAD A., MORARI M., Control of systems integrating logic, dynamics, and constraints, Automatica, vol. 35, p. 407–427, 1999.

[BEN 06a] BEN GAID M., Optimal scheduling and control for distributed real-time systems, PhD thesis, University of Evry Val d'Essonne, France, 2006.

[BEN 06b] BEN GAID M.-M., ÇELA A., AND HAMAM Y., Optimal integrated control and scheduling of networked control systems with communication constraints: application to a car suspension system, IEEE Transactions on Control Systems Technology, vol. 14, p. 776– 787, 2006.

[BEN 06c] BEN GAID M.-M., ÇELA A., HAMAM Y., AND IONETE C., Optimal scheduling of control tasks with state feedback resource allocation, 2006 American Control Conference ACC’06, Minneapolis, USA, June 2006.

[BEN 08] BEN GAID M., SIMON D., AND SENAME O., A convex optimization approach to feedback scheduling, 16th IEEE Mediterranean Conference on Control and Automation MED’08, Ajaccio, France, June 2008.

[BEN 09] BEN GAID M.-M., ÇELA A., AND HAMAM Y., Optimal real-time scheduling of control tasks with state feedback resource allocation, IEEE Transactions on Control Systems Technology, vol. 17, p. 309–326, March 2009.

[BOY 04] BOYD S. P., AND VANDENBERGHE L., Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.

[BUT 00] BUTTAZZO G., AND ABENI L., Adaptive rate control through elastic scheduling, 39th Conference on Decision and Control, Sydney, Australia, December 2000.

[CAN 01] CANNON M., KOUVARITAKIS B., AND ROSSITER J.-A., Efficient active set optimization in triple mode MPC, IEEE Transactions on Automatic Control, vol. 46, p. 1307– 1312, August 2001.

[CAS 06] CASTAÑÉ R., MARTÍ P., VELASCO M., CERVIN A., AND HENRIKSSON D., Resource management for control tasks based on the transient dynamics of closed-loop systems, 18th Euromicro Conference on Real-Time Systems, Dresden, Germany, July 2006.

[CER 02] CERVIN A., EKER J., BERNHARDSSON B., AND ÅRZÉN K.-E., Feedback-feedforward scheduling of control tasks, Real-Time Systems, vol. 23, p. 25–53, July 2002.

[CER 03] CERVIN A., Integrated control and real-time scheduling, PhD thesis, Department of Automatic Control, Lund Institute of Technology, Sweden, April 2003.

[CER 06] CERVIN A., AND ALRIKSSON P., Optimal on-line scheduling of multiple control tasks: a case study, 18th Euromicro Conference on Real-Time Systems, Dresden, Germany, July 2006.

[DAČ 07] DAČIĆ D. B., AND NEŠIĆ D., Quadratic stabilization of linear networked control systems via simultaneous protocol and controller design, Automatica, vol. 43, p. 1145–1155, 2007.

[EKE 00] EKER J., HAGANDER P., AND ÅRZÉN K.-E., A feedback scheduler for real-time controller tasks, Control Engineering Practice, vol. 8, p. 1369–1378, 2000.

[GAR 89] GARCIA C.-E., PRETT D.-M., AND MORARI M., Model predictive control: theory and practice, Automatica, vol. 25, p. 335–348, 1989.

[HEN 02] HENRIKSSON D., CERVIN A., ÅKESSON J., AND ÅRZÉN K., On dynamic real-time scheduling of model predictive controllers, 41st IEEE Conference on Decision and Control, Las Vegas, USA, December 2002.

[HEN 05] HENRIKSSON D., AND CERVIN A., Optimal on-line sampling period assignment for real-time control tasks based on plant state information, 44th Conference on Decision and Control, Sevilla, Spain, December 2005.

[HEN 06] HENRIKSSON D., Resource-constrained embedded control and computing systems, PhD thesis, Department of Automatic Control, Lund Institute of Technology, Sweden, January 2006.

[HRI 99] HRISTU D., Optimal control with limited communication, PhD thesis, Division of Engineering and Applied Sciences, Harvard University, June 1999.

[KAL 63] KALMAN R. E., HO B., AND NARENDRA K., Controllability of linear dynamical systems, Contributions to Differential Equations, vol. 1, p. 188–213, 1963.

[LEM 07] LEMMON M., CHANTEM T., HU X., AND ZYSKOWSKI M., On self-triggered full information H∞ controllers, Proceedings of Hybrid Systems: Computation and Control, April 2007.

[LIN 02] LINCOLN B., AND BERNHARDSSON B., LQR optimization of linear system switching, IEEE Transactions on Automatic Control, vol. 47, p. 1701–1705, October 2002.

[LOF 04] LÖFBERG J., YALMIP: a toolbox for modeling and optimization in MATLAB, Computer Aided Control System Design Conference, Taipei, Taiwan, March 2004.

[LU 02] LU C., STANKOVIC J., TAO G., AND SON S., Feedback control real-time scheduling: framework, modeling and algorithms, Special Issue of Real-Time Systems Journal on Control-Theoretic Approaches to Real-Time Computing, vol. 23, p. 85–126, July 2002.

[MAR 02] MARTÍ P., FUERTES J., FOHLER G., AND RAMAMRITHAM K., Improving quality-of-control using flexible timing constraint: metric and scheduling issues, 23rd IEEE Real-Time Systems Symposium, Austin, USA, December 2002.

[MAR 04] MARTÍ P., LIN C., BRANDT S., VELASCO M., AND FUERTES J., Optimal state feedback based resource allocation for resource-constrained control tasks, 25th IEEE RealTime Systems Symposium, Lisbon, Portugal, December 2004.

[MIL 08] MILLÁN P., JURADO I., VIVAS C., AND RUBIO F.-R., Algorithm for networked control systems with large data dropouts, 47th IEEE Conference on Decision and Control CDC'08, Cancun, Mexico, December 2008.

[OHL 07] OHLIN M., HENRIKSSON D., AND CERVIN A., TrueTime 1.5—Reference Manual, January 2007.

[REH 04] REHBINDER H., AND SANFRIDSON M., Scheduling of a limited communication channel for optimal control, Automatica, vol. 40, p. 491–500, March 2004.

[ROB 07a] ROBERT D., SENAME O., AND SIMON D., A reduced polytopic LPV synthesis for a sampling varying controller: experimentation with a T inverted pendulum, European Control Conference ECC’07, Kos, Greece, July 2007.

[ROB 07b] ROBERT D., Contribution à l’interaction commande/ordonnancement, PhD thesis, INP Grenoble, France, January 2007.

[SEN 08] SENAME O., SIMON D., AND BEN GAID M., A LPV approach to control and real-time scheduling co-design: application to a robot-arm control, 47th IEEE Conference on Decision and Control CDC'08, Cancun, Mexico, December 2008.

[SET 96] SETO D., LEHOCZKY J. P., SHA L., AND SHIN K. G., On task schedulability in real-time control systems, 17th IEEE Real-Time Systems Symposium, New York, USA, December 1996.

[SHI 99] SHIN K. G., AND MEISSNER C. L., Adaptation of control system performance by task reallocation and period modification, Proceedings of 11th Euromicro Conference on Real-Time Systems, York, UK, p. 29–36, June 1999.

[SIM 03] SIMON D., SENAME O., ROBERT D., AND TESTA O., Real-time and delay-dependent control co-design through feedback scheduling, CERTS’03 Workshop on Co-design in Embedded Real-time Systems, Porto, Portugal, July 2003.

[SIM 05] SIMON D., ROBERT D., AND SENAME O., Robust control/scheduling co-design: application to robot control, 11th IEEE Real-Time and Embedded Technology and Applications Symposium, San Francisco, USA, March 2005.

[STU 99] STURM J. F., Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software, vol. 11/12, p. 625–653, 1999.

[TAB 07] TABUADA P., Event-triggered real-time scheduling of stabilizing control tasks, IEEE Transactions on Automatic Control, vol. 52, p. 1680–1685, 2007.

[TAN 02] TAN K., GRIGORIADIS K.-M., AND WU F., Output-feedback control of LPV sampled-data systems, International Journal of Control, vol. 75, p. 252–264, 2002.

[ZHA 08] ZHAO Y., LIU G., AND REES D., Integrated predictive control and scheduling co-design for networked control systems, IET Control Theory & Applications, vol. 2, p. 7–15, 2008.


1 Chapter written by Mongi BEN GAID, David ROBERT, Olivier SENAME and Daniel SIMON.
