4
Efficient Computations for (min,plus) Operators

As briefly presented in the Introduction, network calculus is based on computing (min,plus) operators on functions. For some classes of functions, the operations are simple to perform and explicit expressions are given in section 3.1. This is the case for token-bucket, rate-latency and pure delay functions, which are widely used in the network calculus framework. In some other cases, the functions are too complex and efficient algorithms need to be designed to perform the operations.

The aim of this chapter is to provide efficient algorithms to perform the (min,plus) operations. Algorithms can be given for very general functions. However, when functions have a simple enough shape, it is more efficient to use specific algorithms.

We will have a special focus on the class of convex or concave functions. We have already seen in section 3.3 that those functions have good properties with regard to the (min,plus) operators, and we can benefit from them. For example, a token-bucket function is concave and used to model a flow (or arrival process). In some other cases, it is more accurate to model flows by the minimum of two such functions. This is the case for the Traffic Specification (TSpec) [SHE 97] of the Internet Engineering Task Force (IETF): a flow is characterized by a peak rate p, an average or token rate r and a bucket depth b. The function modeling the flow is concave and represented in Figure 4.1. Similarly, the functions modeling a server will often be convex.

This chapter is organized as follows: in section 4.1, we define the classes of functions that we will study. In section 4.2, we mainly focus on the (min,plus) convolutions of concave and convex functions, as very efficient algorithms exist for these classes of functions.

image

Figure 4.1. IETF TSpec flow specification. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip
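Numerically, a TSpec curve of the shape of Figure 4.1 can be sketched as follows. This is only a toy illustration: the maximum packet size M is part of the standard IETF TSpec (the curve is the minimum of the two token buckets t ↦ pt + M and t ↦ rt + b), and the parameter values below are arbitrary.

```python
def tspec_curve(p, M, r, b):
    """Arrival curve t -> min(p*t + M, r*t + b) of an IETF TSpec flow
    (peak rate p, maximum packet size M, token rate r, bucket depth b).
    By convention, an arrival curve is 0 at t = 0."""
    def alpha(t):
        if t == 0:
            return 0.0
        return min(p * t + M, r * t + b)
    return alpha

alpha = tspec_curve(p=10.0, M=1.0, r=2.0, b=5.0)
# The two token buckets intersect at t = (b - M) / (p - r) = 0.5: before it
# the peak-rate constraint dominates, after it the token bucket does.
```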

Section 4.3 is devoted to the generic algorithms, which can be applied to any function, provided that the result can be finitely represented. For this reason, we restrict ourselves to a class of functions that is stable for all the operators.

Given a flow, several functions can represent it. The more accurate the representation, the more complex the function must be; hence, generic algorithms are useful. Restricting to convex and concave functions is sometimes a good trade-off. Section 4.4 is devoted to the quantification of this trade-off: functions are approximated by containers, and safe approximations can be computed in quasi-linear time.

Several other approaches have been developed to mitigate the computation cost while still having satisfactory accuracy. One solution is to use an accurate representation of an initial part of the functions, and only a linear approximation for the tail of the function [GUA 13, SUP 10, LAM 16]. Such approximations (see Proposition 5.13) are often sufficient. Another solution is to maintain two approximations of the same curve and to switch from one to the other, depending on the operation to compute [BOY 11b].

4.1. Classes of functions with finite representations

In this section, we introduce the classes of functions used in this chapter. Indeed, our aim is to have efficient representations of the functions, in order to efficiently compute operations on them. The first constraint is that functions can be finitely represented. A simple solution (that is not restrictive from the application point of view) is to deal with piecewise linear functions only. This way, a function is represented by a list of segments. The second constraint is the stability of the class of functions under the operators, and we will need to define other properties on the functions. In this section, we define all of the properties that we will need in the rest of the chapter, particularly for section 4.3.

Here, we consider functions with definition domain ℝ+ that take values in ℝ ∪ {+∞, −∞}. As we will consider dual operations (maximum and minimum), we do not use the dioid properties for infinite values, and some operations may not be defined (e.g. (+∞) + (−∞) is not well defined). Those cases are easily detected, and we will not consider them: if they are encountered, the result will be considered as undefined.

Asymptotic behavior of functions:

  • f is a linear function if there exist b and r such that, for all t ∈ ℝ+, f(t) = rt + b;
  • f is an ultimately linear function if there exist T ∈ ℝ+ and r such that, for all t ≥ T, f(t) = f(T) + r(t − T);
  • f is a pseudo-periodic function with period d and increment c if, for all t ∈ ℝ+, f(t + d) = f(t) + c;
  • f is an ultimately pseudo-periodic function with period d and increment c if there exists T ∈ ℝ+ such that, for all t ≥ T, f(t + d) = f(t) + c;
  • f is a plain function if either f(t) ∈ ℝ for all t ∈ ℝ+, or there exist T ∈ ℝ+ and an infinite value δ ∈ {+∞, −∞} such that, for t < T (or t ≤ T), f(t) ∈ ℝ and, for all t ≥ T (or t > T), f(t) = δ.

For ultimately linear and ultimately pseudo-periodic functions, T is called a rank of the function. It can be chosen to be minimal and, in that case, it is called the rank (from which the function is linear or pseudo-periodic). For pseudo-periodic and ultimately pseudo-periodic functions, d is called a period and c an increment. We call c/d the growth rate of the function. Remark that an (ultimately) linear function with slope r is (ultimately) pseudo-periodic with period 1 and increment r. If there exists a minimal d > 0 such that a function is (ultimately) pseudo-periodic, this minimal d is the period of the function. This minimal d does not exist if the function is (ultimately) linear.
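These definitions can be illustrated with a small numerical check (a hypothetical helper that only samples the property on a grid): the floor function is pseudo-periodic with period 1 and increment 1 (growth rate 1) but not linear, while a linear function of slope 3 is pseudo-periodic for every period d, with increment 3d.

```python
import math

def is_pseudo_periodic(f, d, c, T=0.0, horizon=50.0, step=0.125):
    """Numerically check f(t + d) = f(t) + c for all sampled t >= T,
    i.e. that f is (ultimately) pseudo-periodic with rank T, period d
    and increment c on the sampled grid."""
    t = T
    while t <= horizon:
        if abs(f(t + d) - (f(t) + c)) > 1e-9:
            return False
        t += step
    return True

floor_fn = math.floor            # pseudo-periodic: period 1, increment 1
linear_fn = lambda t: 3 * t + 1  # linear: period d, increment 3*d for any d
```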

Piecewise linear functions. A segment is a function whose support is an interval, which is linear on its support and equal to +∞ outside. For such a segment f, we denote by ρf the growth rate of f on its support, which is also called the slope of f. Note that we do not set any restriction on the shape of the interval. In particular, it can be open, closed, semi-open, a singleton or a semi-line.

A piecewise linear function is a function that can be written as the minimum of segments and that is the minimum of a finite number of segments on every finite interval.

The decomposition of a piecewise linear function f is a minimal sequence of segments with pairwise disjoint supports whose minimum is f. It is minimal in the sense that there are no two segments of the sequence whose minimum is itself a segment.

Let X and Y be two sets, and f be a piecewise linear function with decomposition (si). Let ai and bi be the respective left and right extremities of the support of si. Then, f belongs to the class F[X, Y] if, for all i, ai and bi belong to X and the slope of si belongs to Y. We will particularly focus on F[ℚ, ℚ], where ℚ is the set of rational numbers.

4.2. Piecewise linear concave/convex functions

The piecewise linear concave and convex functions are certainly the best trade-off between accuracy of modeling and simplicity of implementation. These functions are illustrated in Figure 4.2.

image

Figure 4.2. Piecewise linear functions: f is concave and g is convex. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

The class of concave piecewise linear functions has nice mathematical properties: it is stable under the addition and minimum operations. Moreover, the (min,plus) convolution (from Proposition 3.12(4)) can be implemented as a minimum plus a constant.

The class of convex piecewise linear functions has very similar properties, replacing minimum with maximum, and its (min,plus) convolution can also be implemented very efficiently, as will be shown in section 4.2.2.1.
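Proposition 3.12's shortcut for concave functions can be checked numerically on a toy example (the brute-force grid convolution below is only a verification device, not an efficient implementation): for two concave piecewise linear functions that are null at 0, the (min,plus) convolution is simply their pointwise minimum.

```python
def brute_minplus_conv(f, g, t, step=0.01):
    """Brute-force (min,plus) convolution inf_{0<=s<=t} f(s) + g(t - s),
    approximated over a discrete grid of mesh `step`."""
    n = round(t / step)
    return min(f(k * step) + g(t - k * step) for k in range(n + 1))

# Two concave piecewise linear functions with f(0) = g(0) = 0:
f = lambda t: min(2 * t, t + 1)
g = lambda t: min(3 * t, 0.5 * t + 2)

# For such functions, (f * g)(t) = min(f(t), g(t)).
```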

In the next section, we will see that there are two equivalent ways of describing piecewise linear concave functions, and in section 4.2.2, we will discuss the implementation of the (min,plus) convolution.

4.2.1. Representation of piecewise linear concave and convex functions

In this section, we assume that all of the functions considered are in image and ultimately linear. Considering functions in image is not a technical limitation, but this will ease the notations by replacing linear functions and potential discontinuities at 0 with token-bucket functions. Being ultimately linear ensures the finiteness of the representation of a function. Furthermore, the functions used in practice are in image. There are two natural ways to represent piecewise linear concave functions:

  • by a finite list of token-bucket functions (the concave function is then the minimum of those functions);
  • by a finite list of segments, which are open on the left (to take into account the potential discontinuity at 0). This representation explicitly contains useful information about the intersections of the token buckets of the former representation.

Let us first give the equivalence between those two representations and then discuss them.

DEFINITION 4.1 (Concave piecewise linear normal form).– Let b1, …, bn and r1, …, rn be non-negative numbers and set γi : t ↦ ri t + bi for t > 0, with γi(0) = 0 (the token-bucket function of rate ri and depth bi). The piecewise linear concave function

f = min(γ1, …, γn)

is said to be in normal form if the γi are sorted by decreasing slope (r1 > r2 > ⋯ > rn) and no γi can be removed without modifying the minimum:

for all i ∈ {1, …, n}, min over j ≠ i of γj ≠ f.   [4.3]

If f = min(γ1, …, γn) is in normal form, we can define the sequence (ti)1≤i≤n−1 of the respective intersection points of γi and γi+1:

ti = (bi+1 − bi) / (ri − ri+1).

PROPOSITION 4.1 (Concave piecewise linear function properties).– Let f = min(γ1, …, γn) be a piecewise linear concave function in normal form. The following properties hold:

  1) f is concave;
  2) the sequence (bi)1≤i≤n is increasing;
  3) the sequence (ti)1≤i≤n−1 (defined above) is increasing;
  4) the function is piecewise linear. More precisely, with t0 = 0 and tn = +∞,

f(t) = ri t + bi for t ∈ (ti−1, ti].
image

Figure 4.3. Concave piecewise linear functions. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

PROOF.– 1) is a direct consequence of Proposition 3.12.

2) If there exists i such that bi+1 ≤ bi, then, as ri+1 < ri, γi+1 ≤ γi, so condition [4.3] is not satisfied for i.

3) We again use the notation ti = (bi+1 − bi)/(ri − ri+1). As (ri) is decreasing and (bi) is increasing,

ti > 0 for all i.

Now, by contradiction, if there exists i such that ti+1 ≤ ti, then γi+1 ≥ min(γi, γi+2) everywhere, which contradicts condition [4.3] of normal form.

4) As (ti) is increasing, for all t ∈ (ti−1, ti] and all j, γj(t) ≥ γi(t). Therefore, for all t ∈ (ti−1, ti], f(t) = ri t + bi.

This proposition leads to two representations of a piecewise linear and concave function f: either f = min(γ1, …, γn), and the representation is the list (ri, bi)1≤i≤n, or f is seen as a list of segments, and the representation is the list (ti, ri, bi)1≤i≤n. In the latter case, the information is redundant, as the (ti) can be recovered from the list (ri, bi)1≤i≤n.

Depending on the operation to be implemented, it will be more interesting to use a representation with a list of linear functions or a list of segments. For example, the minimum can be efficiently implemented with a list of linear functions: it is the concatenation of the two lists. To obtain a normal form, it suffices to merge the two sorted lists (which can be done in time linear in the number of functions, eliminating the useless functions on the fly). The (min,plus) convolution is similar.

For computing the sum of two functions, using the linear representation would lead to a quadratic time algorithm (the minimum distributes over the sum). It is possible to define a more efficient algorithm using the segments representation. The number of segments of the result is linear in the total number of segments of the operands (see section 4.3.2 for details).

The class of convex piecewise linear functions can be defined as the maximum of rate-latency functions, defining intersection points similarly. In this class, the maximum of two functions is the concatenation of the linear representations, but the (min,plus) convolution is efficiently implemented with the segment representation, as will be explained in section 4.2.2.1.

4.2.2. (min,plus)-convolution of convex and concave functions

We have already seen in Proposition 3.12 that the convolution of two concave functions is the minimum, up to an additive constant, of these functions.

In this section, we focus on the convolution of two convex functions and of a convex function by a concave function.

4.2.2.1. Convolution of two convex functions

The convolution of two convex functions can be efficiently implemented with the segment representation. The most classical proof uses the Legendre–Fenchel transform (see, for example, [BOU 08b]), but we present here a direct and algorithmic proof.

THEOREM 4.1.– Let f and g be two convex piecewise linear functions. Let ρf (resp. ρg) be the (possibly infinite) slope of the last semi-infinite segment of f (resp. g). The convolution f ∗ g consists of concatenating all of the segments of slope at most min(ρf, ρg), in the increasing order of the slopes, starting from f(0) + g(0), including the semi-infinite segment.

This computation is illustrated in Figure 4.4.

image

Figure 4.4. Convolution of two piecewise linear convex functions. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

PROOF.– Consider the first segment of f of slope image, and the first segment of g of slope image, and suppose without loss of generality that image and that the length of the first segment of f is image. Then, for every image,

eq

As g is convex, image, so image.

If image, then image is a linear function consisting of the infinite segment of f (with the smallest slope).

Otherwise, image consists of the segment of the smallest slope on image.

Let t > image and suppose that image and image.

eq

But image, so image and u can be chosen to not be smaller than image.

Therefore, we can always write

eq

with image. In other words, image is constructed from image by removing the first segment in image (and image = 0). Function image is also convex, so we can iteratively compute the remainder of image * g. The segments of image and g are clearly concatenated in increasing order of the slopes. If there is a segment of infinite length, then the segments of greater slopes are ignored.
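The merge described in Theorem 4.1 can be sketched as follows. This is a minimal implementation under the assumption that each convex function is given by a start point and a list of (slope, length) segments sorted by increasing slope, the last one of infinite length; the helper names are ours.

```python
import math

def convex_pl(x0, y0, segments):
    """Evaluator for a convex piecewise linear function given by its start
    point (x0, y0) and (slope, length) pairs sorted by increasing slope;
    the last length is math.inf (semi-infinite segment)."""
    def f(t):
        y, rem = y0, t - x0
        for slope, length in segments:
            step = min(rem, length)
            y += slope * step
            rem -= step
            if rem <= 0:
                break
        return y
    return f

def convex_convolution(x0f, y0f, segf, x0g, y0g, segg):
    """(min,plus) convolution by Theorem 4.1: keep the segments of slope at
    most min(last slope of f, last slope of g), concatenated in increasing
    slope order, starting from (x0f + x0g, y0f + y0g)."""
    rho = min(segf[-1][0], segg[-1][0])  # slope of the resulting last segment
    finite = [s for s in segf[:-1] + segg[:-1] if s[0] <= rho]
    return x0f + x0g, y0f + y0g, sorted(finite) + [(rho, math.inf)]

# f(t) = t on [0, 2], then slope 3 ; g(t) = 2t on [0, 1], then slope 4
x0, y0, seg = convex_convolution(0, 0, [(1, 2), (3, math.inf)],
                                 0, 0, [(2, 1), (4, math.inf)])
# seg == [(1, 2), (2, 1), (3, inf)]: segments merged in increasing slope order
```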

4.2.2.2. Convolution of a convex function by a concave function

The convolution of a concave function by a convex function is more involved. However, it is possible to compute it in time O(n log n). Up to now, it is not directly useful in the network calculus framework, but it has been used for the fast (min,plus) convolution in the Hamiltonian system [BOU 16a].

The convolution of a segment by a piecewise linear convex function can be deduced from the convolution of two convex functions in Theorem 4.1. Let f : [a, b] → ℝ be a convex piecewise linear function, represented by a list of segments fi : [ai, ai+1] → ℝ, i ∈ {1, …, n}, and let g : [c, d] → ℝ be a segment of slope ρg. Then, f ∗ g : [a + c, b + d] → ℝ is a convex piecewise linear function defined by

f ∗ g(t) = f(t − c) + g(c)   if t ≤ u + c,
f ∗ g(t) = f(u) + g(t − u)   if u + c < t ≤ u + d,
f ∗ g(t) = f(t − d) + g(d)   if t > u + d,

where u = min{ai ∈ [a, b] | the slope of fi is at least ρg} (with u = b if no such ai exists).

Figure 4.5 illustrates this construction. In the rest of the section, we will use the decomposition of such a convolution into three parts: f ∗ g = min(g1, gc, g2), where

g1 : [a + c, u + c] → ℝ,  g1(t) = f(t − c) + g(c),
gc : [u + c, u + d] → ℝ,  gc(t) = f(u) + g(t − u),
g2 : [u + d, b + d] → ℝ,  g2(t) = f(t − d) + g(d).
image

Figure 4.5. Convolution of a convex function by a linear function and decomposition into three functions. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

In other words, g1 is composed of the segments of f whose slope is strictly less than that of g, gc corresponds to the segment g and g2 is composed of the segments of f whose slope is greater than or equal to that of g. Note that gc is also concave.

Now, suppose that g : [c, d] → ℝ is a concave piecewise linear function represented by the list of segments gj : [cj, cj+1] → ℝ, j ∈ {1, …, m}. Then, by distributivity (Lemma 2.1), and as g = min(g1, …, gm),

f ∗ g = min over 1 ≤ j ≤ m of (f ∗ gj).

Let us study how these elementary convolutions f ∗ gj are positioned with respect to one another. The following lemma, which considers two consecutive segments of g, leads to an efficient algorithm to compute the convolution of a convex function by a concave function.

LEMMA 4.1.– Consider the convolutions image * gj-1 and image * gj. Let

eq

and

eq

Then:

  • image;
  • image.

PROOF.– First, as image is convex and image, we have uj-1 image uj. From the above section, for all t image cj + uj,

eq

Either t image cj-1 + uj-1, then image * gj-1 (t) = image (t - cj-1) + g(cj-1) and

eq

as tcj–1 image uj-1, the derivative of image at t - cj-1 is less than image, so image (t - cj-1) - image (t - cj) image image.(cj - cj-1) and image * gj(t) - image * gj-1(t) image 0;

or t > cj-1 + uj-1, then image * gj-1(t) = image (uj-1) + g(t - uj-1). As cj-1 < t - uj-1 image t - uj image cj, we have g(cj) - g(t - uj-1) = image . (cj + uj-1 - t). Moreover, by definition of uj-1, the derivative of image before uj-1 is less than image and image(uj-1) - image (t - cj) image image .(cj + uj-1 - t), so finally, image * gj(t) - image * gj-1(t) image 0.

The second statement can be proved similarly.

image

Figure 4.6. Convolution of a convex function by a concave function. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

Another formulation of Lemma 4.1 is that image and image, and that the two functions intersect at least once. Hence, image and image cannot appear in the minimum of image * gj and image * gj-1. By transitivity, there is no need to compute entirely the convolution of the convex function by every linear component of the decomposition of the concave function. If there are more than two segments, successive applications of this lemma show that only the position of the segments of the concave function has to be computed, except for the extremal segments.

The intersection of f ∗ gj-1 and f ∗ gj can then happen in one and only one of the following four cases:

  1) image and image intersect;
  2) image and image intersect;
  3) image and image intersect;
  4) image and image intersect.

As a direct consequence of this lemma, a more precise shape of the convolution of a convex function by a concave function can be deduced.

THEOREM 4.2 (Convolution of a convex function by a concave function).– The convolution of a convex function by a concave function can be decomposed in three (possibly trivial) parts: a convex function, a concave function and a convex function.

A formal proof of these results can be found in [BOU 16a]. The main idea is to proceed by induction, using Lemma 4.1 for the initialization.

If the concave function is composed of m segments and the convex function of n segments, then the convolution of those two functions can be computed in time O(n + m log m). The log m term comes from the fact that we have to compute the minimum of m segments (see [BOU 08c] for more details).

Note that the shape of the function is simplified when the functions have infinite supports (b = +∞ or d = +∞). Indeed, either the last slope of f is less than the last slope of g, in which case each slope of f is smaller than each slope of g and f ∗ g = f + g(0) (assuming without loss of generality that a = c = 0); or the last slope of g is less than the last slope of f, in which case f ∗ g is composed of a convex function and a concave function only, as the convolution by the last segment of g does not make a final convex part appear.

4.3. A stable class of functions

The previous section was devoted to simple but limited classes of functions, where some operations can be efficiently implemented.

Unfortunately, it is necessary to define these operations in a more general setting. Indeed, convex and concave functions are not stable under every operation, and it is also useful and more accurate to deal with more general functions, such as periodic functions (e.g. staircase functions).

In this section, we give algorithms for a larger class of functions. This class of functions must satisfy two properties:

  1) stability under all of the operations that we will be using: minimum, maximum, addition, subtraction, (min,plus) convolution, (min,plus) deconvolution and sub-additive closure;
  2) finite representability, so that the operations can be implemented.

Of course, we would like the class to be as simple as possible. In the following sections, we will define such a class, and first give examples that explain why this class is not that simple. A precise study of those algorithms can be found in [BOU 08c] and, in this chapter, we first review the main results of this article.

4.3.1. Examples of instability of some classes of functions

In section 4.1, numerous classes of functions have been defined. We justify here, through examples, why they had to be defined. As we want a simple and finitely representable class, it is natural to focus on piecewise linear functions that ultimately have a periodic behavior. Indeed, those functions can be represented by a finite sequence of points and slopes. This section enumerates some examples of classes of functions that are not stable. More counterexamples can be found in [BOU 07a]. This section also justifies the class of functions we defined at the beginning of section 4.1.

Linear functions are not stable. The class of linear functions is of course not stable. For example, consider t ↦ min(3t, 5t − 2), which is not linear.

Ultimately linear functions are not stable. Consider

eq

Then,

eq

where n ∈ ℕ and t ∈ [0, 2). Clearly, f* is ultimately pseudo-periodic, but not ultimately linear. f and f* are represented in Figure 4.7.

These two examples show that we have to choose sets of functions that are large enough. The next examples show that we have to restrict the space of functions we want to consider for a further analysis.

Piecewise linear functions are not stable. Consider a function h defined on [0, 1) that is convex and C¹ on this interval (continuous and with a continuous derivative). We will build two piecewise linear functions f and g such that the deconvolution of f by g is h on [0, 1), showing that piecewise linear functions are not stable.

As h is convex and C¹ on [0, 1), for all t ∈ [0, 1), h(t) can be expressed as

h(t) = sup {h(q) + h′(q)(t − q) | q ∈ [0, 1) ∩ ℚ},

by density of ℚ in ℝ. As [0, 1) ∩ ℚ is denumerable, there exists a sequence (qn) that enumerates all of those elements. We now use this sequence to construct f and g such that the deconvolution of f by g is h.

image

Figure 4.7. Ultimately linear functions are not stable by the sub-additive closure. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

For nimage, set image if t ∈ [0,1) and = –∞ otherwise and gn(2n + t) = image if t ∈ [0,1) and = +∞ otherwise. Note that for t ∈ [0,1),

eq

Define image and image. For all t ∈ [0,1), image. But, as t ∈ [0,1), it also holds that

eq

Figure 4.8 illustrates this construction. Here, we have used functions that take infinite values. In [BOU 07a], it is proved that f and g can be chosen finite and non-decreasing.

To avoid this problem, we will restrict ourselves to the class of functions that are ultimately pseudo-periodic and piecewise linear. This way, every function has only a finite number of slopes, and such a construction becomes impossible.

image

Figure 4.8. A C¹ function obtained from the deconvolution of piecewise linear functions. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

Ultimately pseudo-periodic linear functions are not stable. It is a classical result that if f and g are two periodic functions, f + g is periodic if and only if f and g have commensurable periods, i.e. periods df and dg such that df/dg ∈ ℚ. Therefore, we will only consider piecewise linear functions that only have discontinuities at rational coordinates and rational slopes.

Non-plain functions are not stable. Finally, we also have to consider plain functions. Indeed, if functions are not plain, it is possible to construct functions whose combination is not ultimately pseudo-periodic. For example, consider a periodic function f and the non-plain function g defined by g(t) = 0 if there exists n ∈ ℕ such that t ∈ [2n, 2n + 1], and g(t) = +∞ otherwise. Then, min(f, g) is not pseudo-periodic, as depicted in Figure 4.9.

image

Figure 4.9. Periodic non-plain functions are not stable. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

4.3.2. Class of plain piecewise linear ultimately pseudo-periodic functions

In this section, we show that the class of plain piecewise linear ultimately pseudo-periodic functions with rational breakpoints and slopes is stable. As these functions can be finitely represented, it is possible to algorithmically define the (min,plus) operators on these functions. For this, we first propose a data structure to represent them.

4.3.2.1. Data structure

Let f be a plain ultimately pseudo-periodic piecewise linear function with rational breakpoints and slopes. We consider its minimal decomposition into spots (segments whose support is a singleton) and open segments (segments whose support is an open interval of ℝ+); outside of their support, segments are assumed to take the value +∞. Moreover, as f is ultimately pseudo-periodic, there exist Tf, df and cf such that f(t + df) = f(t) + cf for all t ≥ Tf. Hence, it is possible to represent this function f with a finite set of segments and spots. More precisely, every spot and open segment si is represented by a quadruplet (xi, f(xi), f(xi+), ρi), where xi is the left extremity of its support and ρi represents the slope of si, and only the quadruplets corresponding to the transient part and one period are stored. To reconstruct the periodic part of the function, it suffices to store the rank Tf, the period df and the increment cf of the function.

The size of the representation of a function f is denoted by Nf(T) and is the number of quadruplets used to represent it. Note that this size strongly depends on the choice of the rank T, hence the notation. However, we write Nf for the size of the smallest representation of f, obtained when T = Tf, the rank of f. We will have to extend the function beyond the rank to perform the operations.

To simplify the proofs of stability, we also use the decomposition of a function f into its transient part ft and its periodic part fp: f = min(ft, fp), where ft coincides with f on [0, Tf) and is +∞ elsewhere, and fp coincides with f on [Tf, +∞) and is +∞ elsewhere.

For the sake of simplicity and without loss of generality, we also assume that the periods of the functions are integers: as they are assumed rational, it suffices to rescale the abscissa.
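Under the simplifying assumption of continuous pieces (no spots or discontinuities), this data structure can be sketched as follows; the class and field names are hypothetical.

```python
import bisect

class UPPFunction:
    """Plain, piecewise linear, ultimately pseudo-periodic function.
    Stored: breakpoints (x, y, slope) covering [0, T + d), plus the rank T,
    period d and increment c; values beyond T + d are reconstructed with
    f(t + d) = f(t) + c.  (Simplified sketch: continuous pieces only.)"""

    def __init__(self, breakpoints, T, d, c):
        self.bp = sorted(breakpoints)   # list of (x, y, slope)
        self.xs = [x for x, _, _ in self.bp]
        self.T, self.d, self.c = T, d, c

    def __call__(self, t):
        shift = 0.0
        while t >= self.T + self.d:     # pull t back into [0, T + d)
            t -= self.d
            shift += self.c
        x, y, slope = self.bp[bisect.bisect_right(self.xs, t) - 1]
        return y + slope * (t - x) + shift

# Example: f(t) = t on [0, 1), then a flat/sloped pattern that repeats with
# period 2 and increment 1 from the rank T = 1.
f = UPPFunction([(0.0, 0.0, 1.0), (1.0, 1.0, 0.0), (2.0, 1.0, 1.0)],
                T=1.0, d=2.0, c=1.0)
```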

4.3.2.2. Main theorem and ideas for the algorithms

THEOREM 4.3.– The class of the plain piecewise linear ultimately pseudo-periodic functions with rational breakpoints and slopes is stable under the operations of minimum, maximum, addition, subtraction, (min,plus) convolution, (min,plus) deconvolution and sub-additive closure.

The rest of the section sketches the proof of this theorem. A complete proof can be found in [BOU 08c].

Consider f and g, two plain ultimately pseudo-periodic functions of this class. It is obvious that the combination of two plain functions is plain, and we will not prove this.

Addition, subtraction of functions

LEMMA 4.2.– f + g is plain and ultimately pseudo-periodic from max(Tf, Tg), with period d = lcm(df, dg) and increment c = (d/df)·cf + (d/dg)·cg.

PROOF.– For all t ≥ max(Tf, Tg),

(f + g)(t + d) = f(t + d) + g(t + d) = f(t) + (d/df)·cf + g(t) + (d/dg)·cg = (f + g)(t) + c,

using the classical equality f(t + k·df) = f(t) + k·cf for all k ∈ ℕ and t ≥ Tf. The subtraction is similar.

The addition can be done in time linear in the sizes of the representations. Indeed, set T = max(Tf, Tg) and d = lcm(df, dg). As f + g is pseudo-periodic from T with period d, it is sufficient to compute the addition on the interval [0, T + d]. One way to compute f + g is to merge the lists of quadruplets of f and g on [0, T + d] and to compute the addition at each spot, as well as the slope between two consecutive spots. It can be done through a single pass over f and g. The complexity of this algorithm is O(Nf(T) + Ng(T)).

The subtraction can be done the same way.
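On two hypothetical staircase examples, the period and increment given by Lemma 4.2 can be checked numerically (here df = 1, cf = 1 and dg = 3/2, cg = 3, so d = lcm(1, 3/2) = 3 and c = 3·(cf/df) + 3·(cg/dg) = 9):

```python
import math
from fractions import Fraction

def rational_lcm(a, b):
    """lcm of two positive rationals: the smallest positive rational
    that is an integer multiple of both."""
    a, b = Fraction(a), Fraction(b)
    return Fraction(math.lcm(a.numerator, b.numerator),
                    math.gcd(a.denominator, b.denominator))

f = lambda t: math.floor(t)            # pseudo-periodic: df = 1,   cf = 1
g = lambda t: 3 * math.floor(t / 1.5)  # pseudo-periodic: dg = 3/2, cg = 3

d = rational_lcm(1, Fraction(3, 2))           # period of f + g: 3
c = (d / 1) * 1 + (d / Fraction(3, 2)) * 3    # increment of f + g: 9

# check (f + g)(t + d) = (f + g)(t) + c on a sample grid
for k in range(40):
    t = k * 0.25
    assert f(t + 3) + g(t + 3) == f(t) + g(t) + 9
```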

Minimum, maximum of functions

LEMMA 4.3.– min(f, g) is plain and ultimately pseudo-periodic, and:

  • if cf/df < cg/dg, then ultimately, min(f, g) = f, with period df and increment cf;
  • if cg/dg < cf/df, then ultimately, min(f, g) = g, with period dg and increment cg;
  • if cf/df = cg/dg, then min(f, g) is ultimately pseudo-periodic from max(Tf, Tg), with period d = lcm(df, dg) and increment c = (d/df)·cf.

PROOF.– The first two cases are symmetric, so we focus on the first and third ones. Set T = max(Tf, Tg), d = lcm(df, dg) and c = (d/df)·cf. In the third case, for all t ≥ T,

min(f, g)(t + d) = min(f(t + d), g(t + d)) = min(f(t) + c, g(t) + c) = min(f, g)(t) + c.

In the first case, f(t + n·df) = f(t) + n·cf for all t ≥ T, g grows at rate cg/dg and, by hypothesis, cf/df < cg/dg. Therefore, from some rank T′ ≥ T,

min(f, g)(t) = f(t),

so min(f, g) is ultimately pseudo-periodic with period df and increment cf, and the rest follows.

As shown in the proof, in case the two functions do not have the same growth rate, it is not easy to compute in advance the rank from which min(f, g) is pseudo-periodic (the proof only gives an upper bound). Algorithmically, the solution is to go through the quadruplets of functions f and g in order, computing the intersections of segments, and extending the functions by one period at a time when the transient part of a function has been treated. When the minimum coincides with f (or g) on a complete period of f (or g), let us say from time T, min(f, g) has become pseudo-periodic. The complexity is then linear in the size of the computed representation.

Convolution of functions

LEMMA 4.4.– f ∗ g is ultimately pseudo-periodic with period d = lcm(df, dg) and increment d·min(cf/df, cg/dg).

PROOF.– We use the decomposition of f and g into their transient and periodic parts:

f ∗ g = min(ft ∗ gt, ft ∗ gp, fp ∗ gt, fp ∗ gp).

As ft and gt have a finite support, so does ft ∗ gt, which is then ultimately equal to +∞.

Let us now focus on ft ∗ gp. For all t ≥ Tf + Tg, (ft ∗ gp)(t + dg) = inf over s of (ft(s) + gp(t + dg − s)). Owing to the support of ft, s can be taken to not be greater than Tf, so t + dg − s ≥ Tg + dg, so gp(t + dg − s) = gp(t − s) + cg and ft ∗ gp is ultimately pseudo-periodic with period dg and increment cg. Similarly, fp ∗ gt is ultimately pseudo-periodic with period df and increment cf.

Finally, let us show that fp ∗ gp is pseudo-periodic. For all t ≥ Tf + Tg,

(fp ∗ gp)(t + d) = inf over s of (fp(s) + gp(t + d − s)) = (fp ∗ gp)(t) + d·min(cf/df, cg/dg).

f ∗ g is then the minimum of four ultimately pseudo-periodic functions of the class, so it is itself ultimately pseudo-periodic from Lemma 4.3.

Algorithmically, a generic way of computing the convolution of two functions is to compute the convolution of each pair of segments of f and g. The (min,plus) convolution of two segments is a direct consequence of Theorem 4.1: a segment is convex. Just note that this result also holds for open segments. The convolution of two segments can be done in constant time, and then, by distributivity of the convolution over the minimum, the minimum of all these convolutions has to be computed. Using Lemma 4.3, it is possible to compute the rank T from which f ∗ g will be ultimately pseudo-periodic. The number of segment convolutions to compute is then in O(Nf(T)·Ng(T)), and the minimum of all these functions has to be computed. The use of a divide-and-conquer approach leads to an efficient computation of this minimum.

It is also possible, if it makes sense, to use the shape of the function, and decompose it into convex and concave parts in order to use efficient convolution algorithms from Theorems 4.1 and 4.2.

Deconvolution of functions

LEMMA 4.5.– f ⊘ g is ultimately pseudo-periodic from Tf, with period df and increment cf.

PROOF.– For all t ≥ Tf,

(f ⊘ g)(t + df) = sup over u ≥ 0 of (f(t + df + u) − g(u)) = sup over u ≥ 0 of (f(t + u) + cf − g(u)) = (f ⊘ g)(t) + cf.

To compute the deconvolution of two functions f and g, we also use the properties of the deconvolution from Propositions 2.7 and 2.8, and the following lemma for the deconvolution of segments.

LEMMA 4.6.– Let f and g be two segments, respectively defined on the intervals I and J. Let a = inf I and d = sup J. Then, f ⊘ g is defined on an interval whose left extremity is a − d and consists of concatenating the segments f and g from the point (a − d, f(a) − g(d)), with the segment with the larger slope first.

Note that, in this lemma, outside J, we set by default g = +∞.

PROOF.– First, f ⊘ g(t) is defined if there exists u ∈ J such that t + u ∈ I, and finite if f(t + u) and g(u) are both finite for some such u. We use the additional notations b = sup I and c = inf J, and recall that ρf (resp. ρg) is the slope of f (resp. g).

eq

Either image, and u has to be maximized, if imageimage and if image or image, and u has to be minimized: if image, and if image.

Similarly to the convolution, we decompose the functions into segments and spots, perform the elementary deconvolutions on these and compute the maximum of the results.

Sub-additive closure of a function

Computing the sub-additive closure of a function is more complex. The following lemma gives a trick to compute the sub-additive closure with a finite number of convolutions of sub-additive closures of segments and spots. This result is unpublished and credited to Sébastien Lagrange.

LEMMA 4.7.– Let f and g be two functions. Then, (min(f, g))* = f* ∗ g*.

PROOF.– Recall that e is the unit element of the dioid of (min,plus) functions. From Proposition 2.6, f* = inf over n ≥ 0 of f^n, and f^0 = e for all f. Therefore, by commutativity of the (min,plus) convolution,

(min(f, g))* = inf over n ≥ 0 of (min(f, g))^n = inf over n ≥ 0 of min over k + l = n of (f^k ∗ g^l) = (inf over k ≥ 0 of f^k) ∗ (inf over l ≥ 0 of g^l) = f* ∗ g*.

We can now apply the lemma by slightly modifying the representation of a function f. We still have ft as its transient part, but we decompose the periodic part as fp = f1p ∗ fs*, where f1p is the restriction of f to [Tf, Tf + df) (it is only the first period), and fs is a spot defined at df with fs(df) = cf. Therefore, f can be expressed as f = min(ft, f1p ∗ fs*). From Lemma 4.7, only the sub-additive closure of functions composed of a finite number of spots and segments has to be computed, which boils down to computing the sub-additive closure of spots and segments (using Lemma 4.7 again).

LEMMA 4.8.– Let f be a spot: f is defined on {d} and f(d) = c. Then, f* is a function defined on image and f*(nd) = nc for all image.

PROOF.– For all image, fn is a spot defined at nd and image, so the result straightforwardly holds since image.
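Lemma 4.8 is easy to check numerically. The sketch below represents a spot by its pair (d, c) and returns a finite truncation of its closure as a Python dict; the truncation bound `n_max` is an assumption of the sketch, since f* has infinitely many points:

```python
def spot_closure(d, c, n_max):
    """Sub-additive closure of the spot defined on {d} with value c:
    f*(n*d) = n*c, truncated at n = n_max. The value at 0 comes from the
    unit element e contained in any sub-additive closure."""
    closure = {0: 0}
    for n in range(1, n_max + 1):
        closure[n * d] = n * c  # f^n is a spot at n*d with value n*c
    return closure
```

For instance, a spot at d = 3 with value c = 2 yields the points (3, 2), (6, 4), (9, 6), ...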

LEMMA 4.9.– If f is a segment defined on an open interval, then f * is ultimately pseudo-periodic.

PROOF.– Suppose that f is defined on (a, b) and that image for all image. For all image, fn is an open segment of slope f' defined on (na, nb). We distinguish two cases, depending on how f'a compares to f(a).

First, assume that image. This is equivalent to image. Let image. Then, f* is ultimately pseudo-periodic from n0b, with period b and increment f'b: for all image

eq

wherever image is defined, that is, for t > b. Therefore, if imageimage.

If image, then f* is ultimately pseudo-periodic from n0a, with period a and increment f'a: for all image,

eq

Therefore, for image.

The sub-additive closure is algorithmically the most costly operator: even knowing the rank from which a function is pseudo-periodic is an NP-hard problem. For a function that is 0 everywhere except at a finite set image, it is equivalent to solving the Frobenius problem image, which is proved to be NP-hard in [RAM 96].
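To illustrate where the hardness comes from, the Frobenius number can be computed by brute force for small instances. The sketch below uses a simple dynamic program over reachable sums; the search bound `limit` is an assumption of the sketch, valid whenever it exceeds the Frobenius number:

```python
def frobenius_number(coins, limit=10000):
    """Largest integer not representable as a non-negative integer
    combination of `coins` (assumes gcd(coins) == 1 and limit large enough).
    Brute-force illustration only: the underlying problem is NP-hard."""
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for n in range(1, limit + 1):
        reachable[n] = any(c <= n and reachable[n - c] for c in coins)
    return max(n for n in range(limit + 1) if not reachable[n])
```

For instance, `frobenius_number([6, 9, 20])` returns 43: every integer above 43 is a sum of 6s, 9s and 20s, but 43 is not.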

4.3.3. Functions with discrete domain

We discussed our choice of using image as the time domain in section 1.3. Indeed, there is in general no assumption of a common clock synchronizing a network. However, for some applications where a synchronizing clock exists, such as systems-on-chip, it may be more accurate to use the time domain image.

Let image denote the set of functions of a discrete time domain. All the (min,plus) operators can be defined similarly to the continuous case, and are denoted image. For example, imageimage.

From an algebraic viewpoint, the associativity, commutativity and distributivity properties of the (min,plus) operators remain, so that image is a dioid. Some properties are even simpler to obtain, as, for example, the (min,plus) convolution is now computed as a minimum and not an infimum.
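On the discrete time domain, the convolution is indeed a plain minimum over finitely many terms. A minimal sketch, assuming the functions are given as Python callables on the natural numbers (the token-bucket parameters b = 3, r = 1 in the example are illustrative, not from the text):

```python
def minplus_conv_discrete(f, g, n_max):
    """(f * g)(n) = min over 0 <= k <= n of f(k) + g(n - k): a true minimum,
    not an infimum, since only n + 1 terms are involved."""
    return [min(f(k) + g(n - k) for k in range(n + 1))
            for n in range(n_max + 1)]

# Example: a rate function f(n) = 2n and a token-bucket-like function g.
f = lambda n: 2 * n
g = lambda n: 0 if n == 0 else 3 + n
```

With these two functions, `minplus_conv_discrete(f, g, 4)` gives [0, 2, 4, 6, 7]: the result follows the faster function 2n at first, then the token-bucket curve.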

It is also possible to link the results of the operators in the continuous-time domain and the discrete-time domain. For this, we must first define projection and interpolation between the two domains. For image, we denote by image its projection on image.

Conversely, different interpolations, illustrated in Figure 4.10, from image can be defined:

  1. the linear interpolation image;
  2. the left-continuous staircase function image;
  3. the right-continuous staircase function image.

We now can make explicit the relations between the projections and interpolations.

First, it was shown in [BOU 08c, Prop. 1] that for any operator image and any function image,

eq

and in [BOU 07b, Prop. 2] that if, moreover, f is non-decreasing,

eq

Note that these equalities only hold for expressions involving a single operator: for more complex expressions, performing all computations in the continuous-time domain on the interpolated functions and then going back to the discrete-time domain may not lead to the right result. An example is image: as a function in image, we have f(3) = 6, but as a function in image, we have f(3) = 7.
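The three interpolations listed above can be sketched as follows for a discrete function given as a Python callable; the function names are ours, not the book's notation:

```python
import math

def linear_interp(f, t):
    """1) Piecewise linear interpolation between f(floor(t)) and f(floor(t)+1)."""
    n = math.floor(t)
    return f(n) + (t - n) * (f(n + 1) - f(n))

def staircase_left(f, t):
    """2) Left-continuous staircase: constant equal to f(n) on (n - 1, n]."""
    return f(math.ceil(t))

def staircase_right(f, t):
    """3) Right-continuous staircase: constant equal to f(n) on [n, n + 1)."""
    return f(math.floor(t))

# Example discrete function: f(n) = n^2.
sq = lambda n: n * n
```

For sq at t = 1.5, the three interpolations give 2.5, 4 and 1, respectively; all three agree with sq at integer points.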

image

Figure 4.10. Three projections from discrete to dense time domain

From an algorithmic viewpoint, the choice of the time domain strongly depends on the shape of the functions: if the linear interpolation reduces the size of the representation, it might be efficient to use it. Otherwise, a discrete-time representation might be more efficient. The algorithms can easily be adapted to the discrete-time domain.

4.4. Containers of (min,plus) functions

In section 4.3, the class of plain piecewise linear ultimately pseudo-periodic functions was defined so as to be stable under the operations of network calculus (see Theorem 4.3). However, the exact computations of the minimum image, the convolution ∗ and the sub-additive closure .* can be really time- and memory-consuming, especially when the growth rates of the functions are very close to each other.

Let f and g be two plain piecewise linear ultimately pseudo-periodic functions such that image with rank image, and image with rank image. The inf-convolution h = f ∗ g is a plain piecewise linear ultimately pseudo-periodic function with Th = 496 (g(Th) = 473). There are 253 segments before the periodic part of h, because the inf-convolution needs that long to decide which function is really above the other.

Therefore, we propose here not to perform exact computations but to use operations called inclusion functions. The results of these operations are particular intervals called containers, which are guaranteed to contain the results of the exact computations.

More precisely, inclusion functions are defined both to contain the result of the exact computations in a guaranteed way and to be less time- and memory-consuming than the exact computations. This intrinsically adds pessimism compared to exact computations. However, the inclusion functions come with attractive algorithmic complexity and small storage requirements for the data structures, and they allow us to obtain containers that are as small as possible.

REMARK 4.1.– The work of this section was introduced in [LEC 14] with a strong connection to the dioid algebraic structure. For instance, the notation and terms sum or addition are used there instead of the notation and term minimum. Moreover, a notable difference is that the dioid order relation image is used instead of the natural order. This means that all the notations have to be inverted between this section and that paper.

4.4.1. Notations and context

All functions of this section belong to image, the set of non-negative and non-decreasing functions. Remember that image is a complete dioid.

image

Figure 4.11. A convex function image

Furthermore, the bounds of the containers are piecewise linear and ultimately linear. Among these functions, convex functions – with a constant part on image (see Figure 4.11) – and almost concave functions – almost because they are constant on the interval image and concave on image (see Figure 4.12) – are used. Convex functions belong to the set that we denote image, and almost concave functions to the set that we denote image. The asymptotic slope of a function image is the slope of its last semi-infinite segment, denoted image. This slope cannot be infinite.

image

Figure 4.12. A function image that is concave on image

A function of either of these sets can be seen as the following convolution:

where function image is called an elementary function and is defined by:

This means that these convex and almost concave functions can always be reduced to a piecewise linear convex or concave function, null at the origin and shifted in the plane by an elementary function. Function g can also be expressed as image.

During the computations, we will also need some approximation operators in order to bring the results back into the sets image and image. Let image be a non-decreasing function of image. On the one hand, image is the convex hull of image, i.e. the greatest convex function smaller than image. Hence, image. On the other hand, image is the concave approximation defined as follows:

where image and image is the smallest concave function greater than image on image. Hence, image.

Finally, image (respectively image) if image is an ultimately linear function, and image (respectively image) if image is an ultimately pseudo-periodic function with growth rate image.
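The convex hull operator on a piecewise linear function can be sketched with the classical lower-hull scan over its breakpoints. This is a simplification: it ignores the ultimately linear tail, which a full implementation must handle separately via the asymptotic slope:

```python
def lower_convex_hull(points):
    """Breakpoints of the greatest convex function below the piecewise
    linear function through `points`: a monotone-chain lower hull.
    `points` is a list of (x, y) pairs with distinct abscissas."""
    hull = []
    for x, y in sorted(points):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord to (x, y)
            if (y2 - y1) * (x - x1) >= (y - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    return hull
```

For instance, the point (1, 3) lies above the chord from (0, 0) to (2, 2), so it disappears from the hull. The scan visits each breakpoint once and pops each at most once, hence linear time on sorted input.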

4.4.2. The object container

In this section, we use the Legendre–Fenchel transform image introduced in section 3.3.2, and image is the set of functions that have the same Legendre–Fenchel transform as image.

DEFINITION 4.2 (Set F of containers).– The set of containers is denoted by F and defined by

eq

where the subset image is defined as

eq

Hence, a container of F is a subset of an interval image whose bounds are convex for image, almost concave for image, and have the same asymptotic slope image. A container of F is also a subset of an equivalence class image. This means that all elements of image are equivalent to image modulo the Legendre–Fenchel transform image.

Properties of the lower bound

The lower bound image plays a very important role in the definition of a container thanks to the following proposition:

PROPOSITION 4.2.– Two functions image have the same Legendre–Fenchel transform if and only if they have the same convex hull:

eq

Moreover, the convex hull of image is the least representative of image:

eq

PROOF.– First, image and image is an isotone mapping, so image (from Proposition 3.15 because image is a convex function). But image is the greatest convex function smaller than image and image is convex, so image. Thus, image.

Now, we deduce image.

Conversely, image is convex, so image. Therefore, image.

The second part of the proposition is equivalent to equation [3.14].    

This proposition allows us to say that, because image, we have image, which is equal to image. Consequently, for a function image, the non-differentiable points of image truly belong to image. They also have the same asymptotic slope (or growth rate). Moreover, we can see that image is the least representative of image and so of image.

Equivalence class image is graphically illustrated in Figure 4.13 by the gray zone. Functions that have the same non-differentiable points and the same asymptotic slope as image (which is an ultimately pseudo-periodic function in this figure) belong to this equivalence class, for which image is the least representative.
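Proposition 4.2 can be checked numerically: applying the Legendre–Fenchel transform twice returns the convex hull. A discrete sketch over finite grids of points and slopes (both grids are illustrative assumptions, and the identity holds only when the slope grid is fine and wide enough):

```python
def conjugate(f_vals, xs, slopes):
    """Discrete Legendre-Fenchel transform: f^(s) = max over x of s*x - f(x)."""
    return [max(s * x - fx for x, fx in zip(xs, f_vals)) for s in slopes]

def biconjugate(f_vals, xs, slopes):
    """f** sampled on xs: coincides with the convex hull of f
    (Proposition 4.2, modulo discretization of the slope grid)."""
    f_star = conjugate(f_vals, xs, slopes)
    return [max(s * x - fs for s, fs in zip(slopes, f_star)) for x in xs]
```

For the sampled points (0, 0), (1, 3), (2, 2), the biconjugate gives (0, 0), (1, 1), (2, 2): the chord from (0, 0) to (2, 2) replaces the non-convex point (1, 3), exactly as the convex hull does.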

image

Figure 4.13. Equivalence class image

Finally, the following proposition says that equivalence classes modulo the Legendre–Fenchel transform are actually elements of a dioid.

PROPOSITION 4.3 (Dioid image).– The quotient of the dioid image by the congruence image (defined in equation [3.13]) provides another dioid, denoted image. The elements of image are the equivalence classes modulo the Legendre–Fenchel transform image. The minimum and the convolution between them are such that:

By deduction, sub-additive closure of image is such that:

PROOF.– From Proposition 3.15, image. Therefore, if image and image, then image. So image and image, and image is well defined.

The same considerations apply to the convolution and to the sub-additive closure.    

As a result of this proposition, performing computations on the lower bounds of containers actually amounts to using the operations of the dioid image and to handling the canonical representatives (the least ones) of its elements, since the elements of image are equivalent to image modulo the Legendre–Fenchel transform image.

In addition, the following lemma provides some nice equivalences that will be used for the computations between containers. They directly follow from Propositions 4.2 and 4.3.

LEMMA 4.10.– Computing with elements of dioid image is equivalent to computing with convex hulls of functions. Formally, image:

Canonical representation

Now, by defining a canonical upper bound of image (image being the canonical lower bound of a container), it is possible to obtain a canonical representation of a container as follows:

PROPOSITION 4.4 (Canonical upper bound of image).– Let image be an equivalence class modulo the Legendre–Fenchel transform and let image be its least element, i.e. image. The piecewise constant function image, defined below as a sum of elementary functions, is a canonical upper bound of image:

where pairs (ti,ki) are the coordinates of the non-differentiable points of image. Therefore:

eq

This upper bound is illustrated by the bold segments in Figure 4.14, in which the gray zone represents the equivalence class image.

image

Figure 4.14. Upper bound image of the equivalence class image

It must be noted that image does not belong to image since the asymptotic slope of image, i.e. +∞, is not equal to the asymptotic slope of image, the least element of image.

DEFINITION 4.3 (Canonical representation of a container).– The canonical representation of a container image is

eq

Note that, with this representation, image.

The advantage of such a canonical representation is to remove some ambiguities. Indeed, if image, then image. Hence, as illustrated in Figure 4.15, image is the greatest element of the container image, and the interval image (the gray zone in Figure 4.15) contains the function image. Consequently, as illustrated in Figure 4.16, the same container of F can be represented by different intervals, image and image, as long as image. In other words, we can have

eq

In Figure 4.16, this canonical representation is given by the container image since image.

By taking the concave approximation of image, we remain in the set F and obtain a canonical representation of the objects we handle.

Data structure. This canonical representation is also the one chosen as the data structure for representing functions. Indeed, in addition to unambiguously representing a container of F, it allows useless points of image to be removed. In the example of Figure 4.16, the points of image and image graphically located above image can thus be removed in order to keep only the canonical representation image.

image

Figure 4.15. Greatest element image of the container

image

Figure 4.16. Canonical representation image of the identical containers image and image

Maximum uncertainty of a container

Due to the canonical representation of a container, the maximal distances in the time and data domains between image and image are finite, and the loss of precision due to approximations is bounded. This uncertainty is defined below, with the following convention: the abscissa of the last semi-infinite segment of an ultimately linear function image is called the rank of image, denoted image, and its corresponding ordinate is image.

DEFINITION 4.4 (Maximal uncertainty of a container f).– Let image be a container of F in its canonical form. Its maximal uncertainty is defined as follows in the time domain (see Figure 4.17):

eq

and in the data domain (see Figure 4.18):

eq
image

Figure 4.17. Maximal uncertainties image and image of a container f ∈ F when image and image

images

Figure 4.18. Maximal uncertainties images of a container images when images and images

4.4.3. Inclusion functions for containers

As stated at the beginning of this section, operations defined for the containers and called inclusion functions must be such that:

  • they contain the result of exact computations in a guaranteed way;
  • they are closed for the set F of containers;
  • they can be computed with an interesting algorithmic complexity;
  • they provide containers that are as small as possible.

The first two criteria are mathematically proven thanks to the properties of the handled functions. In particular, the closure condition is mainly fulfilled because of the dioid structure of the convex lower bounds of containers. The last two criteria are conditions that we impose on the definitions in order to get efficient and useful computations.

DEFINITION 4.5 (Inclusion functions images).– Let images and images be two containers of F given in their canonical forms. The inclusion functions of the minimum images and of the convolution are denoted images and images. They are defined by:

Now, let images be a container not necessarily given in its canonical form. The inclusion function of the sub-additive closure .* is denoted [∗] and defined by:

eq

with

images are elementary functions for which the pairs (ti, ki) are the non-differentiable points of images. The functions images are defined in equation [4.4]: images.

In this definition of the inclusion functions, the choice of using either the canonical form of the container or the interval without the concave approximation (i.e. images) is made in order to keep the complexity low in all cases.

Moreover, these inclusion functions do not necessarily provide canonical results. A simple way to obtain them is to compute, at the end, the concave approximation of the minimum between the obtained upper bound and the actual upper bound of the equivalence class of the lower bound. For instance, the canonical result of images is given by:

eq

THEOREM 4.4.– Let images and images be two containers of F. The inclusion functions images are such that:

eq

PROOF.– The entire proof is given in [LEC 14]. Here, we just give its general idea.

Lower bounds of inclusion functions

We saw that the lower bound of a container is the canonical representative of an element of images, since it is the least element of the equivalence class modulo images. Therefore, operations on elements of images are closed in the dioid images, and so the obtained lower bounds remain in the set F. In addition, because of equations [4.7]–[4.9], we know that the exact computations are included in the obtained containers. Indeed, the computations are made with the canonical representatives of the equivalence classes (the least ones), which means that they are valid for all the other functions of the equivalence classes. Therefore, images and images.

For the minimum and the sub-additive closure, equations [4.10] and [4.12] correspond exactly to the definitions of equations [4.14] and [4.16]. For the convolution, Proposition 3.13 allows us to say that if images, then images, so images. Hence, we do not need to make the concave approximation given in equation [4.11], but can simply perform the computation given in equation [4.15] directly.

Upper bounds of inclusion functions

  • For the minimum, according to equation [4.6] defining the concave approximation, and since images is order-preserving, it is clear that images and images, images, that images and that images.
  • For the convolution, remember that functions of images can always be factorized (see equation [4.4]). Therefore, we obtain images with:
    eq
    and images (by Proposition 3.12). Then, the minimum of two concave functions that take value 0 at 0, like f' and g', is concave (again by Proposition 3.12), which allows us to avoid the concave approximation of images. Moreover, the asymptotic slope of images is equal to that of f[*]g, and so to that of f * g. Finally, since ∗ is order-preserving, images and images, images.
  • For the sub-additive closure, things are a little trickier. The general idea is to use the greatest element of the container f, i.e. images. Then, through the use of the upper bound images, the non-differentiable points of images are introduced into the computation, which allows us to be as little pessimistic as possible. The following sub-additive closure is computed:
eq

In these lines, Lemma 2.6 and Proposition 3.12 are used for the equality g = g*; Proposition 2.6 is used to say that the sub-additive closure of a minimum of functions is equal to the convolution of the sub-additive closures of the functions; and we use the fact that the sub-additive closure is linked to the Kleene star operator (Theorem 2.3).

Some sub-additive closures of elementary functions appear in the last two lines. Unfortunately, these computations yield ultimately pseudo-periodic functions such that images, for which the rank is reached at t = 0 with growth rate images. The problem is that computing convolutions between such functions is not algorithmically easy. Hence, we propose to take their concave approximations and thus obtain functions that are easier to compute with (i.e. almost concave functions that take value 0 at 0):

eq

By doing all of these concave approximations, we obviously over-approximate the result we might otherwise obtain. However, these over-approximations allow us to compute a minimum of concave functions instead of a convolution, which is much better for the algorithmic complexity. Indeed, thanks to Proposition 3.12, we have the following equality:

eq

which is the computation of images. With this upper bound, we still have images, images; and images.

Algorithms and complexities

Now, let us deal with the ideas of the algorithms used for the computations of the inclusion functions. Let images and images be the numbers of non-differentiable points of images, images and images, respectively. We also call these numbers the sizes of the bounds (unlike in section 4.3, where the size is the number of segments). We will see that the algorithms run in linear or quasi-linear time in the size of the container bounds.

Minimum

Inclusion function images can be computed in linear time in the size of the bounds of the containers.

Indeed, the first step of the computation is to take the minimum of the lower bounds images and images and of the upper bounds images and images. This requires, in the worst case, a single scan of the non-differentiable points of each function, and so can be done in linear time in the size of the bounds.

Then, since the minimum of two convex functions is not necessarily convex, and the minimum of two almost concave functions is in general neither concave nor almost concave, we need to compute the convex hull of images and the concave approximation of images to get a result closed in F. This can also be done in linear time, using well-known convex hull algorithms.

Therefore, the complexity of the algorithm that computes the inclusion function images is in images.
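The merge step can be sketched as follows for two piecewise linear bounds given as breakpoint lists. This is a simplification in two ways: it does not insert the crossing points of the two graphs, which an exact implementation must add, and the naive `evaluate` scan makes this sketch quadratic, whereas a linear-time version walks both lists simultaneously:

```python
def piecewise_min(bp_f, bp_g):
    """Pointwise minimum of two piecewise linear functions given as
    breakpoint lists [(x, y), ...] over the same domain, sampled at the
    union of their breakpoint abscissas."""
    def evaluate(bps, x):
        for (x1, y1), (x2, y2) in zip(bps, bps[1:]):
            if x1 <= x <= x2:
                return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        raise ValueError("abscissa outside the represented domain")
    xs = sorted({x for x, _ in bp_f} | {x for x, _ in bp_g})
    return [(x, min(evaluate(bp_f, x), evaluate(bp_g, x))) for x in xs]
```

After this merge, the convex hull (for the lower bound) or the concave approximation (for the upper bound) is applied to the resulting breakpoints, as described above.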

Convolution

Inclusion function [∗] can be computed in linear time in the size of the bounds of the containers.

First, the computation of images is obtained by putting end to end the different linear pieces of images and images, sorted by non-decreasing slopes, until reaching the smaller of the asymptotic slopes of images and images (see Theorem 4.1). Hence, the computation only requires a simple scan of the functions1.
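This first step can be sketched for two convex piecewise linear functions given as lists of (length, slope) segments starting at the origin; the sketch ignores the semi-infinite last segments, which a full implementation handles via the asymptotic slopes:

```python
def conv_convex(segs_f, segs_g):
    """(min,plus) convolution of two convex piecewise linear functions,
    each given as a list of (length, slope) segments from the origin:
    concatenate all segments sorted by non-decreasing slope."""
    return sorted(segs_f + segs_g, key=lambda seg: seg[1])
```

For f with segments [(1, 0), (2, 3)] and g with [(2, 1)], the result is [(1, 0), (2, 1), (2, 3)]: the convolution first follows the flattest pieces of either function.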

Second, for the upper bound, the convolution is defined as images. The minimum between images and images can be computed with linear complexity, and the convolution by the elementary function (which corresponds to a shift in the plane) can be done in constant time, namely in images.

Therefore, the complexity of the algorithm that computes the inclusion function [∗] is in images.

Sub-additive closure

The algorithmic complexity of [∗] is given separately for the two bounds images and images.

First, the lower bound images is obtained in linear time in the size of images. Indeed, its computation only requires finding its asymptotic slope, by checking the images non-differentiable points of images and the asymptotic slope images. Therefore, the complexity of the algorithm that computes images is in images.

Second, the upper bound images is obtained in quasi-linear time. Indeed, the computation of images, the first part of equation [4.17], is performed with a divide-and-conquer algorithm. Therefore, by taking minima of images two by two, the computation of the n functions is solved in images. According to the second part of the equation, the minimum between images and g (from images) is computed in linear time, namely images, where Ng is the number of non-differentiable points of g. Then, the shift of images is in images, as is the minimum with e (the unit element of the dioid). The concave approximation does not modify this result, since its computation is in linear time. Thus, the complexity of this part of the equation is in images. Therefore, the complexity of the algorithm that computes images is in images.

4.5. Implementations

Several implementations of the algorithmic procedures presented in this chapter have been developed, either as autonomous libraries or part of a network calculus tool. The main implementations are listed in this section. For those included in a tool, only the classes of functions handled are listed here; the tools themselves will be presented in the Conclusion.

  • The DISCO Network Calculator is a network calculus Java tool [DIS 18]. Its principles are detailed in [SCH 06c]. It handles functions made up of a finite prefix followed by a linear tail (i.e. ultimately linear functions, see section 4.1). This finite prefix can be any piecewise linear function, i.e. made up of a finite number of linear segments. It handles either floating-point or rational numbers.
  • The NC-maude tool is a Maude toolbox [BOY 15]. Its principles are detailed in [BOY 10b], and it uses the Maude rewriting language [MAU 15]. It handles piecewise linear continuous concave and rate-latency functions, with rational numbers.
  • The containers of section 4.4 are implemented in the tool ContainerMinMaxGD [CON 09], developed in C++. This tool is an extension of the MinMaxGD library, which implements operations over periodic series in a dioid of formal series, and is very close to the classical dioid theory as described in [LEC 14].
  • The real-time calculus (RTC) toolbox is a Matlab® toolbox for performance analysis of distributed real-time and embedded systems [WAN 06a, WAN 06b]. Its Java kernel implements the main network calculus operations, except the sub-additive closure. It deals with piecewise linear functions defined over images, which are neither necessarily increasing nor positive but have a periodic behavior from some point, called variability characterization curves (VCCs). Infinite values are also allowed. This class is very close to the classes introduced here, where this asymptotic behavior is called ultimate pseudo-periodicity. Besides correctness, no complexity analysis has been given as far as we know.
  • The software COINC [COI 07] was the first to implement the algorithms presented in [BOU 08c], as a standalone version or a Scilab toolbox.
  • The RTaW-PEGASE tool [BOY 10c, BOY 11a] implements three classes of functions. Like DISCO, it implements the piecewise linear concave and convex functions, with either floating-point or rational numbers. It also implements the most general class of ultimately pseudo-periodic functions of [BOU 08c], but only with rational numbers (since this class requires computing lcms), and does not implement the sub-additive closure. Last, it implements a class that groups the stair-shaped functions and the piecewise linear concave and convex functions. Since this class is not closed under the common operations, only a subset of operations is defined [BOY 11b].

The (min,+) library can be used either as an API, within a dedicated command-line interpreter, or through a web server, at http://realtimeatwork.com/minplus-playground.
