This chapter covers the following items:
– Basic fractal explanations (definitions and descriptions)
– Multifractal methods
– LVQ algorithm application to datasets obtained from multifractal methods
– Examples and applications
The word fractal is derived from the Latin fractus, which means “broken.” The mathematician Benoit Mandelbrot coined the term in 1975 [1]. In The Fractal Geometry of Nature, a fractal is defined as “a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole.”
One of the most familiar fractal patterns is the Mandelbrot set, named after Benoit Mandelbrot. Producing the Mandelbrot set involves testing the behavior of complex numbers as they are passed repeatedly through an iterative function: do they stay bounded, or do they tend to infinity? This “escape-time” algorithm is a less practical method for producing fractals than the recursive techniques examined later in this chapter, but a sketch of it is included among the code examples below.
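The following minimal Python sketch, assuming NumPy is available, implements the escape-time test; the plotting window, resolution and iteration cap are illustrative choices rather than fixed parts of the algorithm.

```python
import numpy as np

def mandelbrot_counts(width=80, height=40, max_iter=50):
    """Escape-time algorithm: iterate z -> z**2 + c from z = 0 and record
    how many steps it takes |z| to exceed 2; points that never escape
    within max_iter steps are taken to lie in the Mandelbrot set."""
    xs = np.linspace(-2.0, 0.5, width)    # illustrative plotting window
    ys = np.linspace(-1.25, 1.25, height)
    counts = np.empty((height, width), dtype=int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            c = complex(x, y)
            z = 0j
            n = 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            counts[i, j] = n
    return counts

# Crude ASCII rendering: '#' marks points that stayed bounded.
for row in mandelbrot_counts(max_iter=50):
    print("".join("#" if n == 50 else " " for n in row))
```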
Let us elaborate on this definition through two simple examples, illustrated in Figures 11.1–11.3. First, consider a tree branching structure as in Figure 11.2.
Notice how the tree depicted in Figure 11.2 has a single root from which two branches emerge, and how each of these branches ends in two further branches, continuing the same pattern. Imagine plucking one branch off the tree and examining it on its own.
We find that the shape of the plucked branch is similar to the tree itself. This property is known as self-similarity (as cited by Mandelbrot): in this figure, each part is a “reduced-size copy of the whole.”
The tree depicted is perfectly symmetrical, in that its parts are exact replicas of the whole. Nevertheless, fractals are not always perfectly self-similar. Consider the graph of the Net Domestic Credit data (adapted from the actual Italy Net Domestic Credit (US $) between 2012 and 2015).
As shown in the graphs, the x-axis denotes time (the period from 2012 to 2015 in the Italian economy) and the y-axis denotes the Net Domestic Credit (US $). The graphs of Italy Net Domestic Credit (US $) from the U.N.I.S. (USA, New Zealand, Italy, Sweden) dataset are examples of fractals, since they look the same at any scale. Does a given graph of the Net Domestic Credit (US $) cover a single year, one hour or one day? Without a label, there is no way to answer (Figure 11.4). (Figure 11.3(a) presents three years’ data, while Figure 11.3(b) focuses on a small portion of the graph in Figure 11.3(a), presenting a period of six months.)
This provides an example of a stochastic fractal, meaning that it is generated from randomness and probabilities. Contrary to the deterministic tree-branching structure, it is statistically self-similar [2–5].
Self-similarity is a central fractal trait, but it is important to understand that self-similarity alone does not make a fractal. Take, for example, a line, which is self-similar: a line looks the same at all scales and can be thought of as a pattern made up of many little lines. Despite this, it is not a fractal at all.
Georg Cantor [6] devised simple recursive rules for generating an infinite fractal set, as follows.
Example 11.1 (Cantor set). Recursively erase the middle third of the line segment, following the algorithm whose flowchart is provided in Figure 11.5.
The middle-third erasing algorithm is laid out step by step in Figure 11.6.
Take a single line and split it into two. Then return to those two lines and apply the same rule, splitting each of them into two, at which point we have four lines. Next, return to those four lines and apply the rule again; this time you will have eight. Such a process is called recursion: the repeated application of the same rule to generate the fractal.
We can apply the steps in order, stage by stage, to build the form depicted in Figure 11.7, as in the sketch that follows.
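As a minimal sketch in plain Python (segments represented as (a, b) pairs, with the recursion depth an illustrative choice), the middle-third rule of Example 11.1 can be applied recursively:

```python
def cantor_set(segments, depth):
    """Apply the 'erase the middle third' rule recursively:
    each segment (a, b) is replaced by its two outer thirds."""
    if depth == 0:
        return segments
    next_segments = []
    for a, b in segments:
        third = (b - a) / 3.0
        next_segments.append((a, a + third))   # keep the left third
        next_segments.append((b - third, b))   # keep the right third
    return cantor_set(next_segments, depth - 1)

# Starting from the unit interval, three applications of the rule
# leave 2**3 = 8 segments, each of length (1/3)**3.
print(cantor_set([(0.0, 1.0)], 3))
```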
Recursive functions are useful for solving certain problems.
Example 11.2 Certain mathematical calculations are performed recursively; the most common example is the factorial. The factorial of a number n is written n!. The recursive law is n! = n × (n − 1)!, with 0! = 1.
At this point, we can write the flowchart (Figure 11.8).
For instance, the operational flow for 5! is given in Figures 11.9 and 11.10.
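A minimal recursive Python implementation mirrors this operational flow; the function below follows the recursive law stated above.

```python
def factorial(n):
    """Recursive law: n! = n * (n - 1)!, with base case 0! = 1."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120, matching the operational flow for 5!
```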
Example 11.3 As a second example, let us consider the Fibonacci series. Let us write the flowchart (Figure 11.11) as well as the algorithm (Figure 11.12).
As an example, let us calculate the orbit of 2, as shown in Figure 11.13.
The series obtained is the Fibonacci series.
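A corresponding Python sketch is given below, assuming the usual seeds F(0) = 0 and F(1) = 1; the naive recursion recomputes subproblems and is meant only to illustrate the recursive rule, not to be efficient.

```python
def fibonacci(n):
    """Recursive rule: F(n) = F(n - 1) + F(n - 2), with F(0) = 0, F(1) = 1."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print([fibonacci(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```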
Example 11.4 (Devil’s staircase). This is a continuous function on the unit interval: a non-differentiable, piecewise constant function that grows from 0 to 1 (see Figure 11.14) [7].
Consider a number x in the unit interval and write it in base 3. Truncate the base-3 expansion after the first 1, then change every 2 in the expansion to a 1. The resulting number has only 0s and 1s in its expansion, so it can be interpreted as a base-2 number. We call this new number f(x).
If we plot this function, we obtain the form called the Devil’s staircase, which is related to the standard Cantor set [6] as follows: the function is constant on the intervals removed from the standard Cantor set. To illustrate:
Plotting the function shows that it is not differentiable at the points of the Cantor set, yet it has zero derivative everywhere else. Since the Cantor set has measure zero, the function has zero derivative almost everywhere: it “rises” only on Cantor set points.
With the corresponding algorithm:
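A minimal Python sketch of the base-3 construction is given below; the fixed digit count is an illustrative truncation of the infinite expansion, and the input is assumed to lie in [0, 1).

```python
def cantor_function(x, digits=30):
    """Devil's staircase f(x) on [0, 1): write x in base 3, truncate the
    expansion after the first 1, change every 2 to a 1, and read the
    result as a base-2 number."""
    value, seen_one = 0.0, False
    for k in range(1, digits + 1):
        x *= 3
        d = int(x)                 # next ternary digit
        x -= d
        if seen_one:
            break                  # expansion was truncated after the first 1
        if d == 1:
            seen_one = True
        bit = 1 if d >= 1 else 0   # 2 -> 1, 0 -> 0 (the first 1 is kept)
        value += bit * 0.5 ** k    # interpret as a base-2 digit
    return value

print(cantor_function(1 / 4))
```

For example, f(1/4) = 1/3, since 1/4 = 0.0202…₃ contains no 1s and maps to 0.0101…₂.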
Example 11.5 (Snowflake). Helge von Koch described this fractal, also known as the Koch island, in 1904 [8]. It is constructed by starting with an equilateral triangle: the middle third of each side is removed and another equilateral triangle is built at the location where the side was removed, and the process is repeated indefinitely (see Figure 11.15). The Koch snowflake can be encoded simply as a Lindenmayer system [9] with initial string “F--F--F” and string rewriting rule “F” → “F+F--F+F,” with an angle of 60°. The zeroth through third iterations of the construction are shown in Figure 11.15.
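The string rewriting itself is easy to reproduce; the following Python sketch performs only the rewriting, leaving the turtle-graphics interpretation of F, + and − aside.

```python
def l_system(axiom, rules, iterations):
    """Generic Lindenmayer string rewriting: apply all rules in parallel."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch snowflake: axiom "F--F--F", rule F -> "F+F--F+F", turn angle 60 deg.
koch = l_system("F--F--F", {"F": "F+F--F+F"}, 2)
print(koch)  # interpret F as "draw forward", + and - as 60-degree turns
```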
Let $N_n$ be the number of sides, $L_n$ the length of a single side, $\ell_n$ the length of the perimeter and $A_n$ the snowflake’s area after the $n$th iteration. Additionally, let $\Delta$ denote the area of the initial $n = 0$ triangle and take the length of an initial $n = 0$ side to be 1. Then we have the following:

$$N_n = 3 \cdot 4^n, \qquad L_n = \left(\tfrac{1}{3}\right)^n, \qquad \ell_n = N_n L_n = 3\left(\tfrac{4}{3}\right)^n, \qquad A_n = A_{n-1} + \tfrac{1}{3}\left(\tfrac{4}{9}\right)^{n-1}\Delta.$$

Solving the recurrence equation with $A_0 = \Delta$ yields

$$A_n = \frac{1}{5}\left[8 - 3\left(\tfrac{4}{9}\right)^n\right]\Delta,$$

so as $n \to \infty$,

$$A_\infty = \frac{8}{5}\Delta.$$

The capacity dimension is then

$$d_{\mathrm{cap}} = \frac{\ln 4}{\ln 3} \approx 1.2619.$$
Beautiful tilings, such as those shown in Figure 11.16, can be made from iterations of Koch snowflakes.
Example 11.6 (Peano curves). The Peano curve passes through the unit square from (0, 0) to (1, 1). Figure 11.17 shows how we can define an orientation and affine transformations mapping each iterate to one of the nine subsquares of the next iterate while preserving the previous orientations; the figure illustrates the procedure for the first iteration [10].
The nine transformations $p_j$, $j = 0, 1, \ldots, 8$, can be defined in complex or matrix form, in a similar way as for the Hilbert curve. Each $p_j$ maps $\Omega$ to the $(j+1)$th subsquare of the first iteration, as presented in Figure 11.18. Using the ternary representation of $t$, we can then proceed in the same way as with the Hilbert curve: applying the corresponding relation to finite ternaries yields an expression for calculating image points [9].
Example 11.7 (Sierpiński gasket). In 1915, Sierpiński described the Sierpiński sieve [11], also termed the Sierpiński gasket or Sierpiński triangle. This curve can be written as a Lindenmayer system [9] with initial string “FXF--FF--FF” and string rewriting rules “F” → “FF”, “X” → “--FXF++FXF++FXF--”, with an angle of 60° (Figure 11.19).
The nth iteration of the Sierpiński sieve is implemented in the Wolfram Language as SierpinskiMesh[n].
Let $N_n$ be the number of black triangles after iteration $n$, $L_n$ the length of a triangle side and $A_n$ the fractional area that is black after the $n$th iteration. Then

$$N_n = 3^n, \qquad L_n = \left(\tfrac{1}{2}\right)^n, \qquad A_n = N_n L_n^2 = \left(\tfrac{3}{4}\right)^n.$$

The capacity dimension is therefore [12]

$$d_{\mathrm{cap}} = \frac{\ln 3}{\ln 2} \approx 1.5850.$$
The Sierpiński sieve is produced by the recurrence equation

$$(1 + x)^n \equiv \prod_{j:\, e(j,n) = 1} \left(1 + x^{2^j}\right) \pmod{2},$$

where $e(j, n)$ is the $(j+1)$th least significant bit defined by the binary expansion [13]

$$n = \sum_{j} e(j, n)\, 2^j,$$

and the product is taken over all $j$ such that $e(j, n) = 1$, as shown in Figure 11.20.
The Sierpiński sieve is also elicited by Pascal’s triangle (mod 2), with the sequence 1; 1, 1; 1, 0, 1; 1, 1, 1, 1; 1, 0, 0, 0, 1; … [12]. This means that coloring all odd numbers blue and all even numbers white in Pascal’s triangle generates a Sierpiński sieve.
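A short Python sketch illustrates this: building Pascal’s triangle mod 2 and printing the odd entries as ‘#’ (the symbol and the number of rows are illustrative choices) reproduces the sieve pattern.

```python
def pascal_mod2(rows):
    """Pascal's triangle reduced mod 2; the odd entries (the 1s) trace
    out the Sierpinski sieve pattern."""
    row = [1]
    for _ in range(rows):
        yield row
        row = [1] + [(a + b) % 2 for a, b in zip(row, row[1:])] + [1]

for r in pascal_mod2(8):
    print(" ".join("#" if v else "." for v in r).center(16))
```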
In this section we discuss the measurement of a fractal and of its complexity through the so-called fractal dimension [13].
Let us now analyze two types of fractal dimension: the self-similarity dimension and the box-counting dimension. Among the many other kinds of dimension, we may name the topological dimension, the Hausdorff dimension and the Euclidean dimension [14, 15].
Before explaining the dimension-measuring algorithms, it is important to clarify how a power law works. Essentially, data follow a power law relationship if they fit the equation $y = c \cdot x^d$, where $c$ is a constant. Plotting log(y) versus log(x) is one way to determine whether data fit a power law relationship: if the plot is a straight line, then the data follow a power law with slope $d$.
We will see that fractal dimension depends largely on the power law.
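As a small illustration in Python (with NumPy; the constants c = 2.0 and d = 1.5 are assumed values for the example), fitting a line to log(y) against log(x) recovers the exponent:

```python
import numpy as np

# Synthetic data following y = c * x**d with c = 2.0, d = 1.5 (assumed).
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 * x ** 1.5

# If log(y) versus log(x) is a straight line, the slope estimates d
# and the intercept estimates log(c).
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(slope, np.exp(intercept))  # ~1.5 and ~2.0
```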
Definition 11.1 (Self-similarity dimension). To measure the self-similarity dimension, a fractal must be self-similar. The power law holds and is given as follows:

$$a = \left(\frac{1}{s}\right)^D, \qquad \text{that is,} \qquad D = \frac{\log a}{\log(1/s)},$$

where $D$ is the self-similarity dimension, $a$ is the number of pieces and $s$ is the reduction factor.
In Figure 11.7, if a line is broken into three pieces, each will be one-third the length of the original. Thus $a = 3$, $s = 1/3$ and $D = \log 3 / \log 3 = 1$.
For the Sierpiński gasket, each side of the original triangle is halved, and there are three pieces. Thus $a = 3$, $s = 1/2$ and

$$D = \frac{\log 3}{\log 2} \approx 1.585.$$
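These computations can be captured in a small helper; the following Python sketch merely evaluates D = log a / log(1/s) for the two examples above.

```python
from math import log

def self_similarity_dimension(a, s):
    """D = log(a) / log(1 / s): a pieces, each scaled by factor s."""
    return log(a) / log(1.0 / s)

print(self_similarity_dimension(3, 1 / 3))  # line: D = 1
print(self_similarity_dimension(3, 1 / 2))  # Sierpinski gasket: D ~ 1.585
```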
Definition 11.2 (Box-counting dimension). To compute the box-counting dimension, we draw the fractal on a grid whose scale is $s = 1/(\text{width of the grid})$. For instance, if the grid is 480 blocks tall by 240 blocks wide, then $s = 1/240$. Let $N(s)$ be the number of blocks that intersect the picture. We then refine the size of the blocks and repeat the process. The fractal dimension is obtained from the limit of $\log N(s) / \log(1/s)$ as $s$ goes to zero; on a log–log scale, the box-counting dimension equals the slope of a line, as in the sketch following the examples below.
For example:
– For the Koch fractal, $D = \log 4 / \log 3 \approx 1.2619$.
– For the 64-segment quadric fractal, the dimension is obtained in the same way from the slope of the log–log plot.
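A minimal box-counting sketch in Python follows; the grid sizes and the diagonal test set are illustrative choices, and a real fractal point set would be substituted.

```python
import numpy as np

def box_counting_dimension(points, sizes=(1/4, 1/8, 1/16, 1/32)):
    """Estimate the box-counting dimension of a point set in the unit
    square: count occupied boxes N(s) at several box sizes s, then take
    the slope of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Points along the diagonal of the unit square should give D close to 1.
pts = [(t, t) for t in np.linspace(0.0, 1.0, 5000)]
print(box_counting_dimension(pts))
```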
Multifractals are fractals with multiple scaling rules. The concept of multiscale multifractality is important for space plasmas, since it enables us to look at intermittent turbulence in the solar wind [16, 17]. Kolmogorov [18], Kraichnan [19] and many other authors have tried to explain the observed scaling exponents using multifractal phenomenological models of turbulence that describe the distribution of energy flux between cascading eddies at various scales [20–22].
In particular, the multifractal spectrum was examined using Voyager magnetic field fluctuation data [23] in the outer heliosphere [23–25] and Helios plasma data [26] in the inner heliosphere [27, 28]. The multifractal spectrum was also analyzed directly for the solar wind attractor, and it was shown to be consistent with the two-scale weighted multifractal measure [21]. Multifractal scaling was likewise tested using Ulysses observations [29], as well as Advanced Composition Explorer and WIND data [29–32].
Random walks and Brownian motion are relevant concepts in the biological and physical sciences [32, 33]. The theory of Brownian motion treats it as a nonstationary Gaussian process, or deems it the sum of a stationary pure random process (the white noise of Langevin’s approach [27]). The generalization of the classical theory to fractional Brownian motion (fBm) [34, 35] and the corresponding fractional Gaussian noise now encompasses a wide class of self-similar stochastic processes. Their variances scale with the power law $N^{2H}$, where $N$ is the number of steps and $0 < H < 1$; $H = 0.5$ corresponds to classical Brownian motion with independent steps. Mandelbrot [36] proposed the term “fractional” in association with fractional integration and differentiation. The steps in fBm are strongly correlated and have long memory. A long memory process is a time-dependent process whose empirical autocorrelation decays more slowly than the exponential decay displayed by finite-lag processes [37, 38]. Thus, fBm has proven to be an influential mathematical model for studying correlated random motion, enjoying wide application in biology and physics [37, 38]. One-dimensional fBm can be generalized to two dimensions (2D). We may consider not only a stochastic process in 2D, that is, a sequence of 2D random vectors $\{B_i \mid i = 0, 1, \ldots\}$, but also a random field in 2D, $\{B_{i,j} \mid i = 0, 1, \ldots;\ j = 0, 1, \ldots\}$, a scalar random variable defined on a plane. In the theory of critical phenomena in statistical physics, these correspond to spin dimension 2 and space dimension 2, respectively [39]. The most general fractional Brownian random field is an n-dimensional random vector defined in an m-dimensional space [39].
The fBm is a Brownian-motion-type process $B_H(t)$ (see [33]) defined by the stochastic integral

$$B_H(t) = \frac{1}{\Gamma\!\left(H + \tfrac{1}{2}\right)}\left[\int_{-\infty}^{0}\left((t-s)^{H-\frac{1}{2}} - (-s)^{H-\frac{1}{2}}\right)dW(s) + \int_{0}^{t}(t-s)^{H-\frac{1}{2}}\,dW(s)\right], \tag{11.4}$$

where $dW$ represents the Wiener process [40–44]. In addition, the integration is taken in the mean square sense.
Definition 11.3 (Square integrable in the mean). A random process $X$ is square integrable from $a$ to $b$ if $\int_a^b E[X(t)^2]\,dt$ is finite. Notice that if $X$ is bounded on $[a, b]$, in the sense that $|X(t)| \le M$ with probability 1 for all $a \le t \le b$, then $X$ is square integrable from $a$ to $b$.
Such a natural extension of fBm brings with it a certain “loss” of properties: the increments of the extended process are in fact nonstationary, and the process is no longer self-similar. It is known that singularities and irregularities are the most prominent and informative components of data.
Here $H$ is the Hurst exponent, $H = \alpha/2$ with $\alpha \in (0, 1)$, which is related to the fractal dimension [42]. The Hurst exponent $H$ plays two major roles. First, it determines the roughness of the sample path as well as the long-range dependence properties of the process. Most of the time, the roughness of the path depends on location, and it may also be necessary to take multiple regimes into consideration. The second role is realized by replacing the Hurst exponent $H$ by a Hölder function $H(t)$ (see Section 11.3.2) [43]:
This kind of generalization leads to the term multifractional Brownian motion (mfBm). In this case, the stochastic integral expression resembles that of fBm, with $H$ replaced by the corresponding Hölder function $H(t)$. The second integral of eq. (11.4) has an upper limit of $t$, and the integration is taken in the mean square sense. For mfBm, however, the increments are no longer stationary and the process is no longer self-similar.
In order to generate sample trajectories, one can find both simplified covariance functions and pseudocodes. 2D-fBm algorithms produce time series of spatial coordinates, which can be regarded as spatial jumps akin to off-lattice simulations; it is important to remember, however, that these increments are correlated.
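As an illustration (not the pseudocode referred to above), a 1D fBm sample path can be generated by factoring the standard fBm covariance $C(s, t) = \tfrac{1}{2}\left(s^{2H} + t^{2H} - |t - s|^{2H}\right)$; the grid size, Hurst exponent and random seed below are illustrative choices.

```python
import numpy as np

def fbm_cholesky(n=256, hurst=0.7, t_max=1.0, seed=0):
    """Sample an fBm path B_H on a regular grid by Cholesky-factoring
    its covariance C(s, t) = 0.5 * (s**2H + t**2H - |t - s|**2H)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(t_max / n, t_max, n)   # exclude t = 0, where B_H(0) = 0
    s, u = np.meshgrid(t, t)
    h2 = 2.0 * hurst
    cov = 0.5 * (s ** h2 + u ** h2 - np.abs(s - u) ** h2)
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate(([0.0], path))   # prepend B_H(0) = 0

path = fbm_cholesky()
print(path[:5])
```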
Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces with corresponding metrics. A function $f : X \to Y$ is a Hölder function of exponent $\beta > 0$ if, for each $x, y \in X$ with $d_X(x, y) < 1$, there results [44]

$$d_Y(f(x), f(y)) \le C\, d_X(x, y)^{\beta}$$

for some constant $C$.
Now let $f$ be a function from an interval $D \subseteq \mathbb{R}$ into $\mathbb{R}$. The function $f$ is Hölder continuous if there are positive constants $C$ and $\alpha$ such that

$$|f(x) - f(y)| \le C\,|x - y|^{\alpha} \quad \text{for all } x, y \in D.$$
The constant $\alpha$ is referred to as the Hölder exponent. A continuous function satisfies $f(x) \to f(y)$ as $x \to y$; Hölder continuity additionally allows the rate of convergence to be measured: if $f$ is Hölder continuous with exponent $\alpha$, then $f(x) \to f(y)$ at least as fast as $|x - y|^{\alpha}$ [45].
Hölder regularity can be used to characterize singular structures precisely [46]. The regularity of a signal at each point can thus be quantified by the pointwise Hölder exponent, as follows.
Hölder regularity embodies the significant singularities [47]. Each point of the data is characterized by its Hölder exponent (eq. (11.6)), as already mentioned.
Let $s > 0$ and $x_0 \in \mathbb{R}$. Then $f \in C^s(x_0)$ if and only if there exist a polynomial $P$ of degree $< s$ and a constant $C$ such that [48]

$$|f(x) - P(x - x_0)| \le C\,|x - x_0|^{s} \quad \text{for all } x \in B(x_0, \eta),$$

where $B(x_0, \eta)$ is the local neighborhood around $x_0$ with a radius $\eta$. The pointwise Hölder exponent of $f$ at $x_0$ is $\alpha_p(x_0) = \sup\{s : f \in C^s(x_0)\}$.
Equivalently, the Hölder exponent of a function $f(t)$ at a point $t$ is the supremum of the $\alpha_p \in [0, 1]$ for which there exists a constant $C$ such that, for all $t'$ in a neighborhood of $t$,

$$|f(t) - f(t')| \le C\,|t - t'|^{\alpha_p}. \tag{11.9}$$
In terms of oscillations, a function $f(t)$ is Hölder with exponent $\alpha_p \in [0, 1]$ at $t$ if there exists $c$ such that for all $\tau$, $\mathrm{osc}_\tau(t) \le c\,\tau^{\alpha_p}$, where the oscillation is

$$\mathrm{osc}_\tau(t) = \sup_{t',\, t'' \in [t - \tau,\, t + \tau]} |f(t') - f(t'')|.$$
Now, setting $t = x_0$ and $t' = x_0 + h$, eq. (11.9) can also be stated as

$$|f(x_0 + h) - f(x_0)| \le C\,|h|^{\alpha_p}.$$

Therefore, the problem is to find an $\alpha_p$ that fulfills eqs. (11.11) and (11.12). For practical reasons, in order to simplify the process, we can set $\tau = \beta^r$; the oscillation condition can then be written in a form equivalent to eq. (11.9).
Thus, an estimate of the regularity at each point is generated by computing the regression slope between the logarithm of the oscillations $\mathrm{osc}_\tau$ and the logarithm of the size $\tau$ of the neighborhood on which the oscillations are calculated. In the following, we use least squares regression to calculate this slope, with $\beta = 2$ and $r = 1, 2, 3, 4$. Moreover, it is better not to use all neighborhood sizes between the two values $\tau_{\min}$ and $\tau_{\max}$: here, the oscillation at a point $x_0$ is computed only on intervals of the form $[x_0 - \tau_r,\, x_0 + \tau_r]$. For the 2D MS dataset, the economy (U.N.I.S.) dataset and the Wechsler Adult Intelligence Scale – Revised (WAIS-R) dataset, $x_0$ defines a point in 2D space and $\tau_r$ a radius around $x_0$, such that $d(t', t) \le \tau_r$ and $d(t'', t) \le \tau_r$, where $d(a, b)$ is the Euclidean distance between $a$ and $b$ [49].
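The following Python sketch illustrates this oscillation-based regression for a 1D signal; β = 2 and r = 1, 2, 3, 4 follow the text, while the discrete windowing and the test signal are assumptions of the sketch.

```python
import numpy as np

def pointwise_holder(f, i, beta=2, rs=(1, 2, 3, 4)):
    """Estimate the pointwise Holder exponent of a sampled signal f at
    index i: regress log oscillations against log neighborhood size,
    using radii tau_r = beta**r (a simple discrete stand-in)."""
    logs_tau, logs_osc = [], []
    for r in rs:
        tau = beta ** r
        lo, hi = max(0, i - tau), min(len(f), i + tau + 1)
        window = f[lo:hi]
        osc = window.max() - window.min()   # oscillation on [i - tau, i + tau]
        logs_tau.append(np.log(tau))
        logs_osc.append(np.log(osc))
    slope, _ = np.polyfit(logs_tau, logs_osc, 1)  # least squares slope
    return slope

# Example: |t - 0.5|**0.3 has Holder exponent about 0.3 at t = 0.5.
t = np.linspace(0.0, 1.0, 2001)
sig = np.abs(t - 0.5) ** 0.3
print(pointwise_holder(sig, 1000))
```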