Equation (1.8) can be rewritten as

$$(X,X)(k+1)=\frac{\sum_{p=1}^{m}(v,v)_p\,(A_p,A_p)\,(X,X)(k)}{\sum_{p=1}^{m}(v,v)_p}\tag{1.9}$$

Since the right side of (1.9) is equal to (0,0) when (X,X)(k) = (0,0), the origin (0,0) of the domain is the equilibrium point of (1.9).

Theorem 1.3 The equilibrium point (0,0) of the DFMLS (1.9) is globally asymptotically stable if there exists a positive dynamic fuzzy matrix (P,P) such that condition (1.10) is satisfied [18].

Proof: Consider the following Lyapunov function:

$$V[(X,X)(k)]=(X,X)^{T}(k)\,(P,P)\,(X,X)(k),$$

where (P,P) is a positive dynamic fuzzy matrix. Then we have

$$\begin{aligned}
\Delta V[(X,X)(k)] &= (X,X)^{T}(k+1)\,(P,P)\,(X,X)(k+1)-(X,X)^{T}(k)\,(P,P)\,(X,X)(k)\\
&= (X,X)^{T}(k)\left[\left(\frac{\sum_{p=1}^{m}(A_p,A_p)^{T}(v,v)_p}{\sum_{p=1}^{m}(v,v)_p}\right)(P,P)\left(\frac{\sum_{p=1}^{m}(v,v)_p\,(A_p,A_p)}{\sum_{p=1}^{m}(v,v)_p}\right)-(P,P)\right](X,X)(k)
\end{aligned}$$

Fig. 1.3: Geometric model of dynamic fuzzy machine learning system.

From (1.10) and (v,v)_p ≥ (0,0), we obtain ΔV[(X,X)(k)] < (0,0). By the Lyapunov stability theorem, the proof is complete.
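To make the role of condition (1.10) concrete, the following is a minimal numerical sketch, assuming the closed-loop system takes the weighted-average form suggested by the expression for ΔV and using ordinary real matrices, non-negative weights, and an identity matrix in place of their dynamic fuzzy counterparts; the function name and all numbers are illustrative only.

```python
import numpy as np

def delta_v(A_list, v, P, x):
    """One-step Lyapunov difference for the weighted-average system
    x(k+1) = (sum_p v_p A_p / sum_p v_p) x(k) with V(x) = x^T P x."""
    A_bar = sum(vp * Ap for vp, Ap in zip(v, A_list)) / sum(v)  # blended system matrix
    x_next = A_bar @ x
    return x_next @ P @ x_next - x @ P @ x

# Illustrative data: two stable subsystems, non-negative weights, P = I.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.3, -0.2], [0.1, 0.5]])
P = np.eye(2)                  # candidate Lyapunov matrix
v = [0.6, 0.4]                 # firing strengths, all >= 0
x = np.array([1.0, -2.0])      # an arbitrary non-zero state

print(delta_v([A1, A2], v, P, x) < 0)   # True: V decreases along this trajectory
```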

1.2.3 DFML geometric model description

1.2.3.1 Geometric model of DFMLS [8, 11]

As shown in Fig. 1.3, we define the universe as a dynamic fuzzy sphere (the large sphere), and the small spheres inside it represent the DFSs in the universe. Each dynamic fuzzy number is defined as a point in a DFS (small sphere). The membership degree of each dynamic fuzzy number is determined by the position and radius of its DFS (small sphere) within the universe (large sphere) and by its position within that DFS (small sphere).
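As a rough illustration of this geometric picture, the sketch below represents the universe and a DFS as spheres and assigns a membership degree that is high near the DFS centre and decays towards its boundary, scaled by how deep the DFS sits inside the universe; the class layout and the specific decay formula are illustrative assumptions and are not taken from the text.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Sphere:
    centre: np.ndarray   # position in the universe
    radius: float

universe = Sphere(centre=np.zeros(3), radius=10.0)          # the large sphere
dfs = Sphere(centre=np.array([2.0, 1.0, 0.0]), radius=1.5)  # one small sphere (a DFS)

def membership(point, dfs, universe):
    """Hypothetical membership degree of a dynamic fuzzy number (a point):
    1 at the DFS centre, 0 at its boundary, scaled by the depth of the DFS
    inside the universe."""
    inside = max(0.0, 1.0 - np.linalg.norm(point - dfs.centre) / dfs.radius)
    depth = max(0.0, 1.0 - np.linalg.norm(dfs.centre - universe.centre) / universe.radius)
    return inside * depth

print(membership(np.array([2.5, 1.0, 0.0]), dfs, universe))
```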

1.2.3.2 Geometric model of DFML algorithm

In Fig. 1.4, the common centre of the two spheres is the expected value (yd,yd), the radius of the large sphere is (ε,ε), and the radius of the small sphere is (δ,δ) [here, (ε,ε) and (δ,δ) are the same as in Algorithm 1.2].

The geometric model can be described as follows:

(1) If the value (yk,yk) of the learning algorithm falls outside the large sphere, this learning step is invalid. Discard (yk,yk), feed the information back to the system rule base and the learning part, and begin the process again;

(2) If the value (yk,yk) of the learning algorithm falls between the large sphere and the small sphere, the information is fed back to the system rule base and the learning part so that they can be amended. Proceed to the next step;

(3) If the value (yk,yk) of the learning algorithm falls within the small sphere, the precision requirement is considered to have been reached. Terminate the learning process and output (yk,yk) (a small code sketch of this three-way test follows Fig. 1.4).

Fig. 1.4: Geometric model of dynamic fuzzy machine learning algorithm.
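The three cases above amount to a distance test against the two radii. The following is a minimal sketch, assuming the deviation of the learned value (yk,yk) from the expected value (yd,yd) is measured with a Euclidean norm; the function and variable names are illustrative.

```python
import numpy as np

def classify_result(y_k, y_d, eps, delta):
    """Locate the learned value relative to the two concentric spheres
    centred at the expected value y_d (radii eps > delta)."""
    dist = np.linalg.norm(np.asarray(y_k) - np.asarray(y_d))
    if dist > eps:
        return "invalid"   # case (1): discard, feed back, restart
    if dist > delta:
        return "revise"    # case (2): feed back to the rule base, continue learning
    return "accept"        # case (3): precision reached, output y_k

print(classify_result([0.34], [0.30], eps=0.5, delta=0.005))   # 'revise'
```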

1.2.4 Simulation examples

Example 1.1 The sunspot problem: Data from 1749–1924 were extracted from [19]. This gave a total of 176 data points, of which the first 100 were used as training samples and the remaining 76 were used as test data.

Figure 1.5 shows the errors and iteration steps of the algorithm described in this section, the elastic back-propagation (BP) neural network algorithm RPROP (resilient propagation) [20], and a Q-learning algorithm. When the error is (ek,ek)=(0.4,0.4), RPROP falls into a local minimum and requires approximately 1660 iterations to find the optimal solution; the Q-learning algorithm needs 1080 iterations to reach an error of (ek,ek)=(0.334,0.334); for our algorithm, the number of iterations required for an error of (ek,ek)<(0.4,0.4) is 455. When the error reaches (ek,ek)=(0.034,0.034), the training is basically stable, and the required number of iterations is about 1050. Figure 1.6 compares the actual values with the values predicted by the algorithm presented in this section for an initial value of (u0,u0)(t)=(0,0), a gain learning coefficient of α = 0.3, a correction coefficient of β = 0.2, a maximum tolerance error of (ε,ε)=(0.5,0.5), an acceptable error of (δ,δ)=(0.005,0.005), and an error of (ek,ek)=(0.034,0.034).

Example 1.2 Time series forecast of the daily closing price of a company over a period of time.

The data used in this example are again from [19]. Of the 250 data points used, the first 150 were taken as training samples, and the remaining 100 were used as test data.

Fig. 1.5: Comparison of the errors and iteration steps of the three algorithms.
Fig. 1.6: Comparison between the predicted values of this algorithm and the actual values.
Fig. 1.7: Comparison of the errors and iteration steps of the three algorithms.

We set the initial value to (u0,u0)(t)=(0,0), gain learning coefficient to α = 0.3, correction coefficient to β = 0.3, maximum tolerance error to (ε,ε)=(0.5,0.5), and acceptable error to (δ,δ)=(0.005,0.005).

Figure 1.7 shows the errors and iteration steps of this algorithm compared with the elastic BP algorithm RPROP and the BALSA algorithm, which is based on a Bayesian algorithm [21]. When the performance index is (p,p)(k)=(0.013,0.013), RPROP falls into a local minimum after approximately 146 iterations; for the BALSA algorithm, when the performance index is (p,p)(k)=(0.013,0.013), the number of iterations required is 114; for our algorithm, when the performance index is less than (0.013,0.013), the number of iterations required is 82. The number of iterations required to satisfy the accuracy requirement is about 500. Figure 1.8 compares the actual values and the learning results of the proposed algorithm, where k is the number of iterations.

1.3 Relative algorithm of DFMLS [5]

1.3.1 Parameter learning algorithm for DFMLS

1.3.1.1 Problem statement

According to Algorithm 1.2, the DFMLS adjusts and modifies the rules in the rule base based on the results of each learning process. This adjustment and modification of the rules is mainly reflected in the adjustment of the parameters in the rules. To address this problem, this section derives a DFML algorithm that identifies the optimal system parameters.

Fig. 1.8: Comparison between actual value and learning result.

Consider the DFMLS described in the previous section, which is formalized as (y,y) = f((X,X),(θ,θ)). Each element in (θ,θ) is

where $(m,m)_{(A_i^l,A_i^l)}$, $(\delta,\delta)_{(A_i^l,A_i^l)}$, and $(\delta,\delta)_{(b_i,b_i)}$ are the means and variances of the corresponding membership functions.

According to the given input–output data pairs ((Xk,Xk),(yk,yk)) (k = 1, 2, ..., N), the system parameters can be learnt using the least-squares error (LSE) objective function, which minimizes the output error of the system:

and modifies the parameters along the direction of steepest gradient descent:

where η is the training step length. Thus, the iterative optimization equations for the parameters $(m,m)_{(A_i^l,A_i^l)}$, $(\delta,\delta)_{(A_i^l,A_i^l)}$, and $(\delta,\delta)_{(b_i,b_i)}$ can be obtained so as to minimize the sum of squared errors in (1.11).
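Since the concrete objective (1.11) and update formulas are not reproduced here, the following is only a minimal sketch of the kind of update the text describes, assuming Gaussian membership functions with means m and widths d, a weighted-average output with consequent parameters b, a squared-error objective over the data pairs, and a fixed step η; the single-input model and all names and numbers are illustrative.

```python
import numpy as np

def predict(x, m, d, b):
    """Fuzzy model output: Gaussian firing strengths, weighted-average defuzzifier."""
    w = np.exp(-((x - m) ** 2) / (2.0 * d ** 2))
    return float(np.dot(w, b) / np.sum(w)), w

def train(X, Y, m, d, b, eta=0.02, epochs=500):
    """Steepest-descent updates of the means m, widths d and consequents b,
    minimising the squared output error over the given data pairs."""
    for _ in range(epochs):
        gm, gd, gb = np.zeros_like(m), np.zeros_like(d), np.zeros_like(b)
        for x, y in zip(X, Y):
            f, w = predict(x, m, d, b)
            s, e = np.sum(w), f - y
            gb += e * w / s                                      # dE/db_i
            gm += e * (b - f) * w * (x - m) / (s * d ** 2)       # dE/dm_i
            gd += e * (b - f) * w * (x - m) ** 2 / (s * d ** 3)  # dE/dd_i
        m, b = m - eta * gm, b - eta * gb
        d = np.maximum(d - eta * gd, 0.05)                       # keep widths positive
    return m, d, b

# Toy data (illustrative only): fit y = sin(x) on [0, 3] with five rules.
X = np.linspace(0.0, 3.0, 30)
Y = np.sin(X)
m, d, b = train(X, Y, np.linspace(0.0, 3.0, 5), np.full(5, 0.6), np.zeros(5))
print(np.mean([(predict(x, m, d, b)[0] - y) ** 2 for x, y in zip(X, Y)]))
```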

There are two issues worth considering:

(1) The choice of step size: If only a single fixed training step is used, it is difficult to accommodate the convergence behaviour of different error variations, which sometimes results in oscillations in the parameter learning, especially near the minimum point. A fixed training step also reduces the convergence speed. In practical applications, there is often no universal fixed training step that suits different parameter learning problems. Therefore, the following optimized step sizes are proposed to solve this problem [22] (a small code sketch follows the formulas):

$$\eta(t)=\left[\eta_{(m,m)_{(A_i^l,A_i^l)}}(t),\;\eta_{(\delta,\delta)_{(A_i^l,A_i^l)}}(t),\;\eta_{(\delta,\delta)_{(b_i,b_i)}}(t)\right]^{T}$$

$$\eta_{(m,m)_{(A_i^l,A_i^l)}}(t)\propto\frac{1}{\sum_{i=1}^{m}\sum_{l=1}^{n}\left\{\dfrac{\partial f\big((X_l,X_l),(\theta,\theta)\big)}{\partial (m,m)_{(A_i^l,A_i^l)}(t)}\right\}}$$

$$\eta_{(\delta,\delta)_{(A_i^l,A_i^l)}}(t)\propto\frac{1}{\sum_{i=1}^{m}\sum_{l=1}^{n}\left\{\dfrac{\partial f\big((X_l,X_l),(\theta,\theta)\big)}{\partial (\delta,\delta)_{(A_i^l,A_i^l)}(t)}\right\}}$$

$$\eta_{(\delta,\delta)_{(b_i,b_i)}}(t)\propto\frac{1}{\sum_{i=1}^{m}\sum_{l=1}^{n}\left\{\dfrac{\partial f\big((X_l,X_l),(\theta,\theta)\big)}{\partial (\delta,\delta)_{(b_i,b_i)}(t)}\right\}}$$
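Interpreted this way, each component of η(t) shrinks when the accumulated output sensitivities with respect to the corresponding parameter group are large. The sketch below takes each step to be inversely proportional to the summed absolute sensitivities; this reading of the formulas, together with the scale factor and the small regularizing constant, is an assumption rather than the authors' exact rule.

```python
import numpy as np

def adaptive_steps(sens_m, sens_d, sens_b, scale=0.1, c=1e-6):
    """Per-group step sizes, inversely proportional to the summed absolute
    output sensitivities accumulated over the training data."""
    return (scale / (c + np.sum(np.abs(sens_m))),
            scale / (c + np.sum(np.abs(sens_d))),
            scale / (c + np.sum(np.abs(sens_b))))

# Illustrative accumulated sensitivities for the means, widths and consequents:
eta_m, eta_d, eta_b = adaptive_steps(np.array([0.8, 1.2]),
                                     np.array([0.3, 0.1]),
                                     np.array([2.0, 1.5]))
print(eta_m, eta_d, eta_b)
```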
