1 Dynamic fuzzy machine learning model

This chapter is divided into six sections. Section 1.1 states the problem, Section 1.2 introduces the dynamic fuzzy machine learning (DFML) model, and Section 1.3 describes algorithms related to the dynamic fuzzy machine learning system (DFMLS). Section 1.4 discusses a process control model for DFML, and Section 1.5 presents a dynamic fuzzy relation learning algorithm. Finally, Section 1.6 summarizes the chapter.

1.1 Problem statement

Although machine learning theory has been widely applied across various learning paradigms, the currently accepted concept of machine learning remains: “If a system can improve its performance by performing a certain process, it is learning”. The point of this argument is, first, that learning is a process; second, that learning considers the whole system; and third, that learning can change the performance of the system. “Process”, “system”, and “performance change” are thus the three main points of learning. Obviously, the same argument holds if the system is a human. By this statement, existing machine learning methods are somewhat limited, because all three points have a dynamic fuzzy character: the learning process is essentially dynamic and fuzzy; changes in the system (i.e. whether a system is “good” or “bad”, and so on) are essentially dynamic and fuzzy; and changes in system performance, results, and so on are all dynamic and fuzzy. These dynamic fuzzy characteristics are ubiquitous throughout machine learning and machine learning systems. Therefore, we believe that to make progress under the above definition of machine learning, the key is to effectively solve the dynamic fuzzy problems arising from learning activities.

However, existing machine learning methods are not sufficient for dynamic fuzzy problems. For example, rough set and fuzzy set theory can address fuzziness problems but cannot handle dynamic problems; statistical machine learning is based on small-sample statistics and can solve static but not dynamic problems; and reinforcement learning is based on Markov processes and can solve dynamic but not fuzzy problems. Therefore, choosing dynamic fuzzy sets (DFSs) to study machine learning is a natural choice. For the basic concepts of DFSs, see Appendix 8.1 and References [1, 24].

1.2 DFML model

To effectively deal with the dynamic fuzzy problems that arise in machine learning activities, a coordinated machine learning model based on dynamic fuzzy sets (DFS) and related algorithms was proposed [9, 10]. Building on this foundation, the DFML method was put forward, and a DFML model (DFMLM) and related algorithms were described in algebraic and geometric terms [58, 11]. This section introduces the basic concepts of DFML, DFML algorithms and their stability analysis, and the geometry of DFML.

1.2.1 Basic concepts of DFML

The process of system learning can be considered as self-adjusting, reflecting a series of changes in the system structure or parameters. Using mathematical language, learning can be defined as a mapping from one set to another set.

Definition 1.1 Dynamic fuzzy machine learning space: The space used to describe the learning process, which consists of all the DFML elements, is called the DFML space. It comprises five elements, {learning sample, learning algorithm, input data, output data, representation theory}, and can be expressed as (S, S) = {(Ex, Ex), ER, (X, X), (Y, Y), ET}.
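As a purely illustrative sketch (the class and field names below are our own, not part of the formalism), the five-tuple of Definition 1.1 can be mirrored as a simple container type:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DFMLSpace:
    """Hypothetical container mirroring Definition 1.1's five elements."""
    samples: Any          # (Ex, Ex): the learning samples
    algorithm: Callable   # ER: the learning algorithm
    inputs: Any           # (X, X): input data
    outputs: Any          # (Y, Y): output data
    representation: Any   # ET: the representation theory
```

Any concrete DFMLS would instantiate such a structure together with a learning mechanism, as Definition 1.3 below describes.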

Definition 1.2 Dynamic fuzzy machine learning: DFML, denoted (l, l), is the mapping of an input dataset (X, X) to an output dataset (Y, Y) in the DFML space (S, S). This can be expressed as (l, l): (X, X) → (Y, Y).

Definition 1.3 Dynamic fuzzy machine learning system (DFMLS): The five elements of the DFML space (S, S) can be combined with a certain learning mechanism to form a computer system with learning ability. This is called a dynamic fuzzy machine learning system.

Definition 1.4 Dynamic fuzzy machine learning model (DFMLM): DFMLM = {(S, S), (L, L), (u, u), (y, y), (P, P), (I, I), (O, O)}, where (L, L) is the part to be learned (dynamic environment/dynamic fuzzy learning space), (S, S) is the part that learns dynamically, (u, u) is the output of (S, S) to (L, L), (y, y) is the dynamic feedback from (L, L) to (S, S), (P, P) is the system learning performance index, (I, I) is the input to the DFMLS from the external environment, and (O, O) is the output of the system to the outside world. A schematic diagram is given in Fig. 1.1.

If we discretize the processing system, (S, S) and (L, L) are represented by the state space, so we have the following definition.

Definition 1.5 DFMLM can be described as

Fig. 1.1: DFMLM block diagram.

where (x, x)(k) is the state variable of (S, S) at time k, (u, u)(k) is the dynamic output of (S, S), ξ1(k) is a random disturbance in the state equation, (y, y)(k) is the dynamic feedback input of (S, S), and ξ2(k) is the observational random error. Here, k represents the time period and takes only integer values. We assume that all of the vectors are finite-dimensional state variables. (P, P) represents the system learning performance index, and P(i) is a scalar function that represents the system learning performance at time i.
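The state equations of Definition 1.5 are not reproduced in this copy. A discrete-time form consistent with the variable descriptions above (our reconstruction, writing each DFS pair as (x, x̄), and not necessarily the author's exact notation) would be:

```latex
\begin{aligned}
(x,\bar{x})(k+1) &= f\bigl((x,\bar{x})(k),\,(u,\bar{u})(k)\bigr) + \xi_1(k),\\
(y,\bar{y})(k)   &= g\bigl((x,\bar{x})(k)\bigr) + \xi_2(k),\\
(P,\bar{P})      &= \sum_{i} P(i).
\end{aligned}
```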

According to the definition, the following three propositions can be obtained:

Proposition 1.1 DFMLS is a random system.

Proposition 1.2 DFMLS is an open system.

Proposition 1.3 DFMLS is a nonlinear system.

Definition 1.6 The model of the DFML process can be described as the six-tuple {(So, So), (Y, Y), (Op, Op), (V, V), ER, (G, G)},

where:

(So, So) is the source field, i.e. the set of learning material facing the DFMLS. Its elements are (so, so).

(Y, Y) is the target domain, i.e. the set of knowledge acquired by the DFMLS. Its elements are (y, y).

(Op, Op) is the operational mechanism, i.e. a set of actions from the source field to the target domain.

(V, V) is the evaluation mechanism, i.e. a set of operations for evaluating and correcting or deleting target-domain elements.

ER is the execution algorithm, i.e. the algorithm that verifies source-field elements and provides executive information.

(G, G) is the incentive mechanism, i.e. the control and coordination of communication with the environment.

This definition leads to the following proposition:

Proposition 1.4 DFML is an orderly process controlled by an incentive mechanism. Its general procedure is as follows:

(1) For a relevant subset (So+, So+) of the source field (So, So), under the incentive mechanism (G, G), a subset (Op+, Op+) of the operating mechanism (Op, Op) is activated according to the common characteristics of (So+, So+); under the action of (Op+, Op+), a subset (Y+, Y+) of the target domain (Y, Y) is formed. That is, (Op+, Op+): (So+, So+) → (Y+, Y+) ⊆ (Y, Y).

(2) Consider an element (y0, y0) in the target domain (Y, Y). During execution under the action of algorithm ER, some deviation information about (y0, y0) arises. The incentive mechanism (G, G) then activates a subset (V+, V+) of the evaluation mechanism (V, V); acting on the environment and on the environmental responses N(y0, y0) and E(y0, y0), (V+, V+) revises (y0, y0). That is,

(3) For a subset (So+, So+) of the source field (So, So), the results of its operation and evaluation are

(4) Take the above target domain as the new source field.

(5) Repeat the above steps until the learning process meets the accuracy requirements. A schematic diagram is shown in Fig. 1.2.
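The five steps above can be sketched as a generic iterative loop. This is a minimal illustration with hypothetical function names (operate, evaluate, accuracy_met), not an implementation of any specific DFMLS:

```python
def dfml_process(source, operate, evaluate, accuracy_met):
    """Iterate Proposition 1.4: operate on the source field, evaluate and
    revise the target domain, then feed the target back as a new source."""
    while True:
        target = {operate(s) for s in source}   # step (1): (Op+, Op+) maps (So+, So+) to (Y+, Y+)
        target = {evaluate(y) for y in target}  # step (2): (V+, V+) revises target elements
        if accuracy_met(target):                # step (5): stop when the accuracy requirement holds
            return target
        source = target                         # steps (3)-(4): the target becomes the new source field
```

For example, with operate halving a number, evaluate the identity, and accuracy_met requiring all values below a threshold, the loop contracts the source field until the criterion holds.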

1.2.2 DFML algorithm

In DFML, the curse of dimensionality can be a serious problem. Much machine learning data is nonlinear and high-dimensional, which brings further difficulties to data processing. Therefore, we need to reduce the dimensionality of the high-dimensional data.

Fig. 1.2: Dynamic fuzzy machine learning process description model.

1.2.2.1 Locally linear embedding (LLE)

LLE is a well-known nonlinear dimension reduction method proposed by Roweis and Saul in 2000 [12]. Its basic idea is to transform global nonlinearity into local linearity, with overlapping local neighbourhoods providing information about the global structure. Each local patch is reduced linearly, and the results are then combined, in accordance with certain rules, to give a low-dimensional global coordinate representation.

In the dynamic fuzzy high-dimensional space (R^D, R^D), there is a dynamic fuzzy dataset (X, X) = {(x1, x1), (x2, x2), ..., (xn, xn)}. LLE reduces the dimension via (F, F) to a relatively low-dimensional dynamic fuzzy space (R^d, R^d) and obtains the corresponding low-dimensional dynamic fuzzy vectors (Y, Y) = {(y1, y1), (y2, y2), ..., (yn, yn)} while retaining as much information as possible from the original dynamic fuzzy data (DFD). That is, the topology of the dynamic fuzzy dataset (as determined by the neighbourhood relation of each point) is maintained. An effective approach is to replace the global nonlinear dimension reduction (F, F) with a linear structure in each local approximation, so that the problem is transformed into one of local linear dimension reduction.

Definition 1.7 The model for the dynamic fuzzy dimensionality reduction problem is ((X, X), (F, F)), where the mapping (F, F): (X, X) → (Y, Y) is the dimension reduction from (X, X) to (Y, Y) for dynamic fuzzy datasets.

Definition 1.8 The mapping (f, f): (Y, Y) → (T, T) ⊆ (R^D, R^D) is called an embedded mapping.

Consider an arbitrary point (xi, xi) ∈ (X, X). Let {(x_l^i, x_l^i), l = 1, 2, ..., k} represent the k-nearest neighbourhood of (xi, xi) [the DFS of the k points nearest to (xi, xi) in (X, X)], and let the linear approximation of (F, F) on this neighbourhood be (Fi, Fi). Let (yi, yi) = (Fi, Fi)(xi, xi) and (y_l^i, y_l^i) = (Fi, Fi)(x_l^i, x_l^i). Then, we have the following theorem:

Theorem 1.1 If (xi, xi) = Σ_{l=1}^{k} (w_l^i, w_l^i)(x_l^i, x_l^i), then [13] (yi, yi) = Σ_{l=1}^{k} (w_l^i, w_l^i)(y_l^i, y_l^i),

where the (w_l^i, w_l^i) (i = 1, 2, ..., n; l = 1, 2, ..., k) are dynamic fuzzy linear-combination coefficients satisfying Σ_{l=1}^{k} (w_l^i, w_l^i) = (1, 1).

Proof: Because (Fi, Fi) is a linear dimension reduction, there exists a d × D matrix (A, A) and a d-dimensional vector (b, b) such that (Fi, Fi)(x, x) = (A, A)(x, x) + (b, b).

We have (yi, yi) − Σ_{l=1}^{k} (w_l^i, w_l^i)(y_l^i, y_l^i) = (A, A)[(xi, xi) − Σ_{l=1}^{k} (w_l^i, w_l^i)(x_l^i, x_l^i)] + (b, b)[(1, 1) − Σ_{l=1}^{k} (w_l^i, w_l^i)] = (0, 0).

This shows that the linear combination is preserved under local linear dimension reduction. (xi, xi) is approximated by a linear combination of {(x_l^i, x_l^i) | l = 1, 2, ..., k}, i.e. (xi, xi) ≈ Σ_{l=1}^{k} (w_l^i, w_l^i)(x_l^i, x_l^i), so the dynamic fuzzy coefficients {(w_l^i, w_l^i)} depict the invariant features of the data structure under the reduced dimension. Then, (yi, yi) = (Fi, Fi)(xi, xi) ≈ (Fi, Fi) Σ_{l=1}^{k} (w_l^i, w_l^i)(x_l^i, x_l^i) = Σ_{l=1}^{k} (w_l^i, w_l^i)(y_l^i, y_l^i), allowing us to identify (yi, yi) through the information provided by {(w_l^i, w_l^i)}.
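Theorem 1.1 is easy to check numerically in the crisp (non-fuzzy) setting: for any affine map F(x) = Ax + b and weights summing to one, the reconstruction weights carry over from the x's to the y's. The sketch below is our own illustration with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))          # d x D linear part of F_i
b = rng.normal(size=2)               # d-dimensional offset of F_i
X = rng.normal(size=(4, 5))          # k = 4 neighbours x_l^i in D = 5 dimensions
w = rng.random(4)
w /= w.sum()                         # weights satisfy sum_l w_l^i = 1
x = w @ X                            # x_i = sum_l w_l^i x_l^i

F = lambda v: A @ v + b              # the local affine dimension reduction
y = F(x)
y_rec = sum(wl * F(xl) for wl, xl in zip(w, X))
assert np.allclose(y, y_rec)         # y_i = sum_l w_l^i y_l^i, as the theorem states
```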

The core of the LLE method is to find, for each point, the k-dimensional dynamic fuzzy vector (Wi, Wi) = ((w_1^i, w_1^i), (w_2^i, w_2^i), ..., (w_k^i, w_k^i)) that minimizes the reconstruction error ‖(xi, xi) − Σ_{l=1}^{k} (w_l^i, w_l^i)(x_l^i, x_l^i)‖²,

where Σ_{l=1}^{k} (w_l^i, w_l^i) = (1, 1) and (w_j^i, w_j^i) = (0, 0) when (xj, xj) ∉ {(x_l^i, x_l^i) | l = 1, 2, ..., k}. (Wi, Wi) records the neighbourhood information of point (xi, xi), that is, the local topological structure of each point; the dynamic fuzzy matrix formed by all of these components is denoted (W, W). At this point, choose an appropriate dimension d and perform the d-dimensional embedding, that is, seek (Y, Y) = ((y1, y1), (y2, y2), ..., (yn, yn)) ∈ (R^{d×n}, R^{d×n}) that minimizes the embedding cost Σ_{i=1}^{n} ‖(yi, yi) − Σ_{l=1}^{k} (w_l^i, w_l^i)(y_l^i, y_l^i)‖².

Because (Y, Y) is solved from the dynamic fuzzy dataset (X, X), the computation is very sensitive to noise, especially when the eigenvalues of (X, X)^T(X, X) are small. In this case, the desired result may not be obtained.
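The two steps just described (solving for the weight matrix and then finding the embedding from the bottom eigenvectors of (I − W)^T(I − W)) can be sketched for the crisp case in plain NumPy. The function name, the Tikhonov regularisation constant, and the use of eigh are our own choices, not prescribed by the text; the regulariser addresses exactly the noise sensitivity noted above:

```python
import numpy as np

def lle(X, k=5, d=2, reg=1e-3):
    """Crisp LLE sketch: X is an (n, D) data matrix; returns an (n, d) embedding."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbours of x_i (excluding x_i itself)
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]
        # Local Gram system for the reconstruction weights; the small
        # Tikhonov term guards against the near-singular, noise-sensitive case
        Z = X[nbrs] - X[i]                    # (k, D) centred neighbours
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()              # enforce the sum-to-one constraint
    # The embedding cost is minimised by the bottom eigenvectors of
    # M = (I - W)^T (I - W); the constant eigenvector (eigenvalue ~ 0) is discarded
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]
```

For well-separated, noise-free data the regulariser can be made very small; increasing it trades reconstruction accuracy for numerical stability.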

1.2.2.2 Analysis of noise interference

Let (x_l^i, x_l^i)′ = (x_l^i, x_l^i) + (ε_l^i, ε_l^i) (l = 1, 2, ..., k) denote the corresponding noise-affected points, and let (xi, xi)′ = Σ_{l=1}^{k} (w_l^i, w_l^i)′(x_l^i, x_l^i)′ with Σ_{l=1}^{k} (w_l^i, w_l^i)′ = (1, 1). Then (U(xi), U(xi)) = ((x_1^i, x_1^i), (x_2^i, x_2^i), ..., (x_k^i, x_k^i)) is the k-neighbourhood DFS of (xi, xi), (U(xi), U(xi))′ = ((x_1^i, x_1^i)′, (x_2^i, x_2^i)′, ..., (x_k^i, x_k^i)′) is the k-neighbourhood DFS of (xi, xi)′, and
