Because the max function is not continuously differentiable, the partial derivative of the above equation is problematic. Here, we introduce an approximation function to overcome this problem.

We define a generalized $p$-mean for the data $(\overrightarrow{a},\overleftarrow{a}) = [(\overrightarrow{a}_1,\overleftarrow{a}_1),(\overrightarrow{a}_2,\overleftarrow{a}_2),\ldots,(\overrightarrow{a}_N,\overleftarrow{a}_N)]$ (for $p \neq 0$):

$$M_p(\overrightarrow{a},\overleftarrow{a}) = \left[\frac{1}{N}\sum_{i=1}^{N}(\overrightarrow{a}_i,\overleftarrow{a}_i)^{p}\right]^{1/p}.$$

$M_p(\overrightarrow{a},\overleftarrow{a})$ is a good approximation function [53] for $\max_i(\overrightarrow{a}_i,\overleftarrow{a}_i)$ and $\min_i(\overrightarrow{a}_i,\overleftarrow{a}_i)$, and has the following properties.

Lemma 1.2 [53]

(1) $M_0(\overrightarrow{a},\overleftarrow{a}) = \lim_{p\to 0} M_p(\overrightarrow{a},\overleftarrow{a}) = \left[(\overrightarrow{a}_1,\overleftarrow{a}_1)\cdots(\overrightarrow{a}_N,\overleftarrow{a}_N)\right]^{1/N}$

(2) $p < q \Rightarrow M_p(\overrightarrow{a},\overleftarrow{a}) < M_q(\overrightarrow{a},\overleftarrow{a})$

(3) $\lim_{p\to +\infty} M_p(\overrightarrow{a},\overleftarrow{a}) = \max\left[(\overrightarrow{a}_1,\overleftarrow{a}_1),\ldots,(\overrightarrow{a}_N,\overleftarrow{a}_N)\right]$

(4) $\lim_{p\to -\infty} M_p(\overrightarrow{a},\overleftarrow{a}) = \min\left[(\overrightarrow{a}_1,\overleftarrow{a}_1),\ldots,(\overrightarrow{a}_N,\overleftarrow{a}_N)\right]$

(5) For each variable $(\overrightarrow{a}_i,\overleftarrow{a}_i)$, $M_p(\overrightarrow{a},\overleftarrow{a})$ is continuously differentiable. The partial derivative of $M_p(\overrightarrow{a},\overleftarrow{a})$ with respect to $(\overrightarrow{a}_i,\overleftarrow{a}_i)$ is

$$\frac{\partial M_p(\overrightarrow{a},\overleftarrow{a})}{\partial(\overrightarrow{a}_i,\overleftarrow{a}_i)} = \frac{1}{N}\left[\frac{M_p(\overrightarrow{a},\overleftarrow{a})}{(\overrightarrow{a}_i,\overleftarrow{a}_i)}\right]^{1-p}.$$
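To make Lemma 1.2 concrete, the following Python sketch evaluates the generalized $p$-mean on the scalar components of the data and checks properties (3) and (5) numerically. It is an illustration only: the function names `p_mean` and `p_mean_grad` are ours, and the dynamic fuzzy pairs are treated componentwise here as a simplifying assumption.

```python
import numpy as np

def p_mean(a, p):
    """Generalized p-mean M_p(a) = ((1/N) * sum(a_i**p))**(1/p) for positive data."""
    a = np.asarray(a, dtype=float)
    if p == 0:                       # property (1): the limit p -> 0 is the geometric mean
        return np.exp(np.mean(np.log(a)))
    return np.mean(a ** p) ** (1.0 / p)

def p_mean_grad(a, p, i):
    """Partial derivative of M_p with respect to a_i (property (5)):
       dM_p/da_i = (1/N) * (M_p(a) / a_i)**(1 - p)."""
    a = np.asarray(a, dtype=float)
    return (p_mean(a, p) / a[i]) ** (1.0 - p) / len(a)

if __name__ == "__main__":
    # Forward components of the dynamic fuzzy data, treated as plain positive numbers.
    forward = [0.2, 0.9, 0.5, 0.7]
    for p in (1, 4, 16, 64):
        print(p, p_mean(forward, p))         # property (3): values approach max(a) = 0.9
    # Check property (5) against a forward finite difference.
    p, i, h = 4.0, 1, 1e-6
    fd = (p_mean([0.2, 0.9 + h, 0.5, 0.7], p) - p_mean(forward, p)) / h
    print(p_mean_grad(forward, p, i), fd)    # the two values should agree closely
```

As $p$ grows, the computed means approach $\max_i a_i = 0.9$, and the analytic derivative of property (5) agrees with the finite-difference estimate, which is what makes $M_p$ a usable smooth surrogate for the max operator.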

Supposing $p > 0$ and approximating $\max_{i=1}^{N}(\overrightarrow{a}_i,\overleftarrow{a}_i)$ with $M_p(\overrightarrow{a},\overleftarrow{a})$, (1.39) can be written as

In summary, DFRL can be used to generate or modify dynamic fuzzy rules $R_l$ in a DFMLS. The algorithm is summarised in Algorithm 1.4; a Python sketch of its control flow follows the listing.

Algorithm 1.4 DFRL algorithm

Input: $(\overrightarrow{a},\overleftarrow{a}) = [(\overrightarrow{a}_1,\overleftarrow{a}_1),(\overrightarrow{a}_2,\overleftarrow{a}_2),\ldots,(\overrightarrow{a}_N,\overleftarrow{a}_N)]$, $(\overrightarrow{b},\overleftarrow{b}) = [(\overrightarrow{b}_1,\overleftarrow{b}_1),(\overrightarrow{b}_2,\overleftarrow{b}_2),\ldots,(\overrightarrow{b}_M,\overleftarrow{b}_M)]$

Output: $(\overrightarrow{R},\overleftarrow{R}) = [(\overrightarrow{r}_{ij},\overleftarrow{r}_{ij})]_{N\times M}$

(1) Initialization: select an initial dynamic fuzzy relationship $(\overrightarrow{R},\overleftarrow{R})$, which may be given random initial values;

(2) Repeat:

(2a) Set $k$ to an initial value $k_0$. This parameter can be determined according to the scale of the learning task and the requirements of the system.

(2b) Calculate the influence coefficient of the step-$i$ learning on the step-$(n+1)$ learning:

$$(\overrightarrow{w}_i,\overleftarrow{w}_i) = \frac{\left[\sum_{j=n-k+1}^{n}(\overrightarrow{e}_j,\overleftarrow{e}_j)\right] - (\overrightarrow{e}_i,\overleftarrow{e}_i)}{(k-1)\sum_{j=n-k+1}^{n}(\overrightarrow{e}_j,\overleftarrow{e}_j)},$$

where $(\overrightarrow{e}_i,\overleftarrow{e}_i)$ is the error generated by the step-$i$ learning;

(2c) Calculate (1.38);

(2d) Calculate (1.42) from (1.40) and (1.41);

(2e) Modify the current dynamic fuzzy relationship matrix $(\overrightarrow{R},\overleftarrow{R})$ according to (1.42), (1.15), and (1.16);

Until Convergence
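The following Python sketch outlines the control flow of Algorithm 1.4. It is a minimal sketch under stated assumptions, not the book's implementation: equations (1.38), (1.40)–(1.42), (1.15), and (1.16) are not reproduced in this excerpt, so the rule modification is abstracted as a caller-supplied function `update_relation`, the per-step error as `step_error`, and the convergence test on the change of the relation matrix is our own choice. Only the influence coefficients of step (2b) are implemented as written above.

```python
import numpy as np

def influence_coefficients(errors):
    """Step (2b): influence coefficient of each of the last k learning steps.
    Implements (S - e_i) / ((k - 1) * S); the coefficients sum to 1 because
    sum_i (S - e_i) = (k - 1) * S, where S = sum(errors)."""
    e = np.asarray(errors, dtype=float)
    k, s = len(e), e.sum()
    if k < 2 or s == 0.0:                       # degenerate cases: equal weights
        return np.full(k, 1.0 / k)
    return (s - e) / ((k - 1) * s)

def dfrl(inputs, outputs, update_relation, step_error,
         k=5, n_iter=100, tol=1e-6, seed=0):
    """Skeleton of Algorithm 1.4 (DFRL). `update_relation(R, inputs, outputs, w)`
    stands in for the modification via (1.38)-(1.42), (1.15), and (1.16);
    `step_error(R, inputs, outputs)` returns the error of the current step."""
    rng = np.random.default_rng(seed)
    N, M = len(inputs), len(outputs)
    R = rng.random((N, M))                       # step (1): random initial relation matrix
    errors = []                                  # history of per-step errors e_i
    for _ in range(n_iter):                      # step (2): repeat until convergence
        errors.append(step_error(R, inputs, outputs))
        w = influence_coefficients(errors[-k:])  # step (2b): weights of the last k steps
        R_new = update_relation(R, inputs, outputs, w)   # steps (2c)-(2e), abstracted
        if np.max(np.abs(R_new - R)) < tol:      # stop when the relation matrix stabilizes
            return R_new
        R = R_new
    return R
```

Under the reading of (2b) reconstructed above, the coefficients form a convex combination: they sum to 1 and assign smaller weight to steps that produced larger errors, which is the sense in which earlier steps influence step $n+1$.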

1.5.4 Algorithm analysis

Comparing the DFRL algorithm with the AL algorithm [52], DFRL accounts for the influence of the previous k steps of learning on the (n + 1)th step. Although this increases the time complexity, it makes the learning results more credible and the error smaller. In addition, DFRL addresses the problem of possible noise interference in the observed data and overcomes the disadvantages of the latter. In general, the DFRL algorithm is superior to the AL algorithm. It is not difficult to prove that the DFRL algorithm is convergent.

1.6 Summary

In this chapter, we have presented a DFMLM and its related algorithms based on DFSs. We have discussed the DFMLM and the DFML algorithm, the DFML geometric model, the DFMLS parameter learning and MLE algorithms, the DFMLS process control model, and a dynamic fuzzy relation learning algorithm.

References

[1]Li FZ. Dynamic fuzzy logic and its applications. NY, USA, Nova Science Publishers, 2008.

[2]Li FZ, Liu GQ, Yu YM. An introduction to dynamic fuzzy logic. Yunnan, China, Yunnan Science & Technology Press, 2005.

[3]Li FZ, Zhu WH. Dynamic fuzzy logic and its application. Yunnan, China, Yunnan Science & Technology Press, 1997.

[4]Li FZ, Shen QZ, Zheng JL, Shen HF. Dynamic fuzzy sets and its application. Yunnan, China, Yunnan Science & Technology Press, 1997.

[5]Xie L, Li FZ. Research on the kind of dynamic fuzzy machine learning algorithm. Acta Electronica Sinica, 2008, 36(12A): 114–116.

[6]Xie L, Li FZ. Research on a new dynamic fuzzy parameter learning algorithm. Microelectronics & Computer, 2008, 25(9): 84–87.

[7]Xie L, Li FZ. Research on a new parameter learning algorithm. In Proceedings of 2008 International Conference on Advanced Intelligence, Beijing, China, Oct 18–22, 2008, 112–116.

[8]Zhang J. Dynamic Fuzzy Machine Learning Model and Its Applications. Master’s thesis, Soochow University, Suzhou, China, 2007.

[9]Li FZ. Research on a kind of coordination machine learning model based on the DFS. Computer Engineering, 2001, 27(3): 106–110.

[10]Li FZ. Research on stability of coordinating machine learning. Journal of Chinese Mini-Micro Computer Systems, 2002, 23(3): 314–317.

[11]Zhang J, Li FZ. Machine learning model based on dynamic fuzzy sets (DFS) and its validation. Journal of Computational Information Systems, 2005, 1(4): 871–877.

[12]Roweis ST, Lawrence KS. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000, 290(5500): 2323–2326.

[13]Liu Z. Research on dimension reduction in high dimensional data analysis. Master’s thesis, National University of Defense Technology, Changsha, China, 2002.

[14]Tan L, Wu X, Yi DY. Robust locally linear embedding. Journal of National University of Defense Technology, 2004, 26(6): 91–95.

[15]Xiao J, Zhou ZT, Hu DW, Yin JS, Chen S. Self-organized locally linear embedding for nonlinear dimensionality reduction. In Proceedings of ICNC2005, 2005, 3610: 101–109.

[16]Chen S, Zhou ZT, Hu DW. Diffusion and growing self-organizing map: A nitric oxide based neural model. In Proceedings of ISNN2004, 2004, 1: 199–204.

[17]Li JX, Zhang Y, Fu X. A robust learning algorithm for noise data. Journal on Numerical Methods and Computer Applications, 2000, 6(2): 112–120.

[18]Khalil H K. Nonlinear systems (Third Edition). Beijing, China, Publishing House of Electronics Industry, 2005.

[19]Yang SZ, Wu Y. Time series analysis in engineering application. Wuhan, China, Huazhong University of Science & Technology Press, 1994.

[20]Martin R, Heinrich B. A direct adaptive method for faster back propagation learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, 1993, 586–591.

[21]Webb JM, Liu JS, Lawrence CE. BALSA: Bayesian algorithm for local sequence alignment. Nucleic Acids Research, 2002, 30(6): 1418–1426.

[22]Zu JK, Zhao CS, Dai GZ. An adaptive parameters learning algorithm for fuzzy logic systems. Journal of System Simulation, 2004, 16(5): 1108–1110.

[23]Van Gorp J, Schoukens J, Pintelon R. Learning neural networks with noisy inputs using the errors in variables approach. IEEE Transactions on Neural Networks, 2000, 11(2): 402–413.

[24]Yager RR, Filev DP. Approximate clustering via the mountain method. IEEE Transactions on Systems, Man and Cybernetics, 1994, 15(8): 1274–1284.

[25]Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 1977, 39: 1–38.

[26]Moon TK. The expectation maximization algorithm. IEEE Signal Processing Magazine, 1996, 13(6): 47–60.

[27]Matsuyama Y. The α-EM algorithm and its applications. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '00), 2000, 1: 592–595.

[28]Fang YB. α-EM algorithm and its some applications. Master’s thesis, Shanghai Jiao Tong University, Shanghai, China, 2004.

[29]Matsuyama Y. The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures. IEEE Transactions on Information Theory, 2003, 49(3): 692–706.

[30]Zangwill W I, Mond B. Nonlinear programming: A unified approach. NJ, USA, Prentice Hall, 1969.

[31]Luenberger DG. Introduction to linear and nonlinear programming. 2nd ed. MA, USA, Addison Wesley Publishing Company, 1984.

[32]Zhang WZ, Wang LB, Wang XR. Maximum likelihood estimation of nonlinear reproductive divergence data model parameters new exploration of theory. Statistics and Decision Making, 2004, 7: 6–7.

[33]Tanaka K, Sugeno M. Stability analysis and design of fuzzy control systems. Fuzzy Sets and Systems, 1992, 45: 135–156.

[34]Xie L, Souza CE. Criteria for robust stabilization of uncertain linear systems with time-varying state delays. In Proceedings of the IFAC 13th Triennial World Congress, USA, 1996, 137–142.

[35]Wu M, Gui WH. Advanced robust control. Changsha, China, Central South University Press, 1998.

[36]Cai ZX, Xiao XM. Robust stability analysis and controller design for a class of dynamic fuzzy system. Journal of Central South University, 1999, 30(4): 418–421.

[37]He XQ. Stability analysis and application of a class of multivariable fuzzy systems. PhD Thesis, Northeastern University, Shenyang, China, 2000.

[38]Lavrac N, Dzeroski S. Inductive logic programming. New York, USA, Horwood Press, 1994.

[39]Dzeroski S. Multi-relational data mining: An introduction. ACM SIGKDD Explorations Newsletter, 2003, 5(1): 1–16.

[40]Popescul A, Ungar L H. Statistical relational learning for link prediction. Working Notes of the IJCAI2003 Workshop on Learning Statistical Models from Relational Data, 2003, 109–115.

[41]Popescul A. Statistical learning from relational databases. PhD Thesis, University of Pennsylvania, USA, 2004.

[42]Ng R, Subrahmanian VS. Probabilistic logic programming. Information and Computation, 1992, 101(2): 150–201.

[43]Dey D, Sarkar S. A probabilistic relational model and algebra. ACM Transaction on Database System, 1996, 21(3): 339–369.

[44]Kersting K, De Raedt L. Bayesian logic programs. In Proceedings of the Work in Progress Track at the 10th International Conference on Inductive Logic Programming, 2000.

[45]Taskar B, Abbeel P, Koller D. Discriminative probabilistic models for relational data. In Proceedings of Conference on Uncertainty in Artificial Intelligence, Edmonton, 2002.

[46]Richardson M, Domingos P. Markov logic networks. Machine Learning, 2006, 62(1): 107–136.

[47]Anderson C, et al. Relational Markov models and their application to adaptive web navigation. In Proceedings of the Eighth International Conference on Knowledge Discovery and Data Mining, 2002, 143–152.

[48]Kersting K, et al. Towards discovering structural signatures of protein folds based on logical hidden Markov models. In Proceedings of the 8th Pacific Symposium on Biocomputing, 2003, 192–203.

[49]Muggleton S. Stochastic logic programs. Advances in Inductive Logic Programming, 1996, 254–264.

[50]Sato T, Kameya Y. PRISM: A symbolic-statistical modeling language. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, 1997, 1330–1339.

[51]Prade H, Richard G, Serrurier M. Enriching relational learning with fuzzy predicates, In Proceedings of PKDD2003, LNAI 2838, 2003, 399–410.

[52]Wang ST. Learning algorithm AL of fuzzy inference rules based on fuzzy relation. Computer Engineering and Applications, 1995, 4: 57–59.

[53]Rivest RL. Game tree searching by min/max approximation. Artificial Intelligence, 1987, 34(1): 1–2.

[54]Muggleton S, Buntine W. Machine Invention of First-order Predicates by Inverting Resolution. Machine Learning Proceedings, 1988, 339–352.

[55]Quinlan JR. Learning logical definitions from relations. Machine Learning, 1990, 5(3): 239–266.

[56]Muggleton S, Feng C. Efficient induction of logic programs. New Generation Computing, 1990, 38.

[57]Lavrac N, Dzeroski S, Grobelnik M. Learning nonrecursive definitions of relations with LINUS. In Proceedings of the European Working Session on Learning, Springer, Berlin Heidelberg, 1991, 265–281.

[58]Muggleton S, Mizoguchi F, Furukawa K. Special issue on inductive logic programming. New Generation Computing, 1995, 13(3-4): 243–244.

[59]Srinivasan A. A study of two probabilistic methods for searching large spaces with ILP. Technical Report PRG-TR-16-00, 2000, Oxford University Computing Laboratory, Oxford.
