References

  1. Bell, D. and Wang, H. (2000). A Formalism for Relevance and its Application in Feature Subset Selection. Machine Learning, 41(2):175–195.
  2. Doak, J. (1992). An Evaluation of Feature Selection Methods and their Application to Computer Security. Technical Report CSE-92-18, University of California, Department of Computer Science, Davis, CA.
  3. Ben-Bassat, M. (1982). Use of Distance Measures, Information Measures and Error Bounds in Feature Evaluation. In P. R. Krishnaiah and L. N. Kanal, editors, Handbook of Statistics, volume 2, pages 773–791. North-Holland.
  4. Littlestone, N. and Warmuth, M. (1994). The Weighted Majority Algorithm. Information and Computation, 108(2):212–261.
  5. Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth International Group.
  6. Ripley, B. (1996). Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge.
  7. Breiman, L. (1996). Bagging Predictors. Machine Learning, 24(2):123–140.
  8. Burges, C. (1998). A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121–167.
  9. Bouckaert, R. (2004). Naive Bayes Classifiers That Perform Well with Continuous Variables. Lecture Notes in Computer Science, volume 3339, pages 1089–1094.
  10. Aha, D. (1997). Lazy Learning. Kluwer Academic Publishers, Dordrecht.
  11. Nadeau, C. and Bengio, Y. (2003). Inference for the Generalization Error. Machine Learning, 52:239–281.
  12. Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco.
  13. Vapnik, V. (1995). The Nature of Statistical Learning Theory. Springer-Verlag.
  14. Schapire, R. E., Singer, Y., and Singhal, A. (1998). Boosting and Rocchio Applied to Text Filtering. In SIGIR '98: Proceedings of the 21st Annual International Conference on Research and Development in Information Retrieval, pages 215–223.
  15. Breiman, L. (2001). Random Forests. Machine Learning, 45(1):5–32.
  16. Japkowicz, N. and Shah, M. (2011). Evaluating Learning Algorithms: A Classification Perspective. Cambridge University Press.
  17. Hanley, J. and McNeil, B. (1982). The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve. Radiology, 143:29–36.
  18. Lim, T.-S., Loh, W.-Y., and Shih, Y.-S. (2000). A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-Three Old and New Classification Algorithms. Machine Learning, 40:203–228.
  19. Moore, A. W. and Lee, M. S. (1994). Efficient Algorithms for Minimizing Cross Validation Error. In Proceedings of the 11th International Conference on Machine Learning, pages 190–198, New Brunswick, NJ. Morgan Kaufmann.
  20. Chawla, N. V. et al. (2002). SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 16:321–357.