• Field overloading 1
  • – data analysis 1
  • forward layer 1
  • forward networks 1
  • FracLab 1
  • Fractal 1, 2, 3, 4, 5, 6, 7, 8
  • – multifractal spectrum 1
  • – self-similarity 1, 2
  • Fractal Dimension 1, 2, 3, 4, 5, 6, 7
  • free path 1
  • frequency curve 1
  • F-Test for Regression 1
  • fully connected 1
  • gain ratio 1, 2, 3, 4, 5
  • – C4.5 1
  • – maximum 1
  • generalization 1, 2, 3, 4, 5, 6, 7
  • – presentation 1, 2
  • Gini index 1, 2, 3, 4, 5, 6
  • – CART 1, 2
  • – decision tree induction using 1
  • – minimum 1, 2
  • – partitioning 1, 2
  • graphic displays 1
  • – histogram 1, 2
  • – quantile plot 1, 2
  • graphical user interface (GUI) 1
  • greedy methods, attribute subset selection 1
  • harmonic mean 1
  • Hausdorff dimension 1
  • hidden units 1
  • histograms 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22
  • – analysis by discretization 1
  • – binning 1
  • – construction 1
  • – equal-width 1
  • – outlier analysis 1
  • Hölder continuity 1
  • Hölder exponent 1, 2
  • Hölder regularity 1, 2, 3, 4, 5, 6, 7, 8, 9
  • holdout method 1, 2, 3
  • ID3 1, 2, 3, 4, 5. See also decision tree induction
  • – accuracy 1, 2, 3
  • – IF-THEN rules 1, 2, 3, 4
  • – information gain 1, 2
  • – satisfied 1
  • illustrated 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22
  • Image on Coordinate Systems 1
  • imbalance problem 1
  • incremental learning 1
  • inferential 1
  • information 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29
  • information gain 1, 2, 3, 4, 5, 6
  • – decision tree induction using 1
  • – split-point 1
  • instance-based learners 1. See also lazy learners
  • instances 1, 2, 3
  • integrable 1
  • integral 1, 2
  • internal node 1, 2
  • interpolation 1
  • interpretability 1, 2, 3
  • – backpropagation 1
  • – classification 1, 2
  • – data quality 1
  • interquartile range (IQR) 1, 2, 3, 4
  • IQR. See Interquartile range
  • iterate/iterated 1, 2
  • joint event 1
  • Karush-Kuhn-Tucker (KKT) conditions 1
  • kernel function 1, 2
  • Keywords 1
  • k-Nearest Neighbor 1, 2, 3
  • K-nearest Neighbor Algorithm 1, 2
  • – Compute distance 1
  • – editing method 1
  • k-nearest-neighbor classification 1, 2, 3, 4, 5, 6, 7, 8
  • – distance-based comparisons 1
  • – number of neighbors 1, 2, 3, 4, 5
  • – partial distance method 1
  • Koch snowflakes 1
  • lazy learners 1
  • – k-nearest-neighbor classifiers 1
  • learning 1, 2, 3, 4, 5, 6, 7, 8
  • – backpropagation 1, 2, 3, 4, 5, 6, 7
  • – supervised 1
  • – unsupervised 1
  • Left side value 1
  • Likelihood 1
  • linear regression 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
  • – multiple regression 1
  • linearly 1, 2, 3, 4, 5, 6
  • logistic function 1
  • log-linear models 1
  • machine learning 1, 2, 3, 4, 5
  • – supervised 1
  • – unsupervised 1
  • machine learning efficiency 1
  • – data mining 1
  • Magnetic Resonance Image (MRI) 1
  • majority voting 1
  • margin 1, 2, 3, 4, 5
  • maximum marginal hyperplane (MMH) 1, 2, 3, 4, 5, 6
  • – SVM finds 1
  • mean 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34
  • – weighted arithmetic 1
  • mean() 1, 2, 3, 4
  • measures 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
  • – accuracy-based 1
  • – attribute selection 1
  • – categories 1
  • – dispersion 1
  • – dispersion of data 1
  • – of central tendency 1, 2, 3
  • – precision 1, 2, 3
  • – sensitivity 1, 2
  • – significance 1, 2
  • – similarity 1
  • – dissimilarity 1
  • – specificity 1, 2
  • measures, classification methods
  • – precision measure 1
  • median 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
  • Median() 1, 2, 3, 4, 5
  • metadata 1, 2
  • – importance 1
  • – operational 1
  • metrics 1, 2, 3
  • midrange 1, 2, 3
  • min-max normalization 1
  • missing values 1, 2, 3, 4
  • mode 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
  • – example 1, 2
  • model selection 1
  • models 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
  • Multiple linear regression model 1, 2, 3, 4, 5, 6, 7, 8
  • – straight line 1, 2, 3
  • multifractional Brownian motion (mfBm) 1
  • multimodal 1
  • multiple linear regression 1
  • Naïve Bayes 1, 2
  • naive Bayesian classification 1, 2
  • – class labels 1
  • nearest-neighbor algorithm 1, 2, 3, 4
  • negative correlation 1, 2, 3
  • negatively skewed data 1
  • neighborhoods 1, 2
  • neural networks 1, 2
  • – backpropagation 1, 2, 3, 4, 5, 6
  • – disadvantages 1
  • – for classification 1, 2, 3
  • – fully connected 1
  • – learning 1
  • – multilayer feed-forward 1, 2, 3, 4
  • – rule extraction 1
  • New York Stock Exchange (NYSE) 1, 2, 3, 4, 5
  • non-differentiable 1
  • noisy data 1, 2, 3, 4, 5
  • nominal attributes 1, 2, 3. See also attributes
  • – correlation analysis 1
  • – dissimilarity between 1
  • nonlinear SVMs 1, 2, 3
  • non-random variables 1
  • normalization 1, 2, 3, 4, 5, 6, 7
  • – by decimal scaling 1
  • – data transformation 1, 2
  • – min-max normalization 1, 2
  • – z-score 1, 2
  • null rules 1
  • numeric attributes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
  • – covariance analysis 1, 2, 3, 4
  • – interval-scaled 1
  • – ratio-scaled 1
  • numeric data 1, 2, 3
  • numeric prediction 1, 2, 3, 4
  • – classification 1, 2, 3
  • – support vector machines (SVMs) for 1
  • numerosity reduction 1
  • operators 1
  • ordering 1, 2, 3, 4
  • ordinal attributes 1
  • outlier analysis 1
  • partial distance method 1
  • particle cluster 1, 2, 3, 4
  • partitioning 1, 2, 3, 4, 5, 6
  • – algorithms 1, 2, 3, 4, 5
  • – cross-validation 1, 2
  • – Gini index and 1, 2, 3
  • – holdout method 1
  • – random sample 1
  • – recursive 1
  • patterns 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18
  • – search space 1
  • Peano curve 1
  • percentiles 1, 2
  • Pixel Coordinates 1
  • Polynomial Hölder Function 1, 2, 3, 4, 5
  • posterior probability 1, 2, 3, 4, 5, 6
  • predictive 1, 2
  • predictor model 1
  • predictors 1, 2, 3, 4
  • pre-processing 1, 2, 3, 4
  • probability 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23
  • – estimation of 1, 2
  • – posterior 1, 2, 3, 4, 5, 6, 7
  • – prior 1, 2
  • probability theory and statistics 1
  • processing 1, 2, 3, 4, 5, 6
  • pruning (or condensing) 1, 2
  • – decision trees
  • – k-nearest-neighbor classifiers 1, 2
  • – n-dimensional pattern 1
  • – pessimistic 1
  • q-q plot 1, 2
  • – example 1, 2
  • qualitative 1, 2, 3
  • quantile plots 1, 2, 3, 4, 5
  • quantile-quantile plots 1, 2, 3, 4. See also graphic displays
  • – example 1, 2
  • – illustrated 1
  • quantitative 1, 2, 3, 4, 5
  • quartiles 1, 2
  • – first quartile 1
  • – third quartile 1, 2
  • Radius of cluster 1, 2, 3, 4, 5, 6, 7
  • random forest 1
  • random sample 1
  • random variable 1, 2, 3, 4, 5
  • random walk 1, 2, 3
  • range 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19. See also quartiles
  • Range() 1, 2, 3, 4
  • Ranking (meaningful order) 1
  • – dimensions 1
  • Ratio-scaled attribute 1
  • Real Data Set 1
  • Recall 1, 2, 3, 4
  • Recall, classification methods 1
  • – recall 1
  • recognition rate 1
  • recursion 1
  • recursive 1, 2, 3, 4, 5
  • redundancy 1, 2, 3, 4
  • – in data integration 1
  • regression analysis 1, 2, 3, 4, 5, 6
  • relational database management systems (RDBMS) 1, 2
  • relational databases 1
  • repetition 1, 2, 3
  • residual sum of squares (RSS) 1, 2
  • resubstitution error 1
  • RGB, application of 1, 2, 3
  • – examples 1, 2
  • Right side value 1, 2
  • robustness, classification methods 1
  • – robustness 1
  • Root Node Initial Value 1, 2
  • rule extraction 1
  • rule-based classification 1, 2
  • – IF-THEN rules 1, 2, 3, 4
  • – rule extraction 1
  • samples 1
  • – cluster 1
  • scalability 1
  • – classification 1
  • – decision tree induction using 1
  • scaling 1, 2, 3
  • Scatter plots 1, 2, 3, 4, 5, 6, 7
  • – correlations between attributes 1, 2
  • self-similarity 1, 2, 3
  • self-similarity dimension 1
  • sensitivity 1, 2, 3, 4
  • sequences 1, 2, 3, 4, 5, 6, 7
  • Sierpinski 1, 2, 3
  • Sigmoid/logistic function 1
  • significant singularities 1
  • similarity 1, 2, 3, 4, 5, 6, 7
  • – asymmetric (skewed) data 1
  • – measuring 1
  • – specificity 1
  • singled out singularities 1, 2, 3, 4, 5, 6, 7
  • skewed data 1, 2
  • – negatively 1
  • – positively 1
  • smoothing 1, 2
  • – binning methods 1
  • – by bin boundaries 1
  • – by bin means 1
  • – by bin medians 1
  • – for data discretization 1
  • snowflake 1, 2
  • – example 1
  • spatial coordinates 1
  • spectrum 1, 2, 3