$$H_{11} = y_1y_1\left(1 + X_1^TX_1\right)^2 = (1)(1)\left(1 + \begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}\right)^2 = 9$$

$$H_{12} = y_1y_2\left(1 + X_1^TX_2\right)^2 = (1)(1)\left(1 + \begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}-1\\-1\end{bmatrix}\right)^2 = 1$$

If all the $H_{ij}$ values are calculated in this way, the following matrix is obtained:

$$H = \begin{bmatrix}9 & 1 & -1 & -1\\1 & 9 & -1 & -1\\-1 & -1 & 9 & 1\\-1 & -1 & 1 & 9\end{bmatrix}$$

Step (8–15) In order to obtain the $a$ values at this stage, it is essential to solve the system $1 - Ha = 0$:

$$\begin{bmatrix}1\\1\\1\\1\end{bmatrix} - \begin{bmatrix}9 & 1 & -1 & -1\\1 & 9 & -1 & -1\\-1 & -1 & 9 & 1\\-1 & -1 & 1 & 9\end{bmatrix} a = 0$$

As a result of the solution, we obtain $a_1 = a_2 = a_3 = a_4 = 0.125$. Such being the case, all the samples are accepted as support vectors. These results fulfill the condition presented in eq. (8.8), namely $\sum_{i=1}^{4} a_iy_i = a_1 + a_2 - a_3 - a_4 = 0$.
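As a quick numerical check, the system $Ha = 1$ can be solved directly; a minimal NumPy sketch, with $H$ as reconstructed above:

```python
import numpy as np

# Kernel matrix H with H_ij = y_i * y_j * (1 + x_i^T x_j)^2
H = np.array([[ 9,  1, -1, -1],
              [ 1,  9, -1, -1],
              [-1, -1,  9,  1],
              [-1, -1,  1,  9]], dtype=float)

# Solve 1 - Ha = 0, i.e., Ha = 1
a = np.linalg.solve(H, np.ones(4))
print(a)  # [0.125 0.125 0.125 0.125]

# Check the constraint sum_i a_i * y_i = 0
y = np.array([1, 1, -1, -1])
print(np.dot(a, y))  # 0.0
```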

In order to find the weight vector $w$, we calculate $w = \sum_{i=1}^{4} a_iy_i\,\phi(x_i)$ in this case.

$$w = (0.125)\left\{(1)\begin{bmatrix}1\\\sqrt{2}\\\sqrt{2}\\\sqrt{2}\\1\\1\end{bmatrix} + (1)\begin{bmatrix}1\\-\sqrt{2}\\-\sqrt{2}\\\sqrt{2}\\1\\1\end{bmatrix} + (-1)\begin{bmatrix}1\\\sqrt{2}\\-\sqrt{2}\\-\sqrt{2}\\1\\1\end{bmatrix} + (-1)\begin{bmatrix}1\\-\sqrt{2}\\\sqrt{2}\\-\sqrt{2}\\1\\1\end{bmatrix}\right\} = (0.125)\begin{bmatrix}0\\0\\0\\4\sqrt{2}\\0\\0\end{bmatrix} = \begin{bmatrix}0\\0\\0\\\frac{\sqrt{2}}{2}\\0\\0\end{bmatrix}$$

In this case, the classifier is obtained as follows:

$$f(x) = w^T\phi(x) = \begin{bmatrix}0 & 0 & 0 & \frac{\sqrt{2}}{2} & 0 & 0\end{bmatrix}\begin{bmatrix}1\\\sqrt{2}x_1\\\sqrt{2}x_2\\\sqrt{2}x_1x_2\\x_1^2\\x_2^2\end{bmatrix} = x_1x_2$$
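The same result can be verified by carrying the four points through the explicit feature map $\phi(x) = (1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_1x_2, x_1^2, x_2^2)^T$; a minimal sketch, assuming the point ordering $x_1 = (1,1)$, $x_2 = (-1,-1)$, $x_3 = (1,-1)$, $x_4 = (-1,1)$ with labels $y = (1, 1, -1, -1)$ consistent with the calculations above:

```python
import numpy as np

def phi(x):
    """Explicit feature map of the degree-2 polynomial kernel (1 + x^T z)^2."""
    x1, x2 = x
    return np.array([1.0, np.sqrt(2)*x1, np.sqrt(2)*x2,
                     np.sqrt(2)*x1*x2, x1**2, x2**2])

X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([1, 1, -1, -1], dtype=float)
a = np.full(4, 0.125)

# w = sum_i a_i * y_i * phi(x_i): only the sqrt(2)*x1*x2 component survives
w = sum(ai * yi * phi(xi) for ai, yi, xi in zip(a, y, X))
print(w)  # [0. 0. 0. 0.70710678 0. 0.], i.e. (0, 0, 0, sqrt(2)/2, 0, 0)

# f(x) = w^T phi(x) = x1 * x2 reproduces the labels on all four points
print([float(w @ phi(xi)) for xi in X])  # [1.0, 1.0, -1.0, -1.0]
```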

The test accuracy rate obtained is 88.9% in the classification procedure of the four-class MS dataset through the polynomial-kernel SVM algorithm.

8.3.3 SVM algorithm for the analysis of mental functions

As presented in Table 2.19, the WAIS-R dataset contains 200 samples belonging to the patient group and 200 samples belonging to the healthy control group. The attributes are data regarding school education, gender and D.M. (see Chapter 2, Table 2.18), making up a total of 21 attributes. For the 400 individuals, it is known from these attributes whether each belongs to the patient or the healthy group. How can we classify which individuals are patients and which are healthy, as diagnosed with the WAIS-R test (based on school education, gender and D.M.)? The D matrix has a dimension of 400 × 21; that is, it contains the WAIS-R dataset of 400 individuals along with their 21 attributes (see Table 2.19). For the classification of the D matrix through SVM, the training procedure is employed as the first step. For the training procedure, 66.66% of the D matrix is split off as the training dataset (267 × 21) and 33.33% as the test dataset (133 × 21).
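A minimal sketch of this split-and-train flow, assuming the WAIS-R data have been loaded into a 400 × 21 feature matrix `X` with a label vector `y` (the variable names and the random stand-in data below are hypothetical, since the dataset is not bundled here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical placeholders: X stands in for the 400 x 21 WAIS-R attribute
# matrix, y for the patient (1) / healthy (0) labels.
rng = np.random.default_rng(0)
X = rng.random((400, 21))
y = rng.integers(0, 2, size=400)

# 66.66% training (267 x 21), 33.33% test (133 x 21), as in the text
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=133, random_state=0)

clf = SVC(kernel='linear').fit(X_train, y_train)
print(X_train.shape, X_test.shape)  # (267, 21) (133, 21)
print(clf.score(X_test, y_test))    # test accuracy
```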

After the training dataset has been trained with the SVM algorithm, we can classify the test dataset (Figure 8.10).

Figure 8.10: Binary (linear) support vector machine algorithm for the analysis of WAIS-R.

In order to perform classification with the linear SVM algorithm, 33.33% of the WAIS-R dataset is randomly chosen as the test data.

The WAIS-R dataset has two classes. Let us analyze Example 8.4 in order to understand how the classifiers are found for the WAIS-R dataset trained by the linear (binary) SVM algorithm.

Example 8.4 For the WAIS-R dataset with patient and healthy classes, where the patient class has the values (0, 0) and (0, 1) and the healthy class has the value (1, 1), let us find a function that separates these two classes from each other in a linear fashion.

The vectors relevant to this can be defined in the following way:

$$x_1 = \begin{bmatrix}0\\0\end{bmatrix}, \quad x_2 = \begin{bmatrix}0\\1\end{bmatrix}, \quad x_3 = \begin{bmatrix}1\\1\end{bmatrix}, \quad y = \begin{bmatrix}1\\1\\-1\end{bmatrix}$$

The Lagrange function has the pattern presented in eq. (8.8). If the given values are written in their place, the Lagrange function is calculated as follows:

Step (1–5)

$$d(X^T) = a_1 + a_2 + a_3 - \frac{1}{2}\left(a_1a_1y_1y_1x_1^Tx_1 + a_1a_2y_1y_2x_1^Tx_2 + a_1a_3y_1y_3x_1^Tx_3 + a_2a_1y_2y_1x_2^Tx_1 + a_2a_2y_2y_2x_2^Tx_2 + a_2a_3y_2y_3x_2^Tx_3 + a_3a_1y_3y_1x_3^Tx_1 + a_3a_2y_3y_2x_3^Tx_2 + a_3a_3y_3y_3x_3^Tx_3\right)$$

Since $x_1 = \begin{bmatrix}0\\0\end{bmatrix}$, all the terms containing $x_1$ vanish, and the expression above can be written as follows:

$$d(X^T) = a_1 + a_2 + a_3 - \frac{1}{2}\left(a_2^2y_2^2x_2^Tx_2 + a_2a_3y_2y_3x_2^Tx_3 + a_3a_2y_3y_2x_3^Tx_2 + a_3^2y_3^2x_3^Tx_3\right)$$

$$d(X^T) = a_1 + a_2 + a_3 - \frac{1}{2}\left(a_2^2(1)^2\begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} + a_2a_3(1)(-1)\begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} + a_3a_2(-1)(1)\begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} + a_3^2(-1)^2\begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}\right)$$

$$d(X^T) = a_1 + a_2 + a_3 - \frac{1}{2}\left(a_2^2 - 2a_2a_3 + 2a_3^2\right)$$

Step (6–11) However, since $\sum_{i=1}^{3} y_ia_i = 0$, from the relation $a_1 + a_2 - a_3 = 0$ we obtain $a_3 = a_1 + a_2$. If this value is written in its place, $d(X^T) = 2a_3 - \frac{1}{2}\left(a_2^2 - 2a_2a_3 + 2a_3^2\right)$ is obtained. In order to find $a_2$ and $a_3$, the derivatives of this function are taken and set equal to zero.

$$\frac{\partial d(X^T)}{\partial a_2} = -a_2 + a_3 = 0, \qquad \frac{\partial d(X^T)}{\partial a_3} = 2 + a_2 - 2a_3 = 0$$
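These two stationarity conditions can also be solved symbolically; a small SymPy sketch using the substituted dual from above:

```python
import sympy as sp

a2, a3 = sp.symbols('a2 a3')

# Dual after substituting a1 = a3 - a2: d = 2*a3 - (a2^2 - 2*a2*a3 + 2*a3^2)/2
d = 2*a3 - sp.Rational(1, 2)*(a2**2 - 2*a2*a3 + 2*a3**2)

sol = sp.solve([sp.diff(d, a2), sp.diff(d, a3)], [a2, a3])
print(sol)  # {a2: 2, a3: 2}, hence a1 = a3 - a2 = 0
```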

If these two last equations are solved, $a_2 = 2$, $a_3 = 2$ is obtained, and $a_1 = a_3 - a_2 = 0$ as well. This means it is possible to write $a = \begin{bmatrix}0\\2\\2\end{bmatrix}$. Now we can find the $w$ and $b$ values.

$$w = a_2y_2x_2 + a_3y_3x_3 = 2(1)\begin{bmatrix}0\\1\end{bmatrix} + 2(-1)\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0\\2\end{bmatrix} - \begin{bmatrix}2\\2\end{bmatrix} = \begin{bmatrix}-2\\0\end{bmatrix}$$

$$b = \frac{1}{2}\left(\frac{1}{y_2} - x_2^T\begin{bmatrix}-2\\0\end{bmatrix} + \frac{1}{y_3} - x_3^T\begin{bmatrix}-2\\0\end{bmatrix}\right) = \frac{1}{2}\left(\frac{1}{1} - \begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}-2\\0\end{bmatrix} + \frac{1}{-1} - \begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}-2\\0\end{bmatrix}\right) = 1$$

Step (12–15) As a result, the classification of new observation values $\{x_i\}$ will be as follows:

$$f(x) = \operatorname{sgn}(w \cdot x + b) = \operatorname{sgn}\left(\begin{bmatrix}-2 & 0\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix} + 1\right) = \operatorname{sgn}(-2x_1 + 1)$$

In this case, if we want to classify the new observation value $x_4 = \begin{bmatrix}0\\4\end{bmatrix}$, we see that $\operatorname{sgn}(-2x_1 + 1) = \operatorname{sgn}(-2(0) + 1) > 0$. Therefore, the observation in question is in the positive zone (patient). For $x_1 > 1/2$, $\operatorname{sgn}(-2x_1 + 1) < 0$, so all such observations fall in the negative zone, namely the healthy area.
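As a cross-check, the hand-derived hyperplane can be recovered with a hard-margin linear SVM; a minimal scikit-learn sketch (the large C value approximates the hard-margin setting of the worked example):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 1]], dtype=float)
y = np.array([1, 1, -1])

# Large C approximates the hard-margin SVM of the worked example
clf = SVC(kernel='linear', C=1e6).fit(X, y)
print(clf.coef_, clf.intercept_)  # approx. [[-2. 0.]] [1.]

# New observation: sgn(-2*0 + 1) > 0 -> positive (patient) zone
print(clf.predict([[0, 4]]))      # [1]
```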

In this way, the two-class WAIS-R dataset is shown to be linearly separable. The classification procedure of the WAIS-R dataset through the polynomial SVM algorithm yielded a test accuracy rate of 98.8%.

We can apply the SVM kernel functions to our datasets (MS dataset, Economy (U.N.I.S.) dataset and WAIS-R dataset) accordingly. The classification accuracy rates obtained can be seen in Table 8.1.

Table 8.1: The classification accuracy rates of SVM kernel functions.

As presented in Table 8.1, linearly separable datasets such as the WAIS-R data are classified more accurately by the SVM kernel functions than multi-class datasets such as the MS and Economy (U.N.I.S.) datasets.
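A kernel comparison of this kind can be reproduced along the following lines; a sketch using a synthetic placeholder dataset, since the MS, Economy (U.N.I.S.) and WAIS-R data are not bundled with this text:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder for one of the datasets (e.g., a WAIS-R-like 400 x 21 matrix)
X, y = make_classification(n_samples=400, n_features=21, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=133, random_state=0)

# Train one SVM per kernel and report its test accuracy
for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    acc = SVC(kernel=kernel).fit(X_tr, y_tr).score(X_te, y_te)
    print(f'{kernel:8s} test accuracy: {acc:.3f}')
```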

