Caveats

Of course, predictions are not always accurate, and some have written about the pitfalls of data science. What do you think about the relationship between the attributes titled Predictor and Outcome on the following plot? It seems like there is a relationship between the two. For the statistically inclined, we tested its significance: r = 0.4195, p = .0024. The value p is the probability of obtaining a relationship at least this strong if there is actually no relationship between the attributes. We could conclude that the relationship between these variables in the population they come from is quite reliable, right?

Figure: The relationship between the attributes titled Predictor and Outcome

Believe it or not, the population these observations come from is that of randomly generated numbers. We generated a data frame of 50 columns of 50 randomly generated numbers each. We then examined all the correlations manually and generated a scatterplot of the two attributes with the largest correlation we found (columns 2 and 16). The code is provided here in case you want to check it yourself: line 1 sets the seed so that you obtain the same results as we did, line 2 creates the data frame, line 3 fills it with random numbers, column by column, line 4 generates the scatterplot, line 5 fits the regression line and draws it on the plot, and line 6 tests the significance of the correlation:

1  set.seed(1)
2  DF = data.frame(matrix(nrow=50,ncol=50))
3  for (i in 1:50) DF[,i] = runif(50)
4  plot(DF[[2]],DF[[16]], xlab = "Predictor", ylab = "Outcome")
5  abline(lm(DF[[16]]~DF[[2]]))
6  cor.test(DF[[2]], DF[[16]])

How could this relationship happen when the odds of observing it by chance were 2.4 in 1,000? Well, think about it: we correlated all 50 attributes pairwise, which amounts to 1,225 distinct tests (50 × 49 / 2, not considering the correlation of each attribute with itself). Such a spurious correlation was therefore to be expected. The usual threshold below which we consider a relationship significant is p = 0.05, as we will discuss in Chapter 8, Probability Distributions, Covariance, and Correlation. This means that we accept being wrong once in 20 times. You would be right to suspect that there are other significant correlations in the generated data frame (there should be approximately 61 of them in total, that is, 5% of 1,225). This is the reason why we should always correct for the number of tests performed. In our example, as we performed 1,225 tests, our threshold for significance should be 0.0000408 (0.05 / 1,225). This is called the Bonferroni correction.
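If you want to check this for yourself, here is a minimal sketch (our addition, not part of the numbered listing above) that loops over all distinct pairs of columns in DF, collects the p-values of the correlation tests, and counts how many fall below the nominal and the Bonferroni-corrected thresholds:

pvals = c()
for (i in 1:49) {
  for (j in (i + 1):50) {
    # p-value of the correlation test between columns i and j
    pvals = c(pvals, cor.test(DF[[i]], DF[[j]])$p.value)
  }
}
length(pvals)                      # 1225 distinct tests
sum(pvals < 0.05)                  # nominally significant correlations
sum(pvals < 0.05 / length(pvals))  # remaining after Bonferroni correction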

Spurious correlations are always a possibility in data analysis, and this should be kept in mind at all times. A related concept is that of overfitting. Overfitting happens, for instance, when a weak classifier bases its predictions on noise in the data. We will return to overfitting later in the book, particularly when discussing cross-validation in Chapter 14, Cross-validation and Bootstrapping Using Caret and Exporting Predictive Models Using PMML. All the chapters are listed in the following section.
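To see overfitting in miniature, consider the following sketch (again our addition, not the book's listing): we fit a tenth-degree polynomial to 20 points of pure noise, and the in-sample R-squared is typically substantial even though there is nothing to predict:

set.seed(2)
x = runif(20)
y = runif(20)  # y has no true relationship with x
# a flexible model can still "explain" much of the noise in-sample
summary(lm(y ~ poly(x, 10)))$r.squared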

We hope you enjoy reading the book and learn a lot from it!
