Model estimation

With the feature sets finalized in the last section, the next step is to estimate the parameters of the selected models. For this, we can use either MLlib or R, and we need to arrange for distributed computing.

To simplify, we can utilize Databricks' Job feature. Specifically, within the Databricks environment, we can go to Jobs and then create jobs, as shown in the following image:

[Screenshot: creating a job from the Jobs page in the Databricks environment]

Then, users can select notebooks to run, specify clusters, and schedule jobs. Once the jobs are scheduled, users can also monitor the runs and then collect the results.

In the Methods for a holistic view section, we prepared code for each of the three selected models. Now, we need to modify that code with the final set of features selected in the last section so as to create our final notebooks.

In other words, we have the dependent variables prepared and 17 features selected from our PCA and feature selection work. Therefore, we need to insert all of them into the code developed in section II to finalize our notebooks. Then we will use the Databricks Job feature to run these notebooks in a distributed way.

MLlib implementation

First, we need to prepare our data with the s1 dependent variable for linear regression and the s2 dependent variable for logistic regression and the decision tree. Then, we need to add the 17 selected features to form the datasets ready for our use.
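
As a minimal sketch, this preparation could be done with MLlib's RDD-based API as follows. The variable names (rows, parsed) and the column layout assumed here are illustrative, not the exact code of the original notebooks:

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// "rows" stands for the prepared dataset loaded as an RDD[String] of comma-separated
// records (for example, via sc.textFile); the layout assumed here is: s1, s2, then the
// 17 selected features, all numeric.
val parsed = rows.map { line =>
  val fields = line.split(',').map(_.toDouble)
  (fields(0), fields(1), Vectors.dense(fields.drop(2)))
}

// Linear regression notebook: label each record with s1
val trainingData = parsed.map { case (s1, _, features) => LabeledPoint(s1, features) }.cache()

// Logistic regression and decision tree notebooks: label with the binary s2 instead
// val trainingData = parsed.map { case (_, s2, features) => LabeledPoint(s2, features) }.cache()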

For linear regression, we will use the following code:

import org.apache.spark.mllib.regression.LinearRegressionWithSGD

val numIterations = 90
val model = LinearRegressionWithSGD.train(trainingData, numIterations)
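
As a quick sanity check in the same notebook, the fitted model can be applied back to the training features. The following is only a minimal sketch, assuming the trainingData prepared above, that pairs predictions with the observed s1 values:

// pair each predicted value with the observed s1 label
val predictionsAndLabels = trainingData.map { point =>
  (model.predict(point.features), point.label)
}
predictionsAndLabels.take(5).foreach(println)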

For logistic regression, we will use the following code:

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD

val numIterations = 100  // illustrative iteration count
val model = LogisticRegressionWithSGD.train(trainingData, numIterations)  // binary (two-class) logistic regression

For decision tree, we will use the following code:

import org.apache.spark.mllib.tree.DecisionTree

val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo,
  impurity, maxDepth, maxBins)
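
This call assumes that the tree's hyperparameters have already been defined in the notebook. The following values are only an illustrative setting, not tuned choices from this study:

val numClasses = 2                            // s2 is a binary outcome
val categoricalFeaturesInfo = Map[Int, Int]() // treat all 17 features as continuous
val impurity = "gini"                         // impurity measure for classification
val maxDepth = 5                              // maximum depth of the tree
val maxBins = 32                              // maximum number of bins used when splitting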

The R notebooks' implementation

For better comparison, it is a good idea to write linear regression and SEM into an R notebook, and also to include logistic regression and the decision tree in the same notebook.

Then, the main task left is to schedule the estimation on each worker and then collect the results, using the Job feature of the Databricks environment mentioned earlier.

  • For linear regression and SEM, execute the following code:
    # linear regression of s1 on the 17 selected features
    lm.est1 <- lm(s1 ~ T1 + T2 + M1 + M2 + M3 + Tr1 + Tr2 + Tr3 + S1 + S2 + P1 + P2 + P3 + P4 + Pr1 + Pr2 + Pr3)
    # SEM specification with the sem package; each line read by specifyModel()
    # gives a path (arrow) and the name of its free parameter
    library(sem)
    mod.no1 <- specifyModel()
    s1 <- x1, gam31
    s1 <- x2, gam32
  • For logistic regression and decision tree, run the following script:
    # logistic regression of the binary s2 on the 17 selected features
    logit.est1 <- glm(s2 ~ T1 + T2 + M1 + M2 + M3 + Tr1 + Tr2 + Tr3 + S1 + S2 + P1 + P2 + P3 + P4 + Pr1 + Pr2 + Pr3, family = binomial())

    # classification tree for s2 with the rpart package
    library(rpart)
    dt.est1 <- rpart(s2 ~ T1 + T2 + M1 + M2 + M3 + Tr1 + Tr2 + Tr3 + S1 + S2 + P1 + P2 + P3 + P4 + Pr1 + Pr2 + Pr3, method = "class")

After all the models are estimated for each product, for simplicity, we will focus on one product to complete our discussion of model evaluation and deployment.
