There's more...

There are many other popular approaches to evaluating feature importance. We list some of them here, together with short usage sketches after the list:

  • treeinterpreter—The idea is to use the underlying trees in a Random Forest to explain how each feature contributes to the final prediction. This is an observation-level technique: the explanations are given for selected rows, which in the previous case would correspond to a specific customer of the bank.
  • Local Interpretable Model-agnostic Explanations (LIME)—Another observation-level technique, used for explaining the predictions of any model in an interpretable and faithful manner. To obtain the explanations, LIME locally approximates the selected model with an interpretable one (such as linear models with regularization or decision trees). The interpretable models are trained on small perturbations (with additional noise) of the original observation.
  • Partial dependence plots (PDP)—These plots show the marginal effect of a selected feature on the model's predictions, isolating the changes in the predictions that come from that feature alone while averaging out the remaining features.
  • SHapley Additive exPlanations (SHAP)—A framework for explaining predictions of any machine learning model (in other words, it is model-agnostic) using a combination of game theory and local explanations.

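With treeinterpreter, the prediction for a given observation is decomposed into a bias term (the training set mean) plus one contribution per feature. A minimal sketch, assuming a fitted Random Forest classifier `rf` and a test DataFrame `X_test` (both names are placeholders for the objects created in the recipe):

```python
from treeinterpreter import treeinterpreter as ti

# Decompose the prediction for the first test observation into
# bias + per-feature contributions.
pred, bias, contributions = ti.predict(rf, X_test.iloc[[0]].values)

# For a classifier, the contributions have one column per class;
# here we inspect the positive class (index 1).
for feature, contribution in zip(X_test.columns, contributions[0][:, 1]):
    print(f"{feature}: {contribution:.4f}")
```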
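For LIME, the explainer is built on the training data and then asked to explain a single observation; the result is a list of (feature, weight) pairs coming from the local surrogate model. A sketch under the same assumptions (`rf`, `X_train`, `X_test` are placeholder names; the class labels are made up):

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["no_default", "default"],  # hypothetical labels
    mode="classification",
)

# Explain a single observation with the locally fitted surrogate model.
explanation = explainer.explain_instance(
    X_test.iloc[0].values, rf.predict_proba, num_features=5
)
print(explanation.as_list())
```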
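Partial dependence plots are available directly in scikit-learn (version 1.0+ exposes `PartialDependenceDisplay`; earlier versions use `plot_partial_dependence` instead). A sketch assuming the same fitted `rf` and two hypothetical feature names:

```python
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# Average prediction as a function of each selected feature,
# marginalizing over the remaining features.
PartialDependenceDisplay.from_estimator(
    rf, X_train, features=["age", "limit_bal"]
)
plt.show()
```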
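For SHAP, tree ensembles can use the fast `TreeExplainer`, while other models can fall back to the model-agnostic `KernelExplainer`. A sketch assuming `rf` and `X_test` as before, and a shap version in which `shap_values` for a classifier is a list with one array per class (newer releases return a single 3-d array instead):

```python
import shap

explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)

# Global summary: which features drive the predictions for the
# positive class, and in which direction.
shap.summary_plot(shap_values[1], X_test)
```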
For more information on the presented model explanation methods, please see the additional resources in the See also section.
