Presenting strategies for explainability

In machine learning, there are fundamentally two strategies for providing explainability for algorithms:

  • A global explainability strategy: This explains the formulation and behavior of a model as a whole.
  • A local explainability strategy: This provides the rationale for one or more individual predictions made by our trained model (a minimal sketch follows this list).
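
As a rough illustration of a local strategy, the following sketch uses the SHAP library to attribute a single prediction to its individual input features. The random forest model and synthetic data here are illustrative assumptions, not part of the original text; substitute your own trained model and dataset.

# A minimal sketch of a local explainability strategy using SHAP.
# The model and data are illustrative assumptions; substitute your own.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: the contribution of each feature
# to one individual prediction, relative to the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain the first sample only
print("Per-feature contributions for one prediction:", shap_values)

The output attributes that one prediction to each input feature, which is exactly the per-prediction rationale a local strategy promises.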

For global explainability, we have techniques such as Testing with Concept Activation Vectors (TCAV), which provides explainability for image classification models. TCAV calculates directional derivatives to quantify how strongly a user-defined concept relates to a classification. For example, it can quantify how sensitive the prediction of classifying a person as male is to the presence of facial hair in the picture. There are other global explainability strategies, such as partial dependence plots and permutation importance, which can help to explain the formulations in our trained model; both are sketched in the code below.

Both global and local explainability strategies can be either model-specific or model-agnostic. Model-specific strategies apply only to certain types of models, whereas model-agnostic strategies can be applied to a wide variety of models.
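As a rough illustration of two global, model-agnostic strategies, the following sketch uses scikit-learn's permutation_importance and PartialDependenceDisplay utilities. The gradient boosting model and synthetic regression data are illustrative assumptions, not part of the original text.

# A minimal sketch of two global, model-agnostic strategies:
# permutation importance and a partial dependence plot.
# The model and data are illustrative assumptions; substitute your own.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure
# how much the model's score degrades on average.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Mean importance per feature:", result.importances_mean)

# Partial dependence: the average predicted response as feature 0 is
# varied, marginalizing over the remaining features.
PartialDependenceDisplay.from_estimator(model, X, features=[0])
plt.show()

Because both utilities only require a fitted estimator and data, they are model-agnostic in the sense described above.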

The following diagram summarizes the different strategies available for machine learning explainability:

[Diagram: machine learning explainability strategies, split into global and local, each either model-specific or model-agnostic]

Now, let's look at how we can implement explainability using one of these strategies.
