Preface

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI promises to make machine learning (ML) models transparent and trustworthy, and to promote AI adoption for industrial and research use cases.

This book is designed with a unique blend of industrial and academic research perspectives to help you gain practical skills in XAI. ML/AI experts working in data science, ML, deep learning, and AI will be able to put their knowledge to work with this practical guide to XAI and bridge the gap between AI and the end user. The book provides a hands-on approach to implementing XAI and its associated methodologies that will have you up and running and productive in no time.

Initially, you will get a conceptual understanding of XAI and why it's needed. Then, you will gain the necessary practical experience of using XAI in the AI/ML problem-solving process by making use of state-of-the-art methods and frameworks. Finally, you will get the necessary guidelines to take XAI to the next step and bridge the existing gap between AI and end users.

By the end of this book, you will be able to implement XAI methods and approaches using Python to solve industrial problems, address the key pain points encountered, and follow the best practices in the AI/ML life cycle.

Who this book is for

This book is designed for scientists, researchers, engineers, architects, and managers who are actively engaged in the field of ML and related areas. In general, anyone interested in problem-solving using AI will benefit from this book. A foundational knowledge of Python, ML, deep learning, and data science is recommended. This book is ideal for readers who are working in the following roles:

  • Data and AI scientists
  • AI/ML engineers
  • AI/ML product managers
  • AI product owners
  • AI/ML researchers
  • User experience and HCI researchers

In general, any ML enthusiast with a foundational knowledge of Python will be able to read, understand, and apply the knowledge gained from this book.

What this book covers

Chapter 1, Foundational Concepts of Explainability Techniques, gives you the necessary exposure to Explainable AI and helps you understand its importance. This chapter covers the various terminology and concepts related to explainability techniques that are used frequently throughout this book. It also covers the key criteria of human-friendly explainable ML systems and the different approaches to evaluating the quality of explainability techniques.

Chapter 2, Model Explainability Methods, discusses the various model explainability methods used for explaining black-box models. Some of these methods are model-agnostic and some are model-specific; some provide global interpretability, while others provide local interpretability. This chapter introduces you to a variety of techniques that can be used for explaining ML models and provides recommendations for choosing the right explainability method.

Chapter 3, Data-Centric Approaches, introduces the concept of data-centric XAI. This chapter covers various techniques for explaining the working of ML systems in terms of the properties of the data: data volume, data consistency, data purity, and the actionable insights generated from the underlying training dataset.

Chapter 4, LIME for Model Interpretability, covers the application of one of the most popular XAI frameworks, LIME. This chapter discusses the intuition behind the working of the LIME algorithm and some important properties that make the generated explanations human-friendly. The advantages and limitations of the LIME algorithm are also discussed, along with a code tutorial on applying LIME to a classification problem.

Chapter 5, Practical Exposure to Using LIME in ML, is an extension of the previous chapter, focused on the practical application of the LIME Python framework to different types of datasets, such as images and text, along with structured tabular data. Practical code examples are also covered in this chapter to provide hands-on experience with the Python LIME framework. This chapter also discusses whether LIME is a good fit for production-level ML systems.

Chapter 6, Model Interpretability Using SHAP, focuses on understanding the importance of the SHAP Python framework for model explainability. It covers an intuitive understanding of Shapley values and SHAP, and discusses how to use SHAP for model explainability through a variety of visualization and explainer methods. A code walkthrough of using SHAP to explain regression models is also included. Finally, we discuss the key advantages and limitations of SHAP.

Chapter 7, Practical Exposure to Using SHAP in ML, provides the necessary practical exposure to using SHAP with structured tabular data as well as unstructured data such as images and text. We discuss the different explainers available in SHAP for both model-specific and model-agnostic explainability, and apply SHAP to explain linear models, tree ensemble models, convolutional neural network models, and even transformer models. The necessary code tutorials are also covered in this chapter to provide hands-on experience with the Python SHAP framework.

Chapter 8, Human-Friendly Explanations with TCAV, covers TCAV, a framework developed by Google AI. This chapter provides both a conceptual understanding of TCAV and practical exposure to applying the Python TCAV framework. The key advantages and limitations of TCAV are discussed, along with interesting ideas about potential research problems that can be solved using concept-based explanations.

Chapter 9, Other Popular XAI Frameworks, covers seven other popular XAI frameworks available in Python: DALEX, Explainerdashboard, InterpretML, ALIBI, DiCE, ELI5, and H2O AutoML explainers. We discuss the supported explanation methods for each framework, their practical application, and the various pros and cons of each. This chapter also provides a quick comparison guide to help you decide which framework to go for, considering your own use case.

Chapter 10, XAI Industry Best Practices, focuses on the best practices for designing explainable AI systems for industrial problems. In this chapter, we discuss the open challenges of XAI and the necessary design guidelines for explainable ML systems that address those challenges. We also highlight the importance of data-centric approaches to explainability, interactive machine learning, and prescriptive insights when designing explainable AI/ML systems.

Chapter 11, End User-Centered Artificial Intelligence, introduces the ideology of end user-centered artificial intelligence (ENDURANCE) for the design and development of explainable AI/ML systems. We discuss the importance of using XAI to steer toward the main goals of the end user when building explainable AI/ML systems. Using some of the principles and recommended best practices presented in the chapter, we can bridge the gap between AI and the end user to a great extent!

To get the most out of this book

To run the code tutorials provided in this book, you will need a Jupyter environment with Python 3.6+. This can be achieved in either of the following ways:

  • Install one on your machine locally via Anaconda Navigator or from scratch with pip (see the example commands after this list).
  • Use a cloud-based environment such as Google Colaboratory, Kaggle notebooks, Azure notebooks, or Amazon SageMaker.
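
For example, a minimal local setup from scratch with pip might look like the following (this is just a sketch; the exact commands may vary depending on your platform and Python installation):

python -m pip install --upgrade pip
python -m pip install jupyter
jupyter notebook

Cloud-based environments such as Google Colaboratory come with Jupyter and most common data science packages preinstalled, so no local setup is needed there.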

You can take a look at the supplementary information provided at the code repository if you are new to Jupyter notebooks: https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/SupplementaryInfo/CodeSetup.md.

You can also take a look at https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/SupplementaryInfo/PythonPackageInfo.md and https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/SupplementaryInfo/DatasetInfo.md for supplementary information about the Python packages and datasets used in the tutorial notebooks.

For instructions on installing the Python packages used throughout the book, please refer to the specific notebook provided in the code repository. For any additional help, please refer to the original project repository of the specific package. You can search for the package on PyPI (https://pypi.org/) and navigate to the project's code repository from there. Installation and execution instructions for these packages may change from time to time, given how often packages are updated. We also tested the code with the specific versions detailed in the Python package information README file under the supplementary information provided in the code repository. So, if anything doesn't work as expected with a later version, please install the specific version mentioned in the README instead.
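
For example, if a later release of a package breaks a tutorial, you can pin it to the tested version with pip. The version number shown here is purely illustrative; use the exact version listed in the README:

python -m pip install shap==0.40.0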

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

If you are a beginner without any exposure to ML or data science, it is recommended that you read the book sequentially, as many important concepts are explained in sufficient detail in the earlier chapters. Seasoned ML or data science experts who are relatively new to the field of XAI can skim through the first three chapters to get a clear conceptual understanding of the terminology used. For Chapters 4 to 9, any order should be fine for seasoned experts. For practitioners at all levels, it is recommended that you read Chapters 10 and 11 only after covering the first nine chapters.

Regarding the code provided, it is recommended that you either read each chapter and then run the corresponding code, or run the code alongside reading the specific chapter. Sufficient theory is also included in the Jupyter notebooks to help you understand their overall flow.

While reading the book, it is recommended that you take notes of the important terminology covered and try to think of ways in which you could apply the concepts and frameworks you learn. After reading the book and going through all the Jupyter notebooks, hopefully you will be inspired to put your newly gained knowledge into action!

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques. If there's an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/DF7lG.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "For this example, we will use the RegressionExplainer and ExplainerDashboard submodules."

A block of code is set as follows:

pdp = PartialDependence(
    predict_fn=model.predict_proba,
    data=x_train.astype('float').values,
    feature_names=list(x_train.columns),
    feature_types=feature_types)
pdp_global = pdp.explain_global(name='Partial Dependence')

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

explainer = shap.Explainer(model, x_test)
shap_values = explainer(x_test)
shap.plots.waterfall(shap_values[0], max_display = 12,
                     show=False)

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: "Due to these known drawbacks, the search for a robust Explainable AI (XAI) framework is still on."

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you've read Applied Machine Learning Explainability Techniques, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
