This Apress imprint is published by the registered company APress Media, LLC, part of Springer Nature.
The registered company address is: 1 New York Plaza, New York, NY 10004, U.S.A.
I dedicate this book to my late father; my mother; my lovely wife, Prajna; and my daughters, Priyanshi (Aarya) and Adyanshi (Aadya). This work would not have been possible without their inspiration, support, and encouragement.
Artificial intelligence plays a crucial role in the decisions businesses make. When a machine makes a decision, humans usually want to understand whether the decision is valid or was generated in error. If business stakeholders are not convinced by a decision, they will not trust the machine learning system, and adoption of artificial intelligence within the organization will gradually decline. To make the decision process more transparent, developers must be able to document and explain the decisions of AI and ML models. This book provides a series of solutions to problems that require explainability and interpretability; explainability is an essential component of adopting an AI model and developing a responsible AI system.
This book covers model interpretation for supervised learning with linear models, including identifying important features for regression and classification models, partial dependency analysis for regression and classification models, and influential data point analysis for both. Supervised learning with nonlinear models is explored using state-of-the-art frameworks: SHAP values/scores for global explanation and LIME for local interpretation. The book also covers explainability for bagging- and boosting-based ensemble models for supervised learning tasks such as regression and classification, for time-series models using LIME and SHAP, and for natural language processing tasks such as text classification and sentiment analysis using ELI5 and ALIBI. The most complex models for classification and regression, such as neural network and deep learning models, are explained using the CAPTUM framework, which shows feature attribution, neuron attribution, and activation attribution.
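To give a flavor of the recipes that follow, here is a minimal sketch of a SHAP-based global explanation workflow of the kind covered in the book; the California housing dataset, the XGBoost regressor, and the parameter choices are illustrative assumptions, not examples taken from the book.

```python
# Minimal sketch of a SHAP global explanation (illustrative only;
# the dataset and model choices here are assumptions, not the book's).
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train a simple regression model to explain.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# Compute SHAP values; shap.Explainer dispatches to a tree explainer here.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Global view: which features drive predictions across the dataset.
shap.plots.beeswarm(shap_values)
```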
This book attempts to make AI models explainable, helping developers increase the adoption of AI-based models within their organizations and bring more transparency to decision-making. After reading this book, you will be able to use Python libraries such as Alibi, SHAP, LIME, Skater, ELI5, and CAPTUM. Explainable AI Recipes takes a problem-solution approach to each machine learning model, showing how to use Python’s XAI libraries to answer questions of explainability and build trust in AI and machine learning models. All source code can be downloaded from github.com/apress/explainable-ai-recipes.
I would like to thank my wife, Prajna, for her continuous inspiration and support and for sacrificing her weekends to help me complete this book; and my daughters, Aarya and Aadya, for being patient throughout the writing process.
A big thank-you to Celestin Suresh John and Mark Powers for fast-tracking the whole process and guiding me in the right direction.
I would like to thank D. Dua and C. Graff, the authors of the Appliances Energy Prediction dataset (http://archive.ics.uci.edu/ml), for making it available. I use this dataset in the book to show how to develop a regression model and explain its predictions using various explainability libraries.
Pradeepta presented a keynote talk on the application of bidirectional LSTMs for time-series forecasting at the 2018 Global Data Science Conference. He delivered the TEDx talk “Can Machines Think?” on the power of artificial intelligence in transforming industries and job roles. He has also delivered more than 150 tech talks on data science, machine learning, and artificial intelligence at various meetups, technical institutions, universities, and community forums. He is on LinkedIn (www.linkedin.com/in/pradeepta/) and Twitter (@pradmishra1).