Pradeepta Mishra

Explainable AI Recipes

Implement Solutions to Model Explainability and Interpretability with Python

Pradeepta Mishra
Bangalore, Karnataka, India
ISBN 978-1-4842-9028-6
e-ISBN 978-1-4842-9029-3
© Pradeepta Mishra 2023
Apress Standard
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Apress imprint is published by the registered company APress Media, LLC, part of Springer Nature.

The registered company address is: 1 New York Plaza, New York, NY 10004, U.S.A.

I dedicate this book to my late father; my mother; my lovely wife, Prajna; and my daughters, Priyanshi (Aarya) and Adyanshi (Aadya). This work would not have been possible without their inspiration, support, and encouragement.

Introduction

Artificial intelligence plays a crucial role in determining the decisions businesses make. When a machine makes a decision, humans usually want to understand whether it is authentic or was generated in error. If business stakeholders are not convinced by a decision, they will not trust the machine learning system, and artificial intelligence adoption will gradually decline within that organization. To make the decision process more transparent, developers must be able to document the explainability of AI decisions or ML model decisions. This book provides a series of solutions to problems that require explainability and interpretability. Adopting an AI model and developing a responsible AI system requires explainability as a component.

This book covers model interpretation for supervised learning linear models, including important features for regression and classification models, partial dependency analysis for regression and classification models, and influential data point analysis for both. Supervised learning with nonlinear models is explored using state-of-the-art frameworks such as SHAP for global explanations based on SHAP values/scores and LIME for local interpretation. The book also covers explainability for bagging- and boosting-based ensemble models for supervised learning tasks such as regression and classification, for time-series models using LIME and SHAP, and for natural language processing tasks such as text classification and sentiment analysis using ELI5 and Alibi. The most complex models for classification and regression, such as neural network and deep learning models, are explained using the Captum framework, which provides feature attribution, neuron attribution, and activation attribution.

This book attempts to make AI models explainable to help developers increase the adoption of AI-based models within their organizations and bring more transparency to decision-making. After reading this book, you will be able to use Python libraries such as Alibi, SHAP, LIME, Skater, ELI5, and CAPTUM. Explainable AI Recipes provides a problem-solution approach to demonstrate each machine learning model, and shows how to use Python’s XAI libraries to answer questions of explainability and build trust with AI models and machine learning models. All source code can be downloaded from github.com/apress/explainable-ai-recipes.

Acknowledgments

I would like to thank my wife, Prajna, for her continuous inspiration and support and for sacrificing her weekends to help me complete this book; and my daughters, Aarya and Aadya, for being patient throughout the writing process.

A big thank-you to Celestin Suresh John and Mark Powers for fast-tracking the whole process and guiding me in the right direction.

I would like to thank D. Dua and C. Graff, the authors of the Appliances Energy Prediction dataset (http://archive.ics.uci.edu/ml), for making it available. I use this dataset in the book to show how to develop a regression model and explain its predictions using various explainability libraries.

Table of Contents
About the Author
Pradeepta Mishra


is an AI/ML leader, experienced data scientist, and artificial intelligence architect. He currently heads NLP, ML, and AI initiatives for five products at FOSFOR by LTI, a leading-edge innovator in AI and machine learning based out of Bangalore, India. He has expertise in designing artificial intelligence systems for performing tasks such as understanding natural language and making recommendations based on natural language processing. He has filed 12 patents as an inventor and has authored and coauthored five books, including R Data Mining Blueprints (Packt Publishing, 2016), R: Mining Spatial, Text, Web, and Social Media Data (Packt Publishing, 2017), PyTorch Recipes (Apress, 2019), and Practical Explainable AI Using Python (Apress, 2023). There are two courses available on Udemy based on these books.

Pradeepta presented a keynote talk on the application of bidirectional LSTMs for time-series forecasting at the 2018 Global Data Science Conference. He delivered the TEDx talk “Can Machines Think?” on the power of artificial intelligence in transforming industries and job roles. He has also delivered more than 150 tech talks on data science, machine learning, and artificial intelligence at various meetups, technical institutions, universities, and community forums. He is on LinkedIn (www.linkedin.com/in/pradeepta/) and Twitter (@pradmishra1).

 
About the Technical Reviewer
Bharath Kumar Bolla


has more than ten years of experience and currently works as a senior data science engineer consultant at Verizon, Bengaluru. He has a PG diploma in data science from Praxis Business School and an MS in life sciences from Mississippi State University. He previously worked as a data scientist at the University of Georgia, Emory University, Eurofins LLC, and Happiest Minds. At Happiest Minds, he worked on AI-based digital marketing products and NLP-based solutions in the education domain. Along with his day-to-day responsibilities, Bharath is a mentor and an active researcher. To date, he has published ten articles in journals and at peer-reviewed conferences. He is particularly interested in unsupervised and semisupervised learning and efficient deep learning architectures in NLP and computer vision.
 