Preface

In today's era of AI, accurately interpreting and communicating trustworthy AI findings is becoming a crucial skill to master. Artificial intelligence often surpasses human understanding, and as a result, the outputs of machine learning models can prove difficult, and sometimes impossible, to explain. Both users and developers face challenges when asked to explain how and why an AI decision was made.

An AI designer cannot possibly build a single explainable AI solution that covers the hundreds of machine learning and deep learning models in use. Effectively translating AI insights to business stakeholders requires individual planning, design, and visualization choices. European and US law has opened the door to litigation when results cannot be explained, yet in real-life implementations developers face overwhelming amounts of data and results, making it nearly impossible to find explanations without the proper tools.

In this book, you will learn about tools and techniques using Python to visualize, explain, and integrate trustworthy AI results to deliver business value, while avoiding common issues with AI bias and ethics.

Throughout the book, you will work on hands-on machine learning projects in Python and TensorFlow 2.x. You will learn how to use WIT, SHAP, LIME, CEM, and other key explainable AI tools. You will explore tools designed by IBM, Google, Microsoft, and other advanced AI research labs.

You will be introduced to several open source explainable AI tools for Python that can be used throughout the machine learning project lifecycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions using Python, along with supporting machine learning model visualizations, in user-explainable interfaces.

We will build XAI solutions in Python and TensorFlow 2.x, and use Google Cloud's XAI platform and Google Colaboratory.

Who this book is for

  • Beginner Python programmers who already have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn.
  • Professionals who already use Python for purposes such as data science, machine learning, research, analysis, and so on, and can benefit from learning the latest explainable AI open source toolkits and techniques.
  • Data analysts and data scientists who want an introduction to explainable AI tools and techniques using Python for machine learning models.
  • AI project and business managers who must face the contractual and legal obligations of AI explainability for the acceptance phase of their applications.
  • Developers, project managers, and consultants who want to design solid artificial intelligence that both users and the legal system can understand.
  • AI specialists who have reached the limits of unexplainable black box AI and want AI to expand through a better understanding of the results produced.
  • Anyone interested in the future of artificial intelligence as a tool that can be explained and understood. AI and XAI techniques will evolve and change. But the fundamental ethical and XAI tools learned in this book will remain an essential part of the future of AI.

What this book covers

Chapter 1, Explaining Artificial Intelligence with Python

Explainable AI (XAI) cannot be summed up in a single method for all participants in a project. When a patient shows signs of COVID-19, West Nile virus, or any other virus, how can a general practitioner and an AI system work together as a cobot to determine the origin of the disease? The chapter describes a case study and an AI solution built from scratch to trace the origins of a patient's infection with a Python solution that uses k-nearest neighbors and Google Location History.
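
As a rough illustration only (not the chapter's actual dataset), a k-nearest neighbors classifier of this kind can be sketched with scikit-learn as follows; the feature names and values are hypothetical:

from sklearn.neighbors import KNeighborsClassifier

# Hypothetical symptom features: colored_sputum, cough, fever, headache
X_train = [[1.0, 3.5, 9.2, 3.0],
           [0.0, 1.0, 1.0, 1.0],
           [0.5, 2.5, 8.0, 2.0]]
y_train = ["flu", "cold", "flu"]            # illustrative diagnosis labels

knn = KNeighborsClassifier(n_neighbors=1)   # tiny toy set, so k=1
knn.fit(X_train, y_train)
print(knn.predict([[0.8, 3.0, 9.0, 2.5]]))  # the closest neighbor decides the class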

Chapter 2, White Box XAI for AI Bias and Ethics

Artificial intelligence might sometimes have to make life or death decisions. When the autopilot of an autonomous vehicle detects pedestrians suddenly crossing a road, what decision should be made when there is no time to stop?

Can the vehicle change lanes without hitting other pedestrians or vehicles? The chapter describes the MIT moral machine experiment and builds a Python program using decision trees to make real-life decisions.
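
For illustration, a minimal decision-tree sketch of this kind of lane decision, using scikit-learn and hypothetical feature values, might look as follows:

from sklearn.tree import DecisionTreeClassifier

# Hypothetical sensor features describing the two lanes and nearby obstacles
X = [[0.76, 0.62, 0.02, 0.04],
     [0.16, 0.46, 0.09, 0.01],
     [0.10, 0.90, 0.40, 0.30]]
y = [0, 0, 1]                               # 0 = stay in right lane, 1 = change lane

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.12, 0.88, 0.35, 0.20]]))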

Chapter 3, Explaining Machine Learning with Facets

Machine learning is a data-driven training process. Yet, companies rarely provide clean data or even all of the data required to start a project. Furthermore, the data often comes from different sources and formats. Machine learning models involve complex mathematics, even when the data seems acceptable. A project can rapidly become a nightmare from the start.

This chapter implements Facets in Python in a Jupyter Notebook on Google Colaboratory. Facets provides multiple views and tools to track the variables that distort the ML model's results. Finding counterfactual data points, and identifying the causes, can save hours of otherwise tedious classical analysis.
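
As a hedged sketch of how Facets Overview statistics are typically generated in a notebook (assuming the facets-overview package is installed; the toy DataFrame below is not the chapter's dataset):

import base64
import pandas as pd
from facets_overview.generic_feature_statistics_generator import GenericFeatureStatisticsGenerator

# Toy DataFrame standing in for the training data
df = pd.DataFrame({"age": [25, 32, 47], "hours_per_week": [40, 50, 38]})

gfsg = GenericFeatureStatisticsGenerator()
proto = gfsg.ProtoFromDataFrames([{"name": "train", "table": df}])
protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")
# protostr is then injected into the Facets Overview HTML template and rendered in the notebook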

Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP

Artificial intelligence designers and developers spend days searching for the right ML model that fits the specifications of a project. Explainable AI provides valuable time-saving information. However, nobody has the time to develop an explainable AI solution for every single ML model on the market!

This chapter introduces model-agnostic explainable AI through a Python program that implements Shapley values with SHAP based on Microsoft Azure's research. This game theory approach provides explanations no matter which ML model it faces. The Python program provides explainable AI graphs showing which variables influence the outcome of a specific result.
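
A minimal model-agnostic SHAP sketch, using a dataset and model chosen purely for illustration rather than the chapter's own, shows the pattern:

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Train any model; KernelExplainer treats it as a black box
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

# A background sample keeps the Shapley value estimation tractable
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:3])          # per-feature contributions for 3 samples
shap.summary_plot(shap_values, X[:3], show=False)   # variable-influence plot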

Chapter 5, Building an Explainable AI Solution from Scratch

Artificial intelligence has progressed so fast in the past few years that moral obligations have sometimes been overlooked. Eradicating bias has become critical to the survival of AI. Machine learning decisions based on racial or ethnic criteria were once accepted in the United States; however, it has now become an obligation to track bias and eliminate those features in datasets that could be using discrimination as information.

This chapter shows how to eradicate bias and build an ethical ML system in Python with Google's What-If Tool and Facets. The program will take moral, legal, and ethical parameters into account from the very beginning.

Chapter 6, AI Fairness with Google's What-If Tool (WIT)

Google's PAIR (People + AI Research – https://research.google/teams/brain/pair/) designed the What-If Tool (WIT) to investigate the fairness of an AI model. This chapter takes us deeper into explainable AI, introducing a Python program that creates a deep neural network (DNN) with TensorFlow, uses a SHAP explainer, and creates a WIT instance.

The WIT will provide ground truth, cost ratio fairness, and PR curve visualizations. The Python program shows how ROC curves, AUC, slicing, and PR curves can pinpoint the variables that produced a result, using AI fairness and ethical tools to make predictions.
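
The following is a hypothetical, minimal WIT setup in a notebook (the toy examples and placeholder prediction function below are illustrative, not the chapter's model):

import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Two toy tf.train.Example records with a single numeric feature
def make_example(x, label):
    return tf.train.Example(features=tf.train.Features(feature={
        "x": tf.train.Feature(float_list=tf.train.FloatList(value=[x])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label]))}))

examples = [make_example(0.2, 0), make_example(0.9, 1)]

def predict_fn(batch):
    # Placeholder scorer: class probability proportional to the "x" feature
    return [[1 - e.features.feature["x"].float_list.value[0],
             e.features.feature["x"].float_list.value[0]] for e in batch]

config_builder = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config_builder, height=600)   # renders the What-If Tool in the notebook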

Chapter 7, A Python Client for Explainable AI Chatbots

The future of artificial intelligence will increasingly involve bots and chatbots. This chapter shows how chatbots can provide a conversational user interface (CUI) for XAI through Google Dialogflow. A Python client will be implemented that communicates with Google Dialogflow through its API.

The goal is to simulate user interactions for decision-making XAI based on the Markov Decision Process (MDP). The XAI dialog is simulated in a Jupyter Notebook, and the agent is tested on Google Assistant.
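
For reference, a detect-intent call with the google-cloud-dialogflow Python client typically looks like the following sketch; the project ID, session ID, and credentials are placeholders you must supply:

from google.cloud import dialogflow

def detect_intent(project_id, session_id, text, language_code="en"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input})
    return response.query_result.fulfillment_text

# print(detect_intent("my-project-id", "session-1", "Which lane should I choose?"))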

Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME)

This chapter takes model agnostics further with Local Interpretable Model-agnostic Explanations (LIME). The chapter shows how to create a model-agnostic explainable AI Python program that can explain the results of random forests, k-nearest neighbors, gradient boosting, decision trees, and extra trees.

The Python program creates a unique LIME explainer with visualizations no matter which ML model produces the results.
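
A minimal LIME sketch on a toy dataset (chosen for illustration, not the chapter's data) shows the pattern:

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    discretize_continuous=True)

exp = explainer.explain_instance(iris.data[25], model.predict_proba, num_features=4)
print(exp.as_list())   # weighted feature rules explaining this single prediction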

Chapter 9, The Counterfactual Explanations Method

It is sometimes impossible to determine why a data point has not been classified as expected. No matter how we look at it, we cannot identify which feature or features generated the error.

Visualizing counterfactual explanations can display the features of a data point that has been classified in the wrong category right next to the closest data point that was classified in the right category. An explanation can be rapidly tracked down with the Python program created in this chapter, which uses WIT.

The chapter's program and its WIT instance can define the belief, truth, justification, and sensitivity of a prediction.

Chapter 10, Contrastive XAI

Sometimes, even the most potent XAI tools cannot pinpoint the reason an ML program made a decision. The Contrastive Explanation Method (CEM), implemented in Python in this chapter, will find precisely how a data point crossed the line into another class.

The program created in this chapter prepares the MNIST dataset for CEM, defines a CNN, tests the accuracy of the CNN, and defines and trains an autoencoder. From there, the program creates a CEM explainer that provides visual explanations of pertinent negatives and positives.

Chapter 11, Anchors XAI

Rules have often been associated with hard-coded expert systems. But what if an XAI tool could generate rules automatically to explain a result? Anchors are high-precision rules that are produced automatically.

This chapter's Python program creates anchors for text classification and images. The program pinpoints the precise pixels of an image that made a model change its mind and select a class.
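
The chapter works with text and images; purely for brevity, the following hedged sketch shows the same anchor idea on tabular data using the alibi library and a toy dataset:

from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(iris.data, iris.target)

explainer = AnchorTabular(lambda x: clf.predict(x), iris.feature_names)
explainer.fit(iris.data, disc_perc=(25, 50, 75))

explanation = explainer.explain(iris.data[50], threshold=0.95)
print(explanation.anchor)      # the high-precision rule: a list of feature conditions
print(explanation.precision)   # how often the rule holds on perturbed samples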

Chapter 12, Cognitive XAI

Human cognition has provided the framework for the incredible technical progress made by humanity in the past few centuries, including artificial intelligence. This chapter puts human cognition to work to build cognitive rule bases for XAI.

The chapter explains how to build a cognitive dictionary and a cognitive sentiment analysis function to explain the marginal features from a human perspective. A Python program shows how to measure marginal cognitive contributions.
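
A toy sketch of the idea, with hypothetical keyword weights rather than the chapter's actual rule base, could look like this:

# Hypothetical keyword weights produced by a sentiment or diagnosis explanation
contributions = {"virus": 0.40, "fever": 0.30, "cough": 0.20, "headache": 0.10}

def marginal_contribution(feature, contributions):
    total = sum(contributions.values())
    return contributions[feature] / total       # fraction of the total explanation weight

for feature in contributions:
    print(feature, round(marginal_contribution(feature, contributions), 2))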

This chapter sums up the essence of XAI, for the reader to build the future of artificial intelligence, containing real human intelligence and ethics.

To get the most out of this book

To get the most out of this book, it is recommended to:

  • Focus on the key concepts of explainable AI (XAI) and how they are becoming mandatory
  • Read the chapters without running the code if you wish to focus on XAI theory
  • Read the chapters and run the programs if you wish to go through the theory and implementations simultaneously

Download the example code files

You can download the example code files for this book from your account at http://www.packt.com. If you purchased this book elsewhere, you can visit http://www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

  1. Log in or register at http://www.packt.com.
  2. Select the SUPPORT tab.
  3. Click on Code Downloads & Errata.
  4. Enter the name of the book in the Search box and follow the on-screen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR / 7-Zip for Windows
  • Zipeg / iZip / UnRarX for Mac
  • 7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Explainable-AI-XAI-with-Python. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781800208131_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. For example; "If the label is 0, then the recommendation is to stay in the right lane."

A block of code is set as follows:

choices = str(prediction).strip('[]')
if float(choices) <= 1:
    choice = "R lane"
if float(choices) >= 1:
    choice = "L lane"

Command-line or terminal output is written as follows:

1 data [[0.76, 0.62, 0.02, 0.04]]  prediction: 0 class 0 acc.: True R lane
2 data [[0.16, 0.46, 0.09, 0.01]]  prediction: 0 class 1 acc.: False R lane
3 data [[1.53, 0.76, 0.06, 0.01]]  prediction: 0 class 0 acc.: True R lane

Bold: Indicates a new term, an important word, or words that you see on the screen, for example, in menus or dialog boxes. For example: "Go to the Scatter | X-Axis and Scatter | Y-Axis drop-down lists."

Warnings or important notes appear like this.

Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit http://www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit http://authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.
