Answers to the Questions

Chapter 1, Explaining Artificial Intelligence with Python

  1. Understanding the theory of an ML algorithm is enough for XAI. (True|False)

    False. Implementing an ML program requires more than theoretical knowledge.

    True. If a user just wants an intuitive explanation of an ML algorithm.

  2. Explaining the origin of datasets is not necessary for XAI. (True|False)

    True. If a third party has certified the dataset.

    False. If you are building the dataset, you need to make sure you are respecting privacy laws.

  3. Explaining the results of an ML algorithm is sufficient. (True|False)

    True. If a user is not interested in anything else but the result.

    False. If it is required to explain how a result was reached.

  4. It is not necessary for an end user to know what a KNN is. (True|False)

    True. If the end user is satisfied with results.

    False. If the end user is a developer who is deploying the program and must maintain it.

  5. Obtaining data to train an ML algorithm is easy with all the available data online. (True|False)

    True. If we are training an ML model with ready-to-use datasets.

    False. If we need to collect the data ourselves. It can take months to find the right way to collect the data and build meaningful samples.

  6. Location history is not necessary for a medical diagnosis. (True|False)

    True. When a disease does not depend on where a patient traveled.

    False. When the patient was infected in a location before going to another location where the disease is not present.

  7. Our analysis of the patient with the West Nile virus cannot be applied to other viruses. (True|False)

    False. Viruses usually start in a location and then travel. In many cases, it is vital to know about the virus, where it came from, and how a patient was infected. The whole process explored in this chapter can save lives.

  8. A doctor does not require AI to make a diagnosis. (True|False)

    True. For a simple disease.

    False. When the diagnosis involves many parameters, AI can be a great help.

  9. It isn't necessary to explain AI to a doctor. (True|False)

    False. A doctor needs to know why an ML algorithm reached a critical decision.

  10. AI and XAI will save lives. (True|False)

    True. This is 100% certain. Humans who work with AI and understand it will obtain vital information to reach a diagnosis quickly in life-and-death situations.

Chapter 2, White Box XAI for AI Bias and Ethics

  1. The autopilot of an SDC can override traffic regulations. (True|False)

    True. Technically it is possible.

    False. Though possible, it is not legal.

  2. The autopilot of an SDC should always be activated. (True|False)

    False. The simulations in this chapter prove that it should be avoided in heavy traffic until autopilots can deal with any situation.

  3. The structure of a decision tree can be controlled for XAI. (True|False)

    True. Modifying the size and depth of a decision tree is a good tool for explaining the algorithm (see the sketch after this chapter's answers).

  4. A well-trained decision tree will always produce a good result with live data. (True|False)

    True. If the training reached an accuracy of 1.

    False. New situations might confuse the algorithm.

  5. A decision tree uses a set of hardcoded rules to classify data. (True|False)

    False. A decision tree learns how to decide.

  6. A binary decision tree can classify more than two classes. (True|False)

    False. A binary decision tree is designed for two classes.

  7. The graph of a decision tree can be controlled to help explain the algorithm. (True|False)

    True. Explaining ML by visualizing different sizes and depths of a decision tree is efficient.

  8. The trolley problem is an optimizing algorithm for trollies. (True|False)

    False. The trolley problem applies to a runaway trolley that can potentially kill pedestrians.

  9. A machine should not be allowed to decide whether to kill somebody or not. (True|False)

    True. Ethics should forbid machines from doing this, although we don't know if we can stop the use of autopilot weapons.

  10. An autopilot should not be activated in heavy traffic until it's totally reliable. (True|False)

    True. Autopilots progress each day. In the meantime, we should be careful when using autopilots in heavy traffic.
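
A minimal sketch of the idea in answer 3: limiting a decision tree's depth with scikit-learn keeps the tree small enough to read and explain. The Iris dataset stands in for the chapter's traffic data; it is an assumption for illustration only.

```python
# Minimal sketch: controlling a decision tree's size and depth to keep it explainable.
# The Iris dataset is a stand-in for the chapter's traffic data (assumption).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# A shallow tree is easy to read and explain; a fully grown tree is much harder.
shallow_tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
deep_tree = DecisionTreeClassifier(random_state=0).fit(X, y)

print("Shallow tree (max_depth=2):")
print(export_text(shallow_tree, feature_names=iris.feature_names))
print("Fully grown tree depth:", deep_tree.get_depth())
```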

Chapter 3, Explaining Machine Learning with Facets

  1. Datasets in real-life projects are rarely reliable. (True|False)

    True. In most cases, the datasets require a fair amount of quality control before they can be used as input data for ML models. In rare cases, such as in companies that constantly check the quality of their data, the data is already clean.

  2. In a real-life project, there are no missing records in a dataset. (True|False)

    False. In most cases, data is missing.

    True. In some critical areas, such as aerospace projects, the data is clean.

  3. The distribution distance is the distance between two data points. (True|False)

    False. The distribution distance is measured between two data distributions.

  4. Non-uniformity does not affect an ML model. (True|False)

    False. Non-uniformity has profound effects on the outputs of an ML model. However, in some cases, non-uniform datasets reflect the reality of the problem we are trying to solve, and the challenge is to find a solution!

  5. Sorting by feature order can provide interesting information. (True|False)

    True. You can design your dataset so that the features appear in an order that displays the reasoning you wish to convey.

  6. Binning the x axis and the y axis in various ways offers helpful insights. (True|False)

    True. By binning different features, you can visualize how each feature influences the outcome of the machine learning model.

  7. The median, the minimum, and the maximum values of a feature cannot change an ML prediction. (True|False)

    False and True. If the median divides a set of values of a feature into two very different feature values, you might have an unstable feature.

    However, this could also be a key aspect of the dataset to take into account.

  8. Analyzing training datasets before running an ML model is useless. It's better to wait for outputs. (True|False)

    False. Quality control of a dataset comes first. Building an ML model using unreliable data will lead to a waste of time and resources (a minimal quality-check sketch follows this chapter's answers).

  9. Facets Overview and Facets Dive can help fine-tune an ML model. (True|False)

    True. By visualizing your datasets, you can see where your ML model needs fine-tuning. You might find missing data, zeros, non-uniform data distributions, and more problems that will help you improve your datasets and ML model.
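
A minimal pandas sketch of the kind of checks Facets Overview surfaces visually, as mentioned in answer 8: missing values, zeros, and per-feature statistics. The tiny DataFrame and its column names are hypothetical placeholders.

```python
# Minimal data-quality sketch with pandas, mirroring the checks Facets Overview
# displays: missing values, zeros, and per-feature statistics.
# The DataFrame and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, 32, np.nan, 45, 0],
    "income": [30000, 0, 52000, np.nan, 61000],
})

print("Missing values per feature:")
print(df.isna().sum())

print("\nZero counts per feature:")
print((df == 0).sum())

print("\nMin / median / max per feature:")
print(df.agg(["min", "median", "max"]))
```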

Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP

  1. Shapley values are model-dependent. (True|False)

    False. Shapley values are particularly interesting because they are not ML model dependent.

  2. Model-agnostic XAI does not require output. (True|False)

    False. The output of an ML model is necessary to analyze the contributions of each feature to a result.

  3. The Shapley value calculates the marginal contribution of a feature in a prediction. (True|False)

    True.

  4. The Shapley value can calculate the marginal contribution of a feature for all of the records in a dataset. (True|False)

    True. Each feature can then be compared to the marginal contribution of other features when analyzing the output of an ML model (see the SHAP sketch after this chapter's answers).

  5. Vectorizing data means that we transform data into numerical vectors. (True|False)

    True. This is a key function before running an ML algorithm.

  6. When vectorizing data, we can also calculate the frequency of a feature in the dataset. (True|False)

    True. Calculating the frequency of a feature in a dataset can help explain ML model outputs.

  7. SHAP only works with logistic regression. (True|False)

    False. SHAP is model-agnostic. It can be used to explain many types of ML models.

  8. One feature with a very high Shapley value can change the output of a prediction. (True|False)

    True. One feature can change a result.

    False. Many other features can also have high Shapley values. In this case, the features can form a coalition of features to influence the outcome of a prediction, for example.

  9. Using a unit test to explain AI is a waste of time. (True|False)

    False. In some cases, the results of an ML model are challenging to explain. Isolating samples to run a unit test can save a lot of time.

    True. If an ML model is easy to explain with SHAP, for example, creating a unit test is useless.

  10. Shapley values can show that some features are misrepresented in a dataset. (True|False)

    True. If an ML model keeps producing errors, SHAP can help track them using SHAP's numerical and visual tools.
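
A minimal sketch of answer 4 using the shap package: TreeExplainer computes a marginal contribution for every feature of every record, and summary_plot compares them across the dataset. A random forest regressor on a toy scikit-learn dataset stands in for the chapter's model; this is an assumption for illustration only.

```python
# Minimal SHAP sketch: one marginal contribution per feature, per record.
# A random forest regressor on a toy dataset stands in for the chapter's model
# (assumption); requires the shap package.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(data.data, data.target)

# TreeExplainer computes the marginal contribution of every feature for every record.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)   # shape: (n_records, n_features)
print(shap_values.shape)

# Summarize which features push predictions up or down across the whole dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```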

Chapter 5, Building an Explainable AI Solution from Scratch

  1. Moral considerations mean nothing in AI as long as it's legal. (True|False)

    False. You will run into conflicts with people who are offended by AI solutions that do not take moral considerations into account.

  2. Explaining AI with an ethical approach will help users trust AI. (True|False)

    True. An ethical approach will make your AI programs and explanations trustworthy.

  3. There is no need to check whether a dataset contains legal data. (True|False)

    False. You must verify if the data used is legal or not.

  4. Using machine learning algorithms to verify datasets is productive. (True|False)

    True. ML can provide automated help to check datasets.

  5. Facets Dive requires an ML algorithm. (True|False)

    False. You can load and analyze raw data without running an ML algorithm first.

  6. You can anticipate ML outputs with Facets Dive. (True|False)

    True. In some cases, you can anticipate what the ML algorithm will produce.

    False. In other cases, the number of parameters and the amount of data will be overwhelming. ML will be required.

  7. You can use ML to verify your intuitive predictions. (True|False)

    True. This is an essential aspect of explainable AI and AI.

  8. Some features in an ML model can be suppressed without changing the results. (True|False)

    True. In some cases, this will work, as the ablation sketch after this chapter's answers illustrates.

    False. In other cases, the results will be either inaccurate or biased.

  9. Some datasets provide accurate ML labels but inaccurate real-life results. (True|False)

    True. In some cases, an ML algorithm will produce good results from a metrics perspective yet make false predictions in real life. In that case, you must check the data and the model in detail.

  10. You can visualize counterfactual datapoints with WIT. (True|False)

    True. This is one of the excellent functions provided by WIT.
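
A small ablation sketch for answer 8: retrain while suppressing one feature at a time and compare cross-validated scores to see which features can be removed without changing the results. The Iris dataset and logistic regression are stand-ins, not the chapter's dataset or model.

```python
# Minimal feature-suppression sketch: retrain without one feature and compare scores.
# The Iris dataset and logistic regression are stand-ins (assumptions), not the
# chapter's dataset or model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

full_score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"All features: {full_score:.3f}")

for i in range(X.shape[1]):
    X_reduced = np.delete(X, i, axis=1)  # suppress feature i
    score = cross_val_score(LogisticRegression(max_iter=1000), X_reduced, y, cv=5).mean()
    print(f"Without feature {i}: {score:.3f}")
```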

Chapter 6, AI Fairness with Google's What-If Tool (WIT)

  1. The developer of an AI system decides what is ethical or not. (True|False)

    True. The developer is accountable for the philosophy of an AI system.

    False. Each country has legal obligation guidelines.

  2. A DNN is the only estimator for the COMPAS dataset. (True|False)

    False. Other estimators would produce good results as well, such as decision trees or linear regression models.

  3. Shapley values determine the marginal contribution of each feature. (True|False)

    True. The values will show if some features are making the wrong contributions, for example.

  4. We can detect the biased output of a model with a SHAP plot. (True|False)

    True. We can visualize the contribution of biased data.

  5. WIT's primary quality is "people-centered." (True|False)

    True. People-centered AI systems will outperform AI systems that have no human guidance.

  6. A ROC curve monitors the time it takes to train a model. (True|False)

    False. ROC stands for "receiver operating characteristic." It plots the true positive rate against the false positive rate (see the sketch after this chapter's answers).

  7. AUC stands for "area under convolution." (True|False)

    False. AUC stands for "area under the curve." If the area under the ROC curve is small, the true positive rate is failing to approach 1.

  8. Analyzing the ground truth of a model is a prerequisite in ML. (True|False)

    True. If the goal of the model, the ground truth, is false, the model is false.

  9. Another ML program could do XAI. (True|False)

    True. Technically, yes. This would be a personal decision to accept bias.

    False. Another machine learning program would need to go through human moral and ethical control, which brings us back to people-centered AI. Legal constraints will most certainly slow the distribution of the AI system down.

  10. The WIT "people-centered" approach will change the course of AI. (True|False)

    True. Human intelligence added to machine intelligence will contribute to the wide distribution of AI in all fields based on accuracy and trust.
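
A minimal ROC/AUC sketch for answers 6 and 7, using scikit-learn on a stand-in dataset and model rather than the chapter's COMPAS DNN: roc_curve returns the false and true positive rates, and roc_auc_score returns the area under the resulting curve.

```python
# Minimal ROC/AUC sketch (stand-in dataset and model, not the chapter's COMPAS DNN):
# the ROC curve plots the true positive rate against the false positive rate,
# and AUC is the area under that curve.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

fpr, tpr, _ = roc_curve(y_test, scores)
plt.plot(fpr, tpr)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.title(f"ROC curve (AUC = {roc_auc_score(y_test, scores):.3f})")
plt.show()
```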

Chapter 7, A Python Client for Explainable AI Chatbots

  1. It is possible to create a dialog with Python and Dialogflow. (True|False)

    True. The Google Python client can communicate with Dialogflow (a minimal client sketch follows this chapter's answers).

  2. You can customize your XAI dialog with a Python client. (True|False)

    True. A chatbot can help explain AI in an engaging way.

  3. You do not need to set anything up in Dialogflow if you use a Python client. (True|False)

    False. You must configure Dialogflow.

  4. Intents are optional in Dialogflow. (True|False)

    False. Intents contain training phrases and responses.

  5. A training phrase is a response in a dialog. (True|False)

    False. A training phrase contains a question or phrase, not the response.

  6. Context is a way of enhancing an XAI dialog. (True|False)

    True. It is a good way of improving a dialog by remembering the previous exchanges.

  7. A follow-up question is a way of managing the context of a dialog. (True|False)

    True. A follow-up question continues a dialog.

  8. Small talk improves the emotional behavior of a dialog. (True|False)

    True. Introducing some small talk makes the dialog less technical and cold.

  9. Small talk can be directly set up in Dialogflow. (True|False)

    True. Yes. There is a small talk feature in Dialogflow.
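
A minimal sketch of answer 1, assuming the google-cloud-dialogflow package, a configured Dialogflow agent, and valid credentials; PROJECT_ID and SESSION_ID are placeholders. The call pattern follows the v2 client quickstart and may differ from the client version used in the chapter.

```python
# Hedged sketch of a Python client calling Dialogflow's detect_intent. Assumes the
# google-cloud-dialogflow package, a configured agent, and valid credentials.
# PROJECT_ID and SESSION_ID are placeholders you must supply.
from google.cloud import dialogflow

def ask_agent(project_id, session_id, text, language_code="en"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# Example call (placeholders):
# print(ask_agent("PROJECT_ID", "SESSION_ID", "Why was this prediction made?"))
```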

Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME)

  1. LIME stands for Local Interpretable Model-agnostic Explanations. (True|False)

    True.

  2. LIME measures the overall accuracy score of a model. (True|False)

    False. LIME's unique approach measures the truthfulness of a prediction locally (see the sketch after this chapter's answers).

  3. LIME's primary goal is to verify if a local prediction is faithful to the model. (True|False)

    True.

  4. The LIME explainer shows why a local prediction is trustworthy or not. (True|False)

    True.

  5. If you run a random forest model with LIME, you cannot use the same model with an extra trees model. (True|False)

    False. LIME's algorithm is model-agnostic.

  6. There is a LIME explainer for each model. (True|False)

    False. LIME's explainer applies to a range of models.

  7. Prediction metrics are not necessary if a user is satisfied with predictions. (True|False)

    False. The predictions might seem accurate, but the global accuracy of a model must be displayed and explained.

  8. A model that has a low accuracy score can produce accurate outputs. (True|False)

    True. Even a poor model can produce true positives and true negatives. However, the model is globally unreliable.

  9. A model that has a high accuracy score provides correct outputs. (True|False)

    True. But the same model might produce false positives and false negatives as well. A model might be accurate, but it still requires constant monitoring.

  10. Benchmarking models can help choose a model or fine-tune it. (True|False)

    True. A model might not fit a specific dataset for several reasons. First, try to refine the model and check the datasets. If that does not work, another model might produce better predictions.
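
A minimal LIME sketch for answers 2 to 4: LimeTabularExplainer explains a single local prediction. The random forest and the Iris dataset are stand-ins for the chapter's exact setup, and the lime package is assumed to be installed.

```python
# Minimal LIME sketch: explain one local prediction of a random forest.
# The model and the Iris dataset are stand-ins (assumptions); requires the lime package.
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction locally.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```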

Chapter 9, The Counterfactual Explanations Method

  1. A true positive prediction does not require a justification. (True|False)

    False. Any prediction should be justified, whether it is true or false.

  2. Justification by showing the accuracy of the model will satisfy a user. (True|False)

    False. A model can be accurate but for the wrong reasons, whatever they may be.

  3. A user needs to believe an AI prediction. (True|False)

    True. A user will not trust a prediction without a certain amount of belief.

    False. In some cases, such as a medical diagnosis, some truths are difficult to believe.

  4. A counterfactual explanation is unconditional. (True|False)

    True.

  5. The counterfactual explanation method will vary from one model to another. (True|False)

    False. A counterfactual explanation is model-agnostic.

  6. A counterfactual data point is found with a distance function. (True|False)

    True.

  7. Sensitivity shows how closely a data point and its counterfactual are related. (True|False)

    True.

  8. The L1 norm uses Manhattan distances. (True|False)

    True.

  9. The L2 norm uses Euclidean distances. (True|False)

    True. Both the L1 and L2 distances are sketched after this chapter's answers.

  10. GDPR has made XAI de facto mandatory in the European Union. (True|False)

    True. Furthermore, explaining automatic decisions will eventually become mainstream as consumer lawsuits challenge controversial choices made by bots.
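
A short sketch of answers 8 and 9: computing the L1 (Manhattan) and L2 (Euclidean) distances between a data point and a candidate counterfactual with NumPy. The two points are arbitrary illustrations.

```python
# Minimal sketch of the L1 (Manhattan) and L2 (Euclidean) distances used to compare
# a data point with a candidate counterfactual; the two points are arbitrary examples.
import numpy as np

x = np.array([0.2, 0.7, 1.0])          # original data point
x_cf = np.array([0.2, 0.4, 0.9])       # candidate counterfactual

l1 = np.sum(np.abs(x - x_cf))           # Manhattan distance
l2 = np.sqrt(np.sum((x - x_cf) ** 2))   # Euclidean distance

print(f"L1 distance: {l1:.2f}")
print(f"L2 distance: {l2:.2f}")
```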

Chapter 10, Contrastive XAI

  1. Contrastive explanations focus on the features with the highest values that lead to a prediction. (True|False)

    False. CEM focuses on the missing features.

  2. General practitioners never use contrastive reasoning to evaluate a patient. (True|False)

    False. General practitioners often eliminate symptoms when assessing a patient's condition.

  3. Humans reason with contrastive methods. (True|False)

    True.

  4. An image cannot be explained with CEM. (True|False)

    False. CEM can use the output of a CNN and an autoencoder to produce explanations.

  5. You can explain a tripod using a missing feature of a table. (True|False)

    True. It is the example given by the IBM Research team.

  6. A CNN generates good results on the MNIST dataset. (True|False)

    True.

  7. A pertinent negative explains how a model makes a prediction with a missing feature. (True|False)

    True.

  8. CEM does not apply to text classification. (True|False)

    False.

  9. The CEM explainer can produce visual explanations. (True|False)

    True.
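
A naive, brute-force illustration of the pertinent-negative idea in answer 7, not the CEM optimizer used in the chapter: a feature that is absent from a sample is a pertinent-negative candidate if adding it would flip the model's prediction. The toy data and logistic regression are hypothetical.

```python
# Naive illustration of the pertinent-negative idea (not the CEM optimizer from the
# chapter): a feature absent from a sample is a pertinent-negative candidate if
# adding it would change the model's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary features (hypothetical symptoms) and illustrative labels.
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0]])
y = np.array([0, 0, 1, 1, 1, 0])
model = LogisticRegression().fit(X, y)

sample = np.array([1, 0, 0])
base_class = model.predict([sample])[0]

for i in np.where(sample == 0)[0]:
    altered = sample.copy()
    altered[i] = 1                       # add the missing feature
    if model.predict([altered])[0] != base_class:
        print(f"Feature {i} is a pertinent-negative candidate: "
              f"its absence keeps the prediction at class {base_class}")
```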

Chapter 11, Anchors XAI

  1. LIME explanations are global rules. (True|False)

    False. LIME explanations explain predictions locally.

  2. LIME explains a prediction locally. (True|False)

    True.

  3. LIME is efficient on all datasets. (True|False)

    False. An XAI tool is model-agnostic but not dataset-agnostic.

  4. Anchors detect the ML model used to make a prediction. (True|False)

    False. Anchors are model-agnostic.

  5. Anchors detect the parameters of an ML model. (True|False)

    False. Anchors are model-agnostic.

  6. Anchors are high-precision rules. (True|False)

    True. A toy precision and coverage check follows this chapter's answers.

  7. High-precision rules explain how a prediction was reached. (True|False)

    True.

  8. Anchors do not apply to images. (True|False)

    False.

  9. Anchors can display superpixels on an image. (True|False)

    True.

  10. A model-agnostic XAI tool can run on many ML models. However, not every ML model is compatible with an XAI tool for specific applications. (True|False)

    True. You must carefully choose the XAI tools you will use for a specific dataset and a specific ML model.
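
A toy sketch for answers 6 and 7: evaluating one candidate anchor rule's precision (how often records covered by the rule get the same prediction as the explained instance) and coverage (how much of the data the rule covers). Real anchor search is automated by libraries such as alibi; the rule and thresholds below are hand-picked for illustration.

```python
# Naive sketch of evaluating one candidate anchor rule's precision and coverage.
# The rule and thresholds are hand-picked for illustration; anchor search itself is
# automated by dedicated libraries.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)
preds = model.predict(X)

instance = X[100]                        # the instance being explained
instance_pred = model.predict([instance])[0]

# Candidate anchor: "petal length > 4.8 AND petal width > 1.6" (features 2 and 3).
rule_mask = (X[:, 2] > 4.8) & (X[:, 3] > 1.6)

coverage = rule_mask.mean()                               # fraction of data the rule covers
precision = (preds[rule_mask] == instance_pred).mean()    # same prediction as the instance
print(f"Coverage: {coverage:.2f}, Precision: {precision:.2f}")
```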

Chapter 12, Cognitive XAI

  1. SHapley Additive exPlanations (SHAP) compute the marginal contribution of each feature with a SHAP value. (True|False)

    True.

  2. Google's What-If Tool can display SHAP values as counterfactual data points. (True|False)

    True.

  3. Counterfactual explanations include showing the distance between two data points. (True|False)

    True.

  4. The contrastive explanations method (CEM) has an interesting way of interpreting the absence of a feature in a prediction. (True|False)

    True.

  5. Local Interpretable Model-agnostic Explanations (LIME) interpret the vicinity of a prediction. (True|False)

    True.

  6. Anchors show the connection between features that can occur in positive and negative predictions. (True|False)

    True.

  7. Tools such as Google Location History can provide additional information to explain the output of a machine learning model. (True|False)

    True.

  8. Cognitive XAI captures the essence of XAI tools to help a user understand XAI in everyday language. (True|False)

    True.

  9. XAI is mandatory in many countries. (True|False)

    True.

  10. The future of AI is people-centered XAI that uses chatbots, among other tools. (True|False)

    True.
