False. Implementing an ML program requires more than theoretical knowledge.
True. If a user just wants an intuitive explanation of an ML algorithm.
True. If a third party has certified the dataset.
False. If you are building the dataset, you need to make sure you are respecting privacy laws.
True. If a user is interested only in the result.
False. If it is required to explain how a result was reached.
True. If the end user is satisfied with results.
False. If the end user is a developer that is deploying the program and must maintain it.
True. If we are training an ML model with ready-to-use datasets.
False. If we need to collect the data ourselves. It can take months to find the right way to collect the data and build meaningful samples.
True. When a disease does not depend on where a patient traveled.
False. When the patient was infected in a location before going to another location where the disease is not present.
False. Viruses usually start in a location and then travel. In many cases, it is vital to know about the virus, where it came from, and how a patient was infected. The whole process explored in this chapter can save lives.
True. For a simple disease.
False. When the diagnosis involves many parameters, AI can be a great help.
False. A doctor needs to know why an ML algorithm reached a critical decision.
True. This is 100% certain. Humans who work with AI and understand AI will obtain vital information to reach a diagnosis quickly in life-and-death situations.
True. Technically it is possible.
False. Though possible, it is not legal.
False. The simulations in this chapter prove that it should be avoided in heavy traffic until autopilots can deal with any situation.
True. Modifying the size and depth of a decision tree is a good tool to explain the algorithm.
True. If the training reached an accuracy of 1.
False. New situations might confuse the algorithm.
False. A decision tree learns how to decide.
False. A binary decision tree is designed for two classes.
True. Explaining ML by visualizing different sizes and depths of a decision tree is efficient.
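To make the decision tree answers above concrete, here is a minimal sketch, assuming scikit-learn and the Iris dataset as stand-ins for the chapter's own model and data. Printing the tree's rules at several depths shows how size and depth control how explainable the tree remains:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

for depth in (1, 2, 3):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(iris.data, iris.target)
    print(f"--- max_depth={depth}, training accuracy={tree.score(iris.data, iris.target):.2f} ---")
    # export_text prints the tree's rules, which stay readable at low depths
    print(export_text(tree, feature_names=iris.feature_names))

At a depth of 1 or 2, the printed rules are short enough to explain to any user; you can then balance accuracy against readability.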
False. The trolley problem applies to a runaway trolley that can potentially kill pedestrians.
True. Ethics should forbid machines from doing this, although we don't know if we can stop the use of autopilot weapons.
True. Autopilots progress each day. In the meantime, we should be careful when using autopilots in heavy traffic.
True. In most cases, the datasets require a fair amount of quality control before they can be used as input data for ML models. In rare cases, such as in companies that constantly check the quality of their data, the datasets are ready to use.
False. In most cases, data is missing.
True. In some critical areas, such as aerospace projects, the data is clean.
False. The distribution distance is measured between two data distributions.
False. Non-uniformity has profound effects on the outputs of an ML model. However, in some cases, non-uniform datasets reflect the reality of the problem we are trying to solve, and the challenge is to find a solution!
True. You can design your dataset so that the features appear in an order that displays the reasoning you wish to convey.
True. By binning different features, you can visualize how each feature influences the outcome of the machine learning model.
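As a minimal sketch of the binning idea, assuming pandas and placeholder column names ("age" as the feature, "outcome" as the label):

import pandas as pd

df = pd.DataFrame({"age": [22, 35, 47, 52, 63, 71, 29, 58],
                   "outcome": [0, 0, 1, 1, 1, 1, 0, 1]})

# Cut the feature into bins, then look at the mean outcome per bin
df["age_bin"] = pd.cut(df["age"], bins=[0, 30, 50, 70, 100])
print(df.groupby("age_bin", observed=True)["outcome"].mean())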
False and True. If the median divides a feature's values into two very different groups, you might have an unstable feature.
However, this could also be a key aspect of the dataset to take into account.
False. Quality control of a dataset comes first. Building an ML model using unreliable data will lead to a waste of time and resources.
True. By visualizing your datasets, you can see where your ML model needs fine-tuning. You might find missing data, zeros, non-uniform data distributions, and more problems that will help you improve your datasets and ML model.
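The quality-control answers above can be condensed into a short sketch, assuming pandas and SciPy; the column name, values, and distributions are all placeholders. It checks missing values, zeros, the median split mentioned above, and the distance between two data distributions:

import numpy as np
import pandas as pd
from scipy.stats import wasserstein_distance

df = pd.DataFrame({"feature": [1.0, 0.0, np.nan, 4.2, 0.0, 3.3]})

print(df.isna().sum())   # missing values per column
print((df == 0).sum())   # zeros that may be hidden missing values

# Median split check for an unstable feature
median = df["feature"].median()
low, high = df[df["feature"] <= median], df[df["feature"] > median]
print(low["feature"].mean(), high["feature"].mean())

# Distance between two data distributions (e.g., train vs. test)
train = np.random.normal(0, 1, 1000)
test = np.random.normal(0.5, 1, 1000)
print(wasserstein_distance(train, test))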
False. Shapley values are particularly interesting because they are not ML model dependent.
False. The output of an ML model is necessary to analyze the contributions of each feature to a result.
True.
True. Each feature can then be compared to the marginal contribution of other features when analyzing the output of an ML model.
True. This is a key function before running an ML algorithm.
True. Calculating the frequency of a feature in a dataset can help explain ML model outputs.
False. SHAP is model-agnostic. It can be used to explain many types of ML models.
True. One feature can change a result.
False. Many other features can also have high Shapley values. In this case, the features can form a coalition to influence the outcome of a prediction, for example.
False. In some cases, an ML model's results are challenging to explain. Isolating samples to run a unit test can save a lot of time.
True. If an ML model is easy to explain with SHAP, for example, creating a unit test is useless.
True. If an ML model keeps producing errors, SHAP can help track them using SHAP's numerical and visual tools.
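A minimal SHAP sketch, assuming a tree-based scikit-learn model and a public dataset as stand-ins for the chapter's own:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(data.data, data.target)

# TreeExplainer computes one Shapley value per feature per sample
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# The summary plot is one of SHAP's visual tools for tracking contributions
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)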
False. You will have conflicts with people who are offended by AI solutions that do not take moral considerations into account.
True. An ethical approach will make your AI programs and explanations trustworthy.
False. You must verify whether the data used is legal.
True. ML can provide automated help to check datasets.
False. You can load and analyze raw data without running an ML algorithm first.
True. In some cases, you can anticipate what the ML algorithm will produce.
False. In other cases, the number of parameters and the amount of data will be overwhelming. ML will be required.
True. This is an essential aspect of explainable AI and AI.
True. Yes, in some cases, this will work.
False. In other cases, the results will be either inaccurate or biased.
True. In some cases, an ML algorithm will produce good results from a metrics perspective and still produce false predictions. In this case, you must check the data and the model in detail.
True. This is one of the excellent functions provided by WIT.
True. The developer is accountable for the philosophy of an AI system.
False. Each country has its own legal obligations and guidelines.
False. Other estimators would produce good results as well, such as decision trees or linear regression models.
True. The values will show if some features are making the wrong contributions, for example.
True. We can visualize the contribution of biased data.
True. People-centered AI systems will outperform AI systems that have no human guidance.
False. ROC stands for "receiver operating characteristic." An ROC curve plots the true positive rate against the false positive rate.
False. AUC stands for "area under the curve." If the area under the ROC curve is small, the true positive rate is failing to approach an accuracy of 1.
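As a short illustration of both terms, here is a minimal sketch with scikit-learn, using placeholder labels and model scores:

from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3]

# fpr/tpr trace the ROC curve: true positive rate vs. false positive rate
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))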
True. If the goal of the model, the ground truth, is false, the model is false.
True. Technically, yes. This would be a personal decision to accept bias.
False. Another machine learning program would need to go through human moral and ethical control, which brings us back to people-centered AI. Legal constraints will most certainly slow the distribution of the AI system down.
True. Human intelligence added to machine intelligence will contribute to the wide distribution of AI in all fields based on accuracy and trust.
True. The Google Python client can communicate with Dialogflow.
True. Yes, a chatbot can help explain AI in an engaging way.
False. You must configure Dialogflow.
False. Intents contain training phrases and responses.
False. A training phrase contains a question or phrase, not the response.
True. It is a good way of improving a dialog by remembering the previous exchanges.
True. A follow-up question continues a dialog.
True. Introducing some small talk makes the dialog less technical and cold.
True. Yes. There is a small talk feature in Dialogflow.
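A hedged sketch of querying a configured Dialogflow agent from Python, following the pattern of the official google-cloud-dialogflow client; project_id and session_id are placeholders, and the agent's intents, training phrases, and responses must already exist:

from google.cloud import dialogflow

def detect_intent(project_id, session_id, text, language_code="en"):
    # One session per conversation keeps context and follow-up intents working
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # The matched intent and the response defined in the agent's configuration
    return (response.query_result.intent.display_name,
            response.query_result.fulfillment_text)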
True.
False. LIME's unique approach measures the truthfulness of a prediction locally.
True.
True.
False. LIME's algorithm is model-agnostic.
False. LIME's explainer applies to a range of models.
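A minimal LIME sketch on tabular data, assuming scikit-learn and a public dataset as stand-ins for the chapter's own model:

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Explain one prediction locally, whatever the underlying model is
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # (feature condition, local weight) pairs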
False. The predictions might seem accurate, but the global accuracy of a model must be displayed and explained.
True. Even a poor model can produce true positives and true negatives. However, the model is globally unreliable.
True. But the same model might produce false positives and false negatives as well. A model might be accurate but still requires constant monitoring.
True. A model might not fit a specific dataset for several reasons. First, try to refine the model and check the datasets. If not, perhaps another model might produce better predictions.
False. Any prediction should be justified, whether it is true or false.
False. A model can be accurate but for the wrong reasons, whatever they may be.
True. A user will not trust a prediction without a certain amount of belief.
False. In some cases, such as a medical diagnosis, some truths are difficult to believe.
True.
False. A counterfactual explanation is model-agnostic.
True.
True.
True.
True.
True. Furthermore, explaining automatic decisions will eventually become mainstream as consumer lawsuits challenge controversial choices made by bots.
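As a toy illustration of the counterfactual idea, and not any specific library's method, the sketch below finds the closest training sample that the model classifies differently from the query:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

query = X[0]
pred = model.predict([query])[0]

# Candidates predicted as a different class, ranked by distance to the query
others = X[model.predict(X) != pred]
counterfactual = others[np.argmin(np.linalg.norm(others - query, axis=1))]
print("query:", query, "-> class", pred)
print("closest counterfactual:", counterfactual)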
False. CEM focuses on the missing features.
False. General practitioners often rule out symptoms that are absent when assessing a patient's condition.
True.
False. CEM can use the output of a CNN and an autoencoder to produce explanations.
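As a toy illustration of CEM's pertinent-negative idea, and not IBM's implementation, the sketch below asks which absent (zero) feature would flip the model's prediction if it were present; the symptom data is invented:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Binary symptom vectors (placeholder data): cough, fever, rash
X = np.array([[1, 1, 0], [0, 1, 0], [1, 1, 1], [0, 0, 1]])
y = np.array([1, 1, 0, 0])  # 1 = flu, 0 = other
model = DecisionTreeClassifier(random_state=0).fit(X, y)

patient = np.array([1, 0, 0])
base = model.predict([patient])[0]

for i in np.where(patient == 0)[0]:
    trial = patient.copy()
    trial[i] = 1  # add one absent feature at a time
    if model.predict([trial])[0] != base:
        print(f"Adding feature {i} flips the prediction: a pertinent negative")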
True. It is the example given by the IBM Research team.
True.
True.
False.
True.
False. LIME explanations explain predictions locally.
True.
False. An XAI tool is model-agnostic but not dataset-agnostic.
False. Anchors are model-agnostic.
False. Anchors are model-agnostic.
True.
True.
False.
True.
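A hedged sketch of anchor explanations, assuming the open-source alibi library's AnchorTabular class (an assumption, since the chapter's own tool may differ); the dataset and model are stand-ins:

from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchors are model-agnostic: only the predict function is passed in
explainer = AnchorTabular(model.predict, feature_names=list(data.feature_names))
explainer.fit(data.data)

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # the if-then rule that "anchors" the prediction
print(explanation.precision)  # how often the rule yields the same prediction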
True. You must carefully choose the XAI tools you will use for a specific dataset and a specific ML model.
True.
True.
True.
True.
True.
True.
True.
True.
True.
True.