Over the last 10 chapters of this book, we have traveled across the entire landscape of Explainable AI (XAI), covering the different types of explainability methods used in practice across the dimensions of explainability (data, model, outcome, and the end users). XAI is an active field of research that, I think, is yet to reach its full potential. But the field is growing rapidly, along with the broader domain of AI, and we will witness many new algorithms, approaches, and tools being developed in the future. Most likely, the new methods and tools of XAI will be better than the existing ones and will be able to tackle some of the open challenges of XAI discussed in Chapter 10, XAI Industry Best Practices. Unfortunately, we cannot extend the scope of this book to cover all possible approaches to XAI. However, the goal of this book is to blend a conceptual understanding of the field with the practical skills required to apply it, so that it serves as a useful starting point for beginners and adds to the applied knowledge of experts.
In the previous chapter, we discussed the recommended practices for implementing an explainable Machine Learning (ML) system from the industry perspective. We also discussed the existing challenges of XAI and some recommended ways to mitigate them. Considering these challenges, in this chapter, we will focus on the ideology of End User-Centered Artificial Intelligence (ENDURANCE). This term is often used to refer to sustainable and scalable AI solutions that are built keeping the user at the center. It is recommended that you read the previous chapter before starting this one for a better understanding. ENDURANCE is neither a new algorithm nor a new, sophisticated tool for XAI. Instead, it is a practice; it is a methodical discipline to bridge the gap between AI and end users.
This chapter will be particularly useful for researchers from the field of AI and Human-Computer Interaction (HCI) who view XAI from a multidisciplinary perspective. It is also useful for business leaders who want to drive problem solving using AI, considering a seamless User Experience (UX). For AI developers and thought leaders, this chapter will help you to design your AI solutions keeping the end user in the center and promoting AI adoption.
This chapter focuses on the following main topics:
Let's proceed with the first topic of discussion in the next section.
For most industrial problems, AI solutions are developed in isolation, and users are only introduced in the final stages of the development process, after a minimum viable solution is ready. With this conventional approach, product leads or product managers tend to project the solution from the development team's perspective rather than from the perspective of the users' goals. This approach is absolutely fine, and it might work really well for certain use cases that require the technical team to develop through abstraction. However, if users are not involved in the early stages of the implementation process, it has often been found that they are reluctant to adopt the solution. So, the ENDURANCE ideology is focused on developing solutions by involving the final users right from the design phase of the solution.
The ENDURANCE ideology focuses on the principles of HCI and emphasizes the importance of the distributed cognition of the user. With this ideology, the entire solution comprising the User Interface (UI), AI algorithms, underlying dataset, XAI component, and end user's experience is considered collectively as a system, rather than considering the individual components in isolation. This ensures that explainability is baked into the system instead of being offered as an add-on service for the user. From what I have observed, most industrial AI solutions are developed in isolation as a separate component and then added to the main software system as an add-on or premium feature. Similarly, the XAI component is also treated as an add-on feature after being developed in isolation. Consequently, the seamless UX can be hampered, and the main benefits of the AI solution and the XAI component may not be realized to their full potential. This is why we should focus on the design and development of the entire user-centric XAI/ML system.
Next, let's discuss the various aspects of end user-centric XAI that we should consider while designing the solution.
In this section, we will discuss the different principles of human factors that should be integrated while designing the XAI system using the ENDURANCE ideology for bridging the AI and end user gap.
The primary questions that the field of HCI tries to address are Who are the users? and What are their needs? In other words, it tries to understand the goal relevance of the solution for the user. If the solution does not effectively solve the problem and meet the users' needs without introducing other challenges, it is not relevant. Overlooking goal relevance is probably one of the main reasons why the majority of AI solutions are either scrapped or adopted with a lot of skepticism.
The recommended approach to evaluating goal relevance is to check whether the users can achieve their goals without the introduction of other challenges. Along with goal relevance, I often recommend assessing the impact of the solution. The impact can be qualitatively measured by collecting the users' feedback on how well they can achieve their goals when the solution is absent.
As discussed before, in most industrial use cases, XAI is used in isolation to provide explainability without considering the user needs. Instead, using the ideology of ENDURANCE, XAI should connect the user needs with the strength of the AI algorithm. Once the user needs are identified, translate the user needs into data needs and model needs. If the underlying dataset is not sufficient to meet all the user needs, use data-centric XAI to communicate the limitations of the dataset to the user. If the model needs are identified, use XAI to interpret the working of the model, and tune accordingly to meet the needs of the user.
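The data-needs step above can be sketched in code. The following is a minimal illustration (in plain Python, with hypothetical feature names and training data) of a data-centric check that communicates a dataset limitation to the user: if the user's input falls outside the value ranges observed in the training data, the system surfaces a plain-language caution alongside the prediction.

```python
# Minimal sketch of data-centric XAI: before presenting a prediction,
# check whether the user's input lies within the ranges the model was
# trained on, and report the limitation in plain language.
# The feature names and data below are hypothetical illustrations.

def coverage_report(train_rows, query, features):
    """Flag query features that fall outside the observed training range."""
    warnings = []
    for f in features:
        values = [row[f] for row in train_rows]
        lo, hi = min(values), max(values)
        if not lo <= query[f] <= hi:
            warnings.append(
                f"'{f}' = {query[f]} is outside the training range "
                f"[{lo}, {hi}]; treat this prediction with caution."
            )
    return warnings

# Hypothetical training data and a user query that is out of range on 'age'.
train = [{"age": 25, "income": 30_000},
         {"age": 45, "income": 90_000},
         {"age": 60, "income": 55_000}]
query = {"age": 17, "income": 70_000}

for msg in coverage_report(train, query, features=["age", "income"]):
    print(msg)
```

Real systems would use richer out-of-distribution checks than simple range tests, but even this level of transparency lets the user calibrate how much to trust an individual prediction.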
But this process can be challenging, as it involves identifying the existing mental model of the user. The introduction of AI and XAI should not disrupt the user's existing workflow.
Moreover, it is also recommended that you use XAI to explain whether the AI solution is adding any unique value. However, design the explainability methods to justify the benefits of the solution, not the underlying technology used. For example, if the system conveys to the user that complex deep learning algorithms are used to predict the outcome, it does not increase the confidence of the user. Instead, if the system conveys that the intelligent solution helps the user reach their goal five times faster than the conventional approach, the user will be much more willing to adopt the solution.
With conventional approaches, most AI practitioners focus only on developing accurate AI models, paying much less attention to the user's interaction with the model. Generally, the user's interaction with the AI component is decided by the software engineering teams; unfortunately, in most organizations, the data science and AI teams work in silos. But it is the UI that controls the level of visibility, explainability, or interpretability of the AI models, and it plays a vital role in influencing the user's trust in the system.
In Chapter 10, XAI Industry Best Practices, while discussing Interactive Machine Learning (IML), we discussed how the user's interaction with the system through the UI gives more confidence to the user about the working of the AI/ML system. Hence, the UI should be in alignment with the AI model and its explainability methods to calibrate the user's trust. You can find out more about calibrating the user's trust using the UI in the People + AI Guidebook from Google PAIR: https://pair.withgoogle.com/chapter/explainability-trust/.
Unlike conventional approaches, the user-centric approach recommends involving the final user(s) early in the development process. The end user should, in fact, be involved from the design phase of the system's UI, so that the needs of the user are correctly mapped into the interface. As with the design and development life cycle of the solution, explainability should also evolve iteratively, based on continuous feedback from the user.
As the ENDURANCE ideology views the XAI/ML system as one solution, the entire solution should have a design phase, prototype phase, development phase, and evaluation phase. These four phases would collectively form one iteration of design and development. Likewise, the entire solution should be matured in several iterations, keeping the user involved in every single phase of each iteration. This process is also in alignment with the agile methodology followed in software engineering. Involvement of the user in every phase ensures that useful feedback is collected for evaluating whether the user's needs are being met by the solution. Early involvement also ensures that the users are familiar with the design and working of the new system. Users' familiarity with the system increases the adoption rate of the system.
As discussed in the previous section, the user's feedback is indispensable in every phase of the design and development of the solution. But sometimes, a general framework of a solution doesn't fulfill all the needs of the user.
For example, when using counterfactual examples, it is technically possible to generate an example by varying all the features used for the prediction. But suppose the user is only interested in changing a specific set of actionable variables. In that case, controlled counterfactuals should modify only the features that matter to the user. It has been found that a tailor-made, personalized solution is often more useful to the end user than a generalized one. So, using the feedback obtained from the user, try to provide a personalized solution that addresses the specific pain points of the user.
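The idea of restricting counterfactuals to actionable variables can be sketched as follows. This is a deliberately minimal brute-force illustration on a toy loan-scoring model; the model, feature names, and candidate values are hypothetical, and practical counterfactual libraries use far more sophisticated search, but the key point is that immutable features (here, age) are never varied.

```python
# Minimal sketch of a "controlled" counterfactual search: only the
# features the user marks as actionable are perturbed. The scoring
# model and feature names here are hypothetical illustrations.
from itertools import product

def predict(instance):
    # Toy stand-in for a trained classifier: approve if score >= 0.5.
    score = (0.3 * instance["income"] / 100_000
             + 0.4 * (1 - instance["debt_ratio"])
             + 0.3 * instance["age"] / 100)
    return score >= 0.5

def controlled_counterfactual(instance, actionable, candidate_values):
    """Search over actionable features only; immutable ones stay fixed.

    Prefers the counterfactual that changes the fewest features.
    """
    best = None
    for combo in product(*(candidate_values[f] for f in actionable)):
        candidate = dict(instance)
        candidate.update(zip(actionable, combo))
        if predict(candidate):
            n_changes = sum(candidate[f] != instance[f] for f in actionable)
            if best is None or n_changes < best[1]:
                best = (candidate, n_changes)
    return best[0] if best else None

applicant = {"income": 40_000, "debt_ratio": 0.8, "age": 30}  # rejected
cf = controlled_counterfactual(
    applicant,
    actionable=["income", "debt_ratio"],  # the user can act on these
    candidate_values={"income": [40_000, 60_000, 80_000],
                      "debt_ratio": [0.8, 0.5, 0.2]},
)
print(cf)  # 'age' is guaranteed to remain untouched
```

Here the explanation the user receives ("reduce your debt ratio to 0.2") is actionable, whereas an unconstrained counterfactual might have suggested changing an immutable attribute such as age.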
As we previously discussed in Chapter 10, XAI Industry Best Practices, explanations should be contextual and actionable. The entire XAI/ML system should also be in alignment with the user's actions and should have context awareness. XAI plays a vital role in connecting AI to the user's action and modifying any AI solution into a contextual AI solution.
Oliver Brdiczka, in his article Contextual AI: The Next Frontier of Artificial Intelligence (https://business.adobe.com/blog/perspectives/contextual-ai-the-next-frontier-of-artificial-intelligence), defined the following four pillars of contextual AI:
The following figure shows the four different components of contextual AI:
So, considering user-centric approaches, the XAI component of XAI/ML systems should provide actionable insights, and it should be contextual to further bridge the gap between AI and end users. Now that we have discussed user-centric approaches to bridging possible gaps between AI and end users, given the open challenges of XAI discussed in Chapter 10, XAI Industry Best Practices, let's discuss making rapid XAI prototypes using the End User-Centric Explainable Artificial Intelligence (EUCA) framework.
In the previous section, we discussed the key ingredients of a user-centered XAI/ML system. In this section, the importance of rapid prototyping in the ENDURANCE ideology will be emphasized. Rapid prototyping is a concept that is predominantly adopted in software engineering as software is probably the most malleable thing created by mankind. Building fast prototypes is an approach for collecting useful user feedback early in the development process of a software product. Hence, even for designing user-centered XAI/ML systems, rapid prototyping is very important.
Jin et al., in their research work EUCA: the End-User-Centered Explainable AI Framework (https://arxiv.org/abs/2102.02437), introduced a toolkit called EUCA. EUCA is a very interesting framework, primarily designed by UX researchers, HCI researchers and designers, AI scientists, and developers for building rapid XAI prototypes for non-technical end users. The official GitHub repository for the EUCA framework is available at https://github.com/weinajin/end-user-xai. It is strongly recommended that you use EUCA to build low-fidelity prototypes of XAI/ML systems and iteratively improve them based on continuous user feedback.
The following important components are offered by this framework:
The following figure illustrates the different types of explanation methods currently supported by the EUCA framework:
This framework is a great starting point and definitely recommended for building rapid XAI prototypes. Next, let's discuss some additional efforts that can be made to increase user acceptance of AI/ML systems.
In this section, we will discuss some recommended practices to increase the acceptance of AI/ML systems using XAI. For most software systems, the User Acceptance Testing (UAT) phase is used to make the go or no-go decision for releasing the software. Similarly, before the final production phase, more and more organizations prefer running a robust UAT process for AI/ML systems. But how important is the explainability of AI algorithms when doing UAT of AI/ML systems? Can explainability increase the user acceptance of AI? The short answer is yes! Let's go through the following points to understand why:
The preceding approaches are certain ways to increase user acceptance, but ultimately, user acceptance depends on the overall UX. In the next section, we will discuss further the importance of providing a delightful UX.
In this section, we will focus on the importance of overall UX to promote the adoption of XAI/ML systems. Aaron Walter, in his book Designing for Emotion (https://abookapart.com/products/designing-for-emotion), mentioned some of the foundational elements of user needs that must be met before higher motivation can influence the behavior of the user. According to his hierarchy of user needs, pleasurable or delightful UX is at the top of the pyramid. The following figure shows Aaron Walter's hierarchy of user needs:
This hierarchy of user needs defines the fundamental needs of the end user that should be fulfilled before any advanced needs are addressed. So, if a system is only functional, reliable, and usable, that is not sufficient for users to adopt it; the overall UX must also be delightful and enjoyable! Hence, XAI/ML systems should also provide a seamless overall experience to truly bridge the gap between AI and end users.
This brings us to the end of the last chapter of this book. We will summarize the key topics of discussion in the next section.
In this chapter, we have primarily discussed using the ideology of ENDURANCE for the design and development of XAI/ML systems. We have discussed the importance of using XAI to steer us toward the main goals of the end user for building XAI/ML systems. Using some of the principles and recommended best practices presented in the chapter, we can bridge the gap between AI and the end user to a great extent!
This also brings us to the end of this book! Congratulations on reaching the end! This book was carefully designed to include a conceptual understanding of various XAI concepts and jargon, practical examples of using popular XAI frameworks for applied problem solving, real-life examples and experiences from an industrial perspective, and references to important research literature to further expand your knowledge. This book introduced you to the field of XAI from both an industrial and an academic research perspective. The open challenges and the next phases of XAI research discussed in this book are important research problems that are being explored by the AI research community.
Even though this book touched on almost every aspect of the field of XAI, there is clearly a lot more to explore and unravel. My recommendation is not to restrict yourself to what this book offers. Instead, use this book as a starting point of reference: explore and apply the knowledge gained from it to practical use cases, and step forward to contribute to the community!
Please refer to the following resources to gain additional information: