
Explainable AI as a User-Centered Design Approach


Creating user-friendly explanations still poses a challenge in the field of Explainable AI. Yet it is indispensable for meeting the demand of various user groups without ML expertise for transparency and comprehension of black-box models. Considering the social-psychological origin of explanations, effective XAI requires designing the interaction between an agent and a human user. In the transition from AI as a black box to AI as an explainer, interactive user interfaces offer a promising way to facilitate this dialogue.

The following blog post summarizes the findings of my bachelor thesis, in which I explore how UI/UX design and XAI can be linked. To make this tangible, I will guide you through my process of researching, selecting, and visualizing XAI methods along the user-centered design process, eventually resulting in an approach for an explainable user interface of an ML-based demand forecasting system.

The dilemma surrounding Explainable AI

In the ever-evolving landscape of technology, artificial intelligence has not only become seamlessly integrated into our daily lives but also profoundly impacts high-stakes decision-making in sectors like healthcare, finance, criminal justice, and retail. While AI models excel in accuracy and efficiency as data sets grow, they often operate as black boxes, making it challenging to understand their outcomes and decision processes.

Explainable Artificial Intelligence (XAI) is an emerging research field concerned with moving towards a more transparent approach to AI. It is therefore regarded as a key element for developing trustworthy machine-learning applications. While various terms like interpretability, explicability, and transparency are used interchangeably, the core goal of XAI is to make AI decisions understandable to humans.

Despite the substantial algorithm-centric progress in XAI, critics argue that the field primarily caters to the needs and expertise of ML engineers. However, the increasing automation of processes has brought AI into direct interaction with various user types, including decision-makers, domain experts, business stakeholders, and lay users, most of whom lack in-depth AI knowledge.

To fulfill the diverse explainability needs of its user groups, XAI must adopt a human-centered approach. Yet one fundamental question remains: What defines user-friendly explanations? Exploring the concept of explanations from a social-psychology perspective promises valuable insights.

Explanations are social interactions

Explanations have always been an integral part of human learning and cognitive processing. In his research at the intersection of the social sciences and XAI, researcher Tim Miller presents four findings on explanations:

  1. Explanations are contrastive and sought in response to counterfactual scenarios. We inquire not just why something occurred but why it happened instead of something else.
  2. Explanations are selectively chosen, with a preference for concise and precise explanations. Humans tend to identify one or two causes for an event, favoring clarity over complexity.
  3. Probabilities alone may not effectively identify underlying causes. Causal explanations alongside statistical generalizations enhance overall satisfaction.
  4. Explanations are social. In its origin, an explanation is a social construct within a conversation or interaction where knowledge is transferred between an explainer and an explainee.

Applying this understanding to XAI highlights the shift in human-computer interaction. In this conversation, the agent assumes the role of what Miller calls the ‘explainer’, conveying decisions to human users, the ‘explainees’.

Achieving genuine explainability in XAI therefore makes it essential to create a common basis for communication between the AI system and its users. Employing user interfaces as mediators, we can facilitate this dialogue visually. This approach enables users to interact, adjust inputs, and explore functionalities, fostering a deeper understanding and, ultimately, trust in the system.

Still, effective communication of explanations hinges on understanding users’ specific explainability needs when using AI systems. This necessitates a tailored development process for AI tools, and that is where user-centered design comes into play.

User-Centered Design Process

User-centered design (UCD) is rooted in the continuous involvement of users in the design and development process to maintain a focus on their needs. This process, as per ISO 9241-210:2010, comprises four key steps:

  1. Understand and specify the user and the context of use:
    This step involves understanding users, their goals, needs, and the context in which they engage with the system. Research methods such as interviews, surveys, and observations help uncover user pain points and preferences, ensuring a user-centric approach.
  2. Specify the user requirements:
    Based on insights from the prior step, this phase establishes clear design goals and key challenges the design solution needs to address to guide the design process effectively.
  3. Produce design solutions:
    The third phase involves creating design solutions that meet the defined requirements and user needs. Designers engage in brainstorming, sketching, prototyping, and iterative refinement, with user feedback playing a pivotal role in aligning the design with user expectations.
  4. Evaluate the design against the requirements:
    In the final step, the design is evaluated for alignment with evolving user needs. Usability testing and feedback sessions involve users throughout the development process, allowing for early identification of usability issues. The design is then refined and iterated to enhance user satisfaction and usability.

In essence, UCD revolves around constant user involvement, ensuring that the design process remains user-focused, from understanding their needs to delivering a user-friendly final product.

Use Case: Demand Forecasting in Retail

Numerous open-source libraries and toolkits strive to empower practitioners with access to a range of XAI techniques and explanations for black-box ML models. Yet, what remains challenging is translating these theoretical frameworks into real-world applications.

To make things more tangible, I will showcase a human-centered approach to how XAI can be effectively implemented, using the practical example of an AI-driven demand forecasting system in retail. In the following sections, I will illustrate the results I obtained during the first iteration of the user-centered design process within my thesis, starting from the initial user research and moving on to the selection of suitable XAI methods. Finally, I will introduce my approach for visually applying XAI in a user interface using design patterns specifically tailored to AI products.

Phase 1: Researching User and Context of Use

End users of AI-based forecasting systems are typically domain experts, highly skilled in logistics, supply chain management, and demand forecasting. Their extensive experience drives their reliance on instinct and intuition for assessing the system’s trustworthiness. Understanding the rationale behind specific predictions is their primary concern, guiding their decision-making.

When AI predictions deviate from their expectations, users lose confidence in the system. Faced with incomprehensible forecasts, they seek explanations and support from the development team, typically ML engineers and data scientists. For the developers, however, providing these explanations entails significant additional manual effort in time and resources, eventually hindering system scalability.

Additionally, end users demand control to override AI predictions, especially when discrepancies arise from their domain expertise and intuition. Although users are already able to adjust predictions, the decentralized implementation of the system landscape leads to opacity in the prediction changes made by users, resulting in hidden feedback loops that impact system development.

Phase 2: Selection of XAI methods

At this point, the procedure within the user-centered design process entails deriving user requirements from these findings. For selecting appropriate XAI methods, an intermediate step becomes necessary: identifying specific explainability needs based on our user research.

We recall that explanations in XAI are a form of knowledge exchange between an AI agent and human users. In this interaction, the AI system is supposed to provide understandable explanations in response to user queries. In a similar effort, Liao et al. categorized the questions users pose to AI systems into nine explainability need categories within their “XAI Question Bank”. Accordingly, they also developed a “Mapping Guide” recommending explanations and suitable XAI methods for each category.
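
To give a flavor of what such a mapping can look like in practice, here is a condensed, purely illustrative sketch in Python; the category phrasings and method assignments are simplified paraphrases of my own, not a verbatim reproduction of Liao et al.'s guide:

```python
# Condensed, illustrative mapping in the spirit of Liao et al.'s Mapping Guide
# (paraphrased question categories, simplified method assignments).
QUESTION_TO_XAI_METHODS = {
    "Why this prediction?": ["local feature importance", "prototypical examples"],
    "Why not a different outcome?": ["contrastive / counterfactual explanations"],
    "What if an input changes?": ["interactive what-if analysis"],
    "How do I get outcome X?": ["counterfactual examples"],
    "How does the model work overall?": ["global feature importance", "surrogate models"],
}

# During requirements analysis, each question gathered in the user research is
# classified into a category to shortlist candidate XAI methods for the design.
```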

Within my process, I referred to this XAI Question Bank and classified the findings from my user research into the given categories to guide the selection of suitable XAI methods. The results reveal that most of the insights gathered from the user research align with categories that pertain to local explanations rather than to explaining the complex, model-wide global logic. ‘Local’ here refers to explanations for specific, individual predictions rather than for the entire model behavior, which would, in contrast, be the case for ‘global’ explanations.

Notably, all of the identified categories are commonly mapped to example-based methods, i.e., explaining predictions by means of example instances.

Example-based explanations come in two types:

Prototypical examples: These closely resemble the instance and yield the same (or a very similar) predicted outcome, making them suitable for substantiating the model’s prediction.

Counterfactual examples: They are similar to the instance in most but not all circumstances (features) but lead to a significantly different prediction. Counterfactual examples highlight the smallest change required to achieve an alternative outcome, offering insights into the actions needed to reach the desired result.
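
To make the distinction concrete, here is a minimal sketch of how both kinds of examples could be retrieved from historical data; the function, the Euclidean distance, and the tolerance of five units are my own simplifying assumptions, not part of the thesis:

```python
import numpy as np

def find_examples(instance, history_X, history_y, predicted, tolerance=5, k=3):
    """Retrieve example-based explanations for one prediction.

    instance:   encoded feature vector of the prediction to explain
    history_X:  matrix of encoded feature vectors of past instances
    history_y:  observed demand of those past instances
    predicted:  the model's demand prediction for `instance`
    """
    # Rank past instances by similarity to the instance being explained.
    order = np.argsort(np.linalg.norm(history_X - instance, axis=1))

    # Prototypical: similar instances whose outcome matches the prediction.
    prototypes = [i for i in order if abs(history_y[i] - predicted) <= tolerance][:k]

    # Counterfactual: similar instances whose outcome clearly differs.
    counterfactuals = [i for i in order if abs(history_y[i] - predicted) > tolerance][:k]

    return prototypes, counterfactuals
```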

Defining Meta Requirements

These insights served as the foundation for defining two overarching objectives, which I call meta requirements. In this case, user research highlighted that users predominantly ask why a particular prediction occurred or why it did not align with their expectations. From this, the first meta requirement emerged: prioritizing clear explanations for local predictions rather than explaining the complex, model-wide global logic. A global approach is deemed overly complex and less useful, given the users’ primary focus on understanding specific predictions.

The second meta requirement relates to the users’ desire for control, a need also identified in the user research. This is especially relevant when obvious prediction errors occur or when there are real-world circumstances that the AI model cannot know because they are not encoded in its input features (e.g. the occurrence of Covid). Thus, the design solution should address this essential user need for control, empowering users to have agency over the system.

Counterfactual Examples

In the realm of XAI, counterfactual explanations are emerging as a method for explaining the complex interplay between actions and outcomes. They offer an understanding of an elementary cause-and-effect relationship.

They do so by illustrating the smallest necessary modifications required to attain the desired prediction compared to a reference example. In this framework, the term ‘effect’ refers to the anticipated outcome, while the ‘causes’ encompass the specific feature values of the instance that have influenced the model’s prediction.
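
A minimal sketch of this idea, assuming a regression model with a scikit-learn-style predict() method and purely numeric, already encoded features (the brute-force single-feature search and the tolerance of five units are illustrative simplifications):

```python
import numpy as np

def single_feature_counterfactual(model, instance, target, candidate_values, tolerance=5):
    """Find the smallest single-feature change that brings the prediction
    close to the desired target demand.

    candidate_values: dict mapping feature index -> iterable of alternative values.
    Returns (feature_index, new_value, change, prediction) or None.
    """
    best = None
    for idx, values in candidate_values.items():
        for value in values:
            modified = instance.copy()
            modified[idx] = value
            prediction = model.predict(modified.reshape(1, -1))[0]
            change = abs(value - instance[idx])  # change measured in the feature's own units
            if abs(prediction - target) <= tolerance and (best is None or change < best[2]):
                best = (idx, value, change, prediction)
    return best
```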

This approach aligns closely with the cognitive processes inherent in human counterfactual thinking. It mirrors the mental exercise where individuals contemplate alternative scenarios that could have unfolded in the past or may potentially occur in the future. This introspective ‘what if?’ questioning allows humans to mentally simulate divergent outcomes counter to factual events.

By implementing analogous principles in machine learning, counterfactual examples can be strategically leveraged to enhance the interpretability and transparency of AI systems. They provide a mechanism for explaining local predictions without exposing the intricate internal logic governing the decision-making process. Consequently, counterfactual examples emerge as a compliant means of addressing the GDPR’s ‘right to explanation’, offering suitable and permissible explanations for end users, particularly in the quest for enhanced understanding and trust within the AI landscape.

Phase 3: Visualizing example-based XAI in a user interface

When creating a design solution in the third phase of UCD, it is important to clarify what information needs to be visualized. In the context of demand forecasting, each instance represents the predicted demand for a particular product on a given day. Such an instance contains two important pieces of information: the predicted outcome, which signifies the expected sales volume, and the relevant features the AI model considers when making its prediction. For the initial iteration of our user interface, I focused on six key features: day of the week, month, price, temperature, campaign, and event.
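
As a simple illustration, such an instance could be modeled as a small data structure like the following (a hypothetical sketch; field names and units are my own choices):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ForecastInstance:
    """One example shown in the interface: the demand prediction for a
    product on a given day, together with the six features considered."""
    day_of_week: str         # e.g. "Friday"
    month: str               # e.g. "July"
    price: float             # selling price in EUR
    temperature: float       # forecast temperature in °C
    campaign: Optional[str]  # active marketing campaign, if any
    event: Optional[str]     # local event or holiday, if any
    predicted_sales: int     # expected sales volume in units
```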

To ensure a consistent user experience, I first established a standardized design for examples: horizontal cards within the interface that display the predicted sales on the left and list the corresponding feature values on the right. This design choice facilitates easy scaling, allowing multiple instances to be stacked, each explaining a single prediction. The table-like format simplifies comparisons between instances, enabling users to comprehend diverse scenarios at a glance. Furthermore, the examples are visually distinguished by the kind of instance they represent: comparable past sales, the initially predicted forecast, and AI-generated instances.
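
As a rough sketch of this card layout, the following helper prints stacked instances in a table-like form, reusing the hypothetical ForecastInstance from the previous snippet:

```python
def render_cards(instances):
    """Print stacked card rows: predicted sales on the left, feature values
    on the right, so several instances can be compared at a glance."""
    header = (f"{'sales':>6} | {'weekday':<10}{'month':<10}{'price':>8}"
              f"{'temp':>7}  {'campaign':<14}{'event':<14}")
    print(header)
    print("-" * len(header))
    for inst in instances:
        print(f"{inst.predicted_sales:>6} | {inst.day_of_week:<10}{inst.month:<10}"
              f"{inst.price:>8.2f}{inst.temperature:>7.1f}  "
              f"{(inst.campaign or '-'):<14}{(inst.event or '-'):<14}")
```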

To provide a realistic and business-relevant scenario for my user interface, I chose a specific product commonly found in retail stores: ‘Apple Pink Lady 6 pieces in a bowl’. Since this is a fictional use case, so are all the sales figures I use.

Following the Human-Centered AI (HCAI) pattern language proposed by Ben Shneiderman in 2022, the initial view of the interface provides users with a contextual overview. A bar chart takes center stage, providing a holistic view of both the forecast and the historical sales of the article being viewed. Bar charts are an effective means of data visualization, offering quick and accurate comprehension of value disparities and visually distinguishing each day as the distinct data point that it is. The chart spans five weeks, divided into two sections: the past three weeks of sales on the left and a two-week forecast on the right. Additional features impacting sales, including price, event, and campaign, are incorporated into the design, providing users with pertinent information at a glance.
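
A rough sketch of such an overview chart with matplotlib, using fictional numbers in line with the use case (the exact visual design in the thesis differs):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(35)                        # five weeks in total
past_sales = rng.integers(40, 90, size=21)  # three weeks of historical sales
forecast = rng.integers(40, 90, size=14)    # two-week forecast

fig, ax = plt.subplots(figsize=(10, 3))
ax.bar(days[:21], past_sales, color="grey", label="past sales")
ax.bar(days[21:], forecast, color="tab:blue", label="forecast")
ax.axvline(20.5, linestyle="--", color="black")  # today: split between history and forecast
ax.set_xlabel("day")
ax.set_ylabel("units")
ax.set_title("Apple Pink Lady 6 pieces in a bowl")
ax.legend()
plt.show()
```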

If the output of the AI is unpleasant, surprising, or unfamiliar, users need explanations to evaluate the AI’s decision. A detail view therefore grants access to additional information, enhancing understanding. This includes two service levels indicating the statistical probability of meeting the demand for the item over a specific period, factoring in an efficient optimization parameter. To foster trust, prototypical examples from past sales are presented to substantiate the current prediction. Additionally, a bar chart illustrating each feature’s influence is displayed, serving as an explanation of how the individual features contribute to the forecast.
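
The thesis does not prescribe a specific technique for this feature-influence chart; one common way to obtain such scores is permutation importance, sketched below on a stand-in gradient-boosting model trained on synthetic data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

feature_names = ["day_of_week", "month", "price", "temperature", "campaign", "event"]

# Synthetic stand-in for the forecasting model and its training data.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = 50 + 30 * X[:, 2] + 10 * X[:, 3] + rng.normal(0, 2, 500)
model = GradientBoostingRegressor().fit(X, y)

# Permutation importance: how much the error grows when one feature is shuffled;
# these scores could feed the importance bar chart in the detail view.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name:<12} {score:.3f}")
```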

To cater to users’ desire for control, the interface integrates a simulation mode. This view allows users to explore the system’s functionalities without triggering consequential outcomes. The challenge, however, lies in explaining the interdependencies between features and predicted outcomes. As a solution, the interface is split into two parts: the left side displays the distinct features, and the right side shows the corresponding prediction. This division enables accurate visualization of both one-to-one relations, where a single feature combination results in one prediction, and one-to-many relations, where one prediction can stem from multiple feature combinations.

Upon entering the simulation view, each feature is presented as an individual interactive element that allows users to adjust its value. Any adjustment to these features triggers immediate feedback: the prediction on the right side of the screen changes, so users can witness the effect in real time. Furthermore, users can tailor their experience by selectively enabling the features they wish to observe, toggling them in the feature list on the left side of the screen.
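
Under the hood, such a what-if interaction boils down to re-running the model on a modified feature vector; a minimal sketch, again assuming a scikit-learn-style predict() and an encoded numeric feature vector:

```python
def simulate(model, instance, feature_index, new_value):
    """Change one feature value and return (baseline, adjusted) predictions,
    so the interface can display the effect of the adjustment immediately."""
    baseline = model.predict(instance.reshape(1, -1))[0]
    modified = instance.copy()
    modified[feature_index] = new_value
    adjusted = model.predict(modified.reshape(1, -1))[0]
    return baseline, adjusted

# Hypothetical usage: lower the price (here feature index 2) and compare demand.
# before, after = simulate(model, instance, feature_index=2, new_value=1.49)
```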

Within the simulation view, users can also switch directions and modify the predicted outcome instead. The left side of the screen then serves as a platform for providing a comprehensive explanation of the prediction, showcasing two categories of prototypical examples. First, the system generates instances based on the AI model, illustrating similar scenarios that result in the same demand prediction. Beneath these, specific instances of comparable past sales are displayed, underscoring similar sales achieved under analogous conditions.

To customize the required explanation, users have the option to lock features within the feature list on the left, thus limiting the set of possible examples and accounting for the fact that not all features are alterable.

When the user adjusts the predicted outcome, the system promptly responds with new examples on the left side, providing a range of instances, both generated by the AI and drawn from comparable past demand. As alternative scenarios to the initial prediction, these are explained through counterfactual examples, illustrating the smallest change required to produce the alternative outcome. To make this change immediately apparent to the user, the feature value responsible for it is visually highlighted with a red background.
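
A minimal sketch of how such candidates could be filtered from historical data while honoring the locked features and the user-set target outcome (the tolerance and the distance ranking are illustrative assumptions of mine):

```python
import numpy as np

def counterfactual_candidates(history_X, history_y, instance, target,
                              locked, tolerance=5, k=3):
    """Return indices of the k past instances closest to the current one that
    keep all locked features unchanged and reached roughly the target demand.

    locked: list of feature indices the user has locked in the interface.
    """
    same_locked = np.all(history_X[:, locked] == instance[locked], axis=1)
    near_target = np.abs(history_y - target) <= tolerance
    candidates = np.where(same_locked & near_target)[0]

    # Rank the remaining candidates by how little they deviate from the current
    # instance, i.e. by the smallest change producing the alternative outcome.
    distances = np.linalg.norm(history_X[candidates] - instance, axis=1)
    return candidates[np.argsort(distances)][:k]
```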

Phase 4: Evaluation

Qualitatively evaluating the interface with real end users revealed that, first and foremost, feature visualization, i.e. highlighting the features influencing a forecast, plays a pivotal role in enhancing users’ understanding of AI-based predictions. This insight underscores the importance of integrating visual representations of such features in user-centric interfaces, as it enriches the interpretability of AI predictions. Additionally, the interactive simulation view emerged as a helpful tool for explaining the interplay of features and their impact on the predicted outcome. This view empowers users to explore, experiment, and adjust variables, providing them with a profound sense of control over the process.

The inclusion of prototypical examples, showcasing instances of past sales that closely resemble the initial forecast, also contributes to increasing user trust. These examples provide tangible evidence supporting the AI’s predictions and serve as a significant factor in building confidence among users. Furthermore, end users consistently expressed a desire for a visual reflection of the AI system’s past performance in terms of prediction quality. This emerged as an important insight to be integrated in further iterations: such a reflection should be part of user-centric, explainable interfaces in order to calibrate trust in AI systems appropriately.

Moreover, integrating counterfactual examples into user interfaces turned out to be a promising approach for increasing confidence in and insight into AI predictions. This method not only explains AI predictions but also provides actionable recommendations for end users when feature values are alterable and visually highlighted. One particularly promising application is the efficient management of remaining stock, especially products nearing expiration or prone to spoilage. This approach not only generates revenue but also minimizes waste.

What now?

The pace at which technology is advancing suggests that AI will continue having a substantial impact on orchestrating processes in retail. While automation plays a significant role in retail, humans remain accountable for decision-making. To ensure successful collaboration between humans and AI, it is necessary to design their interaction thoughtfully.

Achieving transparency in black-box models undoubtedly requires algorithm-centric progress in XAI. Yet it must be accompanied by a human-centered approach to effectively realize what we inherently aim to mirror: an interactive knowledge exchange between an agent and human users.

Following a user-centered design process allows for identifying the recipient-specific explainability needs and requirements for the given use case. Moreover, an explorative approach through interactive user interfaces addresses the user’s need for control while fostering understanding of and trust in the system’s capabilities and limitations.

Failing to address user needs and expectations regarding explainability will ultimately cause even advanced systems to be neglected. Therefore, optimizing for user acceptance through a human-centered XAI approach becomes not just a recommendation but a strategic necessity for the future of AI in retail and other industries.
