XAI Today

Evaluating the Influences of Explanation Style on Human-AI Reliance

Julian Hatwell

The recent paper “Evaluating the Influences of Explanation Style on Human-AI Reliance” investigates how different types of explanations affect human reliance on AI systems. The research focused on three explanation styles: feature-based, example-based, and a combined approach, with each style hypothesized to influence human-AI reliance in distinct ways. A two-part experiment with 274 participants explored how these explanation styles affect reliance and interpretability in a human-AI collaboration task built around bird identification. The study set out to address mixed evidence in the prior literature on whether certain explanation styles reduce over-reliance on AI or improve human decision-making accuracy.

To study human responses to various AI explanations, the researchers used a quantitative methodology, measuring reliance through shifts between initial and final decision accuracy. The study employed the Judge-Advisor System (JAS) model, in which participants make an initial decision, receive the AI’s advice, and then make a final decision, allowing reliance to be observed before and after AI assistance. Key measures were drawn from the Appropriateness of Reliance (AoR) framework developed by Schemmer et al., which introduced two metrics: Relative AI Reliance (RAIR) and Relative Self-Reliance (RSR). These metrics quantify reliance by assessing how often humans appropriately switched to correct AI advice or maintained their own initially correct judgments. The researchers also examined variation in individual participant performance, revealing that higher-performing individuals demonstrated different reliance patterns than lower-performing ones, particularly on high-complexity tasks.
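To make the JAS setup concrete, here is a minimal sketch (not taken from the paper) of how trial-level data from such an experiment could be categorized into reliance outcomes. The field names and the category labels (CAIR, DSR, CSR, DAIR for correct/detrimental AI- and self-reliance) are my paraphrase of the AoR framework; the exact operationalization should be checked against Schemmer et al.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One Judge-Advisor System trial (hypothetical field names)."""
    initial: str   # participant's decision before seeing the AI advice
    advice: str    # the AI's suggested label
    final: str     # participant's decision after seeing the advice
    truth: str     # ground-truth label

def reliance_category(t: Trial) -> str | None:
    """Classify a trial into an appropriateness-of-reliance outcome.

    Only trials where the participant and the AI initially disagree are
    informative about reliance; all other trials return None.
    """
    if t.initial == t.advice:
        return None  # agreement trials carry no switching signal
    if t.initial != t.truth and t.advice == t.truth:
        # Participant was wrong, AI was right: did they switch to the AI?
        return "CAIR" if t.final == t.advice else "DSR"
    if t.initial == t.truth and t.advice != t.truth:
        # Participant was right, AI was wrong: did they hold firm?
        return "CSR" if t.final == t.initial else "DAIR"
    return None  # both wrong in different ways: ambiguous case
```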

The quantitative approach combined metrics from the AoR model with the JAS paradigm. The RAIR and RSR metrics provided a structured comparison across explanation styles by isolating the effect of explanations on reliance behaviour. While these reliance metrics come from existing literature, the study extended their use by separating participants according to individual performance, a novel element of the analysis. Additionally, accuracy-shift measures captured how reliance on AI suggestions varied with task complexity and participant ability. This more granular view exposed differences in reliance linked to cognitive engagement, suggesting that explanation styles should be tailored to user expertise and task requirements.
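Building on the sketch above, RAIR, RSR, and the accuracy shift could be computed along the following lines. Again, this is an illustrative approximation of the metrics as described, not the paper’s own analysis code.

```python
from collections import Counter

def rair_rsr(trials: list[Trial]) -> tuple[float, float]:
    """Compute Relative AI Reliance and Relative Self-Reliance.

    RAIR: of the trials where the AI was right and the participant's
    initial decision was wrong, the fraction where they switched (CAIR).
    RSR: of the trials where the AI was wrong and the participant's
    initial decision was right, the fraction where they held firm (CSR).
    """
    counts = Counter(
        c for t in trials if (c := reliance_category(t)) is not None
    )
    # max(..., 1) guards against empty denominators
    rair = counts["CAIR"] / max(counts["CAIR"] + counts["DSR"], 1)
    rsr = counts["CSR"] / max(counts["CSR"] + counts["DAIR"], 1)
    return rair, rsr

def accuracy_shift(trials: list[Trial]) -> float:
    """Final-decision accuracy minus initial-decision accuracy."""
    initial_acc = sum(t.initial == t.truth for t in trials) / len(trials)
    final_acc = sum(t.final == t.truth for t in trials) / len(trials)
    return final_acc - initial_acc
```

Splitting these figures by participant performance level, as the study does, then amounts to computing the same quantities over subsets of trials grouped by participant.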

The paper emphasizes the importance of Explainable AI (XAI) for human-in-the-loop tasks, where humans need to understand and appropriately trust AI recommendations. Well-designed explanations can calibrate user trust, ideally preventing over-reliance on incorrect AI outputs. XAI’s value is clearest in collaborative tasks, but the study’s focus on bird classification, although useful for probing complex identification tasks, may not transfer directly to more general, real-world applications. The research thus illustrates the difficulty of establishing broad applicability for explanation methods: narrowly scoped experimental tasks cannot fully capture the varied decision-making contexts encountered in practice.

Despite these limitations, the study makes a significant contribution to the ongoing discourse on XAI by providing evidence that the different explanation forms (example-based, feature-based, and combined) affect reliance in different ways. The research highlights that example-based explanations, though helpful for spotting incorrect AI suggestions, can also foster over-reliance, particularly when high-quality explanations reinforce trust. The paper suggests that balancing clarity and trust calibration remains an open question, one that is crucial for advancing reliable human-AI collaboration frameworks.

In summary, this work advances the conversation on XAI by demonstrating that different explanation styles yield complex, context-dependent effects on human reliance. Although the field has matured in recent years, debates persist about the ideal form and substance of explanations. This study contributes to understanding the nuanced roles of example- and feature-based explanations, highlighting that while explanations enhance interpretability, they do not uniformly improve reliance, particularly across varied user expertise levels and decision contexts. This reinforces the need for adaptable XAI methods that align with diverse human-AI collaboration needs.
