Evaluating the Influences of Explanation Style on Human-AI Reliance
The recent paper “Evaluating the Influences of Explanation Style on Human-AI Reliance” investigates how different types of explanation affect human reliance on AI systems. The research focuses on three explanation styles: feature-based, example-based, and a combined approach, each hypothesized to influence human-AI reliance in its own way. A two-part experiment with 274 participants explored how these styles affect reliance and interpretability in a human-AI collaboration task, specifically bird identification. The study set out to address mixed evidence in the previous literature on whether particular explanation styles reduce over-reliance on AI or improve the accuracy of human decision-making.
Explainable AI for Improved Heart Disease Prediction
The paper “Optimized Ensemble Learning Approach with Explainable AI for Improved Heart Disease Prediction” focuses on explaining machine learning models in healthcare, similar to my original work in “Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences”. The newer paper combines a Bayesian method for optimally tuning the hyperparameters of ensemble models such as AdaBoost, XGBoost and Random Forest with the now well-established SHAP method, which assigns Shapley values to each feature. The authors apply their method to three heart disease prediction datasets, including the well-known Cleveland dataset used as a benchmark in many ML research papers.
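To make that kind of pipeline concrete, here is a minimal sketch of a "Bayesian tuning followed by SHAP" workflow. It is not the paper's implementation: scikit-optimize's BayesSearchCV stands in for the authors' Bayesian optimiser, and a built-in scikit-learn dataset stands in for the heart disease data so the example runs end to end.

```python
# A minimal sketch of "Bayesian tuning + SHAP" (not the paper's implementation).
# Assumptions: scikit-optimize's BayesSearchCV stands in for the authors'
# Bayesian optimiser, and a built-in sklearn dataset stands in for the
# Cleveland heart disease data so the example runs end to end.
import shap
from skopt import BayesSearchCV
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: Bayesian optimisation over a small hyperparameter space
search = BayesSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": (50, 500), "max_depth": (2, 12)},
    n_iter=25,
    cv=5,
    random_state=0,
)
search.fit(X_train, y_train)
best_model = search.best_estimator_

# Step 2: SHAP assigns a Shapley value to every feature of every prediction
explainer = shap.TreeExplainer(best_model)
shap_values = explainer.shap_values(X_test)  # per-feature, per-row attributions
```

The attributions can then be summarised (for example with shap.summary_plot) to rank the clinical features driving the model's predictions.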
Gender-Controlled Data Sets for XAI Research
The paper “GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations” introduces a novel dataset, GECO, for evaluating bias in AI explanations, with a specific focus on gender. The authors constructed the dataset from sentence pairs that differ only in gendered pronouns or names, enabling a controlled analysis of gender bias in the explanations produced for a model's predictions. The accompanying benchmark, GECOBench, assesses different explainable AI (XAI) methods by measuring their ability to detect and mitigate biases in this setting.
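As a rough illustration of the controlled pairing the dataset is built on (my own toy sketch, not the authors' construction code), minimally different sentence versions can be produced by swapping gendered terms:

```python
# Toy sketch of gender-controlled sentence pairs in the spirit of GECO
# (not the authors' construction code). The swap table is deliberately
# simplified and ignores capitalisation and ambiguous pronouns such as "her".
SWAPS = {"he": "she", "she": "he", "him": "her", "his": "her"}

def swap_gender(sentence: str) -> str:
    """Return the sentence with gendered pronouns naively swapped."""
    return " ".join(SWAPS.get(tok.lower(), tok) for tok in sentence.split())

original = "He finished his training as a surgeon."
altered = swap_gender(original)
print(original)  # He finished his training as a surgeon.
print(altered)   # she finished her training as a surgeon.
```

Because the two versions differ only in the gendered words, any systematic difference in a model's behaviour, or in an XAI method's attributions, can be traced to gender rather than to other features of the sentence.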
Algebraic Aggregation of Random Forests
In my paper, “CHIRPS: Explaining random forest classification”, I took an empirical approach to model transparency, extracting rules that make Random Forest (RF) models more interpretable without sacrificing the high accuracy for which RF models are prized. The recently published “Algebraic aggregation of random forests: towards explainability and rapid evaluation” by Gossen and Steffen provides a theoretical counterpart, offering essential proofs and a mathematical framework for achieving explainability with RF models.
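For readers less familiar with the empirical side, the sketch below shows the basic building block that rule-based RF explanation methods such as CHIRPS start from: reading off the conjunction of split conditions along one instance's path through a single tree. This is only the path-reading step, not the full CHIRPS procedure, which mines and scores frequent patterns across the whole forest.

```python
# A minimal sketch of the path-reading step that rule-based RF explainers
# (CHIRPS among them) build on; not the full CHIRPS algorithm.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
X, y, names = data.data, data.target, data.feature_names
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def path_as_rule(tree, x, feature_names):
    """Conjunction of split conditions on x's path through one fitted tree."""
    t = tree.tree_
    node, terms = 0, []
    while t.children_left[node] != -1:          # -1 marks a leaf node
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            terms.append(f"{feature_names[f]} <= {thr:.3f}")
            node = t.children_left[node]
        else:
            terms.append(f"{feature_names[f]} > {thr:.3f}")
            node = t.children_right[node]
    return " AND ".join(terms)

# The rule that the first tree applies to the first training instance
print(path_as_rule(rf.estimators_[0], X[0], names))
```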
Explaining Random Forests with Representative Trees
The paper “Can’t see the forest for the trees: Analyzing groves to explain random forests” takes a novel approach to model-specific explanation, a theme of my own research (see, for example, “CHIRPS: Explaining random forest classification”). In this new paper, Szepannek and von Holt seek to make Random Forests (RF), which are notoriously hard to explain because of their complexity, more interpretable. Their method works for both classification and regression, a very useful extension to the field.
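As a crude illustration of the underlying idea of representing a forest by a small number of trees (a deliberate simplification of my own, not the groves method of Szepannek and von Holt), one can ask which single tree agrees most often with the full forest:

```python
# Illustrative sketch only: pick the single tree whose predictions agree most
# closely with the full forest, as a crude "representative tree" surrogate.
# This is a simplification, not the groves method described in the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
forest_pred = rf.predict(X)

# Agreement of each individual tree with the ensemble prediction
agreement = [np.mean(tree.predict(X) == forest_pred) for tree in rf.estimators_]
best = int(np.argmax(agreement))
print(f"Tree {best} agrees with the forest on {agreement[best]:.1%} of samples")
```

A single tree, or a small grove of trees, is something a human can actually inspect, which is what makes this line of work so appealing.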