Combatting Fake News With XAI
With all the unhinged hype over ChatGPT stealing everyone’s jobs and AI taking over the world, it’s great to see positive use cases for Machine Learning (ML) technologies. As usual, eXplainable Artificial Intelligence (XAI) has something to contribute to the ethical landscape of fairness and transparency. In this recent news article we see a concerted attempt to combat fake news with XAI and a pretty sophisticated tech stack. With the rise of social media and other online platforms, the spread of fake news has become a major problem. Fake news refers to news stories that are intentionally false and designed to mislead readers. It spreads readily through social media and can have serious consequences, such as influencing public opinion and even swaying elections.
Explaining Random Forests with Boolean Satisfiability
The paper “On Explaining Random Forests with SAT” uses Boolean satisfiability (SAT) methods to provide a formal framework for generating explanations of Random Forest (RF) predictions. A key result in the paper is that abductive explanations (AXp) and contrastive explanations (CXp) can be derived by encoding the RF’s decision paths into propositional logic. Encoding a decision path as propositional logic is an entirely reasonable approach and quite straightforward, as I showed in my paper CHIRPS: Explaining random forest classification. The decision paths of an RF model can be transformed into a Boolean formula in Conjunctive Normal Form (CNF); each decision tree in the forest is represented as a set of clauses. Following the paths for a single example prediction essentially carves out a region of the feature space with a set of step functions, resulting in a sub-region that must return the target response. When the clauses in this set of step functions involve only a subset of the features, changing the remaining feature inputs has no effect on the model prediction. That subset is a prime implicant (PI) explanation.
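To make the geometry concrete, here is a minimal sketch in scikit-learn (not the paper’s SAT encoding) that collects the features tested along each tree’s decision path for one instance, i.e. the set of step functions described above, and then checks that perturbing any feature outside that set cannot change the prediction. The dataset, forest size and variable names are illustrative assumptions; a true PI explanation would further minimise this feature subset, which is where the SAT machinery earns its keep.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0).fit(X, y)

x = X[0:1]              # the single instance whose prediction we explain
used_features = set()   # features tested anywhere along x's decision paths

for tree in rf.estimators_:
    t = tree.tree_
    node_ids = tree.decision_path(x).indices   # nodes visited by x in this tree
    for node_id in node_ids:
        # internal nodes have distinct children; leaves test nothing
        if t.children_left[node_id] != t.children_right[node_id]:
            used_features.add(int(t.feature[node_id]))

print("Features tested along the decision paths:", sorted(used_features))

# Any feature outside this set cannot change the prediction for x: the paths
# carve out a sub-region of feature space that places no constraint on it.
unused = [f for f in range(X.shape[1]) if f not in used_features]
for f in unused:
    x_perturbed = x.copy()
    x_perturbed[0, f] += 100.0   # arbitrarily large shift on an unused feature
    assert rf.predict(x_perturbed)[0] == rf.predict(x)[0]
print("Unconstrained features (perturbing them never flips the prediction):", unused)
```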
How Subsets of the Training Data Affect a Prediction
I was quite excited by the title of a new paper, in pre-publication this month: “Explainable Artificial Intelligence: How Subsets of the Training Data Affect a Prediction” by Andreas Brandsæter and Ingrid K. Glad. At first glance, it appeared to have some close alignment to my own work, CHIRPS: Explaining random forest classification, published earlier this year in June. It’s generally highly desirable to connect with other researchers with whom you share common ground and who are working contemporaneously. Often, fruitful collaborations are born.
Counterfactual Explanations Help Identify Sources of Bias
By the end of 2020, the topic of eXplainable Artificial Intelligence (XAI) has become quite mainstream. One important development is counterfactual explanations, which (among other benefits) can help to identify and reduce bias in machine learning models. Counterfactual explanations provide insights by showing how minimal changes to input features can alter a model’s prediction. This approach has been crucial in exposing biased behavior, especially in sensitive applications like credit scoring or hiring. By identifying how protected attributes (e.g., gender or race) affect outcomes, practitioners can better address and mitigate unfair biases in AI systems (Verma et al., 2020).
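As a toy illustration of the idea (not a production counterfactual method, and not the approach of Verma et al.), the sketch below trains a simple classifier and then searches one feature at a time for the smallest shift that flips the prediction. The dataset, step sizes and function name are illustrative assumptions. In a bias audit, a feature that flips the outcome with a tiny shift deserves closer inspection, particularly if it is, or proxies for, a protected attribute.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
scale = X.std(axis=0)  # step sizes expressed relative to each feature's spread

def single_feature_counterfactual(x, model, step_frac=0.05, max_steps=40):
    """Return (feature_index, counterfactual_value, n_steps) for the smallest
    single-feature shift that flips the prediction, or None if none is found."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None
    for f in range(x.shape[0]):
        for direction in (+1.0, -1.0):
            for k in range(1, max_steps + 1):
                x_cf = x.copy()
                x_cf[f] += direction * k * step_frac * scale[f]
                if model.predict(x_cf.reshape(1, -1))[0] != original:
                    if best is None or k < best[2]:
                        best = (f, x_cf[f], k)
                    break  # smallest flip in this direction found; stop walking
    return best

result = single_feature_counterfactual(X[0], clf)
if result is not None:
    feature, value, steps = result
    print(f"Feature {feature} flips the prediction at value {value:.3f} "
          f"({steps} step(s) of 5% of its standard deviation).")
else:
    print("No single-feature counterfactual found within the search budget.")
```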
International Women's Day 2020
Profile: Cynthia Rudin
Today, for International Women’s Day, I wanted to share my huge respect for Cynthia Rudin. She is a leading academic in the research field of my PhD: interpretability in machine learning. Her work is very widely cited and comes up in every search related to solving the “black box” problem of machine learning. She is a true thought leader.