Introducing a new model-agnostic, post hoc XAI approach based on CART to provide local explanations, improving the transparency of AI-assisted decision making in healthcare
Within the realm of artificial intelligence, there is a growing concern regarding the lack of transparency and understandability of complex AI systems. Recent research has been dedicated to addressing this issue by developing explanatory models that shed light on the inner workings of opaque systems such as boosting, bagging, and deep learning techniques.
Local and Global Explainability
Explanatory models can clarify the behavior of AI systems in two distinct ways:
- Global explainability. Global explainers provide a comprehensive understanding of how the AI classifier behaves as a whole. They aim to uncover overarching patterns, trends, biases, and other characteristics that remain consistent across various inputs and scenarios.
- Local explainability. In contrast, local explainers focus on providing insights into the decision-making process of the AI system for a single instance. By highlighting the features or inputs that significantly influenced the model’s prediction, a local explainer offers a glimpse into how a particular decision was reached (see the sketch after this list). Nevertheless, it is essential to note that these explanations may not be applicable to other instances or provide a complete understanding of the model’s overall behavior.
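To make the distinction concrete, here is a minimal sketch of how a local, post hoc explanation could be produced for a single instance with a CART surrogate. It is only an illustration of the general idea, not the method introduced in this article: the black-box model (a scikit-learn RandomForestClassifier), the dataset, the Gaussian perturbation scheme, and the surrogate depth are all assumptions chosen for brevity.

```python
# Hypothetical sketch: explaining one prediction of a black-box model with a
# local CART surrogate. Model, data, and perturbation scheme are illustrative
# assumptions, not the approach proposed in this article.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box model: an opaque ensemble we want to explain post hoc.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Single instance whose prediction we want a local explanation for.
x_instance = X[0]

# Build a neighborhood around the instance with Gaussian noise, then label it
# with the black box itself (the model-agnostic, post hoc step).
rng = np.random.default_rng(0)
neighborhood = x_instance + rng.normal(
    scale=0.1 * X.std(axis=0), size=(1000, X.shape[1])
)
neighborhood_labels = black_box.predict(neighborhood)

# Local surrogate: a shallow CART tree that mimics the black box near x_instance.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, neighborhood_labels)

# The tree's rules act as the local explanation: they show which features
# drove the prediction in this region of the input space.
print("Black-box prediction:", black_box.predict(x_instance.reshape(1, -1))[0])
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Because the surrogate is only fitted on points near the chosen instance, its rules describe the black box locally; they should not be read as a global description of the model’s behavior.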
The increasing demand for trustworthy and transparent AI systems is not only fueled by the widespread adoption of complex black box models, known for their accuracy but also for their limited interpretability. It is also motivated by the need to comply with new regulations aimed at safeguarding individuals against the misuse of data and data-driven applications, such as the Artificial Intelligence Act, the General Data Protection Regulation (GDPR), or the U.S. Department of Defense’s Ethical Principles for Artificial Intelligence.