AI Explainability 360 Open Source Toolkit
This extensible open source toolkit can help you comprehend how machine learning models predict labels throughout the AI application lifecycle. Containing eight state-of-the-art algorithms for interpretable machine learning, as well as metrics for explainability, it is designed to translate algorithmic research from the lab into practice in domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use and improve it.
Not sure what to do first? Start here!
Read More
Learn more about explainability concepts, terminology, and tools before you begin.
Try a Web Demo
Step through the process of explaining models to consumers with different personas in an interactive web demo that shows a sample of capabilities available in this toolkit.
Use Tutorials
Step through a set of in-depth examples that introduce developers to code that explains data and models in different industry and application domains.
Ask a Question
Join our AI Explainability 360 Slack Channel to ask questions, make comments, and tell stories about how you use the toolkit.
View Notebooks
Open a directory of Jupyter notebooks in GitHub that provide working examples of explainability in sample datasets. Then share your own notebooks!
Contribute
You can add new algorithms and metrics in GitHub. Share Jupyter notebooks showcasing how you have enabled explanations in your machine learning application.
Learn how to put this toolkit to work for your application or industry problem. Try these tutorials.
Credit Approval
See how to explain credit approval models using the FICO Explainable Machine Learning Challenge dataset.
Medical Expenditure
See how to create interpretable machine learning models in a care management scenario using Medical Expenditure Panel Survey data.
Dermoscopy
See how to explain dermoscopic image datasets used to train machine learning models that help physicians diagnose skin diseases.
Health and Nutrition Survey
See how to quickly understand the National Health and Nutrition Examination Survey datasets to hasten research in epidemiology and health policy.
Proactive Retention
See how to explain predictions of a model that recommends employees for retention actions from a synthesized human resources dataset.
These are eight state-of-the-art explainability algorithms that can add transparency throughout AI systems. Add more!
Boolean Decision Rules via Column Generation (Light Edition)
Directly learn accurate and interpretable ‘or’-of-‘and’ logical classification rules.
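To make the model form concrete, here is a minimal sketch of how an 'or'-of-'and' (disjunctive normal form) rule set classifies a sample. The rules, feature names, and applicant below are illustrative inventions, not the toolkit's API or a learned model.

```python
# A DNF rule set: a sample is positive if ANY clause (an AND of
# feature conditions) is fully satisfied.
def predict_dnf(rule_set, sample):
    for clause in rule_set:
        if all(sample.get(feat) == val for feat, val in clause.items()):
            return 1
    return 0

# Hypothetical rules a credit model might learn:
# approve if (income_high AND debt_low) OR (homeowner AND employed)
rules = [
    {"income_high": True, "debt_low": True},
    {"homeowner": True, "employed": True},
]

applicant = {"income_high": False, "debt_low": True,
             "homeowner": True, "employed": True}
print(predict_dnf(rules, applicant))  # second clause fires -> 1
```

Because the model is just a short list of readable clauses, the explanation of any prediction is the rule that fired.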
Generalized Linear Rule Models
Directly learn accurate and interpretable weighted combinations of ‘and’ rules for classification or regression.
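The following sketch shows the shape of such a model: binary 'and'-rule indicators combined linearly and passed through a logistic link for classification. The rules, weights, and bias are made up for illustration; in the toolkit they would be learned from data.

```python
import math

# Each rule is an AND of feature conditions; emit 1.0 if it fires.
def rule_features(sample, rules):
    return [1.0 if all(sample.get(f) == v for f, v in r.items()) else 0.0
            for r in rules]

# Weighted sum of rule indicators through a logistic link.
def predict_proba(sample, rules, weights, bias):
    z = bias + sum(w * x for w, x in zip(weights, rule_features(sample, rules)))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned rules and weights for a health-risk score.
rules = [{"smoker": False}, {"exercise": True, "age_over_60": False}]
weights, bias = [1.2, 0.8], -1.0

p = predict_proba({"smoker": False, "exercise": True, "age_over_60": False},
                  rules, weights, bias)
print(p)  # sigmoid(-1.0 + 1.2 + 0.8) ~= 0.731
```

Each weight directly states how much its rule contributes to the prediction, which is what makes the model interpretable.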
ProfWeight
Improve the accuracy of a directly interpretable model, such as a decision tree, using the confidence profile of a neural network.
Teaching AI to Explain its Decisions
Predict both labels and explanations with a model whose training set contains features, labels, and explanations.
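One simple way to realise this, sketched below, is to fold each (label, explanation) pair into a single combined target class, train any classifier on it, and decode both parts at prediction time. The 1-nearest-neighbour "classifier" and the toy data are illustrative stand-ins for whatever model you would actually train.

```python
# Fold label and explanation into one combined target class.
def encode(label, explanation):
    return (label, explanation)

# Trivial 1-NN classifier over the combined targets; the prediction
# decodes straight back into (label, explanation).
def nearest_neighbour_predict(train_X, train_yE, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], x))
    return train_yE[best]

# Toy churn data: features, labels, and human-supplied explanations.
train_X = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
train_yE = [encode(0, "low_usage"), encode(1, "high_spend"),
            encode(1, "high_spend_and_usage")]

label, explanation = nearest_neighbour_predict(train_X, train_yE, (0.9, 0.1))
print(label, explanation)
```

The appeal of this scheme is that the explanations come out in the same vocabulary the domain experts used when labelling the training set.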
Contrastive Explanations Method
Generate justifications for neural network classifications by highlighting minimally sufficient features, and minimally and critically absent features.
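The sketch below illustrates the "minimally sufficient features" half of the idea in miniature: greedily zero out features of a sample while the predicted class is preserved; whatever survives is a pertinent positive. The linear scoring model and data are illustrative only; the actual method solves an optimisation problem against a neural network.

```python
# Toy classifier: predict 1 if the weighted sum is positive.
def predict(weights, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

# Greedily set features to a 0 ("absent") baseline as long as the
# prediction is unchanged; the remainder is minimally sufficient.
def pertinent_positive(weights, x):
    kept = list(x)
    target = predict(weights, x)
    for i in range(len(kept)):
        trial = kept[:]
        trial[i] = 0.0
        if predict(weights, trial) == target:
            kept = trial  # still sufficient without feature i
    return kept

weights = [2.0, 0.1, -1.0]
x = [1.0, 1.0, 0.2]
print(pertinent_positive(weights, x))  # -> [1.0, 0.0, 0.0]
```

Here only the first feature is needed to sustain the positive prediction, so the explanation can point to it alone.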
Contrastive Explanations Method with Monotonic Attribute Functions
Generate contrastive explanations for color images or images with rich structure.
Disentangled Inferred Prior VAE
Learn disentangled representations for interpreting unlabeled data.
ProtoDash
Select prototypical examples from a dataset.
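As a simplified analogue of prototype selection, the sketch below greedily picks the points that best cover a dataset (k-center style). ProtoDash itself optimises a kernel-based importance-weighting objective; this toy version and its data are only illustrative.

```python
# Greedily choose k prototypes: repeatedly add the point farthest
# from its nearest already-chosen prototype.
def greedy_prototypes(points, k):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    protos = [0]  # seed with the first point
    for _ in range(k - 1):
        farthest = max(
            range(len(points)),
            key=lambda i: min(dist(points[i], points[j]) for j in protos))
        protos.append(farthest)
    return protos

# Two tight clusters plus an outlying point.
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (0.0, 5.0)]
print(greedy_prototypes(data, 3))  # indices of 3 representative points
```

Showing a handful of such representative examples is often the fastest way for a person to grasp what a dataset, or a model's training data, actually contains.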