AI Explainability 360


This extensible open-source toolkit helps you understand how machine learning models predict labels, through a variety of explanation methods, throughout the AI application lifecycle. We invite you to use it and improve it.

Although it is ultimately the consumer who determines the quality of an explanation, the research community has proposed quantitative metrics as proxies for explainability.
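One such proxy is a faithfulness-style metric: it checks whether features assigned high importance by an explanation actually drive the model's prediction. The sketch below, using only NumPy, illustrates the idea under stated assumptions; `faithfulness_score` is a hypothetical helper written for illustration, not an API of AI Explainability 360.

```python
import numpy as np

def faithfulness_score(predict_fn, x, attributions, baseline=0.0):
    """Toy faithfulness metric (illustrative, not the toolkit's API):
    correlate each feature's attribution with the drop in the model's
    output when that feature is ablated (set to a baseline value).
    Higher correlation suggests the explanation better reflects the
    model's actual behavior."""
    base_pred = predict_fn(x)
    drops = np.empty_like(attributions, dtype=float)
    for i in range(len(x)):
        x_ablated = x.copy()
        x_ablated[i] = baseline              # "remove" feature i
        drops[i] = base_pred - predict_fn(x_ablated)
    # Pearson correlation between attributions and prediction drops
    return np.corrcoef(attributions, drops)[0, 1]

# Example: a linear model, whose weights give exact per-feature contributions
w = np.array([2.0, -1.0, 0.5])
predict = lambda x: float(w @ x)
x = np.array([1.0, 1.0, 1.0])
attributions = w * x                         # exact contributions for a linear model
print(faithfulness_score(predict, x, attributions))  # correlation of ~1.0
```

For a linear model the attributions match the ablation drops exactly, so the correlation is (up to floating point) 1.0; a poor explanation would score lower.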

About this site

AI Explainability 360 was created by IBM Research.

Additional research sites that advance other aspects of Trusted AI include:

AI Fairness 360
AI Adversarial Robustness 360
AI FactSheets 360