Interpretability analyses can serve multiple purposes, e.g. debugging models, justifying outcomes, demonstrating the safety, reliability and fairness of a model, and providing accountability. This variety of objectives has led to inconsistent terminology in interpretable Artificial Intelligence. The words “interpretable”, “explainable”, “intelligible”, “understandable”, “transparent” and “comprehensible” have often been used interchangeably in the literature, causing confusion and diverging taxonomies.
Doshi-Velez and Kim gave a formal definition of interpretability as the ability to “explain or present in understandable terms to a human” the decisions of a machine learning system.
What does “understandable to a human” mean from a cognitive and psychological perspective? What legal constraints apply to explanations? What are the ethical and social impacts of generating explanations, and how can they be reconciled with the requirements of technical development?
The goal of this workshop is to discuss these questions, among others. We aim to bring together experts from different backgrounds to work towards a global viewpoint on interpretable AI, to be presented in the joint publication “Common Viewpoint on Interpretable AI: Unifying the Taxonomy from the Developmental, Ethical, Social and Legal Perspectives”.