Seminar by Jean-François Bonastre, professor at LIA, Avignon Université
Explainability and Interpretability of AI Models: Concepts, Approaches, and Adaptability to the Domain of Speech Processing
This presentation sets out to define the notions of explainability and interpretability in AI and to give an overview of the approaches in use. These approaches are examined according to their position in the processing toolchain and the type of information they consider.
After a brief overview of the so-called pre-hoc methods, particular attention is given to "model-agnostic" approaches as well as to example-based solutions. A few examples of "intrinsic" (explainable-by-design) and hybrid methods round out this overview. Finally, the last part of the presentation addresses the deployment of these approaches in the context of automatic speech processing, seeking to identify the limits of the presented methods and to formulate, together with the audience, some open questions to be solved.