Seminar by Théo Mariotte, researcher at LIUM

 

Date: 20/10/2025
Time: 10:30
Place: IC2, Boardroom
Speaker: Théo Mariotte
 

Sparse Autoencoders Make Audio Foundation Models more Explainable

 

Abstract: Audio pretrained models are widely used to solve a variety of tasks in speech processing, sound event detection, and music information retrieval. However, the representations learned by these models remain opaque, and their analysis is mostly restricted to linear probing of the hidden representations.

In this work, we explore the use of Sparse Autoencoders (SAEs) to analyze the hidden representations of pretrained models, focusing on a case study in singing technique classification. We first demonstrate that SAEs retain both information about the original representations and class labels, enabling their internal structure to provide insights into self-supervised learning systems. Furthermore, we show that SAEs enhance the disentanglement of vocal attributes, establishing them as an effective tool for identifying the underlying factors encoded in the representations.
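To make the idea concrete, the following is a minimal NumPy sketch of a Sparse Autoencoder applied to hidden representations, as described above. All dimensions, initialization choices, and the L1 sparsity penalty are illustrative assumptions, not the specific setup used in the talk.

```python
import numpy as np

# Minimal Sparse Autoencoder (SAE) sketch on hidden representations.
# Shapes and hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(0)

d_model = 16   # dimensionality of the pretrained model's hidden states (assumed)
d_dict = 64    # overcomplete SAE dictionary size (assumed)

# Stand-in for frame-level hidden representations from a pretrained audio model.
h = rng.standard_normal((8, d_model))

# SAE parameters: encoder/decoder weights and biases.
W_enc = rng.standard_normal((d_model, d_dict)) * 0.1
b_enc = np.zeros(d_dict)
W_dec = rng.standard_normal((d_dict, d_model)) * 0.1
b_dec = np.zeros(d_model)

# Encoder: ReLU yields non-negative, typically sparse activations,
# each dimension acting as a candidate interpretable feature.
z = np.maximum(h @ W_enc + b_enc, 0.0)

# Decoder: reconstruct the original hidden representation.
h_hat = z @ W_dec + b_dec

# Training would minimize reconstruction error plus an L1 sparsity penalty.
l1_coeff = 1e-3
loss = np.mean((h - h_hat) ** 2) + l1_coeff * np.abs(z).mean()
print(z.shape, h_hat.shape)
```

After training, inspecting which sparse dictionary units fire for which inputs (e.g., which singing techniques) is one way such an analysis can surface the factors encoded in the representations.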