PhD defence, Paul Tardy

Date: 12/07/2021
Time: 9:00 AM
Location: online
 

Title: Neural Approaches for Abstractive Summarization of Speech Transcription

Jury members:

  • Ms. Sophie Rosset, Research Director, LISN, Reviewer
  • Mr. Alexandre Allauzen, Professor, LAMSADE, Paris, Reviewer
  • Mr. Sylvain Meignier, Professor, LIUM, Le Mans University, Examiner
  • Mr. Alexis Nasr, Professor, LIS, Examiner
  • Mr. Yannick Estève, Professor, LIA, Avignon University, PhD Director
  • Mr. David Janiszek, Assistant Professor, Paris University, Co-supervisor
  • Mr. Vincent Nguyen, CEO, Ubiqus Labs, Invited

 

Abstract:

In this thesis, we study the application of Deep Learning neural approaches to abstractive summarization for meeting report generation.
This work takes place in a context where Deep Learning is omnipresent in the Natural Language Processing (NLP) field. Indeed, neural models constitute the current state of the art in various language generation tasks such as Machine Translation and Abstractive Summarization. However, the application of automatic summarization to meeting report generation in French remains unexplored, as this task suffers from a lack of available data due to the difficulty of collecting and annotating it.

In this context, our first contribution consists of the creation of a dataset for this task by aligning meeting reports with automatic transcriptions of the meetings' audio recordings. We propose a methodology combining automatic alignment with human alignment: annotating an evaluation dataset enables us to develop automatic alignment models, while automatic pre-alignments facilitate the human annotation task. Then, to avoid the constraints of annotation, even automatic annotation, we propose a self-supervised pre-training in order to take advantage of large amounts of unaligned data. Moreover, we introduce back-summarization, which generates synthetic data and creates training pairs from unaligned meeting reports. We also combine these two approaches and show their synergy.
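To make the back-summarization idea concrete, here is a minimal sketch of the data flow only, not the thesis implementation: a backward model maps each unaligned report to a synthetic transcription, yielding synthetic (transcription, report) pairs on which the forward summarizer can be trained. The function names and the stub backward model are illustrative assumptions; a real system would use a trained sequence-to-sequence model for the report-to-transcription direction.

```python
# Sketch of back-summarization: build synthetic training pairs from reports alone.
from typing import Callable, List, Tuple

def back_summarize(
    unaligned_reports: List[str],
    backward_model: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Generate synthetic (source, target) pairs for the forward summarizer."""
    pairs = []
    for report in unaligned_reports:
        # The backward model produces a pseudo-transcription from the report.
        synthetic_transcription = backward_model(report)
        pairs.append((synthetic_transcription, report))
    return pairs

if __name__ == "__main__":
    # Stub backward model for illustration only (a trained model would go here).
    dummy_backward = lambda report: "<pseudo-transcription of> " + report
    reports = ["The committee approved the budget and scheduled the next meeting."]
    for src, tgt in back_summarize(reports, dummy_backward):
        print("SOURCE:", src)
        print("TARGET:", tgt)
```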

In this thesis, we focus on the abstractive approach to automatic summarization, which consists of generating a summary from scratch, as opposed to the extractive approach, where parts of the source document are selected to form the summary. Indeed, writing meeting reports from automatic transcriptions requires rephrasing what is being said, optionally correcting or reorganizing it, in order to go from spoken language to written, more formal language. However, summarization models tend to copy segments of the source rather than rephrase them; to alleviate this bias, we introduce the explicit learning of the expected copy rate with control tokens.
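The sketch below illustrates one common way such control tokens can be built; the token names, the copy-rate proxy, and the bucketing are illustrative assumptions rather than the thesis's exact formulation. At training time, the observed copy rate between a transcription and its report is bucketed and prepended to the source as a token; at inference, the desired token is supplied to steer how much the model copies.

```python
# Sketch of copy-rate control tokens prepended to the source sequence.

def copy_rate(source: str, target: str) -> float:
    """Fraction of target tokens that also appear in the source (simple proxy)."""
    src_tokens = set(source.lower().split())
    tgt_tokens = target.lower().split()
    if not tgt_tokens:
        return 0.0
    return sum(tok in src_tokens for tok in tgt_tokens) / len(tgt_tokens)

def add_control_token(source: str, target: str, n_buckets: int = 10) -> str:
    """Prepend a bucketed copy-rate token, e.g. '<copy_3>', to the source text."""
    bucket = min(int(copy_rate(source, target) * n_buckets), n_buckets - 1)
    return f"<copy_{bucket}> {source}"

if __name__ == "__main__":
    transcription = "so uh we agreed to move the launch to next month right"
    report = "The launch was postponed to next month."
    print(add_control_token(transcription, report))
```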

Finally, we conclude this thesis with a human evaluation of the automatic reports. This evaluation allows us to take a critical look at our models' performance, as well as at our experimental setup, in particular the metrics and data used during evaluation.

Keywords:

Abstractive Summarization; Self-Supervised Learning; Meeting Report Generation; Back-summarization; Neural Networks; Control Tokens; Automatic Alignment; Human Evaluation