Information Extraction and Analysis from News videos

Seminar by Sadok Mansouri, ATER at LIUM
Date: 15/11/2024
Time: 11h00
Place: IC2, Boardroom
Speaker: Sadok Mansouri
Information Extraction and Analysis from News videos
Information extraction from videos is an important research topic in content-based video indexing and retrieval. Indeed, the visual text present in news videos typically provides rich semantic […]

M2 Internship Offer: Machine Learning for Acoustic-Based Keystroke Recognition: A Study on Security Vulnerabilities

Machine Learning for Acoustic-Based Keystroke Recognition: A Study on Security Vulnerabilities
Supervisors: Kais Hassan (LAUM), Meysam Shamsi (LIUM)
Host Laboratory: Laboratoire d’Informatique de l’Université du Mans (LIUM) – Laboratoire d’Acoustique de l’Université du Mans (LAUM)
Location: Le Mans Université
Beginning of internship: February 2025
Contact: Kais Hassan, Meysam Shamsi (firstname.name@univ-lemans.fr)

M2 Internship Offer: Constructing Sound Zones Using Machine Learning on a Large Dataset

Constructing Sound Zones using machine learning on a large dataset
Supervisors: Théo Mariotte (LIUM), Manuel Melon (LAUM), Marie Tahon (LIUM)
Host Laboratory: Laboratoire d’Informatique de l’Université du Mans (LIUM) – Laboratoire d’Acoustique de l’Université du Mans (LAUM)
Location: Le Mans Université
Beginning of internship: Between January and March 2025
Contact: Théo Mariotte, […]

LST-days

LST day
The LST team day is being held on 17 October. On this occasion, both early-career and more experienced researchers present their research themes. There will also be a presentation on European projects by Hélène Dereszowski from DRIS. This year, three workshops will focus on the following themes: 1. Audiovisual spoof diarization 2. […]

KUTED

Corpus: Kurdish TED (KUTED)
Licence: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
URL: https://huggingface.co/datasets/aranemini/kurdishted
Author(s): Mohammad Mohammadamini, Antoine Laurent
Description: Kurdish TED (KUTED) is the first Speech-to-Text Translation (S2TT) dataset for the Central Kurdish language, derived from TED Talks and TEDx. The corpus consists of 91,000 pairs, encompassing 170 hours of English audio, 1.65 million English tokens, and 1.40 million Central Kurdish […]
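Since the corpus is hosted on the Hugging Face Hub, a minimal sketch of how it could be loaded with the Hugging Face `datasets` library is given below; the split name and the way columns are inspected are assumptions, not taken from the dataset card, which should be consulted at the URL above.

```python
# Minimal sketch: loading the KUTED corpus from the Hugging Face Hub.
# Assumption: the corpus is published as a standard `datasets` dataset with a
# "train" split; actual split and column names should be checked on the
# dataset card at https://huggingface.co/datasets/aranemini/kurdishted.
from datasets import load_dataset

kuted = load_dataset("aranemini/kurdishted", split="train")

# Inspect one English-audio / Central Kurdish translation pair
# (the exact field names depend on how the dataset is structured).
example = kuted[0]
print(example.keys())
```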

M2 Internship Offer: Study of Automatic Speech Translation Systems

Speech Translation System – Low-Resource Languages to High-Resource Languages
Supervisors: Aghilas Sini (LIUM), Jane Wottawa (LIUM)
Hosting lab: LIUM (Laboratoire d’Informatique de l’Université du Mans)
Place: Le Mans Université
Beginning of internship: March 2024
Contacts: Aghilas Sini and Jane Wottawa (firstname.name@univ-lemans.fr)
Application: Send a CV, a covering letter relevant to the proposed subject, […]

M2 Internship Offer: Speech Translation System – Low-Resource Languages to High-Resource Languages

Speech Translation System – Low-Resource Languages to High-Resource Languages
Level: Master 2
Supervisors: Aghilas Sini (LIUM), Mohammad Mohammadamini (LIUM)
Hosting lab: Laboratoire d’Informatique de l’Université du Mans (LIUM). The internship will take place on-site.
Place: Le Mans Université
Beginning of internship: February to April 2025
Contact: Aghilas Sini and Mohammad Mohammadamini […]

Lightweight CNNs for Face Recognition Applications

Seminar by Heydi Méndez Vázquez, researcher at CENATAV
Date: 09/10/2024
Time: 11h00
Place: IC2, Boardroom
Speaker: Heydi Méndez Vázquez
Lightweight CNNs for Face Recognition Applications
Face recognition (FR) is an active research topic in computer vision and image understanding. It is one of the most widely used biometric techniques. The […]

Natacha Miniconi

Optimizing Human Intervention for Synthetic Speech Quality Evaluation: Active Learning for Adaptability
Starting: 01/10/2024
PhD Student: Natacha Miniconi
Advisor(s): Anthony Larcher
Co-advisor(s): Meysam Shamsi
Funding: Region Tandem
Context: The primary objective of Text-to-Speech (TTS), speech conversion, and speech-to-speech translation systems is to synthesize or generate a high-quality speech signal. Typically, the quality of synthetic speech is subjectively evaluated by […]

TV2M-E

Multilingual Multimodal Voice Translation – Expressive (TV2M-E)
Date: 06/2024 – 06/2026
Funding: Région Pays de la Loire
Call: PULSAR
URL: https://lium.univ-lemans.fr/en/tv2m-e/
LIUM Participant(s): Aghilas Sini
Summary: A bilingual or polyglot speaker has the ability to communicate coherently in several languages, adapting to different contexts. Transferring this skill to machines could contribute to the preservation of cultural heritage by maintaining less privileged […]