RALI-OLST Seminars

Benchmarking benchmarks: Can we predict how challenging Spoken Language Understanding corpora are across sources, languages and domains?

Frédéric Bechet (Frederic (point) Bechet <at> lis-lab (point) fr)

LIS, Université Aix-Marseille

Wednesday 29 September 2021 at 11:30 AM

Zoom meeting (below)


In the Transformer era, Spoken Language Understanding (SLU) models have achieved remarkable results on a wide range of benchmark tasks. State-of-the-art models rely on contextual embeddings pretrained on very large quantities of out-of-domain text, usually with a Transformer architecture, followed by fine-tuning on in-domain data to produce the required semantic representation, often made of intent and concept/value labels. While such models have reached near-perfect performance on some SLU benchmark corpora such as ATIS, other corpora remain challenging, and performance can be greatly affected by the amount and quality of data available for training, or by the complexity and ambiguity of the semantic annotation scheme.
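As a concrete illustration of this recipe (a minimal sketch, not the speaker's code), the standard fine-tuning setup can be seen as a pretrained Transformer encoder with two heads: one utterance-level head for the intent and one token-level head for concept/value (slot) tags. The model name, label counts, and example utterance below are illustrative assumptions.

```python
# Sketch of joint intent + slot fine-tuning on a pretrained encoder.
# Encoder name and label counts are placeholders, not values from the talk.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointSLU(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 n_intents=20, n_slots=60):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)  # one label per utterance
        self.slot_head = nn.Linear(hidden, n_slots)      # one label per token

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state             # (batch, seq, hidden)
        intent_logits = self.intent_head(token_states[:, 0])  # [CLS] vector
        slot_logits = self.slot_head(token_states)
        return intent_logits, slot_logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = JointSLU()
batch = tokenizer(["show me flights from boston to denver"], return_tensors="pt")
intent_logits, slot_logits = model(batch["input_ids"], batch["attention_mask"])
```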

Benchmark corpora used to compare SLU models are also of limited size, and differences in performance among systems are often very small and not statistically significant; they can be produced by biases in the data collection or in the annotation scheme rather than by "real" ambiguities. The utterance distribution in these benchmark datasets does not necessarily reflect real-life usage, and the datasets do not contain enough of the "difficult" examples found in deployed services, giving the false impression that current models leave no margin for improvement.
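To make the significance point concrete, here is a minimal paired-bootstrap sketch (a standard test, not a method claimed by the talk) showing why a one-point accuracy gap on a 500-utterance test set is typically not significant. The per-utterance 0/1 scores are synthetic placeholders.

```python
# Paired bootstrap: resample the SAME test utterances for both systems and
# count how often system A fails to beat system B. Scores are synthetic.
import random

def paired_bootstrap_p(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Approximate p-value that system A is not really better than B."""
    rng = random.Random(seed)
    n = len(scores_a)
    at_most_equal = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]       # resample utterances
        diff = sum(scores_a[i] - scores_b[i] for i in idx)
        if diff <= 0:
            at_most_equal += 1
    return at_most_equal / n_resamples

# A 1-point gap (86% vs 85%) on 500 utterances, with partly disjoint errors:
scores_a = [1] * 25 + [0] * 20 + [0] * 50 + [1] * 405   # 430/500 correct
scores_b = [0] * 25 + [1] * 20 + [0] * 50 + [1] * 405   # 425/500 correct
print(paired_bootstrap_p(scores_a, scores_b))  # typically well above 0.05
```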

But how can we characterize how challenging a corpus is? What are the factors that explain why some utterances still resist current state-of-the-art models? And can we automatically predict this complexity when dealing with a new corpus, in order to partition data into several sets representing different sources and levels of difficulty?
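One hypothetical way to operationalize such a partition (an illustrative proxy only, not the method presented in the talk) is to bucket utterances by the uncertainty of a baseline model, for instance the entropy of its predicted intent distribution.

```python
# Bucket utterances into difficulty levels by predictive entropy.
# Utterances and intent distributions below are made up for illustration.
import math

def entropy(probs):
    """Shannon entropy of a model's predicted label distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def partition_by_difficulty(utterances, label_probs, n_bins=3):
    """Rank utterances by entropy, then split into n_bins equal-size
    groups, from 'easy' (confident) to 'hard' (uncertain)."""
    ranked = sorted(zip(utterances, label_probs),
                    key=lambda pair: entropy(pair[1]))
    bins = [[] for _ in range(n_bins)]
    for rank, (utt, _) in enumerate(ranked):
        bins[min(rank * n_bins // len(ranked), n_bins - 1)].append(utt)
    return bins

utts = ["book a flight", "uh the thing tomorrow", "cancel my booking"]
probs = [[0.97, 0.02, 0.01], [0.40, 0.35, 0.25], [0.90, 0.07, 0.03]]
print(partition_by_difficulty(utts, probs))  # easy / medium / hard buckets
```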

This talk will address some of these questions on benchmark SLU corpora, as well as on a new dataset collected from a deployed voice assistant, in order to study whether knowledge extracted from artificial data generalizes to real human-machine interactions. We propose a methodology for assessing the relevance of an SLU corpus. We claim that taking only system performance into account does not provide enough insight into what is covered by current state-of-the-art models and what is left to be done.

(The presentation will be in French)

Recording: https://drive.google.com/file/d/1cAUZI1GuMlDr2u2sPnGZfWfQYKfvDFbG/view?usp=sharing



Join with Zoom at this URL.
Meeting ID: 916 9097 5818, Passcode: 343273.
Phone numbers: https://umontreal.zoom.us/u/abitNZzLg.
One tap mobile: +14388097799,,91690975818#,,,,,,0#,,343273#


Follow this link to subscribe to future RALI-OLST announcements.
http://rali.iro.umontreal.ca/rali/?q=fr/node/1631
