Benchmarking benchmarks: Can we predict how challenging Spoken Language Understanding corpora are across sources, languages and domains?

Frédéric Bechet (Frederic (point) Bechet <at> lis-lab (point) fr)

LIS, Université Aix-Marseille

September 29, 2021, at 11:30 a.m.

Zoom meeting (see http://rali.iro.umontreal.ca/rali/seminaire-virtuel)


In the Transformer era, Spoken Language Understanding (SLU) models have achieved remarkable results on a wide range of benchmark tasks. State-of-the-art models rely on contextual embeddings trained on very large quantities of out-of-domain text, usually with a Transformer architecture, followed by fine-tuning on in-domain data to produce the required semantic representation, often consisting of intent and concept/value labels. While such models have reached near-perfect performance on some SLU benchmark corpora such as ATIS, other corpora remain challenging, and performance can be greatly affected by the amount and quality of data available for training, or by the complexity and ambiguity of the semantic annotation scheme.
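
As a concrete illustration of the architecture described above (a minimal sketch, not the speaker's code), the snippet below wires a pretrained Transformer encoder to two heads: one for the utterance-level intent and one for token-level concept/value labels. The encoder name, label counts and example utterance are illustrative assumptions; such a model would then be fine-tuned on in-domain data with standard cross-entropy losses.

```python
# Minimal sketch of a joint intent + slot (concept/value) SLU model on top of a
# pretrained Transformer encoder. Encoder name and label counts are illustrative.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointSLUModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", n_intents=26, n_slots=120):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)  # utterance-level intent
        self.slot_head = nn.Linear(hidden, n_slots)      # per-token concept labels

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.last_hidden_state[:, 0])  # [CLS] position
        slot_logits = self.slot_head(out.last_hidden_state)            # every token
        return intent_logits, slot_logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["show me flights from boston to denver"], return_tensors="pt")
model = JointSLUModel()
intent_logits, slot_logits = model(batch["input_ids"], batch["attention_mask"])
print(intent_logits.shape, slot_logits.shape)  # (1, n_intents), (1, seq_len, n_slots)
```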

Benchmark corpora used to compare SLU models are also of limited size, and differences in performance among systems are often very small, not statistically significant, and can be produced by biases in the data collection or the annotation scheme rather than by "real" ambiguities. The utterance distribution in these benchmark datasets does not necessarily reflect real-life usage, and the datasets do not contain enough of the "difficult" examples found in deployed services, giving the false impression that there is no margin for improvement in current models.
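
To make the point about statistical significance concrete, here is a minimal sketch (assuming per-utterance correctness scores for two systems on the same test set; the toy inputs are made up for illustration) of a paired bootstrap test. On a small benchmark, a one-point gap typically yields a large p-value.

```python
# Paired bootstrap resampling test: is system B really better than system A,
# or could the observed gap come from the particular (small) test sample?
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10000, seed=0):
    """Return the fraction of resamples where B does not beat A (approx. one-sided p-value)."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    not_better = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)  # resample test utterances with replacement
        if scores_b[idx].mean() <= scores_a[idx].mean():
            not_better += 1
    return not_better / n_resamples

# Toy paired correctness (1 = utterance fully correct) for two systems on 10 utterances.
a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1])
print(paired_bootstrap(a, b))  # a large value suggests the gap is not significant
```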

But how can we characterize how challenging a corpus is? What are the factors that explain why some utterances still resist current state-of-the-art models? And can we automatically predict this complexity when dealing with a new corpus, in order to partition the data into several sets representing different sources and levels of difficulty?
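
As a hypothetical illustration of the last question (not the methodology presented in the talk), any per-utterance difficulty score, e.g. a model confidence or an annotation-ambiguity measure, could be used to split a corpus into difficulty buckets; the function, utterances and scores below are assumptions for the sake of example.

```python
# Partition utterances into buckets of increasing (hypothetical) difficulty score.
import numpy as np

def partition_by_difficulty(utterances, scores, n_bins=3):
    """Split utterances into n_bins groups ordered by increasing difficulty score."""
    order = np.argsort(scores)
    return [[utterances[i] for i in idx] for idx in np.array_split(order, n_bins)]

utts = ["book a flight", "uh the one after the the next one", "cancel it", "to denver via um"]
scores = [0.05, 0.90, 0.10, 0.75]  # higher = harder (illustrative values)
easy, medium, hard = partition_by_difficulty(utts, scores, n_bins=3)
print(hard)
```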

This talk will address some of these questions on benchmark SLU corpora as well as on a new dataset collected from a deployed voice assistant, in order to study whether knowledge extracted from artificial data can generalize to real human-machine interactions. We propose a methodology for assessing the relevance of an SLU corpus. We claim that taking only system performance into account does not provide enough insight into what is covered by current state-of-the-art models and what is left to be done.

(The presentation will be in French)

Recording: https://drive.google.com/file/d/1cAUZI1GuMlDr2u2sPnGZfWfQYKfvDFbG/view?usp=sharing


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of other seminars