UnNatural Language Inference
Prasanna Parthasarathi (pp1403 <at> gmail (dot) com)
Wednesday 13 October 2021 at 11:30 AM
Zoom meeting
Recent investigations into the inner workings of state-of-the-art large-scale pre-trained Transformer-based Natural Language Understanding (NLU) models indicate that they appear to know humanlike syntax, at least to some extent. We provide novel evidence that complicates this claim: we find that state-of-the-art Natural Language Inference (NLI) models assign the same labels to permuted examples as they do to the originals, i.e., they are largely invariant to random word-order permutations. This behavior notably differs from that of humans, who struggle with ungrammatical sentences. To measure the severity of this issue, we propose a suite of metrics and investigate which properties of particular permutations lead models to be word-order invariant. In the MNLI dataset, for example, we find that almost all (98.7%) examples contain at least one permutation that elicits the gold label. Models are sometimes even able to assign gold labels to permutations of examples that they originally failed to predict correctly. We provide a comprehensive empirical evaluation of this phenomenon, and further show that this issue exists for both Transformers and pre-Transformer RNN/ConvNet-based encoders, as well as across multiple languages (English and Mandarin Chinese).
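To make the permutation probe concrete, here is a minimal sketch (not the evaluation code from the talk) of the idea: randomly reorder the words of an NLI input and check whether an off-the-shelf MNLI model still predicts the original label. It assumes the Hugging Face transformers library and the public roberta-large-mnli checkpoint, and for brevity permutes only the hypothesis with whitespace tokenization; the example premise/hypothesis pair is invented.

```python
# A minimal sketch of word-order-invariance probing for NLI, assuming the
# Hugging Face `transformers` library and the `roberta-large-mnli` checkpoint.
import random

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()


def predict(premise: str, hypothesis: str) -> str:
    """Return the model's NLI label for a premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Map the argmax class index to its label name via the model config.
    return model.config.id2label[int(logits.argmax(dim=-1))]


def permute(sentence: str, rng: random.Random) -> str:
    """Randomly reorder the words of a sentence (whitespace tokenization)."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)


# Hypothetical example pair, for illustration only.
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
rng = random.Random(0)

original = predict(premise, hypothesis)
# Count how many of ten random permutations still elicit the original label.
hits = sum(
    predict(premise, permute(hypothesis, rng)) == original for _ in range(10)
)
print(f"original label: {original}; {hits}/10 permutations elicit it too")
```

A human reader would find most of these shuffled hypotheses ungrammatical or unreadable; a model that still returns the original label on many of them is exhibiting exactly the word-order invariance the abstract describes.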
Recording available here: https://drive.google.com/file/d/1I_8Jqmwhb8M1n06B51fAx4ijg2r1HTX7/view?usp=sharing
To receive weekly talk announcements, please send an e-mail to email@example.com. Simply write a message containing the single line 'subscribe ralli' (without the quotes, with a double 'l' in 'ralli').