RALI-OLST-ILFC Seminars

On Wednesdays at 11:30 a.m. (Montreal time), we hold a one-hour seminar on a topic in natural language processing or linguistics. It is typically offered in hybrid mode (in person & videoconference). Once a month, the seminar is organized by the French research group Linguistique Informatique, Formelle et de Terrain.

Improved Knowledge Distillation for Pre-trained Language Models

Mehdi Rezagholizadeh (mehdi (dot) rezagholizadeh <at> huawei (dot) com)

Huawei Montreal

Wednesday, November 10, 2021, at 11:30 a.m.

Zoom meeting (see below)


Knowledge distillation (KD) was originally proposed for neural model compression, where it became very prominent, and later showed its potential for improving the accuracy of neural models as well. This talk introduces knowledge distillation and examines how and why it helps in compressing and training neural networks. Despite the great success of KD, in this presentation we revisit it from three different perspectives: data, model, and training. From the data point of view, we deploy a MiniMax approach to spot regions of the input space where the teacher and student networks diverge the most from each other, and we generate augmented data from the training samples to cover these maximum-divergence regions. The new augmented samples enrich the training data and improve KD training. From the model point of view, the original KD technique only uses information from the last layer of the teacher and student networks to match their outputs. However, the literature shows that matching the internal representations or other internal statistics of the two networks can lead to better performance in some architectures, such as transformer-based models. This observation opens a new line of research on the best way of matching the internal representations of two networks. From the training point of view, VC-dimension theory makes it evident that KD performs poorly when the capacity gap between the teacher and student networks becomes large. This problem becomes more serious in NLP given the ever-growing size of pre-trained models. We propose a solution based on an annealing technique in which the student is exposed to a smoothed version of the teacher's output in the early stages, and this smoothing is gradually reduced with a temperature factor during training. The talk includes application insights, theoretical and empirical evidence, and practical experiments supporting the effectiveness of our proposed methods.
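
The following is a minimal, illustrative sketch (in PyTorch) of the temperature-annealing idea described in the abstract: the student first sees a heavily smoothed teacher distribution, and the smoothing is gradually reduced over training. It is a generic KD loss written for illustration only; the function and parameter names (annealed_temperature, kd_loss, max_T, total_steps, alpha) are assumptions, not the speaker's actual implementation.

import torch
import torch.nn.functional as F

def annealed_temperature(step: int, total_steps: int, max_T: float = 10.0) -> float:
    # Assumed linear schedule: start with a heavily smoothed teacher (high T)
    # and anneal toward T = 1 (no extra smoothing) by the end of training.
    progress = min(step / max(total_steps, 1), 1.0)
    return max_T - (max_T - 1.0) * progress

def kd_loss(student_logits, teacher_logits, labels, step, total_steps, alpha=0.5):
    # Cross-entropy on gold labels plus KL divergence to the temperature-smoothed
    # teacher distribution (standard Hinton-style KD objective).
    T = annealed_temperature(step, total_steps)
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # usual T^2 factor so gradient magnitudes stay comparable across T
    return alpha * ce + (1.0 - alpha) * kl

# Toy usage with random logits (batch of 8, hypothetical 30522-word vocabulary):
student_logits = torch.randn(8, 30522)
teacher_logits = torch.randn(8, 30522)
labels = torch.randint(0, 30522, (8,))
loss = kd_loss(student_logits, teacher_logits, labels, step=0, total_steps=1000)

Early in training (step = 0) the loss uses T = max_T, so the teacher's distribution is strongly smoothed; by step = total_steps the temperature has annealed to 1 and the student matches the teacher's unsmoothed output.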

The presentation will be given in English.

Recording available here: https://drive.google.com/file/d/1yyB2YJaf_hgM9pxUhTRIIuD7XlSLT1T-/view?usp=sharing



Join us via Zoom using this URL.
Meeting ID: 916 9097 5818, Passcode: 343273.
Phone numbers: https://umontreal.zoom.us/u/abitNZzLg.
One tap mobile: +14388097799,,91690975818#,,,,,,0#,,343273#


Follow this link to subscribe to the RALI-OLST mailing list.
http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of all seminars by year:

1991 1992 1993 1994 1995 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025