RALI-OLST Seminars

Towards Optimal Knowledge Transfer for Language Models

Peng Lu (LPXD1101 <at> outlook (point) com)

RALI, DIRO

Wednesday, August 31, 2022, at 10:30 a.m. (!!! unusual date !!!)

Zoom meeting, see below


In natural language processing, large pre-trained language models such as BERT and GPT-3 have achieved state-of-the-art performance on many applications. However, pre-training a large language model requires substantial computing resources, and most models are trained from scratch without leveraging previously trained models. To ease the training of neural networks, it is common to apply methods such as Knowledge Distillation (KD), which transfers knowledge from a teacher model to a student model.
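
To make the teacher-to-student transfer concrete, here is a minimal sketch of a standard distillation loss in PyTorch (the function name kd_loss and the hyperparameters alpha and tau are illustrative choices, not taken from this work):

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=2.0):
    # Hard-label term: cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-scaled distributions,
    # rescaled by tau^2 to keep gradient magnitudes comparable.
    soft_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_soft_student = F.log_softmax(student_logits / tau, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * tau * tau
    # alpha balances learning from the data against learning from the teacher.
    return (1.0 - alpha) * ce + alpha * kl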

In this presentation, we address the following three research questions: a) how to determine and control the balance between learning from the teacher and learning from the data so as to improve generalization; b) how to formalize the link between KD and label regularization so that the label regularization can be learned jointly with training; and c) since small models are faster and more stable to train, can we leverage the knowledge of pre-trained small models to accelerate the training of large ones?
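
One textbook way to see the link between KD and label regularization, offered here for context and not necessarily the formulation developed in the talk, is to write both objectives in the same form: the KD loss mixes the hard labels with the teacher's softened distribution, while label smoothing mixes them with a uniform distribution over classes:

\mathcal{L}_{\mathrm{KD}} = (1-\alpha)\, H(\mathbf{y},\, p_S) + \alpha\, \tau^2\, \mathrm{KL}\big(p_T^{\tau} \,\|\, p_S^{\tau}\big)
\mathcal{L}_{\mathrm{LS}} = H\big((1-\varepsilon)\,\mathbf{y} + \varepsilon\,\mathbf{u},\; p_S\big)

Seen this way, label smoothing behaves like distillation from a fixed, uninformative "teacher", which suggests why learning the regularizing distribution during training, as in question b), is a natural generalization.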

Note: the talk will be given in English and is part of a predoc exam.

Recording: https://drive.google.com/file/d/12ybMquIwr9GNGq1p5NR8-2lpDpB5ZRyg/view?usp=sharing



Join us on Zoom using this URL.
Meeting ID: 916 9097 5818, Passcode: 343273.
Dial-in numbers: https://umontreal.zoom.us/u/abitNZzLg.
One tap mobile: +14388097799,,91690975818#,,,,,,0#,,343273#


Follow this link to subscribe to the RALI-OLST mailing list.
http://rali.iro.umontreal.ca/rali/?q=fr/node/1631
