RALI-OLST Seminars

Towards Optimal Knowledge Transfer for Language Models

Peng Lu (LPXD1101 <at> outlook (dot) com)

RALI, DIRO

Wednesday 31 August 2022 at 10:30 AM (please note the unusual date!)

Zoom meeting, details below


In the field of natural language processing, large pre-trained language models such as BERT and GPT-3 have achieved state-of-the-art performance on many applications. However, pre-training a large language model requires substantial computing resources, and most models are trained from scratch without leveraging previously trained ones. To ease the training of neural networks, it is common to apply methods such as Knowledge Distillation (KD), which transfers knowledge from a teacher model to a student model.

In this presentation, we aim to address the following three research questions: a) how to determine and control the balance between learning from the teacher and learning from the data so as to improve generalization; b) how to formalize the link between KD and label regularization so that the label regularization can be learned during training; and finally, c) since small models are faster and more stable to train, can we leverage the knowledge of pre-trained small models to accelerate the training of large models?
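
For background, the sketch below shows the generic KD objective (the standard Hinton-style formulation, in PyTorch, assuming a classification setting; it is not necessarily the exact objective presented in the talk). The weight alpha corresponds to the teacher-versus-data balance mentioned in question (a).

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
        """Generic knowledge-distillation loss: a weighted mix of cross-entropy
        on the gold labels (learning from the data) and KL divergence towards
        the teacher's temperature-softened predictions (learning from the teacher).
        alpha and temperature are illustrative hyperparameters."""
        # Hard-label term: learn from the data.
        ce = F.cross_entropy(student_logits, labels)
        # Soft-label term: learn from the teacher's softened distribution.
        log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
        # alpha controls the balance between teacher signal and data signal.
        return (1.0 - alpha) * ce + alpha * kl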

Note: the talk will be given in English and is part of a predoctoral exam.

Recording: https://drive.google.com/file/d/12ybMquIwr9GNGq1p5NR8-2lpDpB5ZRyg/view?usp=sharing



Join the meeting with Zoom at this URL.
Meeting ID: 916 9097 5818, Passcode: 343273.
Phone numbers: https://umontreal.zoom.us/u/abitNZzLg.
One tap mobile: +14388097799,,91690975818#,,,,,,0#,,343273#


Follow this link to subscribe to future RALI-OLST announcements.
http://rali.iro.umontreal.ca/rali/?q=fr/node/1631
