Towards Optimal Knowledge Transfer for Language Models

Peng Lu (LPXD1101 <at> outlook (dot) com)

RALI, DIRO

August 31, 2022 at 10:30 a.m. (note the unusual date!)

Zoom meeting, see http://rali.iro.umontreal.ca/rali/seminaire-virtuel


In the field of natural language processing, large pre-trained language models such as BERT and GPT-3 have achieved state-of-the-art performance on many applications. However, pre-training such models requires substantial computing resources, and most models are trained from scratch without leveraging previously trained ones. To ease the training of neural networks, it is common to apply methods such as Knowledge Distillation (KD), which transfers knowledge from a teacher model to a student model.
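As a point of reference, a minimal sketch of a standard KD objective is given below (my own illustration, not the speaker's method): the student matches the teacher's softened output distribution while also fitting the ground-truth labels, and the hyperparameters `alpha` and `T` are assumed names for the teacher/data weight and the temperature.

```python
# Minimal sketch of a standard KD objective (illustrative, not the presented method).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Weighted sum of a distillation term and a supervised term.

    `alpha` balances learning from the teacher vs. from the data;
    `T` is the softmax temperature. Both are illustrative defaults.
    """
    # Soft-target term: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```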

In this presentation, we aim to address the following three research questions: a) how to determine and control the balance between learning from the teacher and learning from the data in order to improve generalization; b) how to formalize the link between KD and label regularization so that the label regularization can be learned jointly during training; and c) given that small models train faster and more stably, how to leverage the knowledge of pre-trained small models to accelerate the training of large models.
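To make the link in point (b) concrete, the sketch below shows the well-known observation that the soft-target term of KD with a uniform "virtual teacher" reduces to the regularizer used in label smoothing; this is a generic illustration under that assumption, not the formalization presented in the talk, and `epsilon` is an assumed smoothing weight.

```python
# Illustration of the KD / label-regularization connection (point b):
# a uniform teacher turns the KD soft-target term into label smoothing.
import torch
import torch.nn.functional as F

def label_smoothing_as_kd(student_logits, labels, epsilon=0.1):
    num_classes = student_logits.size(-1)
    # A "virtual teacher": uniform probability over all classes.
    uniform_teacher = torch.full_like(student_logits, 1.0 / num_classes)
    log_probs = F.log_softmax(student_logits, dim=-1)
    # Cross-entropy to the uniform teacher is exactly the smoothing term
    # of label smoothing (up to an additive constant).
    smooth_term = -(uniform_teacher * log_probs).sum(dim=-1).mean()
    hard_term = F.cross_entropy(student_logits, labels)
    return (1.0 - epsilon) * hard_term + epsilon * smooth_term
```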

Note: the talk will be given in English and is part of a predoc exam.

Recording: https://drive.google.com/file/d/12ybMquIwr9GNGq1p5NR8-2lpDpB5ZRyg/view?usp=sharing


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of other seminars