Unsupervised multiple-choice question generation for out-of-domain Q&A fine-tuning

Guillaume Le Berre (guillaume (point) le_berre <at> depinfonancy (point) net)

RALI, DIRO

September 22, 2021, 11:30 a.m.

Zoom meeting (see http://rali.iro.umontreal.ca/rali/seminaire-virtuel)


Pre-trained models have shown very good performance on a number of question answering benchmarks, especially when fine-tuned on several question answering datasets at once. In this work, we generate a fine-tuning dataset with a rule-based algorithm that produces questions and answers from unannotated sentences. We show that the state-of-the-art model UnifiedQA can benefit greatly from such a system on a multiple-choice benchmark covering physics, biology, and chemistry on which it has never been trained. We further show that performance can be improved by selecting the most challenging distractors (wrong answers) with a dedicated ranker based on a pretrained RoBERTa model.
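The abstract does not give implementation details for the distractor ranker. As a rough illustration only, the sketch below scores each (question, candidate distractor) pair with a RoBERTa sequence-classification head and keeps the highest-scoring candidates; the checkpoint name, the scoring criterion, and the assumption that the head has been fine-tuned for this task are all mine, not the authors'.

```python
# Illustrative sketch of a RoBERTa-based distractor ranker (not the authors' code).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumed checkpoint; the classification head would need fine-tuning
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()


def rank_distractors(question: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Score each (question, candidate) pair and keep the candidates the
    ranker judges the most challenging."""
    scores = []
    for candidate in candidates:
        inputs = tokenizer(question, candidate, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        # Take the probability of the assumed "challenging" class (index 1) as the score.
        scores.append(torch.softmax(logits, dim=-1)[0, 1].item())
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [candidate for candidate, _ in ranked[:top_k]]


if __name__ == "__main__":
    question = "Which gas do plants absorb during photosynthesis?"
    candidates = ["Oxygen", "Nitrogen", "Helium", "Methane"]
    print(rank_distractors(question, candidates))
```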

The presentation will be given in French.

Video recording: https://drive.google.com/file/d/11dBS-RL8zNON9RGsniQbUkbuFV_9WoCe/view?usp=sharing


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of other seminars