Attending Knowledge Facts With BERT-Like Models In Question-Answering: Disappointing Results And Some Explanations

Guillaume Le Berre (guillaume (point) le_berre <at> depinfonancy (point) net)

RALI, DIRO

November 18, 2020, at 11:30 a.m.

Zoom meeting (see http://rali.iro.umontreal.ca/rali/seminaire-virtuel)


Since the first appearance of BERT, pretrained BERT-inspired models (XLNet, RoBERTa, ...) have delivered state-of-the-art results on a large number of Natural Language Processing tasks. This includes question answering, where previous models performed relatively poorly, particularly on datasets with a limited amount of data. In this paper we perform experiments with BERT on two such datasets, OpenBookQA and ARC. Our aim is to understand why, in our experiments, using BERT sentence representations inside an attention mechanism over a set of facts tends to give poor results. We demonstrate that in some cases the sentence representations produced by BERT are semantically limited, and that BERT often answers the questions in a meaningless way.
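To make the setup described in the abstract concrete, below is a minimal sketch of what "using BERT sentence representations inside an attention mechanism over a set of facts" could look like. It is not the speaker's implementation: the model checkpoint, the use of the [CLS] vector as the sentence representation, the example question and facts, and the dot-product scoring are all assumptions made for illustration.

```python
# Hypothetical sketch: weight a set of knowledge facts against a question
# using BERT [CLS] sentence representations and dot-product attention.
# Checkpoint, pooling choice and scoring function are assumptions, not
# necessarily those used in the presented work.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def encode(sentences):
    """Return one [CLS] vector per sentence, shape (batch, hidden)."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return out.last_hidden_state[:, 0, :]  # [CLS] token embedding

# Toy question and facts (made up for this example).
question = "Which object conducts electricity best?"
facts = [
    "Metals are good conductors of electricity.",
    "Wood is an electrical insulator.",
    "Copper is a metal.",
]

q_vec = encode([question])   # (1, hidden)
f_vecs = encode(facts)       # (num_facts, hidden)

# Attention weights over the facts: softmax of dot-product similarities.
scores = torch.matmul(f_vecs, q_vec.squeeze(0))   # (num_facts,)
weights = torch.softmax(scores, dim=0)
context = torch.matmul(weights, f_vecs)            # weighted fact summary

for fact, w in zip(facts, weights.tolist()):
    print(f"{w:.3f}  {fact}")
```

The talk's finding is that attention weights computed this way are often not semantically informative: the [CLS] representations of different facts can be too similar for the softmax to single out the relevant one.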

Presentation recording (in French): https://drive.google.com/file/d/17nc5yu4r15xt8fA3d-63nUF0W-AOc6EW/view?usp=sharing


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631
