Vector-Space Proximity-Based Document Retrieval for Document Embeddings Built by Transformers

Pavel Khloponin (pioneerappx <at> gmail (point) com)

CLaC, Concordia University

June 15, 2022, at 11:30 a.m.

Zoom meeting, see http://rali.iro.umontreal.ca/rali/seminaire-virtuel


In this presentation, I will explain my approach to the TREC News Track shared task run by NIST. The task is organized in collaboration with The Washington Post, which helped build a test collection of 670K news articles. Given 50 query news articles from the same collection, participants have to select and rank the 100 most relevant articles (backlinks). The submitted results were then pooled and assessed by experts, yielding a relevance grade for each assessed backlink. Each submission is evaluated with nDCG@5, i.e., by how far its ranking is from the ideal possible ranking. This means participants not only need to find relevant news articles, but also order them from most to least relevant.
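
As a concrete illustration of the metric, below is a minimal sketch of nDCG@5, assuming graded relevance judgments for the top-ranked backlinks of one topic. The exact formulation used by trec_eval may differ slightly (e.g., in the gain function), so this is only meant to convey the idea.

```python
import math

def dcg_at_k(relevances, k=5):
    """Discounted cumulative gain over the top-k graded relevance labels."""
    return sum(rel / math.log2(rank + 2)          # rank 0 -> discount log2(2) = 1
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    """nDCG@k: DCG of the submitted ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(relevances, reverse=True)      # best possible ordering of the same labels
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: hypothetical relevance grades of the top 5 returned backlinks for one topic.
print(ndcg_at_k([2, 0, 3, 1, 0], k=5))
```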

In this work, we used Okapi BM25 as our baseline model. We also built embeddings for the news articles with a variety of transformer-based embedding models (19 models from 5 families) and retrieved the backlinks with a wide range of proximity measures (85 different ones). Additional exploration of other hyperparameters led to the evaluation of 47,332 unique configurations of our system. We also explored the performance of a combined model in which Okapi BM25 and the proximity-based models work together.
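
To make the setup concrete, here is a minimal sketch of such a combined pipeline. The libraries (rank_bm25, sentence_transformers), the specific embedding model, the choice of cosine similarity as the proximity measure, and the min-max score fusion are illustrative assumptions, not the 19 models, 85 measures, or the combination rule actually evaluated in the work.

```python
# Illustrative sketch only: model name, proximity measure and fusion rule are assumptions.
import numpy as np
from rank_bm25 import BM25Okapi                         # lexical BM25 baseline
from sentence_transformers import SentenceTransformer   # one possible embedding model

docs = ["first news article ...", "second news article ...", "query news article ..."]
tokenized = [d.lower().split() for d in docs]

# 1) Baseline: Okapi BM25 scores of every collection document against the query article.
bm25 = BM25Okapi(tokenized[:-1])
bm25_scores = np.array(bm25.get_scores(tokenized[-1]))

# 2) Proximity-based model: embed the articles and score by cosine similarity
#    (one of many possible proximity measures).
encoder = SentenceTransformer("all-MiniLM-L6-v2")        # hypothetical model choice
emb = encoder.encode(docs, normalize_embeddings=True)
cosine_scores = emb[:-1] @ emb[-1]

# 3) Simple combination: min-max normalize both score lists, average them,
#    and keep the 100 highest-scoring articles as backlinks.
def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

combined = 0.5 * minmax(bm25_scores) + 0.5 * minmax(cosine_scores)
top_100 = np.argsort(combined)[::-1][:100]
```

In practice, a run of this kind sweeps the choice of embedding model, proximity measure, and other hyperparameters, which is what produces the tens of thousands of configurations mentioned above.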

Our baseline model achieved the highest score at the TREC News Track 2020. The performance of the proximity-based approach alone was below the median, but the combined approach showed improvement on the topics that BM25 struggled with.

Recording: https://drive.google.com/file/d/1uG2fK8zGQvf6_2e42XhMwm04zDmmc8oG/view?usp=sharing


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of other seminars