Evaluation of Generated Texts

Ehud Reiter; Jackie Cheung (e (dot) reiter <at> abdn (dot) ac (dot) uk; jackie (dot) cheung <at> mcgill (dot) ca)

University of Aberdeen (ehudreiter.com) and McGill University

May 13, 2024, at 2:00 p.m. - Different day and time - CLIQ.ai Meetup

Room 3195, Pavillon André-Aisenstadt - Simultaneous broadcast on Microsoft Teams: see the CLIQ-AI announcement


1. High-Quality Human Evaluation of Generated Texts

Ehud Reiter

Human evaluation is the best way to evaluate text generation systems *if* it is done rigorously, with good experimental design, execution, and reporting. Unfortunately, many of the human evaluations in NLP have quality problems, including poor experimental execution and questionable experimental design; many evaluations are also difficult to reproduce. I will summarise our recent work on developing high-quality human evaluation techniques and protocols, on analysing problems in experiments done elsewhere, and on reproducing human evaluations.

2. Evaluation of NLG Systems: From Reaction to Anticipation

Jackie Cheung

Large language models (LLMs) can make factual errors or unsupported claims, sometimes called hallucinations, which can be costly or harmful to the developers, users, and other stakeholders of an NLG system. This type of error was anticipated by the NLG research community years in advance of the release of popular pretrained LLMs, yet such errors still occur in deployed LLMs. Hallucinations, however, are not the only issue faced by LLMs. In this talk, I ask: how can we anticipate and mitigate potential issues with NLG systems before they become the next embarrassing headline? First, I discuss our work systematically surveying the recent NLP literature on automatic summarization as a study of how NLP researchers discuss and frame responsible AI issues. Overall, we find that the papers we examined typically do not discuss downstream stakeholders or imagine potential impacts or harms that a summarization system may cause. Next, I present our current efforts to encourage more structured reflection on evaluation practices in NLP, focusing in particular on benchmark design and creation. I introduce our novel framework, Evidence-Centred Benchmark Design, inspired by work in educational assessment.

MS Teams link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZDFmOWRlOGUtYTdhOC00YjFkLWJjMmQtMzgwMTliOGMyNzIz%40thread.v2/0?context=%7b%22Tid%22%3a%22d27eefec-2a47-4be7-981e-0f8977fa31d8%22%2c%22Oid%22%3a%22f7e10f81-7a39-411d-8700-39a75edd1bfd%22%7d

Full program of the presentations:

https://docs.google.com/document/d/1ATBaF1HpL6Fd8fTJsn-7KpN8dWa89AvATUCJTW15GVU/edit#heading=h.5f6z6e9rbese


Ehud Reiter is a Professor of Computing Science at the University of Aberdeen focusing on Natural Language Generation (NLG) technology, that is, software that uses artificial intelligence and natural language processing techniques to automatically produce high-quality texts and narratives from non-linguistic data. In recent years, much of his research has focused on the evaluation of generated texts, especially human evaluation.

He regularly writes on his blog: https://ehudreiter.com.

Jackie Chi Kit Cheung is an associate professor at McGill University's School of Computer Science, where he co-directs the Reasoning and Learning Lab. He is a Canada CIFAR AI Chair and an Associate Scientific Co-Director at the Mila Quebec AI Institute. His research focuses on topics in natural language generation such as automatic summarization, and on integrating diverse knowledge sources into NLP systems for pragmatic and common-sense reasoning. He also works on applications of NLP to domains such as education, health, and language revitalization. He is motivated in particular by how the structure of the world can be reflected in the structure of language processing systems. He is a consulting researcher at Microsoft Research Montreal.


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of other seminars