HardEval: Focusing on Challenging Tokens to Assess Robustness of NER
Title | HardEval: Focusing on Challenging Tokens to Assess Robustness of NER |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Bernier-Colborne, G., and P. Langlais |
Conference Name | Proceedings of The 12th Language Resources and Evaluation Conference |
Publisher | European Language Resources Association |
Place of Publication | Marseille, France |
ISBN | 979-10-95546-34-4 |
Abstract | To assess the robustness of NER systems, we propose an evaluation method that focuses on subsets of tokens that represent specific sources of errors: unknown words and label shift or ambiguity. These subsets provide a system-agnostic basis for evaluating specific sources of NER errors and assessing room for improvement in terms of robustness. We analyze these subsets of challenging tokens in two widely-used NER benchmarks, then exploit them to evaluate NER systems in both in-domain and out-of-domain settings. Results show that these challenging tokens explain the majority of errors made by modern NER systems, although they represent only a small fraction of test tokens. They also indicate that label shift is harder to deal with than unknown words, and that there is much more room for improvement than the standard NER evaluation procedure would suggest. We hope this work will encourage NLP researchers to adopt rigorous and meaningful evaluation methods, and will help them develop more robust models. |
URL | https://www.aclweb.org/anthology/2020.lrec-1.211 |
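As an illustration of the evaluation idea in the abstract, the sketch below partitions test tokens into two challenging subsets and scores a system on each one separately. The heuristics used here (a token is an "unknown word" if it never appears in training, and shows "label shift" if its gold test label differs from its majority training label) are simplified assumptions for illustration, not the paper's exact HardEval criteria, and all data in the usage example is hypothetical.

```python
# Sketch of subset-based NER evaluation, assuming token-level gold labels
# and predictions. The subset heuristics are simplified stand-ins for the
# paper's HardEval criteria.
from collections import Counter, defaultdict

def challenging_subsets(train, test):
    """train, test: lists of (token, gold_label) pairs.
    Returns index sets of 'unknown word' and 'label shift' test tokens."""
    seen = defaultdict(Counter)
    for tok, lab in train:
        seen[tok.lower()][lab] += 1
    unknown, label_shift = set(), set()
    for i, (tok, lab) in enumerate(test):
        counts = seen.get(tok.lower())
        if counts is None:
            unknown.add(i)  # token never observed in training
        elif counts.most_common(1)[0][0] != lab:
            label_shift.add(i)  # gold label differs from majority training label
    return unknown, label_shift

def error_rate(test, predictions, indices):
    """Fraction of the tokens at `indices` that the system labels wrong."""
    if not indices:
        return 0.0
    return sum(predictions[i] != test[i][1] for i in indices) / len(indices)

# Toy usage with hypothetical data:
train = [("Paris", "B-LOC"), ("visited", "O"), ("Paris", "B-LOC")]
test = [("Paris", "B-PER"), ("Nairobi", "B-LOC"), ("visited", "O")]
preds = ["B-LOC", "O", "O"]  # a hypothetical system's output
unknown, shift = challenging_subsets(train, test)
print("unknown-word error:", error_rate(test, preds, unknown))  # Nairobi missed
print("label-shift error:", error_rate(test, preds, shift))     # 'Paris' as B-PER missed
print("overall error:", error_rate(test, preds, set(range(len(test)))))
```

Comparing the per-subset error rates against the overall rate yields the kind of breakdown the abstract describes: a small fraction of challenging tokens accounting for most of a system's errors.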