HardEval: Focusing on Challenging Tokens to Assess Robustness of NER
Title | HardEval: Focusing on Challenging Tokens to Assess Robustness of NER |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Bernier-Colborne, G., and P. Langlais |
Conference Name | Proceedings of the 12th Language Resources and Evaluation Conference |
Publisher | European Language Resources Association |
Place Published | Marseille, France |
ISBN Number | 979-10-95546-34-4 |
Abstract | To assess the robustness of NER systems, we propose an evaluation method that focuses on subsets of tokens that represent specific sources of errors: unknown words and label shift or ambiguity. These subsets provide a system-agnostic basis for evaluating specific sources of NER errors and assessing room for improvement in terms of robustness. We analyze these subsets of challenging tokens in two widely-used NER benchmarks, then exploit them to evaluate NER systems in both in-domain and out-of-domain settings. Results show that these challenging tokens explain the majority of errors made by modern NER systems, although they represent only a small fraction of test tokens. They also indicate that label shift is harder to deal with than unknown words, and that there is much more room for improvement than the standard NER evaluation procedure would suggest. We hope this work will encourage NLP researchers to adopt rigorous and meaningful evaluation methods, and will help them develop more robust models. |
URL | https://www.aclweb.org/anthology/2020.lrec-1.211 |
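The abstract describes an evaluation built around system-agnostic subsets of test tokens: unknown words, and tokens whose labels shift or are ambiguous relative to the training data. As a rough illustration of that idea only, here is a minimal Python sketch; the function name, the case-folding, and the exact label-shift criterion are assumptions made for this example, not the paper's definitions:

```python
from collections import Counter, defaultdict

def challenging_token_subsets(train, test):
    # train, test: lists of (token, label) pairs from the annotated corpora.
    # Returns indices of test tokens that are "unknown" (never seen in
    # training) or show "label shift" (the test label is absent from, or
    # competes with other labels in, the token's training distribution).
    train_labels = defaultdict(Counter)
    for tok, lab in train:
        train_labels[tok.lower()][lab] += 1

    unknown, label_shift = [], []
    for i, (tok, lab) in enumerate(test):
        seen = train_labels.get(tok.lower())
        if seen is None:
            unknown.append(i)       # out-of-vocabulary w.r.t. training data
        elif lab not in seen or len(seen) > 1:
            label_shift.append(i)   # label differs from, or is ambiguous in, training
    return unknown, label_shift
```

Restricting standard NER metrics to such index sets, before any system is trained, is what makes the subsets a system-agnostic basis for locating the error concentration the abstract reports.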