SemEval-2021 Task 12: Learning with Disagreements

Alexandra Uma, Tommaso Fornaciari, Anca Dumitrache, Tristan Miller, Jon Chamberlain, Barbara Plank, Edwin Simpson, Massimo Poesio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Disagreement between coders is ubiquitous in virtually all datasets annotated with human judgements, in both natural language processing and computer vision. However, most supervised machine learning methods assume that a single preferred interpretation exists for each item, which is at best an idealization. The aim of the SemEval-2021 shared task on Learning with Disagreements (Le-wi-Di) was to provide a unified testing framework for methods that learn from data containing multiple and possibly contradictory annotations, covering the best-known datasets with information about disagreements in interpreting language and classifying images. In this paper we describe the shared task and its results.

Original language: English
Title of host publication: SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop
Editors: Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Publisher: Association for Computational Linguistics
Pages: 338-347
Number of pages: 10
ISBN (Electronic): 9781954085701
Publication status: Published - 2021
Externally published: Yes
Event: 15th International Workshop on Semantic Evaluation, SemEval 2021 - Virtual, Bangkok, Thailand
Duration: 5 Aug 2021 - 6 Aug 2021

Publication series

Name: SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop

Conference

Conference: 15th International Workshop on Semantic Evaluation, SemEval 2021
Country/Territory: Thailand
City: Virtual, Bangkok
Period: 5/08/21 - 6/08/21
