Epicurus at SemEval-2023 Task 4: Improving Prediction of Human Values behind Arguments by Leveraging Their Definitions

Christian Fang*, Qixiang Fang*, Dong Nguyen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

We describe our experiments for SemEval-2023 Task 4 on the identification of human values behind arguments (ValueEval). Because human values are subjective concepts which require precise definitions, we hypothesize that incorporating the definitions of human values (in the form of annotation instructions and validated survey items) during model training can yield better prediction performance. We explore this idea and show that our proposed models perform better than the challenge organizers’ baselines, with improvements in macro F1 scores of up to 18%.
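The abstract does not spell out the model architecture, but the core idea of conditioning the classifier on each value's textual definition can be illustrated with a rough sketch. The snippet below is a minimal, hypothetical example assuming a cross-encoder setup that scores (argument, value definition) pairs; the model name, the definition texts, and the pairing scheme are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumption, not the paper's implementation): treat value
# detection as scoring (argument, value definition) pairs with a cross-encoder,
# so the definition text is visible to the model at training/inference time.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption: any encoder usable as a cross-encoder

# Hypothetical excerpt of value definitions (e.g. drawn from annotation
# instructions or survey items); the ValueEval task defines 20 value categories.
VALUE_DEFINITIONS = {
    "Self-direction: thought": "It is good to have one's own ideas and interests.",
    "Security: societal": "It is good to have a secure and stable wider society.",
}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def score_argument(argument: str) -> dict[str, float]:
    """Return an estimated P(value present) for each value, given its definition."""
    scores = {}
    for value, definition in VALUE_DEFINITIONS.items():
        # The argument and the value's definition are encoded as a sentence pair,
        # so the classifier conditions directly on the definition text.
        inputs = tokenizer(argument, definition, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        scores[value] = torch.softmax(logits, dim=-1)[0, 1].item()
    return scores

print(score_argument("We should subsidize public transport to reduce emissions."))
```

In a setup like this, the same binary classifier is reused across all values, and fine-tuning on labelled (argument, definition) pairs would let the model exploit the wording of the definitions rather than memorising opaque label indices.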
Original language: English
Title of host publication: Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Editors: Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Publisher: Association for Computational Linguistics (ACL)
Pages: 221–229
Number of pages: 9
DOIs
Publication status: Published - 2023
