Prompting Implicit Discourse Relation Annotation

Frances Yung, Mansoor Ahmad, Merel Scholman, Vera Demberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Pre-trained large language models, such as ChatGPT, achieve outstanding performance in various reasoning tasks without supervised training and have been found to outperform crowd workers. Nonetheless, ChatGPT's performance on the task of implicit discourse relation classification, when prompted with a standard multiple-choice question, is still far from satisfactory and considerably inferior to state-of-the-art supervised approaches. This work investigates several proven prompting techniques to improve ChatGPT's recognition of discourse relations. In particular, we experimented with breaking down the classification task, which involves numerous abstract labels, into smaller subtasks. However, experimental results show that inference accuracy hardly changes even with sophisticated prompt engineering, suggesting that implicit discourse relation classification is not yet resolvable under zero-shot or few-shot settings.
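
The "standard multiple-choice question" setup the abstract refers to can be illustrated with a minimal sketch. The prompt wording, the label inventory (PDTB-style top-level senses), and the model name below are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch of zero-shot multiple-choice prompting for implicit
# discourse relation classification. Prompt wording, label set, and
# model name are assumptions, not the paper's exact setup.
from openai import OpenAI

# Assumed label inventory: PDTB-style top-level senses.
LABELS = ["Temporal", "Contingency", "Comparison", "Expansion"]

def build_prompt(arg1: str, arg2: str) -> str:
    """Format the two discourse arguments as a multiple-choice question."""
    options = "\n".join(f"{chr(65 + i)}. {label}" for i, label in enumerate(LABELS))
    return (
        "What is the discourse relation between the two arguments below?\n"
        f"Argument 1: {arg1}\n"
        f"Argument 2: {arg2}\n"
        f"Choose one option:\n{options}\n"
        "Answer with a single letter."
    )

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify(arg1: str, arg2: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the paper evaluates ChatGPT
        messages=[{"role": "user", "content": build_prompt(arg1, arg2)}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip()
    # Map the returned letter back to a label; fall back to the raw text.
    index = ord(answer[0].upper()) - 65
    return LABELS[index] if 0 <= index < len(LABELS) else answer
```

The subtask decomposition the abstract mentions would replace this single question with a sequence of smaller ones (e.g. first narrowing the sense family, then the specific label); the single-question form above is only the baseline setup.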

Original language: English
Title of host publication: Proceedings of The 18th Linguistic Annotation Workshop (LAW-XVIII)
Editors: Sophie Henning, Manfred Stede
Publisher: Association for Computational Linguistics
Pages: 150-165
Number of pages: 16
ISBN (Electronic): 9798891760738
Publication status: Published - 2024
Event: 18th Linguistic Annotation Workshop, LAW 2024 - St. Julian's, Malta
Duration: 22 Mar 2024 → …

Conference

Conference: 18th Linguistic Annotation Workshop, LAW 2024
Country/Territory: Malta
City: St. Julian's
Period: 22/03/24 → …

Bibliographical note

Publisher Copyright:
© 2024 Association for Computational Linguistics.
