Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Abstract

    Style is an integral part of natural language. However, evaluation methods for style measures are rare, often task-specific and usually do not control for content. We propose the modular, fine-grained and content-controlled similarity-based STyle EvaLuation framework (STEL) to test the performance of any model that can compare two sentences on style. We illustrate STEL with two general dimensions of style (formal/informal and simple/complex) as well as two specific characteristics of style (contrac'tion and numb3r substitution). We find that BERT-based methods outperform simple versions of commonly used style measures like 3-grams, punctuation frequency and LIWC-based approaches. We invite the addition of further tasks and task instances to STEL and hope to facilitate the improvement of style-sensitive measures.
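To make the evaluation setup concrete: the abstract describes testing "any model that can compare two sentences on style". A minimal sketch of such a similarity-based comparison, using character 3-grams (one of the simple baselines the abstract names) as the style representation, might look as follows. The quadruple format, function names, and example sentences here are illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Represent a sentence by its character n-gram counts (n=3 by default),
    a simple style measure of the kind the abstract mentions."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def style_sim(s1, s2):
    """Cosine similarity between the character-3-gram profiles of two sentences."""
    a, b = char_ngrams(s1), char_ngrams(s2)
    dot = sum(a[g] * b[g] for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def stel_decision(anchor1, anchor2, alt1, alt2, sim=style_sim):
    """Hypothetical task instance: given two anchor sentences of contrasting
    style and two alternative sentences, decide whether pairing alt1 with
    anchor1 (and alt2 with anchor2) is more style-consistent than the
    swapped pairing."""
    same = sim(anchor1, alt1) + sim(anchor2, alt2)
    swapped = sim(anchor1, alt2) + sim(anchor2, alt1)
    return same > swapped
```

For instance, on a contraction-style instance, a 3-gram model can pair the contracted alternative with the contracted anchor: `stel_decision("I can't go.", "I cannot go.", "don't worry", "do not worry")` returns `True`. Framework-style evaluation then amounts to scoring such decisions over many content-controlled instances.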
    Original language: English
    Title of host publication: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
    Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
    Place of publication: Dominican Republic
    Publisher: Association for Computational Linguistics
    Pages: 7109-7130
    Number of pages: 22
    DOIs
    Publication status: Published - Nov 2021

