Machine-annotated Rationales: Faithfully Explaining Text Classification

    Research output: Contribution to conference › Paper › Academic

    Abstract

    We propose an approach to faithfully explaining text classification models, using a specifically designed neural network to
    find explanations in the form of machine-annotated rationales
    during the prediction process. This results in faithful explanations that are similar to human-annotated rationales, while not
    requiring human explanation examples during training. The
    quality of the found explanations is measured on faithfulness,
    on quantitative similarity to human explanations, and through a
    user evaluation.
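    The abstract does not give the model details, but the general select-then-predict idea behind machine-annotated rationales can be sketched as follows: a generator scores each token, the highest-scoring tokens form the rationale, and the classifier sees only that rationale, which makes the explanation faithful by construction. All names, weights, and dimensions below are illustrative assumptions, not the authors' architecture.

    ```python
    import numpy as np

    def extract_rationale_and_classify(token_embs, w_gen, w_clf, k=2):
        """Select-then-predict sketch (toy, linear stand-ins for the
        neural components): the generator assigns one score per token,
        the top-k tokens become the machine-annotated rationale, and
        the classifier is applied to the rationale tokens only."""
        scores = token_embs @ w_gen                 # generator: one relevance score per token
        rationale_idx = np.argsort(scores)[-k:]     # hard-select the top-k tokens
        mask = np.zeros(len(token_embs))
        mask[rationale_idx] = 1.0                   # binary rationale mask over tokens
        pooled = (token_embs * mask[:, None]).mean(axis=0)  # classifier input: rationale only
        logits = pooled @ w_clf
        probs = np.exp(logits) / np.exp(logits).sum()       # softmax over classes
        return mask, probs

    rng = np.random.default_rng(0)
    embs = rng.normal(size=(5, 4))    # 5 tokens with 4-dim embeddings (toy data)
    w_gen = rng.normal(size=4)        # toy generator weights
    w_clf = rng.normal(size=(4, 2))   # toy binary classifier weights
    mask, probs = extract_rationale_and_classify(embs, w_gen, w_clf)
    ```

    Because the prediction is computed from the masked tokens alone, the returned mask is a faithful explanation of that prediction; no human-annotated rationales are needed to produce it.
    
    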
    Original language: English
    Number of pages: 8
    Publication status: Published - 2021
    Event: 35th AAAI Conference on Artificial Intelligence
    Duration: 8 Feb 2021 – 9 Feb 2021

    Conference

    Conference: 35th AAAI Conference on Artificial Intelligence
    Period: 8/02/21 – 9/02/21
