Abstract
Fine-tuning pre-trained language models on downstream tasks with varying random seeds has been shown to be unstable, especially on small datasets. Many previous studies have investigated this instability and proposed methods to mitigate it. However, most studies used only the standard deviation of performance scores (SD) as their measure, which is a narrow characterization of instability. In this paper, we analyze SD and six other measures that quantify instability at different levels of granularity. Moreover, we propose a systematic framework to evaluate the validity of these measures. Finally, we analyze the consistency of and differences between the measures by reassessing existing instability mitigation methods. We hope our results will inform the development of better measurements of fine-tuning instability.
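To make the SD measure referred to in the abstract concrete, here is a minimal sketch of how instability across random seeds is typically quantified; the example scores are hypothetical and not taken from the paper:

```python
import statistics

# Hypothetical validation accuracies obtained by fine-tuning the same
# pre-trained model on the same small dataset with ten random seeds.
scores = [0.82, 0.79, 0.85, 0.55, 0.81, 0.83, 0.58, 0.80, 0.84, 0.78]

# The SD measure: the sample standard deviation of performance scores
# across seeds. A larger value indicates less stable fine-tuning.
sd = statistics.stdev(scores)
print(f"SD across seeds: {sd:.3f}")
```

As the paper argues, a single aggregate number like this hides where the instability comes from, which motivates the finer-grained measures it analyzes.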
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
| Publisher | Association for Computational Linguistics |
| Pages | 6209-6230 |
| Number of pages | 22 |
| ISBN (Electronic) | 9781959429722 |
| DOIs | |
| Publication status | Published - Jul 2023 |
Publication series
| Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
|---|---|
| Volume | 1 |
| ISSN (Print) | 0736-587X |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
This work is part of the research programme Veni with project number VI.Veni.192.130, which is (partly) financed by the Dutch Research Council (NWO).
| Funders |
|---|
| Nederlandse Organisatie voor Wetenschappelijk Onderzoek |