Disentangling the Roles of Representation and Selection in Data Pruning

Research output: Working paper › Preprint › Academic

Abstract

Data pruning, the selection of small but impactful subsets of training data, offers a promising way to efficiently scale NLP model training. However, existing methods involve many different design choices, which have not been systematically studied; this limits future development. In this work, we decompose data pruning into two key components, the data representation and the selection algorithm, and systematically analyze their influence on which instances are selected. Our theoretical and empirical results highlight the crucial role of representations: better representations, e.g., training gradients, generally lead to a better selection of instances, regardless of the chosen selection algorithm. Furthermore, different selection algorithms excel in different settings, and none consistently outperforms the others. Moreover, the selection algorithms do not always align with their intended objectives: for example, algorithms designed for the same objective can select drastically different instances, highlighting the need for careful evaluation.
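To make the decomposition concrete, the sketch below separates the two components the abstract names: a representation function that maps instances to vectors, and a selection algorithm that picks a subset from those vectors. Everything here is illustrative, not the paper's method: the bag-of-words representation is a toy stand-in for the embeddings or training gradients the abstract mentions, and greedy farthest-point selection is just one example of a diversity-oriented selection algorithm.

```python
import numpy as np

def represent_bow(texts):
    """Toy representation: bag-of-words counts.
    A stand-in for richer representations such as embeddings or gradients."""
    vocab = sorted({w for t in texts for w in t.split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for row, text in enumerate(texts):
        for word in text.split():
            X[row, index[word]] += 1.0
    return X

def select_farthest_point(X, k):
    """Toy selection algorithm: greedy farthest-point (k-center style),
    which favors a diverse subset. Other algorithms would plug in here."""
    chosen = [0]  # seed with the first instance
    dists = np.linalg.norm(X - X[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))  # instance farthest from the chosen set
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

# Prune a tiny corpus: representation and selection are independent choices.
texts = ["a b", "a b", "c d", "c d e"]
X = represent_bow(texts)
subset = select_farthest_point(X, k=2)
```

Because the two components are decoupled, swapping `represent_bow` for a gradient-based representation, or `select_farthest_point` for a different selection algorithm, requires no change to the rest of the pipeline — which is exactly the kind of controlled comparison the abstract describes.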
Original language: English
Publisher: arXiv
Number of pages: 19
Publication status: Published - 4 Jul 2025
