Abstract
I argue that machine learning (ML) models used in science function as highly idealized toy models. Treating ML models as a type of highly idealized toy model lets us deploy standard representational and epistemic strategies from the toy-model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
Original language | English
---|---
Number of pages | 11
Journal | Philosophy of Science
DOIs |
Publication status | E-pub ahead of print - 20 Oct 2023