Abstract
Although algorithms are imbued with a sense of objectivity and reliability, numerous high-profile incidents have demonstrated their fallibility. In response, many have called for algorithmic governance that mitigates their potential harms. Further, these incidents have inspired studies that consider algorithms as part of wider sociotechnical systems. In this article, we build on such work and focus on how the specific forms of algorithms may facilitate or constrain the ways in which they become embedded within these systems. More specifically, we suggest that (a) algorithms should be understood as models, with (b) divergent forms, and (c) associated representational qualities. We showcase this approach in three critical case studies of algorithmic models used in government: the SAFFIER II model that underpins the Netherlands government’s spending, the Ofqual DCP A-Level grading algorithm that was used (and later abandoned) in lieu of actual secondary school exams in the United Kingdom, and the Risk Classification Model used by the Dutch Tax and Customs Administration to identify social benefit fraud. Through these three case studies, we show how the divergent forms of algorithms have implications for their responsiveness and, ultimately, their solidification in, or dissolution from, sociotechnical systems.
| Original language | English |
| --- | --- |
| Journal | Information Society |
| DOIs | |
| Publication status | E-pub ahead of print - 2024 |
Keywords
- algorithmic representation
- algorithms
- artificial intelligence
- governance
- machine learning
- materiality
- statistical models