TY - JOUR
T1 - OK Computer: Worker Perceptions of Algorithmic Recruitment
T2 - Research Policy
AU - Fumagalli, Elena
AU - Rezaei Khavas, Sarah
AU - Salomons, Anna
N1 - Funding Information:
Helpful input from Dominik Prugger on an earlier version of this project is gratefully acknowledged. We are grateful to Maria Savona and two anonymous reviewers for helpful comments, and to Lucky Belder and Nina Bontje for helping to interpret legal provisions concerning algorithmic versus human evaluation. This work was supported by a Utrecht University Institutions seed money grant. Salomons thanks Instituut Gak, Netherlands for financial support.
Publisher Copyright:
© 2021
PY - 2022/3
Y1 - 2022/3
N2 - We provide evidence on how workers on an online platform perceive algorithmic versus human recruitment through two incentivized experiments designed to elicit willingness to pay for human or algorithmic evaluation. In particular, we test how information on workers’ performance affects their recruiter choice and whether the algorithmic recruiter is perceived as more or less gender-biased than the human one. We find that workers do perceive human and algorithmic evaluation differently, even though both recruiters are given the same inputs in our controlled setting. Specifically, human recruiters are perceived to be more error-prone evaluators and place more weight on personal characteristics, whereas algorithmic recruiters are seen as placing more weight on task performance. Consistent with these perceptions, workers with good task performance relative to others prefer algorithmic evaluation, whereas those with lower task performance prefer human evaluation. We also find suggestive evidence that perceived differences in gender bias drive preferences for human versus algorithmic recruitment.
AB - We provide evidence on how workers on an online platform perceive algorithmic versus human recruitment through two incentivized experiments designed to elicit willingness to pay for human or algorithmic evaluation. In particular, we test how information on workers’ performance affects their recruiter choice and whether the algorithmic recruiter is perceived as more or less gender-biased than the human one. We find that workers do perceive human and algorithmic evaluation differently, even though both recruiters are given the same inputs in our controlled setting. Specifically, human recruiters are perceived to be more error-prone evaluators and place more weight on personal characteristics, whereas algorithmic recruiters are seen as placing more weight on task performance. Consistent with these perceptions, workers with good task performance relative to others prefer algorithmic evaluation, whereas those with lower task performance prefer human evaluation. We also find suggestive evidence that perceived differences in gender bias drive preferences for human versus algorithmic recruitment.
KW - Algorithmic evaluation
KW - Technological change
KW - Online labor market
KW - Online experiment
UR - http://www.scopus.com/inward/record.url?scp=85119294726&partnerID=8YFLogxK
U2 - 10.1016/j.respol.2021.104420
DO - 10.1016/j.respol.2021.104420
M3 - Article
SN - 0048-7333
VL - 51
SP - 1
EP - 13
JO - Research Policy
JF - Research Policy
IS - 2
M1 - 104420
ER -