Responsible AI Innovation in the Public Sector: Lessons from and Recommendations for Facilitating Fundamental Rights and Algorithms Impact Assessments

Iris Muis*, Bart Kamphorst, Julia Straatman

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Since the initial development of the Fundamental Rights and Algorithms Impact Assessment (FRAIA) in 2021, there has been increasing interest from public sector organizations in gaining experience with performing a FRAIA in the context of developing, procuring, and deploying AI systems. In this contribution, we share observations from fifteen FRAIA trajectories performed in the field within the Dutch public sector. Based on our experiences facilitating these trajectories, we offer a set of recommendations directed at practitioners, with the aim of helping organizations make the best use of FRAIA and similar impact assessment instruments. We conclude by calling for the development of an informal FRAIA community in which practical guidance and advice can be shared to promote responsible AI innovation by ensuring that human decision-making around AI and other algorithms is well informed and well documented with respect to the protection of fundamental rights.
Original language: English
Article number: 100118
Journal: Journal of Responsible Technology
Volume: 22
Early online date: 3 Apr 2025
Publication status: E-pub ahead of print - 3 Apr 2025

Bibliographical note

Publisher Copyright:
© 2025 The Author(s)

Keywords

  • AI
  • FRAIA
  • FRIA
  • fundamental rights
  • impact assessments
  • responsible innovation
