Towards Case-based Interpretability for Medical Federated Learning

Laura Latorre*, Liliana Petrychenko, Regina Beets-Tan, Taisiya Kopytova, Wilson Silva

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

We explore deep generative models to generate case-based explanations in a medical federated learning setting. Explaining AI model decisions through case-based interpretability is paramount to increasing trust and allowing widespread adoption of AI in clinical practice. However, medical AI training paradigms are shifting towards federated learning settings in order to comply with data protection regulations. In a federated scenario, past data is inaccessible to the current user. Thus, we use a deep generative model to generate synthetic examples that protect privacy and explain decisions. Our proof-of-concept focuses on pleural effusion diagnosis and uses publicly available Chest X-ray data.
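The core idea in the abstract — decoding a privacy-preserving synthetic look-alike of a past case instead of exposing the case itself — can be illustrated with a minimal sketch. Everything below is hypothetical (the toy linear `decode`, the stored `case_latents`, the nearest-neighbour selection); the paper's actual generative model and retrieval scheme are not specified in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generative model's decoder: maps a
# 2-D latent code to a flattened 4x4 "image". (Hypothetical; the
# paper's actual deep generative model is not specified here.)
W = rng.normal(size=(16, 2))

def decode(z):
    return W @ z

# Latent codes of past training cases. In a federated setting the raw
# past images are inaccessible; here we assume only latent codes are
# available to the explanation module.
case_latents = rng.normal(size=(5, 2))

def case_based_explanation(query_latent):
    """Pick the stored latent closest to the query and decode a
    synthetic example from it, so no raw past case is ever exposed."""
    dists = np.linalg.norm(case_latents - query_latent, axis=1)
    nearest = case_latents[np.argmin(dists)]
    return decode(nearest)

query = rng.normal(size=2)
explanation = case_based_explanation(query)
print(explanation.shape)  # (16,)
```

The returned array stands in for a synthetic Chest X-ray shown to the clinician as a supporting case; in practice the decoder would be a deep generative model trained under the federated protocol.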

Original language: English
Title of host publication: 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024 - Proceedings
Publisher: IEEE
ISBN (Electronic): 9798350371499
DOIs
Publication status: Published - 17 Dec 2024
Event: 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024 - Orlando, United States
Duration: 15 Jul 2024 - 19 Jul 2024

Publication series

Name: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
ISSN (Print): 1557-170X

Conference

Conference: 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024
Country/Territory: United States
City: Orlando
Period: 15/07/24 - 19/07/24

Bibliographical note

Publisher Copyright:
© 2024 IEEE.
