Abstract
Deep-learning techniques can improve the efficiency of medical diagnosis while rivaling the accuracy of human experts. However, the rationale behind these classifiers' decisions is largely opaque, which is dangerous in sensitive applications such as healthcare. Case-based explanations clarify a model's decision process by presenting similar, previously diagnosed cases from other patients. Yet, such cases may contain personally identifiable information, making them impossible to share without violating patients' privacy rights. Previous works have used GANs to generate anonymous case-based explanations, but with limited visual quality. We solve this issue by employing a latent diffusion model in a three-step procedure: generating a catalogue of synthetic images, removing those that closely resemble existing patients, and using the resulting anonymous catalogue in an explanation retrieval process. We evaluate the proposed method on the MIMIC-CXR-JPG dataset and obtain explanations that simultaneously have high visual quality, are anonymous, and retain their explanatory value.
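The three-step procedure described above maps naturally onto a retrieval pipeline. The sketch below is a minimal illustration, not the authors' implementation: random vectors stand in for image embeddings from a pretrained encoder, and the names `EMBED_DIM`, `THRESHOLD`, and `cosine_sim` are hypothetical placeholders.

```python
import numpy as np

# Hypothetical embedding dimension; a real pipeline would use features
# from a pretrained encoder (e.g. a CXR classifier's penultimate layer).
EMBED_DIM = 128
rng = np.random.default_rng(0)

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Step 1 (stand-in): embeddings of a catalogue of images sampled from a
# latent diffusion model. Random vectors play the role of image features.
synthetic = rng.normal(size=(1000, EMBED_DIM))

# Embeddings of the real training patients the catalogue must not leak.
real_patients = rng.normal(size=(500, EMBED_DIM))

# Step 2: privacy filter -- drop synthetic images whose similarity to any
# real patient exceeds a re-identification threshold (assumed value).
THRESHOLD = 0.5
max_sim_to_real = cosine_sim(synthetic, real_patients).max(axis=1)
anonymous_catalogue = synthetic[max_sim_to_real < THRESHOLD]

# Step 3: explanation retrieval -- for a query image to be explained,
# return the most similar images from the anonymous catalogue.
query = rng.normal(size=(1, EMBED_DIM))
scores = cosine_sim(query, anonymous_catalogue)[0]
top_k = np.argsort(scores)[::-1][:5]
print("Indices of retrieved case-based explanations:", top_k)
```

In this reading, anonymity is enforced once, when the catalogue is built, so the retrieval step itself never touches real patient images; the similarity measure and threshold choice are assumptions for illustration.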
Original language | English
---|---
Number of pages | 10
Journal | CEUR Workshop Proceedings
Volume | 3831
Publication status | Published - 14 Nov 2024
Event | 1st Workshop on Explainable Artificial Intelligence for the Medical Domain, EXPLIMED 2024 - Santiago de Compostela, Spain. Duration: 20 Oct 2024 → …
Bibliographical note
Publisher Copyright: © 2024 Copyright for this paper by its authors.
Keywords
- case-based explainability
- latent-diffusion models
- medical imaging
- privacy-preserving machine learning