Sequential Experimental Design for X-Ray CT Using Deep Reinforcement Learning

Tianyuan Wang, Felix Lucka, Tristan van Leeuwen

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In X-ray Computed Tomography (CT), projections from many angles are acquired and used for 3D reconstruction. To make CT suitable for in-line quality control, reducing the number of angles while maintaining reconstruction quality is necessary. Sparse-angle tomography is a popular approach for obtaining 3D reconstructions from limited data. To optimize its performance, one can adapt scan angles sequentially to select the most informative angles for each scanned object. Mathematically, this corresponds to solving an optimal experimental design (OED) problem. OED problems are high-dimensional, non-convex, bi-level optimization problems that cannot be solved online, i.e., during the scan. To address these challenges, we pose the OED problem as a partially observable Markov decision process in a Bayesian framework, and solve it through deep reinforcement learning. The approach learns efficient non-greedy policies to solve a given class of OED problems through extensive offline training rather than solving a given OED problem directly via numerical optimization. As such, the trained policy can successfully find the most informative scan angles online. We use a policy training method based on the Actor-Critic approach and evaluate its performance on 2D tomography with synthetic data.
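The core idea of the abstract, selecting scan angles one at a time with a policy trained by an actor-critic method, can be illustrated with the minimal sketch below. This is a hypothetical toy, not the paper's implementation: the environment, its proxy reward for information gain, the network sizes, and the training hyperparameters are all illustrative assumptions, and PyTorch is used only for brevity.

```python
# Minimal actor-critic sketch (NOT the authors' implementation) of sequential
# scan-angle selection. The environment and its reward are toy stand-ins.

import numpy as np
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_ANGLES = 90   # discrete candidate angles (hypothetical)
BUDGET = 10     # angles acquired per scanned object (hypothetical)


class ToyAngleSelectionEnv:
    """Toy environment: state is the mask of acquired angles; reward is the
    drop of a synthetic, object-dependent 'reconstruction error'."""

    def reset(self):
        self.mask = np.zeros(N_ANGLES, dtype=np.float32)
        self.info = np.random.rand(N_ANGLES).astype(np.float32)  # per-object informativeness
        self.err = self.info.sum()
        return self.mask.copy()

    def step(self, angle):
        self.mask[angle] = 1.0
        new_err = (self.info * (1.0 - self.mask)).sum()
        reward = self.err - new_err          # proxy for information gained
        self.err = new_err
        done = self.mask.sum() >= BUDGET
        return self.mask.copy(), float(reward), done


class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(N_ANGLES, 64), nn.ReLU())
        self.pi = nn.Linear(64, N_ANGLES)    # actor: logits over the next angle
        self.v = nn.Linear(64, 1)            # critic: state-value estimate

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.v(h)


env, net = ToyAngleSelectionEnv(), ActorCritic()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for episode in range(200):
    state, done = env.reset(), False
    log_probs, values, rewards = [], [], []
    while not done:
        s = torch.from_numpy(state)
        logits, value = net(s)
        # Forbid re-acquiring angles by masking them out of the policy.
        logits = logits.masked_fill(s.bool(), float("-inf"))
        dist = Categorical(logits=logits)
        action = dist.sample()
        state, reward, done = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        values.append(value.squeeze())
        rewards.append(reward)

    # Discounted returns and an advantage-based actor-critic update.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + 0.99 * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    values = torch.stack(values)
    advantage = returns - values.detach()
    loss = (-(torch.stack(log_probs) * advantage).mean()
            + (returns - values).pow(2).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper, the state would instead encode the measurements or an intermediate reconstruction, and the reward would reflect actual reconstruction quality; the loop structure above only illustrates the sequential, non-greedy decision-making that the trained policy performs online.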
Original language: English
Pages (from-to): 953-968
Number of pages: 16
Journal: IEEE Transactions on Computational Imaging
Volume: 10
DOIs
Publication status: Published - 26 Jun 2024

Keywords

  • Adaptive angle selection
  • Reinforcement learning
  • X-ray Computed Tomography (CT)
  • optimal experimental design (OED)
