TY - JOUR
T1 - Sequential Experimental Design for X-Ray CT Using Deep Reinforcement Learning
AU - Wang, Tianyuan
AU - Lucka, Felix
AU - van Leeuwen, Tristan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/6/26
Y1 - 2024/6/26
AB - In X-ray Computed Tomography (CT), projections from many angles are acquired and used for 3D reconstruction. To make CT suitable for in-line quality control, reducing the number of angles while maintaining reconstruction quality is necessary. Sparse-angle tomography is a popular approach for obtaining 3D reconstructions from limited data. To optimize its performance, one can adapt scan angles sequentially to select the most informative angles for each scanned object. Mathematically, this corresponds to solving an optimal experimental design (OED) problem. OED problems are high-dimensional, non-convex, bi-level optimization problems that cannot be solved online, i.e., during the scan. To address these challenges, we pose the OED problem as a partially observable Markov decision process in a Bayesian framework, and solve it through deep reinforcement learning. The approach learns efficient non-greedy policies to solve a given class of OED problems through extensive offline training rather than solving a given OED problem directly via numerical optimization. As such, the trained policy can successfully find the most informative scan angles online. We use a policy training method based on the Actor-Critic approach and evaluate its performance on 2D tomography with synthetic data.
KW - Adaptive angle selection
KW - Reinforcement learning
KW - X-ray Computed Tomography (CT)
KW - Optimal experimental design (OED)
UR - http://www.scopus.com/inward/record.url?scp=85198003104&partnerID=8YFLogxK
U2 - 10.1109/TCI.2024.3414273
DO - 10.1109/TCI.2024.3414273
M3 - Article
SN - 2333-9403
VL - 10
SP - 953
EP - 968
JO - IEEE Transactions on Computational Imaging
JF - IEEE Transactions on Computational Imaging
ER -