Computed inverse magnetic resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
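The middle inverse step, recovering the susceptibility map from the field map, is a deconvolution with the unit magnetic dipole kernel, whose Fourier-domain form is D(k) = 1/3 − kz²/|k|² for a main field along z. The sketch below illustrates the Tikhonov-regularized and truncated-filter variants mentioned in the abstract (the split Bregman TV solver is more involved and not shown); it is a minimal numpy illustration, not the authors' implementation, and the regularization and threshold values are assumptions.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Fourier-domain unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 (main field along z)."""
    kx, ky, kz = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = np.inf              # avoid 0/0 at the DC component
    D = 1.0 / 3.0 - KZ**2 / k2
    D[0, 0, 0] = 0.0                  # DC value of the kernel is undefined; set to zero
    return D

def susceptibility_tikhonov(fieldmap_ppm, lam=1e-2):
    """Tikhonov-regularized inverse filtering: chi = F^-1[ D F(b) / (D^2 + lam) ]."""
    D = dipole_kernel(fieldmap_ppm.shape)
    B = np.fft.fftn(fieldmap_ppm)
    return np.real(np.fft.ifftn(D * B / (D**2 + lam)))

def susceptibility_truncated(fieldmap_ppm, threshold=0.1):
    """Truncated inverse filtering: invert the kernel only where |D| exceeds a threshold."""
    D = dipole_kernel(fieldmap_ppm.shape)
    Dinv = np.zeros_like(D)
    keep = np.abs(D) > threshold
    Dinv[keep] = 1.0 / D[keep]
    return np.real(np.fft.ifftn(Dinv * np.fft.fftn(fieldmap_ppm)))
```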
Watanabe, Shota; Sakaguchi, Kenta; Hosono, Makoto; Ishii, Kazunari; Murakami, Takamichi; Ichikawa, Katsuhiro
The purpose of this study was to evaluate the effect of a hybrid-type iterative reconstruction method on Z-score mapping of hyperacute stroke in unenhanced computed tomography (CT) images. We used a hybrid-type iterative reconstruction method [adaptive statistical iterative reconstruction (ASiR)] implemented in a CT system (Optima CT660 Pro advance, GE Healthcare). For 15 normal brain cases, we reconstructed CT images with filtered back projection (FBP) and with ASiR at a blending factor of 100% (ASiR100%). Two standardized normal brain databases were created from the FBP images (FBP-NDB) and the ASiR100% images (ASiR-NDB), and standard deviation (SD) values in the basal ganglia were measured. Z-score mapping was performed for 12 hyperacute stroke cases using FBP-NDB and ASiR-NDB, and Z-score values in the hyperacute stroke area and the normal area were compared between the two databases. With ASiR-NDB, the SD value of the standardized brain decreased by 16%. The Z-score value of ASiR-NDB in the hyperacute stroke area was significantly higher than that of FBP-NDB (p<0.05). Therefore, using images reconstructed with ASiR100% for Z-score mapping has the potential to improve the accuracy of Z-score mapping.
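Z-score mapping compares each voxel of a spatially normalized patient image against the mean and standard deviation of the normal database built from FBP or ASiR100% images. A minimal sketch of that voxel-wise computation, assuming all images are already registered to a common template; array names and the sign convention are illustrative, not taken from the paper.

```python
import numpy as np

def z_score_map(patient_img, normal_db):
    """Voxel-wise Z-score of a patient CT image against a normal database.

    patient_img : ndarray, spatially normalized patient image
    normal_db   : ndarray, stack of normalized normal-brain images, shape (N, ...)
    """
    mean_map = normal_db.mean(axis=0)
    sd_map = normal_db.std(axis=0, ddof=1)
    # Guard against zero SD; sign convention may be flipped depending on the application
    return (patient_img - mean_map) / np.maximum(sd_map, 1e-6)
```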
AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.
Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S
2017-09-01
Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.
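Regardless of the reconstruction pipeline, MRF parameter maps are ultimately obtained by matching each voxel's signal evolution against a precomputed dictionary, usually by maximizing the normalized inner product; this exhaustive search is what the accelerated dictionary search methods above speed up. A brief sketch of the baseline matching step, with illustrative variable names and shapes (not the AIR-MRF code):

```python
import numpy as np

def mrf_dictionary_match(fingerprints, dictionary, params):
    """Match measured fingerprints to dictionary atoms by normalized inner product.

    fingerprints : complex ndarray, shape (n_voxels, n_timepoints)
    dictionary   : complex ndarray, shape (n_atoms, n_timepoints)
    params       : ndarray, shape (n_atoms, n_params), e.g. columns (T1, T2)
    Returns the parameter set of the best-matching atom for every voxel.
    """
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    f_norm = fingerprints / np.linalg.norm(fingerprints, axis=1, keepdims=True)
    corr = np.abs(f_norm @ d_norm.conj().T)        # (n_voxels, n_atoms)
    best = np.argmax(corr, axis=1)
    return params[best]
```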
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) probability estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) probability estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
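Conceptually, the MBIR estimate minimizes a MAP cost of the form c(x) = ½‖y − Ax‖² + βR(x), where A is the image-formation (forward) model and R a prior. The sketch below shows a generic gradient-descent minimization with a quadratic smoothness prior standing in for the paper's prior model; the operators, step size, and prior are placeholders, not the authors' actual formulation.

```python
import numpy as np

def map_reconstruct(y, A, At, shape, beta=0.1, step=1e-3, n_iter=200):
    """Minimize 0.5*||y - A x||^2 + beta * 0.5*||grad x||^2 by gradient descent.

    A, At : callables for the forward projector and its adjoint (placeholders
            for the LTEM image-formation model and its transpose).
    """
    x = np.zeros(shape)
    for _ in range(n_iter):
        data_grad = At(A(x) - y)                    # gradient of the data-fit term
        # gradient of the quadratic smoothness prior equals minus the Laplacian of x
        lap = sum(np.roll(x, 1, ax) + np.roll(x, -1, ax) - 2.0 * x for ax in range(x.ndim))
        x = x - step * (data_grad - beta * lap)
    return x
```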
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...
2017-07-03
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) probability estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties
NASA Astrophysics Data System (ADS)
Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms of spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly two times compared with a weighted least squares and filtered backprojection reconstruction of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminish enthusiasm for exploring future application of the mixture penalty.
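As an illustration of the two penalty terms, the sketch below evaluates a quadratic spatial roughness penalty and a Gaussian-mixture penalty that pulls attenuation values toward the four material clusters; the mixture form, cluster means, and widths are assumptions standing in for the paper's cluster-based multinomial penalty.

```python
import numpy as np

# Illustrative Gaussian-mixture description of the four material clusters
# (means are placeholder attenuation coefficients in 1/cm, not values from the paper).
CLUSTER_MEANS = np.array([0.000, 0.030, 0.096, 0.130])    # air, lung, soft tissue, bone
CLUSTER_SIGMAS = np.array([0.005, 0.010, 0.010, 0.020])
CLUSTER_WEIGHTS = np.array([0.25, 0.25, 0.25, 0.25])

def mixture_penalty(mu):
    """Negative log of a Gaussian mixture evaluated at each attenuation value
    (normalization constants dropped; only relative penalty values matter)."""
    diff = mu[..., None] - CLUSTER_MEANS
    comp = CLUSTER_WEIGHTS * np.exp(-0.5 * (diff / CLUSTER_SIGMAS) ** 2) / CLUSTER_SIGMAS
    return -np.log(comp.sum(axis=-1) + 1e-12).sum()

def spatial_quadratic_penalty(mu):
    """Sum of squared differences between each voxel and its axis-aligned neighbors."""
    return sum(np.sum((mu - np.roll(mu, 1, axis=ax)) ** 2) for ax in range(mu.ndim))
```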
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, M; Haga, A; Hanaoka, S
2016-06-15
Purpose: The purpose of this study is to propose a new concept of four-dimensional (4D) cone-beam CT (CBCT) reconstruction for non-periodic organ motion using the Time-ordered Chain Graph Model (TCGM), and to compare the reconstructed results with the previously proposed methods, the total variation-based compressed sensing (TVCS) and prior-image constrained compressed sensing (PICCS). Methods: The CBCT reconstruction method introduced in this study consisted of maximum a posteriori (MAP) iterative reconstruction combined with a regularization term derived from the concept of TCGM, which includes a constraint coming from the images of neighbouring time-phases. The time-ordered image series were concurrently reconstructed in the MAP iterative reconstruction framework. The angular range of projections for each time-phase was 90 degrees for TCGM and PICCS, and 200 degrees for TVCS. Two kinds of projection data, elliptic-cylindrical digital phantom data and two clinical patients' data, were used for reconstruction. The digital phantom contained an air sphere moving 3 cm along the longitudinal axis, and the temporal resolution of each method was evaluated by measuring the penumbral width of the reconstructed moving air sphere. The clinical feasibility of non-periodic time-ordered 4D CBCT reconstruction was also examined using projection data of prostate cancer patients. Results: The reconstructed digital phantom results show that TCGM yielded the narrowest penumbral width; the penumbral widths of PICCS and TCGM were 10.6% and 17.4% narrower than that of TVCS, respectively. This suggests that TCGM has better temporal resolution than the other methods. Patients' CBCT projection data were also reconstructed, and all three reconstructed results showed motion of rectal gas and stool. The result of TCGM provided visually clearer and less blurred images. Conclusion: The present study demonstrates that the new concept for 4D CBCT reconstruction, TCGM, combined with the MAP iterative reconstruction framework, enables time-ordered image reconstruction with a narrower time window.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulations and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
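The core of the method is estimating a low-dimensional temporal subspace from the k-space locations that are fully sampled in time, and then alternating between projecting every k-space row onto that subspace (a single matrix multiplication) and re-inserting the acquired samples. A schematic numpy sketch of that loop, with illustrative array shapes and a fixed subspace rank chosen as an assumption:

```python
import numpy as np

def mrf_kspace_completion(kdata, mask, rank=5, n_iter=50):
    """Fill missing MRF k-space samples by iterative projection onto a temporal subspace.

    kdata : complex ndarray (n_kpoints, n_timepoints), zeros where not acquired
    mask  : bool ndarray of the same shape, True where samples were acquired
    """
    # Estimate the temporal subspace from k-space locations sampled at every timepoint
    full_rows = mask.all(axis=1)
    _, _, Vh = np.linalg.svd(kdata[full_rows], full_matrices=False)
    U = Vh[:rank].conj().T                   # (n_timepoints, rank) subspace basis
    P = U @ U.conj().T                       # projection matrix onto the subspace

    x = kdata.copy()
    for _ in range(n_iter):
        x = x @ P                            # project every k-space row onto the subspace
        x[mask] = kdata[mask]                # enforce consistency with acquired samples
    return x
```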
Multiscale Reconstruction for Magnetic Resonance Fingerprinting
Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.
2015-01-01
Purpose: To reduce acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods: An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using the highly-undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. Results: The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix for a total acquisition time of 10.2 s, representing a 3-fold reduction in acquisition time. Conclusions: The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
Multiscale reconstruction for MR fingerprinting.
Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A
2016-06-01
To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using the highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix for a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with the conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as conventional two-full-scan DECT.
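The SPIR regularizer penalizes the L2 difference between each pixel and a similarity-weighted average of its neighbors, with weights given by an exponential function of pixel-value differences computed on the full-scan image. A small sketch of the weight construction and penalty over a local neighborhood; the Gaussian width h and neighborhood radius are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def similarity_weights(reference, h=20.0, radius=2):
    """Weights w_ij = exp(-((r_i - r_j)/h)^2) from the full-scan reference image,
    taken over a (2*radius+1)^2 neighborhood and normalized to sum to one per pixel."""
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1) if (dy, dx) != (0, 0)]
    w = [np.exp(-((reference - np.roll(reference, off, axis=(0, 1))) / h) ** 2)
         for off in offsets]
    w = np.stack(w)                                   # (n_offsets, H, W)
    return offsets, w / w.sum(axis=0, keepdims=True)

def spir_penalty(image, offsets, weights):
    """L2 norm of the difference between each pixel and its similarity-weighted average."""
    estimate = sum(wk * np.roll(image, off, axis=(0, 1))
                   for off, wk in zip(offsets, weights))
    return np.sum((image - estimate) ** 2)

# Usage sketch: weights come from the full-scan image, the penalty is applied to the
# sparse-view image inside the iterative reconstruction loop.
# offsets, w = similarity_weights(full_scan_image)
# penalty = spir_penalty(sparse_view_image, offsets, w)
```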
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
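As one concrete member of the iterative family surveyed here, the algebraic reconstruction technique (ART, or Kaczmarz's method) cycles through the projection equations and corrects the volume along one row at a time. A schematic dense-matrix sketch; the matrix A stands in for the actual projection geometry and the relaxation factor is an illustrative choice.

```python
import numpy as np

def art(A, y, n_iter=10, relax=0.2):
    """Kaczmarz-style ART: sweep over projection rows and correct x along each row."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                # move x toward the hyperplane defined by the i-th projection equation
                x += relax * (y[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```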
Attenuation correction strategies for multi-energy photon emitters using SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pretorius, P.H.; King, M.A.; Pan, T.S.
1996-12-31
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937.9% for the 500 mA FBP, 25 mA SIR, and 25 mA FBP, respectively. In numerical simulations, SIR mitigated streak artifacts in the low dose data and yielded flow maps with mean error <7% and standard deviation <9% of mean, for 30×30 pixel ROIs (12.9 × 12.9 mm²). In comparison, low dose FBP flow errors were −38% to +258%, and standard deviation was 6%–93%. Additionally, low dose SIR achieved a 4.6 times improvement in flow map CNR² per unit input dose compared to low dose FBP. Conclusions: SIR reconstruction can reduce image noise and mitigate streaking artifacts caused by photon starvation in dynamic CT myocardial perfusion data sets acquired at low dose (low tube current), and improve perfusion map quality in comparison to FBP reconstruction at the same dose.
Quantitative neuroanatomy for connectomics in Drosophila
Schneider-Mizell, Casey M; Gerhard, Stephan; Longair, Mark; Kazimiers, Tom; Li, Feng; Zwart, Maarten F; Champion, Andrew; Midgley, Frank M; Fetter, Richard D; Saalfeld, Stephan; Cardona, Albert
2016-01-01
Neuronal circuit mapping using electron microscopy demands laborious proofreading or reconciliation of multiple independent reconstructions. Here, we describe new methods to apply quantitative arbor and network context to iteratively proofread and reconstruct circuits and create anatomically enriched wiring diagrams. We measured the morphological underpinnings of connectivity in new and existing reconstructions of Drosophila sensorimotor (larva) and visual (adult) systems. Synaptic inputs were preferentially located on numerous small, microtubule-free 'twigs' which branch off a single microtubule-containing 'backbone'. Omission of individual twigs accounted for 96% of errors. However, the synapses of highly connected neurons were distributed across multiple twigs. Thus, the robustness of a strong connection to detailed twig anatomy was associated with robustness to reconstruction error. By comparing iterative reconstruction to the consensus of multiple reconstructions, we show that our method overcomes the need for redundant effort through the discovery and application of relationships between cellular neuroanatomy and synaptic connectivity. DOI: http://dx.doi.org/10.7554/eLife.12059.001 PMID:26990779
Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng
2017-08-01
To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty in the quantitative pharmacokinetic analysis for clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4. They are (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model and blood flow FB and blood volume VB from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated using the undersampled data, error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original fully sampled data. Most derived error-in-volume-mean values within the region of interest were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. None of the investigated pharmacokinetic parameters differed significantly between the original data and the reduced-sampling data. With sparsely sampled k-space data simulating a four-fold acceleration of acquisition, the investigated total generalized variation-based iterative image reconstruction method can accurately estimate the dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters for reliable clinical application.
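For reference, voxel-wise fitting of Ktrans proceeds by matching the measured tissue concentration curve to the extended Tofts forward model C_t(t) = v_p C_p(t) + Ktrans ∫ C_p(τ) exp(−k_ep(t − τ)) dτ with k_ep = Ktrans/v_e. A minimal sketch of that forward model on a uniform time grid; the discrete convolution and variable names are simplifying assumptions, not the study's fitting code.

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration C_t(t) for the extended Tofts model.

    t      : uniformly spaced time points (min)
    cp     : arterial input function C_p(t) sampled on t
    ktrans : transfer constant (1/min); ve, vp : extravascular and plasma volume fractions
    """
    dt = t[1] - t[0]
    kep = ktrans / ve
    kernel = np.exp(-kep * t)                        # impulse response of the EES
    conv = np.convolve(cp, kernel)[: len(t)] * dt    # discrete convolution integral
    return vp * cp + ktrans * conv
```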
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over the time interval based on a single measurement, which is more likely to be practical.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
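Given the refined transmission map t(x) and the atmospheric light A, the defogged radiance follows from the standard haze model I = J·t + A(1 − t), i.e. J = (I − A)/t + A. A minimal sketch of that final reconstruction step; the clipping floor on t and the [0, 1] intensity range are illustrative safeguards, not details from the paper.

```python
import numpy as np

def defog(image, transmission, atmospheric_light, t_min=0.1):
    """Recover scene radiance J from the haze model I = J*t + A*(1 - t).

    image        : float ndarray (H, W, 3) assumed scaled to [0, 1]
    transmission : float ndarray (H, W)
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]    # avoid division by tiny t
    A = np.asarray(atmospheric_light).reshape(1, 1, 3)
    J = (image - A) / t + A
    return np.clip(J, 0.0, 1.0)
```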
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor
Park, Jinho; Park, Hasil
2017-01-01
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826
Attenuation correction strategies for multi-energy photon emitters using SPECT
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.
1997-06-01
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.
Zhang, Peijun; Meng, Xin; Zhao, Gongpu
2013-01-01
Helical structures are important in many different life forms and are well-suited for structural studies by cryo-EM. A unique feature of helical objects is that a single projection image contains all the views needed to perform a three-dimensional (3D) crystallographic reconstruction. Here, we use HIV-1 capsid assemblies to illustrate the detailed approaches to obtain 3D density maps from helical objects. Mature HIV-1 particles contain a conical- or tubular-shaped capsid that encloses the viral RNA genome and performs essential functions in the virus life cycle. The capsid is composed of capsid protein (CA) oligomers which are helically arranged on the surface. The N-terminal domain (NTD) of CA is connected to its C-terminal domain (CTD) through a flexible hinge. Structural analysis of two- and three-dimensional crystals provided molecular models of the capsid protein (CA) and its oligomer forms. We determined the 3D density map of helically assembled HIV-1 CA hexamers at 16 Å resolution using an iterative helical real-space reconstruction method. Docking of atomic models of CA-NTD and CA-CTD dimer into the electron density map indicated that the CTD dimer interface is retained in the assembled CA. Furthermore, molecular docking revealed an additional, novel CTD trimer interface. PMID:23132072
3D and 4D magnetic susceptibility tomography based on complex MR images
Chen, Zikuan; Calhoun, Vince D
2014-11-11
Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
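Each iteration therefore alternates a gradient step driven by the image model with a projection of the transform coefficients back into their quantization cells. The sketch below illustrates this for scalar-quantized coefficients, using a global DCT and a quadratic smoothness prior as stand-ins for the block DCT and the non-Gaussian MRF model of the paper; the cell-bound inputs, step size, and iteration count are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def decompress_map(coeff_lo, coeff_hi, n_iter=30, step=0.1):
    """Constrained MAP decompression sketch (global DCT instead of block DCT).

    coeff_lo, coeff_hi : per-coefficient bounds of the quantization cell implied
                         by the decoded bitstream (hypothetical inputs).
    """
    coeffs = 0.5 * (coeff_lo + coeff_hi)                 # start at the cell centroid
    x = idctn(coeffs, norm="ortho")
    for _ in range(n_iter):
        # gradient step on a quadratic smoothness prior (stand-in for the MRF model)
        lap = sum(np.roll(x, 1, ax) + np.roll(x, -1, ax) - 2.0 * x for ax in range(2))
        x = x + step * lap
        # project back onto the quantization cell in the transform domain
        coeffs = np.clip(dctn(x, norm="ortho"), coeff_lo, coeff_hi)
        x = idctn(coeffs, norm="ortho")
    return x
```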
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maughan, N; Conti, M; Parikh, P
2015-06-15
Purpose: Imaging Y-90 microspheres with PET/MRI following hepatic radioembolization has the potential for predicting treatment outcome and, in turn, improving patient care. The positron decay branching ratio, however, is very small (32 ppm), yielding images with poor statistics even when therapy doses are used. Our purpose is to find PET reconstruction parameters that maximize the PET recovery coefficients and minimize noise. Methods: An initial 7.5 GBq of Y-90 chloride solution was used to fill an ACR phantom for measurements with a PET/MRI scanner (Siemens Biograph mMR). Four hot cylinders and a warm background activity volume of the phantom were filled with a 10:1 ratio. Phantom attenuation maps were derived from scaled CT images of the phantom and included the MR phased array coil. The phantom was imaged at six time points between 7.5 and 1.0 GBq total activity over a period of eight days. PET images were reconstructed via OP-OSEM with 21 subsets and varying iteration number (1–5), post-reconstruction filter size (5–10 mm), and either absolute or relative scatter correction. Recovery coefficients, SNR, and noise were measured as well as total activity in the phantom. Results: For the 120 different reconstructions, recovery coefficients ranged from 0.1–0.6 and improved with increasing iteration number and reduced post-reconstruction filter size. SNR, however, improved substantially with lower iteration numbers and larger post-reconstruction filters. From the phantom data, we found that performing 2 iterations, 21 subsets, and applying a 5 mm Gaussian post-reconstruction filter provided optimal recovery coefficients at a moderate noise level for a wide range of activity levels. Conclusion: The choice of reconstruction parameters for Y-90 PET images greatly influences both the accuracy of measurements and image quality. We have found reconstruction parameters that provide optimal recovery coefficients with minimized noise. Future work will include the effects of the body matrix coil and off-center measurements.
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.
Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Mascolo-Fortin, Julia, E-mail: julia.mascolo-fortin.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca
Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of spatial features and reduce cone-beam optical CT artifacts.
Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.
Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2015-11-01
The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of spatial features and reduce cone-beam optical CT artifacts.
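A schematic of the subset-based loop with interleaved TV minimization that OSC-TV-type algorithms follow. The data-fidelity update below is a simplified additive (SART-like) correction rather than the exact ordered-subsets convex update, and `forward`/`backproject` are assumed user-supplied projector callables; the TV step is a few gradient-descent iterations on the TV norm.

```python
import numpy as np

def tv_descent(img, n_steps=10, step=0.1, eps=1e-8):
    """A few gradient-descent steps on the (smoothed) total variation of a 2D image."""
    x = img.copy()
    for _ in range(n_steps):
        gx = np.diff(x, axis=0, append=x[-1:, :])     # forward differences, Neumann boundary
        gy = np.diff(x, axis=1, append=x[:, -1:])
        norm = np.sqrt(gx**2 + gy**2 + eps)
        # divergence of the normalized gradient = (negative) TV subgradient
        div = (np.diff(gx / norm, axis=0, prepend=0) +
               np.diff(gy / norm, axis=1, prepend=0))
        x += step * div
    return x

def osc_tv_like(recon, subsets, forward, backproject, n_main=10):
    """Alternate subset data updates with a TV regularization step (schematic only)."""
    for _ in range(n_main):
        for sub in subsets:                            # ordered subsets of projections
            residual = sub["data"] - forward(recon, sub["angles"])
            recon += backproject(residual, sub["angles"]) / max(len(sub["angles"]), 1)
            recon = np.clip(recon, 0, None)            # attenuation is non-negative
        recon = tv_descent(recon)                      # TV minimization step
    return recon
```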
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
NASA Astrophysics Data System (ADS)
Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu
2018-04-01
A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally and to serve as a guide for the integration path. The presented method can be employed for wavefront estimation from its slopes over generally shaped surfaces, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is assessed by simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be implemented easily with no iteration, in contrast to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
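A hedged sketch of quality-guided integration of slope data into a height map, in the spirit of the quality-map-guided path described above: pixels are integrated in descending order of a quality map, each from an already-integrated neighbor via a trapezoidal step. The quality metric itself and the multi-neighbor refinements of the paper are not reproduced; here `gy` denotes row-direction (axis 0) slopes and `gx` column-direction (axis 1) slopes.

```python
import heapq
import numpy as np

def quality_guided_integrate(gx, gy, quality):
    """Integrate slopes (gx, gy) into heights z along a quality-ordered path."""
    h, w = quality.shape
    z = np.zeros((h, w))
    done = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), quality.shape)
    done[seed] = True
    heap = []

    def push_neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    push_neighbors(*seed)
    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if done[i, j]:
            continue
        # trapezoidal step from the already-solved parent pixel (pi, pj)
        if i != pi:   # vertical step uses the row-direction slope gy
            z[i, j] = z[pi, pj] + 0.5 * (gy[pi, pj] + gy[i, j]) * (i - pi)
        else:         # horizontal step uses the column-direction slope gx
            z[i, j] = z[pi, pj] + 0.5 * (gx[pi, pj] + gx[i, j]) * (j - pj)
        done[i, j] = True
        push_neighbors(i, j)
    return z
```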
Quantitative proton imaging from multiple physics processes: a proof of concept
NASA Astrophysics Data System (ADS)
Bopp, C.; Rescigno, R.; Rousseau, M.; Brasse, D.
2015-07-01
Proton imaging is developed in order to improve the accuracy of charged particle therapy treatment planning. It makes it possible to directly map the relative stopping powers of the materials using the information on the energy loss of the protons. In order to reach a satisfactory spatial resolution in the reconstructed images, the position and direction of each particle is recorded upstream and downstream from the patient. As a consequence of individual proton detection, information on the transmission rate and scattering of the protons is available. Image reconstruction processes are proposed to make use of this information. A proton tomographic acquisition of an anthropomorphic head phantom was simulated. The transmission rate of the particles was used to reconstruct a map of the macroscopic cross section for nuclear interactions of the materials. A two-step iterative reconstruction process was implemented to reconstruct a map of the inverse scattering length of the materials using the scattering of the protons. Results indicate that, while the reconstruction processes should be optimized, it is possible to extract quantitative information from the transmission rate and scattering of the protons. This suggests that proton imaging could provide additional knowledge on the materials that may be of use to further improve treatment planning.
Finite element method framework for RF-based through-the-wall mapping
NASA Astrophysics Data System (ADS)
Campos, Rafael Saraiva; Lovisolo, Lisandro; de Campos, Marcello Luiz R.
2017-05-01
Radiofrequency (RF) Through-the-Wall Mapping (TWM) employs techniques originally applied in X-ray computerized tomographic imaging to map obstacles behind walls. It aims to provide valuable information for rescue efforts in damaged buildings, as well as for military operations in urban scenarios. This work defines a Finite Element Method (FEM) based framework that allows fast and accurate simulation of the reconstruction of floor blueprints, using Ultra High-Frequency (UHF) signals at three different frequencies (500 MHz, 1 GHz and 2 GHz). To the best of our knowledge, this is the first use of FEM in a TWM scenario. The framework allows quick evaluation of different algorithms without the need to assemble a full test setup, which might not be available due to budgetary and time constraints. Using it, the present work evaluates a collection of reconstruction methods (Filtered Backprojection Reconstruction, Direct Fourier Reconstruction, Algebraic Reconstruction and Simultaneous Iterative Reconstruction) under a parallel-beam acquisition geometry for different spatial sampling rates, numbers of projections, antenna gains and operational frequencies. The use of multiple frequencies assesses the trade-off between higher resolution at shorter wavelengths and lower through-the-wall penetration. Considering all the drawbacks associated with such a complex problem, a robust and reliable computational setup based on a flexible method such as FEM can be very useful.
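For reference, minimal ART (Kaczmarz) and SIRT iterations for a linear model p = A x, two of the algebraic methods compared in the framework; `A` as a dense (rays × pixels) system matrix is an illustrative assumption, and the FEM forward simulation and parallel-beam geometry details are outside this toy sketch.

```python
import numpy as np

def art(A, p, n_sweeps=5, relax=0.5):
    """Algebraic Reconstruction Technique: sweep ray by ray (Kaczmarz updates)."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A**2, axis=1) + 1e-12
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (p[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

def sirt(A, p, n_iter=50, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique: one update over all rays."""
    x = np.zeros(A.shape[1])
    R = 1.0 / (A.sum(axis=1) + 1e-12)      # inverse row sums
    C = 1.0 / (A.sum(axis=0) + 1e-12)      # inverse column sums
    for _ in range(n_iter):
        x += relax * C * (A.T @ (R * (p - A @ x)))
    return x
```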
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). Adaptive statistical iterative reconstruction-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
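An illustration of the contrast-to-noise ratio figure of merit used in comparisons like the one above; the exact ROI placement and noise definition of the study may differ, and the slice indices in the usage comment are hypothetical.

```python
import numpy as np

def cnr(image, roi_organ, roi_background):
    """CNR = |mean(organ) - mean(background)| / std(background)."""
    organ = image[roi_organ]
    bkg = image[roi_background]
    return abs(organ.mean() - bkg.mean()) / bkg.std()

# Example usage with hypothetical regions of interest on a CT slice:
# liver_cnr = cnr(ct_slice, (slice(200, 230), slice(150, 180)),
#                           (slice(300, 330), slice(150, 180)))
```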
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Zhang, Songhe; Song, Shaoze; Wang, Bingyuan; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
Quantitative photoacoustic tomography (q-PAT) is a nontrivial technique that can be used to reconstruct the absorption image with high spatial resolution. Several attempts have investigated setting point sources or fixed-angle illuminations. However, in practical applications, these schemes normally suffer from low signal-to-noise ratio (SNR) or poor quantification, especially for large-size domains, due to the limitation of ANSI-safe incidence and incompleteness in the data acquisition. We herein present a q-PAT implementation that uses multi-angle light-sheet illuminations and a calibrated iterative multi-angle reconstruction. The approach acquires more complete information on the intrinsic absorption and SNR-boosted photoacoustic signals at selected planes from the multi-angle wide-field light-sheet excitations. Therefore, the sliced absorption maps over the whole body can be recovered in a measurement-flexible, noise-robust and computation-economic way. The proposed approach is validated by a phantom experiment, exhibiting promising performance in image fidelity and quantitative accuracy.
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
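A sketch of the regularized least-squares formulation typically solved in such iterative reconstructions, min_x ||Fx − d||² + λ||x||², with F the NUFFT of the PROPELLER blade trajectory. `F`/`FH` are assumed to be provided by a NUFFT implementation (they are abstract callables here), and plain conjugate gradient on the normal equations stands in for the paper's solver.

```python
import numpy as np

def cg_normal_equations(F, FH, d, shape, lam=0.01, n_iter=20):
    """Solve (F^H F + lam I) x = F^H d by conjugate gradient; F/FH are NUFFT callables."""
    x = np.zeros(shape, dtype=complex)
    b = FH(d)                                  # adjoint NUFFT of the measured k-space data
    r = b - (FH(F(x)) + lam * x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = FH(F(p)) + lam * p
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```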
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms, for the environmental application of tomography, requires the use of a battery of test concentration data before field implementation, which models reality and tests the limits of the algorithms.
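A minimal MLEM iteration for the path-integral model p = A c (beam-path measurements of a nonnegative concentration map), one of the four algorithms compared above; the ART/MART variants differ only in the update rule. `A` as a dense (paths × pixels) matrix of ray lengths is an illustrative assumption.

```python
import numpy as np

def mlem(A, p, n_iter=50):
    """Maximum-likelihood expectation-maximization for nonnegative p = A c."""
    c = np.ones(A.shape[1])                    # flat nonnegative initial map
    sens = A.sum(axis=0) + 1e-12               # sensitivity image (column sums)
    for _ in range(n_iter):
        forward = A @ c + 1e-12                # predicted path measurements
        c *= (A.T @ (p / forward)) / sens      # multiplicative MLEM update
    return c
```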
Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan
2013-11-01
Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans of different reconstruction algorithms can be compared. The little difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edges of the nodules and the loss of resolution at high noise levels with iterative reconstruction. The findings do not rule out potential advantages of iterative reconstruction that might be evident in a study that uses a larger number of nodules or repeated scans.
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.
van der Werf, N R; Willemink, M J; Willems, T P; Greuter, M J W; Leiner, T
2017-12-28
The objective of this study was to evaluate the influence of iterative reconstruction on coronary calcium scores (CCS) at different heart rates for four state-of-the-art CT systems. Within an anthropomorphic chest phantom, artificial coronary arteries were translated in a water-filled compartment. The arteries contained three different calcifications with low (38 mg), medium (80 mg) and high (157 mg) mass. Linear velocities were applied, corresponding to heart rates of 0, < 60, 60-75 and > 75 bpm. Data were acquired on four state-of-the-art CT systems (CT1-CT4) with routinely used CCS protocols. Filtered back projection (FBP) and three increasing levels of iterative reconstruction (L1-L3) were used for reconstruction. CCS were quantified as Agatston score and mass score. An iterative reconstruction susceptibility (IRS) index was used to assess the susceptibility of the Agatston score (IRS_AS) and mass score (IRS_MS) to iterative reconstruction. IRS values were compared between CT systems and between calcification masses. For each heart rate, differences in CCS of iteratively reconstructed images were evaluated with CCS of FBP images as reference, and indicated as small (< 5%), medium (5-10%) or large (> 10%). Statistical analysis was performed with repeated measures ANOVA tests. While subtle differences were found for Agatston scores of the low mass calcification, medium and high mass calcifications showed increased CCS up to 77% with increasing heart rates. The IRS_AS values of CT1-CT4 were 17, 41, 130 and 22% higher than IRS_MS. IRS values differed significantly not only between all CT systems, but also between calcification masses. Up to a fourfold increase in IRS was found for the low mass calcification in comparison with the high mass calcification. With increasing iterative reconstruction strength, maximum decreases of 21 and 13% for Agatston and mass score were found. In total, 21 large differences between Agatston scores from FBP and iterative reconstruction were found, while only five large differences were found between FBP and iterative reconstruction mass scores. Iterative reconstruction results in reduced CCS. The effect of iterative reconstruction on CCS is more prominent with low-density calcifications, high heart rates and increasing iterative reconstruction strength.
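A simplified sketch of Agatston scoring as used in coronary calcium quantification: lesions are thresholded at 130 HU, each lesion's area is weighted by its peak HU, and per-slice scores are summed. The minimum-lesion-area rule and vendor-specific slice weighting of clinical software are omitted, and the function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def agatston_slice(hu_slice, pixel_area_mm2, slice_weight=1.0):
    """Agatston contribution of one axial slice (simplified)."""
    mask = hu_slice >= 130                    # calcification threshold in HU
    labels, n = ndimage.label(mask)           # connected lesions
    score = 0.0
    for k in range(1, n + 1):
        lesion = labels == k
        peak = hu_slice[lesion].max()
        # density weight: 1 (130-199 HU), 2 (200-299), 3 (300-399), 4 (>= 400)
        w = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += lesion.sum() * pixel_area_mm2 * w * slice_weight
    return score

# total_score = sum(agatston_slice(s, pixel_area_mm2=0.25) for s in ct_volume)
```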
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction algorithm that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained with a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
Compartmentalized Low-Rank Recovery for High-Resolution Lipid Unsuppressed MRSI
Bhattacharya, Ipshita; Jacob, Mathews
2017-01-01
Purpose To introduce a novel algorithm for the recovery of high-resolution magnetic resonance spectroscopic imaging (MRSI) data with minimal lipid leakage artifacts, from dual-density spiral acquisition. Methods The reconstruction of MRSI data from dual-density spiral data is formulated as a compartmental low-rank recovery problem. The MRSI dataset is modeled as the sum of metabolite and lipid signals, each of which is support limited to the brain and extracranial regions, respectively, in addition to being orthogonal to each other. The reconstruction problem is formulated as an optimization problem, which is solved using iterative reweighted nuclear norm minimization. Results The comparisons of the scheme against dual-resolution reconstruction algorithm on numerical phantom and in vivo datasets demonstrate the ability of the scheme to provide higher spatial resolution and lower lipid leakage artifacts. The experiments demonstrate the ability of the scheme to recover the metabolite maps, from lipid unsuppressed datasets with echo time (TE)=55 ms. Conclusion The proposed reconstruction method and data acquisition strategy provide an efficient way to achieve high-resolution metabolite maps without lipid suppression. This algorithm would be beneficial for fast metabolic mapping and extension to multislice acquisitions. PMID:27851875
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
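A toy illustration of the factored-system-matrix idea: the accurate system matrix is approximated as a product of three sparse factors (sinogram blur, geometric projector, image blur), so forward and back projection reduce to cheap sparse products. The factors below are random or identity stand-ins, not matrices fitted to a real scanner response.

```python
import numpy as np
import scipy.sparse as sp

n_pix, n_bins = 128 * 128, 180 * 128
G = sp.random(n_bins, n_pix, density=1e-4, format="csr")   # geometric line-integral projector
B_sino = sp.eye(n_bins, format="csr")                       # sinogram blur (identity stand-in)
B_img = sp.eye(n_pix, format="csr")                         # image blur (identity stand-in)

def forward(x):
    """Forward projection via the factored system matrix B_sino @ G @ B_img."""
    return B_sino @ (G @ (B_img @ x))

def backproject(y):
    """Adjoint of the factored forward projector."""
    return B_img.T @ (G.T @ (B_sino.T @ y))

x = np.random.rand(n_pix)
y = forward(x)        # plug these operators into any iterative update (e.g. MLEM, CG)
```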
NASA Astrophysics Data System (ADS)
Wang, Tonghe; Zhu, Lei
2016-09-01
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on a second CT image since DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered image by the similarity matrix under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves over TVR in reconstruction accuracy for a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50- and 20-view scans in spatial resolution, with the spatial frequency at a modulation transfer function value of 10% higher by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise standard deviation with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla
2014-01-01
Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over full tilt angle range with biological samples in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as the loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid fitting based method was developed to realize the quantitative measures elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates the sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with widely-used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.
Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens
2017-12-01
The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
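A sketch of estimating the maximal eigenvalue of a self-adjoint, positive-definite operator by power iterations, the generic tool invoked above for scaling the unknowns; `M` is an abstract callable here (a small symmetric matrix in the usage example), not the specific scaling operator of the paper.

```python
import numpy as np

def power_iteration(M, shape, n_iter=30, seed=0):
    """Estimate the largest eigenvalue (and eigenvector) of a self-adjoint operator M."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = M(v)
        lam = np.vdot(v, w).real            # Rayleigh quotient estimate
        v = w / (np.linalg.norm(w) + 1e-30)
    return lam, v

# Example with a simple symmetric matrix as the operator:
A = np.diag([3.0, 1.0, 0.5])
lam_max, _ = power_iteration(lambda x: A @ x, (3,))   # converges to 3.0
```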
Quantitative Susceptibility Mapping of Human Brain Reflects Spatial Variation in Tissue Composition
Li, Wei; Wu, Bing; Liu, Chunlei
2011-01-01
Image phase from gradient echo MRI provides a unique contrast that reflects brain tissue composition variations, such as iron and myelin distribution. Phase imaging is emerging as a powerful tool for the investigation of functional brain anatomy and disease diagnosis. However, the quantitative value of phase is compromised by its nonlocal and orientation dependent properties. There is an increasing need for reliable quantification of magnetic susceptibility, the intrinsic property of tissue. In this study, we developed a novel and accurate susceptibility mapping method that is also phase-wrap insensitive. The proposed susceptibility mapping method utilized two complementary equations: (1) the Fourier relationship of phase and magnetic susceptibility; and (2) the first-order partial derivative of the first equation in the spatial frequency domain. In numerical simulation, this method reconstructed the susceptibility map almost free of streaking artifact. Further, the iterative implementation of this method allowed for high quality reconstruction of susceptibility maps of human brain in vivo. The reconstructed susceptibility map provided excellent contrast of iron-rich deep nuclei and white matter bundles from surrounding tissues. Further, it also revealed anisotropic magnetic susceptibility in brain white matter. Hence, the proposed susceptibility mapping method may provide a powerful tool for the study of brain physiology and pathophysiology. Further elucidation of anisotropic magnetic susceptibility in vivo may allow us to gain more insight into the white matter microarchitectures. PMID:21224002
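The Fourier relationship referred to above is the standard k-space dipole relation between the field perturbation and susceptibility, ΔB(k) = (1/3 − k_z²/|k|²) χ(k). The sketch below builds that kernel and applies the forward model; the kernel's spatial-frequency derivative and the iterative inversion of the paper are not reproduced, and the toy susceptibility block is purely illustrative.

```python
import numpy as np

def dipole_kernel(shape, b0_axis=2):
    """Unit dipole kernel in k-space: D(k) = 1/3 - k_z^2 / |k|^2, with B0 along b0_axis."""
    ks = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = sum(k**2 for k in ks)
    k2[(0,) * len(shape)] = 1.0                # avoid division by zero at the DC term
    D = 1.0 / 3.0 - ks[b0_axis]**2 / k2
    D[(0,) * len(shape)] = 0.0                 # DC component is undefined; set to zero
    return D

chi = np.zeros((64, 64, 64))
chi[28:36, 28:36, 28:36] = 1e-6                # toy susceptibility distribution
D = dipole_kernel(chi.shape)
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))   # forward dipole model of the field map
```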
Low dose dynamic myocardial CT perfusion using advanced iterative reconstruction
NASA Astrophysics Data System (ADS)
Eck, Brendan L.; Fahmi, Rachid; Fuqua, Christopher; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2015-03-01
Dynamic myocardial CT perfusion (CTP) can provide quantitative functional information for the assessment of coronary artery disease. However, x-ray dose in dynamic CTP is high, typically from 10mSv to >20mSv. We compared the dose reduction potential of advanced iterative reconstruction, Iterative Model Reconstruction (IMR, Philips Healthcare, Cleveland, Ohio) to hybrid iterative reconstruction (iDose4) and filtered back projection (FBP). Dynamic CTP scans were obtained using a porcine model with balloon-induced ischemia in the left anterior descending coronary artery to prescribed fractional flow reserve values. High dose dynamic CTP scans were acquired at 100kVp/100mAs with effective dose of 23mSv. Low dose scans at 75mAs, 50mAs, and 25mAs were simulated by adding x-ray quantum noise and detector electronic noise to the projection space data. Images were reconstructed with FBP, iDose4, and IMR at each dose level. Image quality in static CTP images was assessed by SNR and CNR. Blood flow was obtained using a dynamic CTP analysis pipeline and blood flow image quality was assessed using flow-SNR and flow-CNR. IMR showed highest static image quality according to SNR and CNR. Blood flow in FBP was increasingly over-estimated at reduced dose. Flow was more consistent for iDose4 from 100mAs to 50mAs, but was over-estimated at 25mAs. IMR was most consistent from 100mAs to 25mAs. Static images and flow maps for 100mAs FBP, 50mAs iDose4, and 25mAs IMR showed comparable, clear ischemia, CNR, and flow-CNR values. These results suggest that IMR can enable dynamic CTP at significantly reduced dose, at 5.8mSv or 25% of the comparable 23mSv FBP protocol.
Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo
The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique adaptive statistical iterative reconstruction-V (ASIR-V). Fifty consecutive oncologic patients acted as their own controls, undergoing during follow-up computed tomography scans with both ASIR and ASIR-V. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower (P < 0.0001) for the ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower (P < 0.0001) for ASIR-V. Adaptive statistical iterative reconstruction-V had a higher performance for the subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), the other parameters (image sharpness, diagnostic acceptability, and overall image quality) being similar (P > 0.05). Adaptive statistical iterative reconstruction-V is a new iterative reconstruction technique that has the potential to provide image quality equal to or greater than ASIR, with a dose reduction around 40%.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
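A sketch of the noniterative MAP idea for a linear-Gaussian model: the MAP estimate is a fixed linear map of the data, x = (AᵀΛA + R)⁻¹AᵀΛy, so the reconstruction matrix can be precomputed offline and applied online by a single matrix-vector product. The matrix source coding and sparse-matrix-transform compression that make storing this matrix practical are not reproduced here; the variable names are illustrative.

```python
import numpy as np

def precompute_map_matrix(A, noise_precision, reg_precision):
    """Offline step: H = (A^T L A + R)^(-1) A^T L for a linear-Gaussian MAP model."""
    L = np.diag(noise_precision)        # data noise precision (Lambda), one value per measurement
    R = reg_precision                   # prior precision (regularization) matrix
    H = np.linalg.solve(A.T @ L @ A + R, A.T @ L)
    return H                            # expensive, but computed once and stored

# Online reconstruction is then a single matrix-vector product:
# x_map = H @ y
```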
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.
Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods.
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. Proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
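A hedged sketch of the variable-splitting iteration behind fast ℓ1-regularized dipole inversion: because the dipole kernel and the finite-difference operators are diagonal in k-space, the susceptibility update is a closed-form pointwise division after FFTs and the auxiliary update is soft thresholding. The dipole kernel `D`, the penalty parameters, and the anisotropic (channel-wise) ℓ1 penalty are assumptions; the magnitude weighting and automatic parameter selection of the paper are omitted.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_kernels(shape):
    """k-space transfer functions of forward differences along each axis."""
    Es = []
    for ax, n in enumerate(shape):
        k = np.fft.fftfreq(n)
        shp = [1] * len(shape)
        shp[ax] = n
        Es.append((1 - np.exp(-2j * np.pi * k)).reshape(shp) * np.ones(shape))
    return Es

def l1_dipole_inversion(phase, D, lam=1e-3, mu=1e-2, n_iter=30):
    """ADMM/split-Bregman-style l1-regularized inversion of the dipole model (schematic)."""
    Es = grad_kernels(phase.shape)
    E2 = sum(np.abs(E)**2 for E in Es)
    Fphi = np.fft.fftn(phase)
    chi = np.zeros(phase.shape)
    ys = [np.zeros(phase.shape) for _ in Es]
    zs = [np.zeros(phase.shape) for _ in Es]
    for _ in range(n_iter):
        # closed-form susceptibility update: pointwise division in k-space
        rhs = np.conj(D) * Fphi
        for E, y, z in zip(Es, ys, zs):
            rhs += mu * np.conj(E) * np.fft.fftn(y - z)
        Fchi = rhs / (np.abs(D)**2 + mu * E2 + 1e-12)
        chi = np.real(np.fft.ifftn(Fchi))
        # auxiliary and dual updates: soft thresholding of the gradients
        for j, E in enumerate(Es):
            Gchi = np.real(np.fft.ifftn(E * Fchi))
            ys[j] = soft(Gchi + zs[j], lam / mu)
            zs[j] += Gchi - ys[j]
    return chi
```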
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes: two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingold, E; Dave, J
2014-06-01
Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast-to-noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiable nature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient dataset. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image for the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
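A minimal sketch of the initialization idea described above: threshold-segment a first-pass analytical reconstruction into intensity bands and replace each band by its mean to obtain a piecewise-constant starting image for the TV-regularized solver. The threshold values are placeholders, not the authors' settings.

import numpy as np

def piecewise_constant_template(first_pass, thresholds):
    # Build a piecewise-constant initial image: voxels within each intensity
    # band (tissue class) are replaced by the mean value of that band.
    template = np.zeros_like(first_pass, dtype=float)
    edges = [-np.inf] + sorted(thresholds) + [np.inf]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (first_pass >= lo) & (first_pass < hi)
        if mask.any():
            template[mask] = first_pass[mask].mean()
    return template

# Usage (hypothetical HU thresholds): x0 = piecewise_constant_template(fbp_image, [-300.0, 100.0]);
# x0 is then passed as the starting image of the TV-regularized iterative solver.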
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
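The following toy sketch illustrates the coordinate-descent idea with a non-homogeneous visiting order driven by the size of each pixel's most recent update. It solves a plain non-negative least-squares problem rather than the penalized weighted least-squares cost used in CT, and the greedy selection rule is an illustrative stand-in for the authors' criterion.

import numpy as np

def nh_icd(A, y, n_sweeps=10, greedy_frac=0.5):
    # Non-homogeneous coordinate descent for min_x ||y - A x||^2 with x >= 0.
    n = A.shape[1]
    x = np.zeros(n)
    r = y.astype(float).copy()          # residual y - A x (x starts at zero)
    col_norm2 = (A ** 2).sum(axis=0)
    last_change = np.full(n, np.inf)    # drives the non-homogeneous ordering
    for sweep in range(n_sweeps):
        if sweep == 0:
            order = np.arange(n)        # first sweep visits every coordinate
        else:
            # later sweeps focus on coordinates that changed the most recently
            k = max(1, int(greedy_frac * n))
            order = np.argsort(-last_change)[:k]
        for j in order:
            if col_norm2[j] == 0:
                continue
            step = (A[:, j] @ r) / col_norm2[j]
            new_xj = max(0.0, x[j] + step)
            delta = new_xj - x[j]
            r -= delta * A[:, j]        # keep the residual consistent
            last_change[j] = abs(delta)
            x[j] = new_xj
    return x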
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
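For context, the MLEM update that such a stopping rule would wrap is sketched below; the stopping test shown (relative gain in the Poisson log-likelihood) is a simple stand-in for the heuristic statistical criterion proposed in the paper, not the authors' rule.

import numpy as np

def mlem_with_stop(A, y, n_max=200, tol=1e-4):
    # MLEM for emission tomography with a surrogate stopping rule: stop when
    # the relative gain in the Poisson log-likelihood falls below tol.
    sens = np.maximum(A.sum(axis=0), 1e-12)   # sensitivity image, sum_i a_ij
    x = np.ones(A.shape[1])
    ll_prev = -np.inf
    for k in range(n_max):
        proj = np.maximum(A @ x, 1e-12)
        ll = np.sum(y * np.log(proj) - proj)  # Poisson log-likelihood (up to a constant)
        if ll - ll_prev < tol * abs(ll_prev):
            break
        ll_prev = ll
        x *= (A.T @ (y / proj)) / sens        # multiplicative MLEM update
    return x, k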
Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W
2016-05-21
The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low-resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. The expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of reconstructing individual MLC leaf positions with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high-accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low-resolution 2D measurements.
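The optimizer step described above can be pictured as a bounded least-squares fit of the leaf-position vector. The sketch below assumes a user-supplied detector_model callable (fluence model plus detector model) and placeholder bounds; it is a generic illustration, not the authors' optimizer.

import numpy as np
from scipy.optimize import least_squares

def reconstruct_leaf_positions(measured, detector_model, x0, bound_mm=10.0):
    # Fit MLC leaf positions by minimizing the difference between measured
    # detector readings and the response predicted by the forward model.
    # detector_model: hypothetical callable mapping a leaf-position vector (mm)
    # to the expected detector readings.
    result = least_squares(lambda p: detector_model(p) - measured,
                           x0, bounds=(-bound_mm, bound_mm))
    return result.x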
TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in a dose increase as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the mean CT reconstruction errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density maps with similar quality to two full scans. Our future work includes more phantom studies to validate the performance of our method.
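As a rough illustration of how edge information from the full scan might be folded into the few-view reconstruction, the sketch below builds an edge-derived weight map that down-weights the regularizer at edges; the functional form and the parameter delta are assumptions for illustration, not the authors' exact weighting.

import numpy as np

def edge_weights(full_scan_image, delta=50.0):
    # Weights for the regularization term of the few-view reconstruction:
    # voxels lying on edges of the full-scan image receive smaller weights,
    # so their structures are penalized less and therefore preserved.
    gx, gy = np.gradient(full_scan_image)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return 1.0 / (1.0 + grad_mag / delta)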
Iterative CT reconstruction using coordinate descent with ordered subsets of data
NASA Astrophysics Data System (ADS)
Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.
2016-04-01
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
Real-time photo-magnetic imaging.
Nouizi, Farouk; Erkol, Hakan; Luk, Alex; Unlu, Mehmet B; Gulsen, Gultekin
2016-10-01
We previously introduced a new high-resolution diffuse optical imaging modality termed photo-magnetic imaging (PMI). PMI irradiates the object under investigation with near-infrared light and monitors the variations of temperature using magnetic resonance thermometry (MRT). In this paper, we present a real-time PMI image reconstruction algorithm that uses analytic methods to solve the forward problem and assemble the Jacobian matrix much faster. The new algorithm is validated using real MRT-measured temperature maps. In fact, it accelerates the reconstruction process by more than 250 times compared to a single iteration of the FEM-based algorithm, which opens the possibility of real-time PMI.
Spectral CT Reconstruction with Image Sparsity and Spectral Mean
Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu
2017-01-01
Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal-to-noise ratio of the resultant raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT that simultaneously reconstructs x-ray attenuation coefficients in all the energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, the intra-image sparsity and inter-image similarity are important prior knowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of reconstructed images. For this purpose, a linear mapping function is used to minimize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithm outperforms competing iterative algorithms in this context. PMID:29034267
NASA Astrophysics Data System (ADS)
Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan
2012-03-01
Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.
Evaluation of the effects of patient arm attenuation in SPECT cardiac perfusion imaging
NASA Astrophysics Data System (ADS)
Luo, Dershan; King, M. A.; Pan, Tin-Su; Xia, Weishi
1996-12-01
It was hypothesized that the use of attenuation correction could compensate for degradation in the uniformity of apparent localization of imaging agents seen in cardiac walls when patients are imaged with arms at their sides. Noise-free simulations of the digital MCAT phantom were employed to investigate this hypothesis. Four variations in camera size and collimation scheme were investigated. We observed that: 1) without attenuation correction, the arms had little additional influence on the uniformity of the heart for 180° reconstructions and caused a small increase in nonuniformity for 360° reconstructions, where the impact of both arms was included; 2) change in patient size had more of an impact on count uniformity than the presence of the arms, either with or without attenuation correction; 3) for a low number of iterations and large patient size, slightly better uniformity was obtained from parallel emission data than from fan-beam emission data, independent of whether parallel or fan-beam transmission data were used to reconstruct the attenuation maps; and 4) for all camera configurations, uniformity was improved with attenuation correction and, given a sufficient number of iterations, it was comparable among different imaging geometry combinations. Thus, iterative algorithms can compensate for the additional attenuation imposed by larger patients or by having the arms at the sides. When the arms are at the sides of the patient, however, a larger radius of rotation may be required, resulting in decreased spatial resolution.
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S
2015-01-01
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher for MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is a frequently required examination, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
Wallace, Nathan D; Ceguerra, Anna V; Breen, Andrew J; Ringer, Simon P
2018-06-01
Atom probe tomography is a powerful microscopy technique capable of reconstructing the 3D position and chemical identity of millions of atoms within engineering materials, at the atomic level. Crystallographic information contained within the data is particularly valuable for the purposes of reconstruction calibration and grain boundary analysis. Typically, analysing this data is a manual, time-consuming and error prone process. In many cases, the crystallographic signal is so weak that it is difficult to detect at all. In this study, a new automated signal processing methodology is demonstrated. We use the affine properties of the detector coordinate space, or the 'detector stack', as the basis for our calculations. The methodological framework and the visualisation tools are shown to be superior to the standard method of crystallographic pole visualisation directly from field evaporation images and there is no requirement for iterations between a full real-space initial tomographic reconstruction and the detector stack. The mapping approaches are demonstrated for aluminium, tungsten, magnesium and molybdenum. Implications for reconstruction calibration, accuracy of crystallographic measurements, reliability and repeatability are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One
2016-03-01
CT of pediatric phantoms can provide useful guidance for the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses and reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year-old anthropomorphic phantom at seven levels of radiation. We then reconstructed the CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using line pairs and a wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise ratio, contrast-to-noise ratio and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained with IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed image quality comparable to filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by the reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed image quality similar to routine-dose filtered back-projection obtained at 3.64 mSv (100%) and to half-dose iDose(4) obtained at 1.81 mSv.
Spectral Reconstruction Based on Svm for Cross Calibration
NASA Astrophysics Data System (ADS)
Gao, H.; Ma, Y.; Liu, W.; He, H.
2017-05-01
Chinese HY-1C/1D satellites will use a visible-near-infrared (VNIR) hyperspectral sensor with 5 nm/10 nm spectral resolution and the solar calibrator to cross-calibrate with other sensors. The hyperspectral radiance data are composed of the average radiance in the sensor's passbands and therefore exhibit a spectral smoothing effect, so a transform from the hyperspectral radiance data to 1-nm-resolution apparent spectral radiance needs to be implemented by spectral reconstruction. To avoid the noise accumulation and deterioration that occur after several iterations of the iterative algorithm, a novel regression method based on support vector machines (SVM) is proposed; it can closely approximate arbitrarily complex non-linear relationships and provides better generalization capability through learning. From a system point of view, the relationship between the apparent radiance and the equivalent (band-averaged) radiance is a non-linear mapping introduced by the spectral response function (SRF); the SVM transforms this low-dimensional non-linear problem into a high-dimensional linear one through a kernel function and obtains the globally optimal solution by virtue of its quadratic form. The experiment is performed using 6S-simulated spectra that account for the SRF and SNR of the hyperspectral sensor, measured reflectance spectra of water bodies, and different atmospheric conditions. The comparison shows that: first, the proposed method achieves higher reconstruction accuracy, especially for the high-frequency signal content; second, as the spectral resolution of the hyperspectral sensor decreases, the proposed method performs better than the iterative method; finally, the root mean square relative error (RMSRE), which evaluates the difference between the reconstructed spectrum and the real spectrum over the whole spectral range, is reduced by at least a factor of two with the proposed method.
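A minimal sketch of the regression step, using scikit-learn's SVR wrapped for multi-output prediction; the band counts, spectral grid, kernel settings and random data below are placeholders, and in practice the training pairs would come from 6S simulations passed through the sensor SRF rather than random numbers.

import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

# Toy placeholder data: 200 training spectra, 30 sensor bands -> 400 1-nm samples.
rng = np.random.default_rng(0)
X_train = rng.random((200, 30))    # band-averaged (equivalent) radiances
Y_train = rng.random((200, 400))   # corresponding 1-nm-resolution spectra

# One RBF-kernel SVR per output wavelength, wrapped for multi-output regression.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, Y_train)
spectrum_1nm = model.predict(X_train[:1])   # reconstruct a 1-nm-resolution spectrum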
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
Purpose: This work develops a general framework, the filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) into iterative reconstruction (IR) for enhanced CT image quality. Methods: FIR is formulated as a combination of a filtered data fidelity term and sparsity regularization, and is solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a certain AR into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with the AR being FDK and total-variation sparsity regularization, and improves image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. This work was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
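A schematic of the two-step PFBS iteration described above, with the analytic reconstructor (e.g. FDK), projector and denoiser passed in as callables; the update form and step handling are a simplified reading of the abstract, not the author's implementation.

def fir_pfbs(project, analytic_recon, denoise, y, x0, step=1.0, n_iter=20):
    # Two-step filtered iterative reconstruction (FIR) sketch via PFBS:
    # forward step applies the analytic reconstructor (e.g. FDK) to the
    # projection residual, backward step is a sparsity-promoting denoiser
    # (e.g. TV denoising).
    # project: callable, image -> projections
    # analytic_recon: callable, projections -> image
    # denoise: callable, image -> image
    x = x0.copy()
    for _ in range(n_iter):
        residual_image = analytic_recon(project(x) - y)   # AR-projection step
        x = denoise(x - step * residual_image)            # proximal / denoising step
    return x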
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization of PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study of the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between the measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for the maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
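For reference, the FWHM measurement itself can be done on a 1-D profile through the reconstructed point source as sketched below, using linear interpolation of the half-maximum crossings; profile extraction, background subtraction and the handling of profiles that never fall below half-maximum are left to the caller.

import numpy as np

def fwhm(profile, spacing=1.0):
    # Full width at half maximum of a 1-D point-source profile, assuming the
    # profile drops below half-maximum on both sides of the peak.
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    peak = int(np.argmax(profile))
    left = peak
    while left > 0 and profile[left] > half:
        left -= 1
    right = peak
    while right < len(profile) - 1 and profile[right] > half:
        right += 1
    # Linear interpolation between the bracketing samples on each side.
    xl = left + (half - profile[left]) / (profile[left + 1] - profile[left])
    xr = right - 1 + (half - profile[right - 1]) / (profile[right] - profile[right - 1])
    return (xr - xl) * spacing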
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction methods are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
Region of interest processing for iterative reconstruction in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.
2015-03-01
Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. Yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in a reconstruction slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements to the equalization between regions inside and outside of a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
Joseph, Arun A; Kalentev, Oleksandr; Merboldt, Klaus-Dietmar; Voit, Dirk; Roeloffs, Volkert B; van Zalk, Maaike; Frahm, Jens
2016-01-01
Objective: To develop a novel method for rapid myocardial T1 mapping at high spatial resolution. Methods: The proposed strategy represents a single-shot inversion recovery experiment triggered to early diastole during a brief breath-hold. The measurement combines an adiabatic inversion pulse with a real-time readout by highly undersampled radial FLASH, iterative image reconstruction and T1 fitting with automatic deletion of systolic frames. The method was implemented on a 3-T MRI system using a graphics processing unit-equipped bypass computer for online application. Validations employed a T1 reference phantom including analyses at simulated heart rates from 40 to 100 beats per minute. In vivo applications involved myocardial T1 mapping in short-axis views of healthy young volunteers. Results: At 1-mm in-plane resolution and 6-mm section thickness, the inversion recovery measurement could be shortened to 3 s without compromising T1 quantitation. Phantom studies demonstrated T1 accuracy and high precision for values ranging from 300 to 1500 ms and up to a heart rate of 100 beats per minute. Similar results were obtained in vivo yielding septal T1 values of 1246 ± 24 ms (base), 1256 ± 33 ms (mid-ventricular) and 1288 ± 30 ms (apex), respectively (mean ± standard deviation, n = 6). Conclusion: Diastolic myocardial T1 mapping with use of single-shot inversion recovery FLASH offers high spatial resolution, T1 accuracy and precision, and practical robustness and speed. Advances in knowledge: The proposed method will be beneficial for clinical applications relying on native and post-contrast T1 quantitation. PMID:27759423
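A minimal sketch of the pixel-wise T1 fitting step of such an inversion-recovery experiment, using a three-parameter model and the common Look-Locker-style correction T1 = T1*(B/A - 1); the model, initial guesses and correction are generic assumptions for illustration, not necessarily the exact fit used by the authors.

import numpy as np
from scipy.optimize import curve_fit

def ir_model(t, a, b, t1_star):
    # Three-parameter inversion-recovery model S(t) = A - B * exp(-t / T1*),
    # assuming phase-sensitive (signed) signal values.
    return a - b * np.exp(-t / t1_star)

def fit_t1(times_ms, signal):
    # Fit one pixel's recovery curve and apply the Look-Locker-style
    # correction T1 = T1* (B/A - 1) to the apparent relaxation time.
    p0 = (signal.max(), 2.0 * signal.max(), 1000.0)   # rough initial guesses
    (a, b, t1_star), _ = curve_fit(ir_model, times_ms, signal, p0=p0, maxfev=5000)
    return t1_star * (b / a - 1.0)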
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow the inclusion of prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time-consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-02-01
The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in electron tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edges of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
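For reference, a generic SIRT update of the kind whose iteration number is being selected is sketched below; real packages such as TOMOJ and TOMO3D use their own system matrices, relaxation factors and implementations.

import numpy as np

def sirt(A, y, n_iter=50):
    # Basic SIRT: x <- x + C A^T R (y - A x), with R and C built from the
    # inverse row and column sums of a non-negative system matrix A.
    row_sums = np.maximum(A.sum(axis=1), 1e-12)
    col_sums = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += (A.T @ ((y - A @ x) / row_sums)) / col_sums
    return x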
Gay, F; Pavia, Y; Pierrat, N; Lasalle, S; Neuenschwander, S; Brisse, H J
2014-01-01
To assess the benefit and limits of iterative reconstruction of paediatric chest and abdominal computed tomography (CT). The study compared adaptive statistical iterative reconstruction (ASIR) with filtered back projection (FBP) on 64-channel MDCT. A phantom study was first performed using variable tube potential, tube current and ASIR settings. The assessed image quality indices were the signal-to-noise ratio (SNR), the noise power spectrum, low contrast detectability (LCD) and spatial resolution. A clinical retrospective study of 26 children (M:F = 14/12, mean age: 4 years, range: 1-9 years) was secondarily performed allowing comparison of 18 chest and 14 abdominal CT pairs, one with a routine CT dose and FBP reconstruction, and the other with 30 % lower dose and 40 % ASIR reconstruction. Two radiologists independently compared the images for overall image quality, noise, sharpness and artefacts, and measured image noise. The phantom study demonstrated a significant increase in SNR without impairment of the LCD or spatial resolution, except for tube current values below 30-50 mA. On clinical images, no significant difference was observed between FBP and reduced dose ASIR images. Iterative reconstruction allows at least 30 % dose reduction in paediatric chest and abdominal CT, without impairment of image quality. • Iterative reconstruction helps lower radiation exposure levels in children undergoing CT. • Adaptive statistical iterative reconstruction (ASIR) significantly increases SNR without impairing spatial resolution. • For abdomen and chest CT, ASIR allows at least a 30 % dose reduction.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
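The sketch below illustrates an ISTA-style loop for undersampled Cartesian k-space with a generalized p-shrinkage step applied in the image domain; the Chartrand-style shrinkage rule, the image-domain sparsity assumption and the unit step size are illustrative choices, not the exact algorithm of the paper.

import numpy as np

def p_shrink(z, lam, p):
    # Generalized (Chartrand-style) p-shrinkage; p = 1 gives soft thresholding.
    mag = np.abs(z) + 1e-12
    scale = np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0.0)
    return scale * np.exp(1j * np.angle(z))

def ista_p_thresholding(y, mask, lam=0.01, p=0.8, n_iter=100):
    # ISTA-style loop for undersampled Cartesian MRI: gradient step on the
    # k-space data fidelity, then p-shrinkage in the image domain.
    # y: measured k-space (zeros where not sampled), mask: 0/1 sampling mask.
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        grad = np.fft.ifft2(mask * (mask * np.fft.fft2(x) - y))
        x = p_shrink(x - grad, lam, p)
    return x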
Grosser, Oliver S.; Kupitz, Dennis; Ruf, Juri; Czuczwara, Damian; Steffen, Ingo G.; Furth, Christian; Thormann, Markus; Loewenthal, David; Ricke, Jens; Amthauer, Holger
2015-01-01
Background Hybrid imaging combines nuclear medicine imaging such as single photon emission computed tomography (SPECT) or positron emission tomography (PET) with computed tomography (CT). Through this hybrid design, scanned patients accumulate radiation exposure from both applications. Imaging modalities have been the subject of long-term optimization efforts, focusing on diagnostic applications. It was the aim of this study to investigate the influence of an iterative CT image reconstruction algorithm (ASIR) on the image quality of the low-dose CT images. Methodology/Principal Findings Examinations were performed with a SPECT-CT scanner with standardized CT and SPECT phantom geometries and CT protocols with systematically reduced X-ray tube currents. Analyses included image quality with respect to photon flux. Results were compared to the standard FBP reconstructed images. The general impact of the CT-based attenuation maps used during SPECT reconstruction was examined for two SPECT phantoms. Using ASIR for image reconstruction, image noise was reduced compared to FBP reconstructions at the same X-ray tube current. The Hounsfield unit (HU) values reconstructed by ASIR were correlated to the FBP HU values (R2 ≥ 0.88) and the contrast-to-noise ratio (CNR) was improved by ASIR. However, for a phantom with increased attenuation, the HU values shifted for low X-ray tube currents I ≤ 60 mA (p ≤ 0.04). In addition, a shift of the HU values was observed within the attenuation-corrected SPECT images for very low X-ray tube currents (I ≤ 20 mA, p ≤ 0.001). Conclusion/Significance In general, the decrease in X-ray tube current down to 30 mA in combination with ASIR led to a reduction of CT-related radiation exposure without a significant decrease in image quality. PMID:26390216
NASA Astrophysics Data System (ADS)
David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias
2012-03-01
Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system - an inverse geometry fluoroscopy system with a 9,000-focal-spot X-ray source and a small photon-counting detector. 90 fluoroscopic projections, or "superviews", spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
Imaging Electric Properties of Biological Tissues by RF Field Mapping in MRI
Zhang, Xiaotong; Zhu, Shanan; He, Bin
2010-01-01
The electric properties (EPs) of biological tissue, i.e., the electric conductivity and permittivity, can provide important information in the diagnosis of various diseases. The EPs also play an important role in specific absorption rate (SAR) calculation, a major concern in high-field Magnetic Resonance Imaging (MRI), as well as in non-medical areas such as wireless telecommunications. The high-field MRI system is accompanied by significant wave propagation effects, and the radio frequency (RF) radiation is dependent on the EPs of biological tissue. Based on the measurement of the active transverse magnetic component of the applied RF field (known as the B1-mapping technique), we propose a dual-excitation algorithm, which uses two sets of measured B1 data to noninvasively reconstruct the electric properties of biological tissues. The Finite Element Method (FEM) was utilized in three-dimensional (3D) modeling and B1 field calculation. A series of computer simulations were conducted to evaluate the feasibility and performance of the proposed method on a 3D head model within a transverse electromagnetic (TEM) coil and a birdcage (BC) coil. Using a TEM coil, in the noise-free case, the reconstructed EP distribution of tissues in the brain has relative errors of 12% ∼ 28% and correlation coefficients greater than 0.91. Compared with other B1-mapping based reconstruction algorithms, our approach provides superior performance without the need for iterative computations. The present simulation results suggest that good reconstruction of electric properties from B1 mapping can be achieved. PMID:20129847
Rioux, James A; Beyea, Steven D; Bowen, Chris V
2017-02-01
Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.
Tomography by iterative convolution - Empirical study and application to interferometry
NASA Technical Reports Server (NTRS)
Vest, C. M.; Prikryl, I.
1984-01-01
An algorithm for computed tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion is presented for terminating the iteration once the most accurate estimate has been computed. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It has also been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
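A minimal sketch of the kind of object-domain/projection-domain alternation described above, written with scikit-image's radon/iradon as the convolution (FBP) reconstruction step. The masks, array shapes, and fixed iteration count are assumptions; the paper's own empirical termination criterion is not reproduced here.

```python
import numpy as np
from skimage.transform import radon, iradon

def iterative_convolution(sino_meas, known_mask, theta, opaque_mask, n_iters=20):
    """Alternate between the Radon domain (keep measured, unblocked rays) and the
    object domain (zero the opaque region), reconstructing by FBP each iteration.

    sino_meas   : measured sinogram, unreliable where rays were blocked
    known_mask  : boolean sinogram mask, True where rays were not blocked
    theta       : projection angles in degrees
    opaque_mask : boolean image mask covering the opaque object
    """
    recon = iradon(np.where(known_mask, sino_meas, 0.0), theta=theta)
    for _ in range(n_iters):                          # fixed count; the method need not converge
        sino_est = radon(recon, theta=theta)          # projection-domain estimate
        sino_est[known_mask] = sino_meas[known_mask]  # keep the measured data
        recon = iradon(sino_est, theta=theta)         # convolution (FBP) reconstruction
        recon[opaque_mask] = 0.0                      # object-domain constraint
    return recon
```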
Sparse magnetic resonance imaging reconstruction using the Bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction requires many samples that are acquired sequentially using phase-encoding gradients, and the number of samples is directly connected to the scan time of the MRI system. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which enables reconstruction of sparse images from fewer samples when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we investigated sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. Images were obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square-error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
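For orientation, here is a hedged Python/NumPy sketch of a Bregman iteration for TV-regularized reconstruction of undersampled Cartesian k-space, in which the unrecovered residual is "added back" to the data at each outer pass. The smoothed TV gradient, step sizes, and simple inner gradient solver are illustrative choices, not the reconstruction used in the study.

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term (forward differences)."""
    dx = np.roll(u, -1, axis=0) - u
    dy = np.roll(u, -1, axis=1) - u
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    return -(px - np.roll(px, 1, axis=0)) - (py - np.roll(py, 1, axis=1))

def bregman_cs_mri(f, mask, mu=1.0, outer=10, inner=30, step=0.2):
    """Toy Bregman iteration for min TV(u) s.t. mask*F(u) = f (undersampled k-space).

    f    : measured k-space, zero-filled where not sampled
    mask : boolean Cartesian sampling mask
    """
    u = np.zeros(f.shape)                             # real-valued image estimate
    fk = f.copy()                                     # Bregman-updated data
    for _ in range(outer):
        for _ in range(inner):                        # approximate inner TV-regularized solve
            resid = mask * np.fft.fft2(u, norm="ortho") - fk
            grad = tv_grad(u) + mu * np.real(np.fft.ifft2(resid, norm="ortho"))
            u = u - step * grad
        fk = fk + (f - mask * np.fft.fft2(u, norm="ortho"))   # add back the residual
    return u
```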
Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier
2018-06-14
To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) with those of standard-pitch scans reconstructed with filtered back projection (FBP) using dual-source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany) were performed: 39 thoracic scans, 54 thoracoabdominal scans and 21 abdominal scans. Three protocols were analysed: pitch of 1 reconstructed with FBP, pitch of 3.2 reconstructed with SAFIRE, and pitch of 3.2 with Stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed. Dose differences between the protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, with a reduction of the volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of Stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality: the contrast-to-noise ratio, signal-to-noise ratio and qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with Stellar detectors. Advances in knowledge: High-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.
NASA Astrophysics Data System (ADS)
Huang, Rong; Limburg, Karin; Rohtla, Mehis
2017-05-01
X-ray fluorescence computed tomography is often used to measure trace element distributions within low-Z samples, using algorithms capable of X-ray absorption correction when sample self-absorption is not negligible. Its reconstruction is more complicated than that of transmission tomography, and the technique is therefore not widely used. We describe in this paper a very practical iterative method that uses widely available transmission tomography reconstruction software for fluorescence tomography. With this method, sample self-absorption can be corrected not only for the absorption within the measured layer but also for the absorption by material beyond that layer. By combining tomography with analysis for scanning X-ray fluorescence microscopy, absolute concentrations of trace elements can be obtained. By using widely shared software, we not only minimized coding and took advantage of the computational efficiency of the fast Fourier transform in transmission tomography software, but also gained access to the well-developed data processing tools that come with well-known and reliable software packages. The convergence of the iterations was also carefully studied for fluorescence of different attenuation lengths. As an example, fish eye lenses could provide valuable information about fish life history and the environmental conditions they have endured. Given the lens's spherical shape and the sometimes short sample-to-detector distance needed to detect low-concentration trace elements, its tomography data are affected by absorption from material beyond the measured layer but can be reconstructed well with our method. Fish eye lens tomography results are compared with 2D fluorescence mapping of sliced lenses, with good agreement and with tomography providing better spatial resolution.
Iterative methods for tomography problems: implementation to a cross-well tomography problem
NASA Astrophysics Data System (ADS)
Karadeniz, M. F.; Weber, G. W.
2018-01-01
The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, iterative methods, namely Kaczmarz's algorithm, the algebraic reconstruction technique (ART), and the simultaneous iterative reconstruction technique (SIRT), are applied to a specific cross-well tomography problem. The convergence of these methods to the solution and their CPU times are compared for the cross-well tomography problem. Furthermore, the three methods are compared for different tolerance values.
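For reference, minimal NumPy versions of two of the row-action schemes named above; Kaczmarz and ART share the same row-wise projection, while SIRT applies a simultaneous, normalized update. The system matrix, relaxation factor, and iteration counts are placeholders.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Kaczmarz / ART: sequentially project the estimate onto each hyperplane a_i.x = b_i."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A**2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

def sirt(A, b, n_iters=200):
    """SIRT: simultaneous update of all cells, normalized by row and column sums (A >= 0)."""
    row_sum = np.maximum(A.sum(axis=1), 1e-12)
    col_sum = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x += (A.T @ ((b - A @ x) / row_sum)) / col_sum
    return x
```

In a cross-well setting such as the one above, A would hold the path lengths of each source-receiver ray through the slowness grid between the boreholes and b the measured travel times; that geometric setup is an assumption for illustration.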
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter playing the role of the time step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system with not only the Euler method but also lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
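The discrete update that the continuous system reduces to is not reproduced in the abstract; as a loose illustration, one textbook form of a block-iterative MART step is sketched below, with the relaxation parameter playing the role of the discretization time step. The block partition, scaling of A, and parameter values are assumptions.

```python
import numpy as np

def bi_mart(A, y, n_iters=20, n_blocks=4, lam=1.0):
    """Toy block-iterative MART: multiplicative, non-negativity-preserving updates.

    Assumes A >= 0 with entries scaled to at most 1, and strictly positive data y;
    lam (times the matrix entries) should be small enough for stable iterations.
    """
    m, n = A.shape
    x = np.ones(n)                                   # strictly positive starting image
    blocks = np.array_split(np.arange(m), n_blocks)
    for _ in range(n_iters):
        for blk in blocks:
            Ax = np.maximum(A[blk] @ x, 1e-12)
            ratio = y[blk] / Ax                      # measured / predicted per ray
            x *= np.exp(lam * (A[blk].T @ np.log(ratio)))   # x_j *= prod_i ratio_i**(lam*a_ij)
    return x
```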
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ²), and GCV (that does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
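The paper derives the Jacobian analytically for specific algorithms; as a simpler, hedged illustration of the same idea, the sketch below estimates a SURE-type risk for a generic reconstruction operator in the plain denoising setting (identity forward model), probing the divergence term with a single Monte Carlo perturbation rather than an explicit Jacobian. All names and the perturbation size are illustrative, not the authors' derivation.

```python
import numpy as np

def mc_sure(recon_op, y, sigma, eps=1e-3, rng=None):
    """Monte Carlo SURE estimate for a (possibly nonlinear) denoising-style operator.

    recon_op : callable mapping noisy data y to an estimate f(y) of the same shape
    sigma    : known standard deviation of the additive Gaussian noise
    """
    rng = np.random.default_rng() if rng is None else rng
    n = y.size
    fy = recon_op(y)
    b = rng.standard_normal(y.shape)
    # divergence (trace of the Jacobian) probed along a single random direction
    div = np.vdot(b, recon_op(y + eps * b) - fy).real / eps
    return np.sum(np.abs(fy - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n
```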
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem, thereby accelerating convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed between the proposed nested algorithm and the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
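A loose sketch of the nested idea, assuming the resolution model is a symmetric Gaussian blur applied in image space and using Richardson-Lucy (image-space EM) steps for the deconvolution part; this illustrates the structure only, not the authors' exact update or the HRRT system model. The toy image is one-dimensional so that scipy's gaussian_filter can serve as the blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nested_em(A, y, psf_sigma, n_outer=20, n_inner=10):
    """One tomographic MLEM step on the blurred image, then several nested
    image-space EM (Richardson-Lucy) steps to recover the deblurred image.

    A : (n_bins, n_voxels) projector without resolution model; y : measured counts.
    """
    blur = lambda im: gaussian_filter(im, psf_sigma)     # assumed resolution model
    x = np.ones(A.shape[1])                              # deblurred image estimate
    sens = np.maximum(A.sum(axis=0), 1e-12)              # sensitivity image
    for _ in range(n_outer):
        xb = blur(x)                                     # current blurred image
        xb = xb / sens * (A.T @ (y / np.maximum(A @ xb, 1e-12)))   # tomographic EM step
        for _ in range(n_inner):                         # nested image-space EM iterations
            x = x * blur(xb / np.maximum(blur(x), 1e-12))
    return x
```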
NASA Astrophysics Data System (ADS)
Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.
2018-02-01
In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior image quality to conventional filtered-backprojection (FBP)-based methods. However, they require enormous computational cost in the iterative process, which has remained an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method seems to be effective for considerably reducing computational cost in iterative DTS reconstruction while keeping the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the no-binning case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal quality index (UQI).
Investigation of iterative image reconstruction in low-dose breast CT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan
2014-06-01
There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found suggesting that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto convex sets (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
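For context, a compact Python/NumPy caricature of the ASD-POCS loop referenced above: an ART (POCS) data-consistency pass with a positivity constraint, followed by TV steepest-descent steps whose length is tied to the size of the data step. The adaptive rules, relaxation schedule, and parameters here are simplified placeholders rather than the tuned values discussed in the study.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    dx = np.roll(img, -1, axis=0) - img
    dy = np.roll(img, -1, axis=1) - img
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    return -(px - np.roll(px, 1, axis=0)) - (py - np.roll(py, 1, axis=1))

def asd_pocs(A, b, shape, n_iters=50, n_tv=20, relax=1.0, tv_frac=0.2):
    """Toy ASD-POCS: ART pass + positivity, then TV descent scaled by the data-step size."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.maximum(np.sum(A**2, axis=1), 1e-12)
    for _ in range(n_iters):
        x_prev = x.copy()
        for i in range(A.shape[0]):                       # ART / POCS data-consistency pass
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        x = np.maximum(x, 0.0)                            # positivity constraint
        dp = np.linalg.norm(x - x_prev)                   # size of the data step
        img = x.reshape(shape)
        for _ in range(n_tv):                             # steepest descent on image TV
            g = tv_gradient(img)
            img -= tv_frac * dp * g / (np.linalg.norm(g) + 1e-12)
        x = img.ravel()
    return x.reshape(shape)
```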
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from MR derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient database, and it is feasible to obtain accurate average μ-value using MR derived μ-map with corrections as demonstrated in this work.
Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V
2015-07-01
This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.
Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz
2013-09-01
Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. First, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts, while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hanming; Wang, Linyuan; Li, Lei
2016-06-15
Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as "iFBP-TV" and "TV-FADM," respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
Objective performance assessment of five computed tomography iterative reconstruction algorithms.
Omotayo, Azeez; Elbakri, Idris
2016-11-22
Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
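The noise, CNR, and NPS figures of merit used in evaluations like the one above can be computed directly from uniform-phantom ROIs; the following sketch shows standard textbook definitions (mean-subtracted periodogram NPS scaled by pixel area), with ROI handling and detrending kept deliberately simple and no claim to match the study's exact measurement protocol.

```python
import numpy as np

def roi_noise(img, roi):
    """Image noise: standard deviation of values inside a uniform ROI (boolean mask)."""
    return img[roi].std()

def cnr(img, roi_obj, roi_bkg):
    """Contrast-to-noise ratio between an object ROI and a uniform background ROI."""
    return abs(img[roi_obj].mean() - img[roi_bkg].mean()) / img[roi_bkg].std()

def nps_2d(uniform_rois, pixel_size):
    """2D noise-power spectrum from an ensemble of square uniform ROIs, shape (n, N, N)."""
    n, N, _ = uniform_rois.shape
    acc = np.zeros((N, N))
    for roi in uniform_rois:
        d = roi - roi.mean()                              # simple detrend
        acc += np.abs(np.fft.fftshift(np.fft.fft2(d))) ** 2
    return acc / n * (pixel_size ** 2) / (N * N)
```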
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
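As a hedged illustration of the kind of correction described, the sketch below undoes the T1 weighting of separated water and fat signals using the spoiled gradient-echo steady-state model, a measured flip-angle map, and a-priori T1 estimates, then forms the fat fraction. The default T1 values, units, and function names are assumptions and this is not the study's IDEAL reconstruction.

```python
import numpy as np

def t1_corrected_pdff(w_sig, f_sig, flip_map_deg, tr, t1_water=0.6, t1_fat=0.35):
    """Correct T1 bias in the fat fraction computed from separated water/fat images.

    w_sig, f_sig : water and fat signal maps from a high-flip-angle acquisition
    flip_map_deg : measured (B1-corrected) flip angle map in degrees
    tr           : repetition time in seconds; t1_water / t1_fat are prior estimates (s)
    """
    a = np.deg2rad(flip_map_deg)

    def saturation(t1):                          # spoiled-GRE T1-dependent signal scaling
        e1 = np.exp(-tr / t1)
        return np.sin(a) * (1.0 - e1) / (1.0 - e1 * np.cos(a))

    rho_w = w_sig / saturation(t1_water)         # estimated water proton density
    rho_f = f_sig / saturation(t1_fat)           # estimated fat proton density
    return 100.0 * rho_f / np.maximum(rho_w + rho_f, 1e-12)   # PDFF in percent
```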
NASA Astrophysics Data System (ADS)
Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark
2017-03-01
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The incorporated CBCT system in ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images and a smoothed version of the difference between the ideal and measured projections is used in correction. The proposed beam-hardening and dome artifact corrections are segmentation free. The CALI was tested on CatPhan, as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions which were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.
Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W
2017-08-01
Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, statistically significant differences in contrast-to-noise ratios (all P ≤ .012) were shown for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100, a small effect on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.
Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.
Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark
2017-04-07
One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm-1, which was increased to 1.2 mm-1 by SDIR, at half maximum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Hoffman, J; McNitt-Gray, M
Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph's method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low-dose helical CT, in particular as part of our ongoing development of an acquisition/reconstruction pipeline for generating images under a wide range of conditions. Our algorithm will be made available open-source as "FreeCT-ICD". NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
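A bare-bones illustration of the closed-form ICD update for a penalized least-squares objective with a quadratic neighborhood penalty, keeping the residual in sync as each voxel changes. The 4-neighbor weighting, dense system matrix, and parameter values are placeholders; the stored-system-matrix bookkeeping and rotating-slice geometry of the actual method are not modeled here.

```python
import numpy as np

def icd_pwls(A, y, shape, beta=0.1, n_iters=10):
    """Toy ICD for 0.5*||y - Ax||^2 + 0.5*beta*sum_{4-neighbors}(x_j - x_k)^2, x >= 0."""
    ny, nx = shape
    n = ny * nx
    x = np.zeros(n)
    r = y - A @ x                                        # running residual
    col_norm2 = np.maximum(np.sum(A**2, axis=0), 1e-12)

    def neighbors(j):
        i, k = divmod(j, nx)
        for di, dk in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if 0 <= i + di < ny and 0 <= k + dk < nx:
                yield (i + di) * nx + (k + dk)

    for _ in range(n_iters):
        for j in range(n):
            nbr = list(neighbors(j))
            num = A[:, j] @ r + col_norm2[j] * x[j] + beta * sum(x[k] for k in nbr)
            den = col_norm2[j] + beta * len(nbr)
            new = max(num / den, 0.0)                    # closed-form update + nonnegativity
            r -= A[:, j] * (new - x[j])                  # keep the residual consistent
            x[j] = new
    return x.reshape(shape)
```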
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning techniques such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
Fast non-interferometric iterative phase retrieval for holographic data storage.
Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi
2017-12-11
Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications due to its advantages of easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane of the reconstructed beam is more concentrated than in the reconstructed beam itself, the required diffraction efficiency of the recording media is reduced, which will significantly improve the dynamic range of the recording media. The phase retrieval requires only 10 iterations to achieve a phase data error rate of less than 5%, which is successfully demonstrated by experimentally recording and reconstructing a test image. We believe our method will further advance the holographic data storage technique in the era of big data.
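A hedged sketch of an iterative Fourier-transform (Gerchberg-Saxton-style) phase retrieval loop for a phase-only page with a known embedded portion: the measured Fourier-plane intensity constrains the magnitude, and the object-domain step snaps to the allowed phase levels and re-imposes the known pixels. The constraint details, level set, and function names are illustrative, not the authors' exact scheme.

```python
import numpy as np

def retrieve_phase(fourier_intensity, known_mask, known_phase, levels=4, n_iters=10):
    """Recover a 4-level phase page from a single Fourier-plane intensity image.

    fourier_intensity : captured intensity at the Fourier plane
    known_mask        : boolean map of pixels whose phase is known beforehand
    known_phase       : phase values (radians) at those known pixels
    """
    amp = np.sqrt(fourier_intensity)
    allowed = 2.0 * np.pi * np.arange(levels) / levels      # e.g. 0, pi/2, pi, 3pi/2
    phase = np.zeros_like(amp)
    for _ in range(n_iters):
        field = np.exp(1j * phase)                          # object plane: unit amplitude
        F = np.fft.fft2(field, norm="ortho")
        F = amp * np.exp(1j * np.angle(F))                  # enforce measured magnitude
        phase = np.angle(np.fft.ifft2(F, norm="ortho"))
        # object-domain constraints: quantize to allowed levels, restore known pixels
        idx = np.argmin(np.abs(np.exp(1j * phase)[..., None] - np.exp(1j * allowed)), axis=-1)
        phase = allowed[idx]
        phase[known_mask] = known_phase[known_mask]
    return phase
```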
Quantitative cardiac SPECT reconstruction with reduced image degradation due to patient anatomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Zhao, X.D.; Gregoriou, G.K.
1994-12-01
Patient anatomy has complicated effects on cardiac SPECT images. The authors investigated reconstruction methods which substantially reduced these effects for improved image quality. A 3D mathematical cardiac-torso (MCAT) phantom which models the anatomical structures in the thorax region was used in the study. The phantom was modified to simulate variations in patient anatomy including regions of natural thinning along the myocardium, body size, diaphragmatic shape, gender, and size and shape of breasts for female patients. Distributions of attenuation coefficients and Tl-201 uptake in different organs in a normal patient were also simulated. Emission projection data were generated from the phantoms, including effects of attenuation and detector response. The authors have observed the attenuation-induced artifacts caused by patient anatomy in the conventional FBP reconstructed images. Accurate attenuation compensation using iterative reconstruction algorithms and attenuation maps substantially reduced the image artifacts and improved quantitative accuracy. They conclude that reconstruction methods which accurately compensate for non-uniform attenuation can substantially reduce image degradation caused by variations in patient anatomy in cardiac SPECT.
Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua
2014-01-01
This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9%, respectively, when compared with FBP. Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter
2016-01-01
Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER), which provide reduced image noise compared with traditional filtered back-projection methods (FBP), have recently become available and may allow the sensitivity of CT to be improved; however, this effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel iterative model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in the attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual-energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used. PMID:27185492
Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints
NASA Astrophysics Data System (ADS)
Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena
2012-04-01
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy, and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert this information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned smaller weight values, which translate into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are assigned larger values, and these regions are updated more strongly by the new projection data, thus avoiding any possible adverse effects of the prior images. The APICCS approach was systematically assessed using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an effective way to enhance the image quality in the matched regions between the prior and current images compared with the existing PICCS algorithm. Compared with current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times owing to the greatly reduced number of projections and the lower x-ray tube current of the low-dose protocol.
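A hedged sketch of how an adaptive, voxel-dependent relaxation map of the kind described above could be assembled: segment the prior and current reconstructions into air/soft tissue/bone, flag mismatches, and map the distance to the nearest mismatch to a weight. The HU thresholds, the exponential fall-off, and the parameter names are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def classify(volume, air_max=-300.0, bone_min=300.0):
    """Label voxels as 0 = air, 1 = soft tissue, 2 = bone (HU thresholds are assumed)."""
    labels = np.ones(volume.shape, dtype=np.uint8)
    labels[volume <= air_max] = 0
    labels[volume >= bone_min] = 2
    return labels

def adaptive_relaxation_map(prior, current, lam_min=0.1, lam_max=1.0,
                            decay_mm=10.0, voxel_mm=1.0):
    """Voxel-wise relaxation weights: close to lam_max in mismatched (changed) anatomy,
    falling toward lam_min with distance from the nearest mismatch."""
    mismatch = classify(prior) != classify(current)
    dist = distance_transform_edt(~mismatch) * voxel_mm   # distance to nearest mismatch
    return lam_min + (lam_max - lam_min) * np.exp(-dist / decay_mm)
```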
Aurumskjöld, Marie-Louise; Ydström, Kristina; Tingberg, Anders; Söderberg, Marcus
2017-01-01
The number of computed tomography (CT) examinations is increasing, leading to an increase in total patient exposure. It is therefore important to optimize CT imaging conditions in order to reduce the radiation dose. The introduction of iterative reconstruction methods has enabled an improvement in image quality and a reduction in radiation dose. The aim was to investigate how image quality depends on the reconstruction method and to discuss the patient dose reduction made possible by hybrid and model-based iterative reconstruction. An image quality phantom (Catphan® 600) and an anthropomorphic torso phantom were examined on a Philips Brilliance iCT. The image quality was evaluated in terms of CT numbers, noise, noise power spectra (NPS), contrast-to-noise ratio (CNR), low-contrast resolution, and spatial resolution for different scan parameters and dose levels. The images were reconstructed using filtered back projection (FBP) and different settings of the hybrid (iDose4) and model-based (IMR) iterative reconstruction methods. iDose4 decreased the noise by 15-45% compared with FBP, depending on the level of iDose4. IMR reduced the noise even further, by 60-75% compared with FBP. The results are independent of dose. The NPS showed changes in the noise distribution for the different reconstruction methods. The low-contrast resolution and CNR were improved with iDose4, and the improvement was even greater with IMR. There is great potential to reduce noise, and thereby improve image quality, by using hybrid or, in particular, model-based iterative reconstruction methods, or to lower the radiation dose while maintaining image quality. © The Foundation Acta Radiologica 2016.
Regularization iteration imaging algorithm for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao
2018-03-01
The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits the complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
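The cost function described above couples a sparsity term with a low-rank term. In a split Bregman (or ADMM-style) scheme these terms are handled by their proximal operators, sketched below; the full iteration alternates these with a quadratic data-consistency update and Bregman-variable updates. This is a generic sketch, not the authors' exact formulation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (promotes sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of tau * (nuclear norm) (promotes low rank):
    soft-threshold the singular values of the image viewed as a matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

# In each outer iteration the scheme would, schematically:
#   1. solve the quadratic data-consistency subproblem (e.g. by FISTA/CG),
#   2. apply soft_threshold to the sparsity splitting variable,
#   3. apply singular_value_threshold to the low-rank splitting variable,
#   4. update the Bregman (dual) variables.
```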
Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne
2016-06-01
The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR using the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm, depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic usefulness of the image quality compared with the ASiR reconstruction; this was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices.
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
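If the elemental error sources are treated as independent, the propagation step reduces to combining, in quadrature, the position uncertainties of the two reconstructed particle fields with the cross-correlation uncertainty, as in the simplified sketch below; the paper's full propagation equation is more detailed, so this is only an illustrative approximation.

```python
import numpy as np

def combined_velocity_uncertainty(sigma_corr, sigma_pos_1, sigma_pos_2, dt):
    """Combine the 3D cross-correlation displacement uncertainty with the
    reconstructed particle-position uncertainty of the two exposures,
    assuming independent error sources, and convert to velocity units."""
    sigma_disp = np.sqrt(sigma_corr**2 + sigma_pos_1**2 + sigma_pos_2**2)
    return sigma_disp / dt

# Example (all values hypothetical, in voxels and seconds):
# print(combined_velocity_uncertainty(0.1, 0.05, 0.05, dt=1e-3))
```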
Adaptive cornea modeling from keratometric data.
Martínez-Finkelshtein, Andrei; López, Darío Ramos; Castro, Gracia M; Alió, Jorge L
2011-07-01
To introduce an iterative, multiscale procedure that allows for better reconstruction of the shape of the anterior surface of the cornea from altimetric data collected by a corneal topographer. The report first describes an adaptive, multiscale mathematical algorithm for the parsimonious fit of the corneal surface data that adapts the number of functions used in the reconstruction to the conditions of each cornea. The method also implements a dynamic selection of the parameters and the management of noise. Several numerical experiments are then performed, comparing it with the results obtained by the standard Zernike-based procedure. The numerical experiments showed that the algorithm exhibits steady exponential error decay, independent of the level of aberration of the cornea. The complexity of each anisotropic Gaussian-basis function in the functional representation is the same, but the parameters vary to fit the current scale. This scale is determined only by the residual errors and not by the iteration number. Finally, the position and clustering of the centers, as well as the size of the shape parameters, provide additional spatial information about the regions of higher irregularity. The methodology can be used for the real-time reconstruction of both altimetric data and corneal power maps from the data collected by keratoscopes, such as Placido ring-based topographers, which will be decisive in the early detection of corneal diseases such as keratoconus.
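A greedy, multiscale residual fit with Gaussian bumps, in the spirit of the procedure described above, can be sketched as follows. This simplified version uses isotropic Gaussians and a fixed schedule of scales; the actual algorithm fits anisotropic Gaussians and selects parameters and noise handling dynamically.

```python
import numpy as np

def gaussian(xy, center, sigma):
    """Isotropic Gaussian bump evaluated at scattered 2D points xy (N x 2)."""
    d2 = np.sum((xy - center) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma**2))

def greedy_gaussian_fit(xy, z, sigmas=(2.0, 1.0, 0.5), terms_per_scale=20):
    """Greedy multiscale fit of scattered elevation data: at each step a new
    Gaussian is centred on the largest residual and its amplitude is chosen
    by least squares against the current residual."""
    basis, residual = [], z.astype(float).copy()
    for sigma in sigmas:                      # progressively finer scales
        for _ in range(terms_per_scale):
            i = int(np.argmax(np.abs(residual)))
            phi = gaussian(xy, xy[i], sigma)
            amp = (phi @ residual) / (phi @ phi)
            residual -= amp * phi
            basis.append((xy[i].copy(), sigma, amp))
    return basis, residual
```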
Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li
2018-07-01
Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Local motion-compensated method for high-quality 3D coronary artery reconstruction
Liu, Bo; Bai, Xiangzhi; Zhou, Fugen
2016-01-01
The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. A 3D/2D registration method was then proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and that the result was comparable to a state-of-the-art method. PMID:28018741
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
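For reference, the classical EM (MLEM) update that the scoring and Jacobi/Gauss-Seidel schemes aim to improve upon can be written in a few lines for a dense system matrix; this is the textbook form, not the paper's accelerated variants.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classical EM (MLEM) iteration for emission tomography, y ~ Poisson(A x).

    A: (m, n) system matrix, y: (m,) measured projection counts.
    """
    m, n = A.shape
    x = np.ones(n)
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, eps)     # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```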
PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera
NASA Astrophysics Data System (ADS)
Frohwein, Lynn J.; Heß, Mirco; Schlicher, Dominik; Bolwin, Konstantin; Büther, Florian; Jiang, Xiaoyi; Schäfers, Klaus P.
2018-01-01
PET attenuation correction for flexible MRI radio-frequency surface coils in hybrid PET/MRI is still a challenging task, as the position and shape of these coils vary considerably between patients. The purpose of this feasibility study is to develop a novel method for incorporating attenuation information about flexible surface coils into PET reconstruction using the Microsoft Kinect V2 depth camera. The depth information is used to determine a dense point cloud of the coil’s surface representing the shape of the coil. From a CT template of the coil, acquired once in advance, surface information is likewise extracted and converted into a point cloud. The two point clouds are then registered using a combination of an iterative-closest-point (ICP) method and a partially rigid registration step. Using the transformation derived from the point clouds, the CT template is warped and thereby adapted to the PET/MRI scan setup. The transformed CT template is then converted from Hounsfield units into linear attenuation coefficients to yield an attenuation map. The resulting fitted attenuation map is integrated into the patient-specific, MRI-based DIXON attenuation map of the actual PET/MRI scan. A reconstruction of phantom PET data acquired with the coil present in the field-of-view (FoV), but without the corresponding coil attenuation map, shows large artifacts in regions close to the coil. The overall count loss is determined to be around 13% compared to a PET scan without the coil present in the FoV. A reconstruction using the new μ-map resulted in strongly reduced artifacts as well as increased overall PET intensities, with a remaining relative difference of about 1% to a PET scan without the coil in the FoV.
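The registration step combines ICP with a partially rigid refinement. A minimal point-to-point ICP sketch (nearest neighbours plus a Kabsch/SVD rigid fit) is shown below; it omits the partially rigid step and any outlier handling, and the array shapes (N x 3 point clouds) are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(source, target, n_iter=50):
    """Basic point-to-point ICP: align the CT-template point cloud (source)
    to the depth-camera point cloud (target)."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)                       # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```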
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolving photon-counting detector can help enhance the visibility of breast tumors. One challenge of such technology is the limited number of photons in each energy bin, possibly leading to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon-counting detector, and its performance is investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with the FBP-based projection-based weighting imaging method. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging significantly improves the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A
2016-04-01
The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
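For orientation, the unmodified FISTA skeleton that the accelerated variants build on looks as follows; in the paper the plain gradient step on the data term is replaced by an OS-SART subproblem and the proximal step is one of the two weighted proximal problems, neither of which is reproduced here.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=100):
    """Plain FISTA: gradient step on the smooth data term f, proximal step on the
    nonsmooth penalty g, with Nesterov momentum on the iterates."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(y - step * grad_f(y), step)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# Example use on sparse least squares: min 0.5*||A x - b||^2 + lam*||x||_1
# grad_f = lambda x: A.T @ (A @ x - b)
# prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
# x_hat  = fista(grad_f, prox_g, np.zeros(A.shape[1]), step=1.0 / np.linalg.norm(A, 2) ** 2)
```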
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand
2018-07-01
A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix with a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into the corresponding 2D pattern. Then, each pattern is encoded into the phase-only masks by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When a significantly small number of phase-only masks are used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.
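A toy version of the measurement and correlation-based reconstruction described above (Hadamard-derived illumination patterns, single-pixel bucket values, differential ghost imaging) is sketched below; it omits the phase-only mask encoding and the authentication step, and recovers the object only up to a global scale (the all-ones pattern carries no fluctuation, so that pixel is lost in this toy version).

```python
import numpy as np
from scipy.linalg import hadamard

def ghost_imaging_demo(obj):
    """Toy computational ghost imaging: Hadamard-derived illumination patterns,
    single-pixel 'bucket' measurements, and correlation-based reconstruction."""
    n = obj.size
    H = hadamard(n)                          # each row becomes one 2D illumination pattern
    patterns = ((H + 1) / 2.0).reshape(n, *obj.shape)   # non-negative intensity patterns
    buckets = np.tensordot(patterns, obj, axes=((1, 2), (0, 1)))
    # Differential ghost imaging: correlate bucket fluctuations with pattern fluctuations.
    recon = np.tensordot(buckets - buckets.mean(),
                         patterns - patterns.mean(axis=0), axes=(0, 0)) / n
    return recon

# obj = np.random.rand(16, 16)   # obj.size must be a power of two for hadamard()
# recon = ghost_imaging_demo(obj)
```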
Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz
2015-01-01
Due to the potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach treats the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates during the iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method reduces the cone-beam artifacts and performs better than FDK at large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2013-12-01
To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p<0.01). UL-MBIR was significantly better for subjective image noise and streak artifacts than L-ASIR and UL-ASIR (all p<0.01). There were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65), or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.
NASA Astrophysics Data System (ADS)
Teuho, J.; Johansson, J.; Linden, J.; Saunavaara, V.; Tolvanen, T.; Teräs, M.
2014-01-01
Selection of reconstruction parameters has an effect on the image quantification in PET, with an additional contribution from a scanner-specific attenuation correction method. For achieving comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR is performed by using an anatomical brain phantom, to identify and measure the amount of bias caused due to differences in reconstruction and attenuation correction methods especially in PET/MR. Differences were estimated by using visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis for measuring the reproduction of anatomical structures and the contribution of the amount of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, where the HRRT was considered as the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although the image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the amount of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of the activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map that ignores the bone. Thus, further improvements for the PET/MR reconstruction and attenuation correction could be achieved by optimization of RAMLA-specific reconstruction parameters and implementation of bone to the attenuation template.
Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael
2018-06-01
To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner, consisting of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using the open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact on image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both phantom and clinical image datasets. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli
2018-03-01
We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
NASA Astrophysics Data System (ADS)
Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich
Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.
Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.
2015-01-01
Purpose: The exciting prospect of spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method: The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering and the attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results: The decomposition results illustrate that a gold implant of any shape can be distinguished from the other components of the phantom. Additionally, the results from the penalized maximum likelihood iterative reconstruction show that artifacts are significantly reduced in SPIR-reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to the other algorithms. Conclusion: It is demonstrated that the combination of the additional information from spectral CT and statistical reconstruction can significantly improve image quality, in particular by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019
Multishot Cartesian turbo spin-echo diffusion imaging using iterative POCSMUSE reconstruction.
Zhang, Zhe; Zhang, Bing; Li, Ming; Liang, Xue; Chen, Xiaodong; Liu, Renyuan; Zhang, Xin; Guo, Hua
2017-07-01
To report a diffusion imaging technique insensitive to off-resonance artifacts and motion-induced ghost artifacts using multishot Cartesian turbo spin-echo (TSE) acquisition and iterative POCS-based reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) for phase correction. Phase-insensitive diffusion preparation was used to deal with the violation of the Carr-Purcell-Meiboom-Gill (CPMG) conditions in TSE diffusion-weighted imaging (DWI), followed by a multishot Cartesian TSE readout for data acquisition. An iterative diffusion phase correction method, iterative POCSMUSE, was developed and implemented to eliminate the ghost artifacts in multishot TSE DWI. In vivo human brain diffusion images (from one healthy volunteer and 10 patients) using multishot Cartesian TSE were acquired at 3T, reconstructed using iterative POCSMUSE, and compared with single-shot and multishot echo-planar imaging (EPI) results. These images were evaluated by two radiologists using visual scores (considering both image quality and distortion levels) from 1 to 5. The proposed iterative POCSMUSE reconstruction was able to correct the ghost artifacts in multishot DWI. The ghost-to-signal ratio of TSE DWI using iterative POCSMUSE (0.0174 ± 0.0024) was significantly (P < 0.0005) smaller than that using POCSMUSE (0.0253 ± 0.0040). The image scores of multishot TSE DWI were significantly higher than those of single-shot (P = 0.004 and 0.006 from the two reviewers) and multishot (P = 0.008 and 0.004 from the two reviewers) EPI-based methods. The proposed multishot Cartesian TSE DWI with iterative POCSMUSE reconstruction can provide high-quality diffusion images insensitive to motion-induced ghost artifacts and off-resonance-related artifacts such as chemical shift and susceptibility-induced image distortions. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:167-174. © 2016 International Society for Magnetic Resonance in Medicine.
Park, Hyun Jeong; Lee, Jeong Min; Park, Sung Bin; Lee, Jong Beum; Jeong, Yoong Ki; Yoon, Jeong Hee
The purpose of this work was to evaluate the image quality, lesion conspicuity, and dose reduction provided by knowledge-based iterative model reconstruction (IMR) in computed tomography (CT) of the liver compared with hybrid iterative reconstruction (IR) and filtered back projection (FBP) in patients with hepatocellular carcinoma (HCC). Fifty-six patients with 61 HCCs who underwent multiphasic reduced-dose CT (RDCT; n = 33) or standard-dose CT (SDCT; n = 28) were retrospectively evaluated. Images reconstructed with FBP, hybrid IR (iDose), and IMR were evaluated for image quality using CT attenuation and image noise. Objective and subjective image quality of the RDCT and SDCT sets were independently assessed by 2 observers in a blinded manner. Image quality and lesion conspicuity were better with IMR for both RDCT and SDCT than with either FBP or IR (P < 0.001). The contrast-to-noise ratio of HCCs in IMR-RDCT was significantly higher on the delayed phase (DP) (P < 0.001), and comparable on the arterial phase, compared with IR-SDCT (P = 0.501). IMR-RDCT was significantly superior to FBP-SDCT (P < 0.001). Compared with IR-SDCT, IMR-RDCT was comparable in image sharpness and tumor conspicuity on the arterial phase, and superior in image quality, noise, and lesion conspicuity on the DP. With the use of IMR, a 27% reduction in effective dose was achieved with RDCT (12.7 ± 0.6 mSv) compared with SDCT (17.4 ± 1.1 mSv) without loss of image quality (P < 0.001). Iterative model reconstruction provides better image quality and tumor conspicuity than FBP and IR, with considerable noise reduction. In addition, IMR-RDCT achieved results at least comparable to IR-SDCT for the evaluation of HCCs.
Iterative initial condition reconstruction
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc⁻¹. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc⁻¹, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
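A stripped-down sketch of the core loop, assuming periodic-box dark matter particle positions, nearest-grid-point density assignment, and a Gaussian-smoothed Zel'dovich-like displacement estimated by FFT. The grid size, smoothing schedule, and omission of the second-order correction are simplifications relative to the paper.

```python
import numpy as np

def displacement_from_density(delta, box, smoothing):
    """Zel'dovich-like displacement psi = -grad(phi) with nabla^2 phi = delta,
    computed in Fourier space with a Gaussian smoothing of scale `smoothing`."""
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                   # avoid division by zero at k = 0
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smoothing**2)
    phi_k = -dk / k2
    psi = [np.real(np.fft.ifftn(-1j * k * phi_k)) for k in (kx, ky, kz)]
    return np.stack(psi)                                # shape (3, n, n, n)

def reconstruct_initial_conditions(pos, box, n_grid=64, smoothings=(20.0, 10.0, 5.0, 2.5)):
    """Iteratively move particles back along the estimated displacement with a
    progressively reduced smoothing scale (a simplified version of the algorithm)."""
    pos = pos.copy()
    total_psi = np.zeros((3, n_grid, n_grid, n_grid))
    for s in smoothings:
        idx = np.floor(pos / box * n_grid).astype(int) % n_grid   # NGP assignment
        counts = np.zeros((n_grid,) * 3)
        np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
        delta = counts / counts.mean() - 1.0
        psi = displacement_from_density(delta, box, s)
        pos = (pos - psi[:, idx[:, 0], idx[:, 1], idx[:, 2]].T) % box
        total_psi += psi
    # The linear initial density estimate is then the divergence of the
    # cumulative displacement (second-order correction omitted here).
    return pos, total_psi
```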
Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario
To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and tube voltages were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise ratios and contrast-to-noise ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts, improving CT number accuracy by 50%, 60%, and 63% (P < 0.05), reducing noise by 1%, 62%, and 85% (P < 0.001), improving signal-to-noise ratios by 27%, 47%, and 46% (P < 0.001), and improving contrast-to-noise ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in CT imaging of a total hip arthroplasty phantom.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogden, K; Greene-Donnelly, K; Vallabhaneni, D
Purpose: To investigate the effects of changing iterative reconstruction strength and tube voltage on Hounsfield Unit (HU) values of varying concentrations of iodinated contrast medium in a phantom. Method: Iodinated contrast (Omnipaque 300, GE Healthcare, Princeton NJ) was diluted with distilled water to concentrations of 0.6, 0.9, 1.8, 3.6, 7.2, and 10.8 mg/mL of iodine. The solutions were scanned in a patient-equivalent water phantom on two MDCT scanners: a VCT 64-slice scanner (GE Medical Systems, Waukesha, WI) and an Aquilion One 320-slice scanner (Toshiba America Medical Systems, Tustin CA). The phantom was scanned at 80, 100, 120, and 140 kV using 400, 255, 180, and 130 mAs, respectively, for the VCT scanner, and at 80, 100, 120, and 135 kV using 400, 250, 200, and 150 mAs, respectively, on the Aquilion One. Images were reconstructed at 2.5 mm (VCT) and 0.5 mm (Aquilion One). The VCT images were reconstructed using Adaptive Statistical Iterative Reconstruction (ASIR) at 6 different strengths: 0%, 20%, 40%, 60%, 80%, and 100%. Aquilion One images were reconstructed using Adaptive Iterative Dose Reduction (AIDR) at 4 strengths: no AIDR, Weak AIDR, Standard AIDR, and Strong AIDR. Regions of interest (ROIs) were drawn on the images to measure the HU values and standard deviations of the diluted contrast. Second-order polynomials were used to fit the HU values as a function of iodine concentration. Results: For both scanners, there was no significant effect of changing the iterative reconstruction strength. The polynomial fits yielded goodness-of-fit (R2) values averaging 0.997. Conclusion: Changing the strength of the iterative reconstruction has no significant effect on the HU values of iodinated contrast in a tissue-equivalent phantom. Fit values of HU vs iodine concentration are useful in quantitative imaging protocols such as the determination of cardiac output from time-density curves in the main pulmonary artery.
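The second-order fit of HU versus iodine concentration amounts to a short polynomial regression; the sketch below uses placeholder HU values (not the study's measurements) purely to show the computation and the goodness-of-fit.

```python
import numpy as np

# Hypothetical measurements: iodine concentration (mg/mL) and mean ROI value (HU)
# at one kV / reconstruction setting. The HU values are placeholders, not study data.
concentration = np.array([0.6, 0.9, 1.8, 3.6, 7.2, 10.8])
hu = np.array([18.0, 27.0, 52.0, 101.0, 199.0, 296.0])

coeffs = np.polyfit(concentration, hu, deg=2)     # second-order polynomial fit
fit = np.polyval(coeffs, concentration)
ss_res = np.sum((hu - fit) ** 2)
ss_tot = np.sum((hu - hu.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                        # goodness of fit (R^2)
print("coefficients:", coeffs, "R^2:", r2)
```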
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.
Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei
2013-03-01
A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329
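The core real-space/Fourier-space alternation can be illustrated with a simplified Cartesian analogue (a Gerchberg-Papoulis-style loop); EST itself enforces the data on a pseudopolar grid via the exact PPFFT and adds mathematical regularization and a termination criterion, none of which are reproduced in this sketch.

```python
import numpy as np

def fourier_iterative_recon(measured_k, sample_mask, n_iter=200):
    """Simplified Cartesian analogue of a Fourier-space iterative reconstruction:
    enforce the measured Fourier samples, then apply a real-space constraint
    (here only non-negativity).

    measured_k:  complex array of Fourier-space data (same shape as the image)
    sample_mask: boolean array marking which Fourier samples were measured
    """
    img = np.zeros(measured_k.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(img)
        F[sample_mask] = measured_k[sample_mask]     # data consistency in Fourier space
        img = np.real(np.fft.ifft2(F))
        img = np.maximum(img, 0.0)                   # physical constraint in real space
    return img
```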
Non-iterative volumetric particle reconstruction near moving bodies
NASA Astrophysics Data System (ADS)
Mendelson, Leah; Techet, Alexandra
2017-11-01
When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
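A minimal sketch of the masked minimum-line-of-sight step described above: each voxel takes the minimum refocused intensity over the cameras whose binary mask marks it as unoccluded, and voxels seen by too few cameras are discarded. The array shapes, camera count, and visibility threshold are assumptions for illustration, not the authors' exact implementation.

import numpy as np

def masked_min_los(refocused, visible, min_cameras=3):
    """refocused: (n_cam, nz, ny, nx) per-camera refocused intensity volumes.
    visible:   (n_cam, nz, ny, nx) boolean visual-hull / occlusion masks
               (True where the camera's line of sight is not blocked by the body).
    Returns a particle intensity volume built only from unoccluded cameras."""
    candidate = np.where(visible, refocused, np.inf)   # occluded cameras ignored in the minimum
    volume = candidate.min(axis=0)
    n_visible = visible.sum(axis=0)
    volume[n_visible < min_cameras] = 0.0              # too few cameras: voxel is unreliable
    return volume

# toy usage with random data standing in for refocused SAPIV volumes
rng = np.random.default_rng(1)
refocused = rng.random((5, 32, 64, 64))
visible = rng.random((5, 32, 64, 64)) > 0.2            # ~80% of rays unoccluded
vol = masked_min_los(refocused, visible)
print(vol.shape, float(vol.max()))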
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
NASA Astrophysics Data System (ADS)
Zeng, Lu-Chuan; Yao, Jen-Chih
2006-09-01
Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
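As a purely numerical illustration of an iterative procedure with errors of the kind analyzed above, the following sketch runs an Ishikawa-type iteration with summable error terms for a simple contractive map on R^2; the map, parameter choices, and error model are assumptions and do not reproduce the Banach-space setting of the paper.

import numpy as np

# A strictly contractive map on R^2 with fixed point x* = (2, -1); a finite-dimensional
# toy, not the q-contractive-like Banach-space setting discussed above.
def T(x):
    x_star = np.array([2.0, -1.0])
    return x_star + 0.5 * (x - x_star)

rng = np.random.default_rng(2)
x = np.array([10.0, 10.0])
for n in range(1, 300):
    a_n, b_n = 0.5, 0.5                      # step parameters in (0, 1)
    u_n = rng.normal(0.0, 1.0, 2) / n**2     # summable error terms
    v_n = rng.normal(0.0, 1.0, 2) / n**2
    y = (1.0 - b_n) * x + b_n * T(x) + v_n   # Ishikawa-type iteration with errors
    x = (1.0 - a_n) * x + a_n * T(y) + u_n

print("final iterate:", np.round(x, 4), " fixed point: [2. -1.]")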
Tensor-based Dictionary Learning for Spectral CT Reconstruction
Zhang, Yanbo; Wang, Ge
2016-01-01
Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, the image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated with one another. Exploiting these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using the filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. Then, the trained dictionary is used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality, and leads to more accurate material decomposition than the currently popular methods. PMID:27541628
Iterative reconstruction with boundary detection for carbon ion computed tomography
NASA Astrophysics Data System (ADS)
Shrestha, Deepak; Qin, Nan; Zhang, You; Kalantari, Faraz; Niu, Shanzhou; Jia, Xun; Pompos, Arnold; Jiang, Steve; Wang, Jing
2018-03-01
In heavy ion radiation therapy, improving the accuracy of range prediction for the ions inside the patient’s body has become essential. Accurate localization of the Bragg peak provides greater dose conformity to the tumor while sparing healthy tissues. We investigated the use of carbon ions directly for computed tomography (carbon CT) to create the relative stopping power map of a patient’s body. The Geant4 toolkit was used to perform a Monte Carlo simulation of the carbon ion trajectories, to study their lateral and angular deflections and the most likely paths, using a water phantom. Geant4 was used to create carbon CT projections of a contrast and spatial resolution phantom, with a cone beam of 430 MeV/u carbon ions. The contrast phantom consisted of cranial bone, lung material, and PMMA inserts, while the spatial resolution phantom contained bone and lung material inserts with line pair (lp) densities ranging from 1.67 lp cm-1 through 5 lp cm-1. First, the positions of each carbon ion on the rear and front trackers were used for an approximate reconstruction of the phantom. The phantom boundary was extracted from this approximate reconstruction, by using the position as well as angle information from the four tracking detectors, resulting in the entry and exit locations of the individual ions on the phantom surface. Subsequent reconstruction was performed by the iterative algebraic reconstruction technique coupled with total variation minimization (ART-TV), assuming straight-line trajectories for the ions inside the phantom. The influence of the number of projections was studied with reconstruction from five different sets of projections: 15, 30, 45, 60 and 90. Additionally, the effect of the number of ions on image quality was investigated by reducing the number of ions per projection while keeping the total number of projections at 60. An estimation of carbon ion range using the carbon CT image resulted in improved range prediction compared to the range calculated using a calibration curve.
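A compact sketch of the ART-TV idea (algebraic reconstruction sweeps interleaved with total-variation descent steps) is given below; the random sparse system matrix stands in for straight-line carbon-ion path lengths, and the relaxation factor, step size, and iteration counts are arbitrary assumptions.

import numpy as np

def art_sweep(x, A, b, relax=0.5):
    """One Kaczmarz/ART pass over all rays (rows of A)."""
    for i in range(A.shape[0]):
        a_i = A[i]
        norm2 = a_i @ a_i
        if norm2 > 0:
            x += relax * (b[i] - a_i @ x) / norm2 * a_i
    return x

def tv_gradient(u, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2D image."""
    gx = np.zeros_like(u); gx[:, :-1] = u[:, 1:] - u[:, :-1]   # forward differences
    gy = np.zeros_like(u); gy[:-1, :] = u[1:, :] - u[:-1, :]
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    grad = np.zeros_like(u)
    grad[:, :-1] -= px[:, :-1]; grad[:, 1:] += px[:, :-1]      # adjoint of D_x
    grad[:-1, :] -= py[:-1, :]; grad[1:, :] += py[:-1, :]      # adjoint of D_y
    return grad

# toy problem: 32x32 image, random sparse "ray" matrix as a geometry stand-in
rng = np.random.default_rng(3)
n = 32
truth = np.zeros((n, n)); truth[8:24, 10:22] = 1.0
A = (rng.random((400, n * n)) < 0.05).astype(float)
b = A @ truth.ravel()

x = np.zeros(n * n)
for outer in range(20):
    x = art_sweep(x, A, b)                 # data-fidelity (ART) sweep
    img = x.reshape(n, n)
    for _ in range(5):                     # a few TV-minimization steps
        img -= 0.02 * tv_gradient(img)
    x = np.clip(img, 0, None).ravel()      # non-negativity
print("relative error:", np.linalg.norm(x - truth.ravel()) / np.linalg.norm(truth))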
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254
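For reference, a minimal ordered-subsets EM (OSEM) update of the kind whose low-count bias is characterized above; the toy system matrix, counts, and subset partition are assumptions, and no resolution modelling, scatter, or motion compensation is included.

import numpy as np

def osem(y, A, n_subsets=4, n_iter=5, eps=1e-12):
    """Ordered-subsets EM for count data y ~ Poisson(A @ x); the multiplicative
    update keeps the estimate non-negative."""
    n_rays, n_vox = A.shape
    x = np.ones(n_vox)
    subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / (As @ x + eps)              # measured / estimated projections
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)  # backproject and normalize
    return x

# toy low-count simulation
rng = np.random.default_rng(4)
A = rng.random((256, 64)) * (rng.random((256, 64)) < 0.2)  # sparse-ish system matrix
truth = 5.0 * rng.random(64)
y = rng.poisson(A @ truth).astype(float)
x_hat = osem(y, A, n_subsets=4, n_iter=5)
print("correlation with truth:", round(float(np.corrcoef(x_hat, truth)[0, 1]), 3))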
Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl
2017-10-01
To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.
Self-prior strategy for organ reconstruction in fluorescence molecular tomography
Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen
2017-01-01
The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum with the assumption that the regions with higher fluorescence concentration have larger energy intensity, then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtains the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094
Bischel, Alexander; Stratis, Andreas; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben
2017-01-01
Objectives: The objective of this study was to determine how iterative reconstruction technology (IRT) influences contrast and spatial resolution in ultralow-dose dentomaxillofacial CT imaging. Methods: A polymethyl methacrylate phantom with various inserts was scanned using a reference protocol (RP) at CT dose index volume 36.56 mGy, a sinus protocol at 18.28 mGy and ultralow-dose protocols (LD) at 4.17 mGy, 2.36 mGy, 0.99 mGy and 0.53 mGy. All data sets were reconstructed using filtered back projection (FBP) and the following IRTs: adaptive statistical iterative reconstruction (ASIR-50, ASIR-100) and model-based iterative reconstruction (MBIR). Inserts containing line-pair patterns and contrast detail patterns for three different materials were scored by three observers. Observer agreement was analyzed using Cohen's kappa and differences in performance between the protocols and reconstructions were analyzed with Dunn's test at α = 0.05. Results: Interobserver agreement was acceptable with a mean kappa value of 0.59. Compared with the RP using FBP, similar scores were achieved at 2.36 mGy using MBIR. MBIR reconstructions showed the highest noise suppression as well as good contrast even at the lowest doses. Overall, ASIR reconstructions did not outperform FBP. Conclusions: LD and MBIR at a dose reduction of >90% may show no significant differences in spatial and contrast resolution compared with an RP and FBP. Ultralow-dose CT and IRT should be further explored in clinical studies. PMID:28059562
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki
2018-05-01
We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under the different settings of forward-projected model-based iterative reconstruction solutions (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and 2 model-based iterative reconstructions including FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced a significantly higher contrast-to-noise ratio than the other reconstructions. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although the image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility with FIRST-CS was superior to that of FIRST-body.
Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M
2003-01-01
Thanks to an experimental study based on simulated and physical phantoms, the propagation of the stochastic noise in slices reconstructed using the conjugate gradient algorithm has been analysed versus iterations. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau as well as the slope of the subsequent linear increase depends on the noise in the projection data.
Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming
2014-01-01
To evaluate the improvement of iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS with both medium kernel and sharp kernel applied. Image noise and the stent diameter were investigated. Image noise was measured both in background vessel and in-stent lumen as objective image evaluation. Image noise score and stent score were performed as subjective image evaluation. The CTCA images reconstructed with IRIS were associated with significant noise reduction compared to that of CTCA images reconstructed using FBP technique in both of background vessel and in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% in medium kernel (P
Aurumskjöld, Marie-Louise; Söderberg, Marcus; Stålhammar, Fredrik; von Steyern, Kristina Vult; Tingberg, Anders; Ydström, Kristina
2018-06-01
Background In pediatric patients, computed tomography (CT) is important in the medical chain of diagnosing and monitoring various diseases. Because children are more radiosensitive than adults, they require minimal radiation exposure. One way to achieve this goal is to implement new technical solutions, like iterative reconstruction. Purpose To evaluate the potential of a new, iterative, model-based method for reconstructing (IMR) pediatric abdominal CT at a low radiation dose and determine whether it maintains or improves image quality, compared to the current reconstruction method. Material and Methods Forty pediatric patients underwent abdominal CT. Twenty patients were examined with the standard dose settings and 20 patients were examined with a 32% lower radiation dose. Images from the standard examination were reconstructed with a hybrid iterative reconstruction method (iDose 4 ), and images from the low-dose examinations were reconstructed with both iDose 4 and IMR. Image quality was evaluated subjectively by three observers, according to modified EU image quality criteria, and evaluated objectively based on the noise observed in liver images. Results Visual grading characteristics analyses showed no difference in image quality between the standard dose examination reconstructed with iDose 4 and the low dose examination reconstructed with IMR. IMR showed lower image noise in the liver compared to iDose 4 images. Inter- and intra-observer variance was low: the intraclass coefficient was 0.66 (95% confidence interval = 0.60-0.71) for the three observers. Conclusion IMR provided image quality equivalent or superior to the standard iDose 4 method for evaluating pediatric abdominal CT, even with a 32% dose reduction.
Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.
2016-01-01
We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. Motion correction reduced the impact of motion such that with it 32.8 % of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170
Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard
2003-12-01
Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.
Razifar, Pasha; Sandström, Mattias; Schnieder, Harald; Långström, Bengt; Maripuu, Enn; Bengtsson, Ewert; Bergström, Mats
2005-08-25
Positron Emission Tomography (PET), Computed Tomography (CT), PET/CT and Single Photon Emission Tomography (SPECT) are non-invasive imaging tools used for creating two dimensional (2D) cross section images of three dimensional (3D) objects. PET and SPECT have the potential of providing functional or biochemical information by measuring distribution and kinetics of radiolabelled molecules, whereas CT visualizes X-ray density in tissues in the body. PET/CT provides fused images representing both functional and anatomical information with better precision in localization than PET alone. Images generated by these types of techniques are generally noisy, thereby impairing the imaging potential and affecting the precision in quantitative values derived from the images. It is crucial to explore and understand the properties of noise in these imaging techniques. Here we used autocorrelation function (ACF) specifically to describe noise correlation and its non-isotropic behaviour in experimentally generated images of PET, CT, PET/CT and SPECT. Experiments were performed using phantoms with different shapes. In PET and PET/CT studies, data were acquired in 2D acquisition mode and reconstructed by both analytical filter back projection (FBP) and iterative, ordered subsets expectation maximisation (OSEM) methods. In the PET/CT studies, different magnitudes of X-ray dose in the transmission were employed by using different mA settings for the X-ray tube. In the CT studies, data were acquired using different slice thickness with and without applied dose reduction function and the images were reconstructed by FBP. SPECT studies were performed in 2D, reconstructed using FBP and OSEM, using post 3D filtering. ACF images were generated from the primary images, and profiles across the ACF images were used to describe the noise correlation in different directions. The variance of noise across the images was visualised as images and with profiles across these images. The most important finding was that the pattern of noise correlation is rotation symmetric or isotropic, independent of object shape in PET and PET/CT images reconstructed using the iterative method. This is, however, not the case in FBP images when the shape of phantom is not circular. Also CT images reconstructed using FBP show the same non-isotropic pattern independent of slice thickness and utilization of care dose function. SPECT images show an isotropic correlation of the noise independent of object shape or applied reconstruction algorithm. Noise in PET/CT images was identical independent of the applied X-ray dose in the transmission part (CT), indicating that the noise from transmission with the applied doses does not propagate into the PET images showing that the noise from the emission part is dominant. The results indicate that in human studies it is possible to utilize a low dose in transmission part while maintaining the noise behaviour and the quality of the images. The combined effect of noise correlation for asymmetric objects and a varying noise variance across the image field significantly complicates the interpretation of the images when statistical methods are used, such as with statistical estimates of precision in average values, use of statistical parametric mapping methods and principal component analysis. Hence it is recommended that iterative reconstruction methods are used for such applications. 
However, it is possible to calculate the noise analytically for images reconstructed by FBP, whereas the same calculation is not possible for images reconstructed by iterative methods. Therefore, for statistical analysis methods that depend on knowing the noise properties, FBP would be preferred.
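A sketch of how such a noise autocorrelation image can be computed from a noise-only image (for example, a uniform-phantom image minus its smooth mean) via the Wiener-Khinchin relation; the synthetic anisotropically correlated noise below is an assumption for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def noise_acf(noise_image):
    """2D autocorrelation of a zero-mean noise image via the FFT (Wiener-Khinchin),
    normalized to 1 at zero lag and shifted so the zero lag sits at the center."""
    n = noise_image - noise_image.mean()
    power = np.abs(np.fft.fft2(n)) ** 2
    acf = np.fft.ifft2(power).real
    acf /= acf.flat[0]                      # zero-lag normalization
    return np.fft.fftshift(acf)

# synthetic anisotropic correlated noise standing in for FBP noise
rng = np.random.default_rng(5)
white = rng.normal(size=(256, 256))
correlated = gaussian_filter(white, sigma=(1.0, 3.0))   # stronger correlation along x
acf = noise_acf(correlated)
c = acf.shape[0] // 2
print("ACF profile along x:", np.round(acf[c, c:c + 5], 3))
print("ACF profile along y:", np.round(acf[c:c + 5, c], 3))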
Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography
Sánchez, Adrian A.
2016-01-01
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
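A sketch of the reference computation mentioned above: estimating a pixel variance map, and the sample covariance of a small patch, from repeated noise realizations of a reconstruction. The smoothing operator below stands in for a generic linear reconstruction and is not TV-penalized IIR; sizes and the noise level are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def sample_noise_statistics(recon_op, clean_data, noise_sigma, n_real=200, seed=0):
    """Reconstruct many noisy realizations and return the per-pixel variance map
    plus the sample covariance of a small central patch."""
    rng = np.random.default_rng(seed)
    recs = []
    for _ in range(n_real):
        noisy = clean_data + rng.normal(0, noise_sigma, clean_data.shape)
        recs.append(recon_op(noisy))
    recs = np.stack(recs)                      # (n_real, ny, nx)
    var_map = recs.var(axis=0)
    c = clean_data.shape[0] // 2
    patch = recs[:, c - 4:c + 4, c - 4:c + 4].reshape(n_real, -1)
    cov_patch = np.cov(patch, rowvar=False)    # sample covariance of the 8x8 patch
    return var_map, cov_patch

# stand-in linear "reconstruction": smoothing of image-domain data
recon_op = lambda d: gaussian_filter(d, sigma=1.5)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
var_map, cov_patch = sample_noise_statistics(recon_op, clean, noise_sigma=0.2)
print("variance range:", float(var_map.min()), float(var_map.max()))
print("patch covariance shape:", cov_patch.shape)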
DAVIS: A direct algorithm for velocity-map imaging system
NASA Astrophysics Data System (ADS)
Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.
2018-05-01
In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
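A sketch of the angle-correlated extraction step: the angular profile of the projected image is expanded in a (here even-order) Legendre basis at each radius by linear least squares. The synthetic ring image, radial binning, and maximum order are illustrative assumptions, not the DAVIS implementation.

import numpy as np
from numpy.polynomial import legendre

def legendre_angle_fit(image, center, n_rbins=60, lmax=4):
    """Fit I(r, theta) = sum_l c_l(r) P_l(cos theta), even l up to lmax,
    by least squares in each radial bin (theta measured from the vertical axis)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[1], y - center[0])
    cos_t = np.divide(y - center[0], r, out=np.zeros_like(r), where=r > 0)
    r_edges = np.linspace(0.0, r.max(), n_rbins + 1)
    even_l = np.arange(0, lmax + 1, 2)
    coeffs = np.zeros((n_rbins, even_l.size))
    for i in range(n_rbins):
        sel = (r >= r_edges[i]) & (r < r_edges[i + 1])
        if sel.sum() < even_l.size:
            continue
        V = legendre.legvander(cos_t[sel], lmax)[:, even_l]  # Legendre design matrix
        coeffs[i] = np.linalg.lstsq(V, image[sel], rcond=None)[0]
    return r_edges, even_l, coeffs

# synthetic ring with a 1 + cos^2(theta) angular distribution
yy, xx = np.indices((201, 201))
rr = np.hypot(xx - 100, yy - 100)
ct = np.divide(yy - 100, rr, out=np.zeros_like(rr), where=rr > 0)
img = np.exp(-((rr - 60.0) ** 2) / 20.0) * (1.0 + ct**2)
r_edges, orders, c = legendre_angle_fit(img, center=(100, 100))
peak = int(np.argmax(c[:, 0]))
print("orders:", orders, "coefficients in peak bin:", np.round(c[peak], 3))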
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
PET image reconstruction: a robust state space approach.
Liu, Huafeng; Tian, Yi; Shi, Pengcheng
2005-01-01
Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet basic assumptions of these algorithms. In addition, the difficulty in determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the use of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models, and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems into a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of measurement data. Since the H(infinity) filter seeks minimum maximum-error (minimax) estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2016-01-01
Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). The aim of this study was to investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
CT image reconstruction with half precision floating-point values.
Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc
2011-07-01
Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms find their way into clinical routine because their image quality is superior to analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows implementing the forward projection and backprojection steps, which are the computationally most demanding parts of any reconstruction algorithm, much more efficiently. Instead of the standard 32 bit floating-point values (float), a recently standardized floating-point value with 16 bit (half) is adopted for data representation in image domain and in rawdata domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of quantization noise, which is caused by a reduction in the data precision of both rawdata and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values allow to speed up CT image reconstruction without compromising image quality.
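A small numpy experiment in the spirit of the comparison above: projection data are stored in 16-bit floats while a backprojection-like sum is accumulated in 32-bit, and the quantization error relative to an all-32-bit pipeline is measured. Sizes and data are arbitrary, and numpy's float16 stands in for the hardware half type.

import numpy as np

rng = np.random.default_rng(7)

# stand-in "rawdata": projection values stored in float32 (reference) and float16
raw32 = rng.random((720, 512)).astype(np.float32) * 100.0
raw16 = raw32.astype(np.float16)                     # half-precision storage

# stand-in backprojection: each pixel accumulates a weighted sum over views
weights = rng.random((720,)).astype(np.float32)

def backproject(raw, weights):
    # accumulate in float32 regardless of the storage precision of `raw`
    return np.tensordot(weights, raw.astype(np.float32), axes=1)

pix32 = backproject(raw32, weights)
pix16 = backproject(raw16, weights)

rel_err = np.abs(pix16 - pix32) / np.maximum(np.abs(pix32), 1e-6)
print("max relative quantization error:", float(rel_err.max()))
print("float16 storage uses", raw16.nbytes, "bytes instead of", raw32.nbytes)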
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron Emission Tomography (PET) projection data (sinograms) suffer from poor counting statistics and randomness, which produce noisy PET images. In order to improve the PET image, we proposed an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), maximum likelihood expectation maximization with ordered subsets (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specifically for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
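A minimal sketch of a pre-reconstruction 3D mean-median sinogram filter in the spirit described above; the kernel sizes and the order in which the median and mean stages are combined are illustrative assumptions, not the authors' exact design.

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mean_median_sinogram_filter(sinogram, median_size=(3, 3, 3), mean_size=(1, 3, 3)):
    """sinogram: 3D array (angle, radial bin, plane).
    The median stage suppresses outlier (random/low-count) bins; a gentle mean stage
    then smooths flat regions. Keeping the mean kernel size 1 along the angular
    axis limits angular blurring."""
    denoised = median_filter(sinogram.astype(float), size=median_size)
    denoised = uniform_filter(denoised, size=mean_size)
    return denoised

# toy noisy sinogram: Poisson counts around a smooth radial profile
rng = np.random.default_rng(8)
angles, bins, planes = 180, 128, 16
profile = 20.0 * np.exp(-((np.arange(bins) - 64) ** 2) / 400.0)
clean = np.tile(profile, (angles, 1))[:, :, None] * np.ones((1, 1, planes))
noisy = rng.poisson(clean).astype(float)
filtered = mean_median_sinogram_filter(noisy)
print("noise std before/after:", float((noisy - clean).std()), float((filtered - clean).std()))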
Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)
2013-02-01
The algorithm, based on iterative coordinate descent (ICD), works by constructing a surrogate to the original cost function at every point and minimizing it. Using Beer's law, the projection integral corresponding to the i-th measurement is given by log(λD/λi), where λD denotes the dosage. Inputs: measurements g, initial reconstruction f′, initial dosage d′, fraction of entries to reject R. Outputs: reconstruction f̂ and dosage parameter d̂.
Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography
Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier
2015-01-01
This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
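A schematic coarse-to-fine iteration for a generic linear inverse problem, illustrating why a coarse-grid solution cuts the number of fine-grid iterations; the Gaussian-blur forward model and Landweber inner loop are stand-ins for the photoacoustic fluence model and fixed-point scheme of the paper.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def forward(x, sigma=2.0):
    """Stand-in forward model (self-adjoint Gaussian blur)."""
    return gaussian_filter(x, sigma)

def landweber(b, x0, n_iter, step=0.5):
    """Simple gradient (fixed-point) iterations for forward(x) = b."""
    x = x0.copy()
    for _ in range(n_iter):
        x += step * forward(b - forward(x))     # A^T (b - A x), A self-adjoint here
    return x

def multigrid_invert(b, levels=3, coarse_iters=40, fine_iters=10):
    """Solve on the coarsest grid first, then prolong and refine at each level."""
    data = [b]
    for _ in range(levels - 1):
        data.append(zoom(data[-1], 0.5, order=1))   # restrict the data to coarser grids
    x = np.zeros_like(data[-1])
    x = landweber(data[-1], x, coarse_iters)
    for lvl in range(levels - 2, -1, -1):
        factors = tuple(np.array(data[lvl].shape) / np.array(x.shape))
        x = zoom(x, factors, order=1)               # prolong to the next finer grid
        x = landweber(data[lvl], x, fine_iters)     # only a few iterations needed here
    return x

rng = np.random.default_rng(9)
truth = np.zeros((128, 128)); truth[40:90, 30:100] = 1.0
b = forward(truth) + 0.01 * rng.normal(size=truth.shape)
recon = multigrid_invert(b)
print("relative error:", np.linalg.norm(recon - truth) / np.linalg.norm(truth))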
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Guoyan
2010-04-15
Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.
Image reconstruction from cone-beam projections with attenuation correction
NASA Astrophysics Data System (ADS)
Weng, Yi
1997-07-01
In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometry to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus, a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections are acquired with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired by using a circle-and-line and a helix orbit on a clinical SPECT system. An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.
Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M
2018-06-01
The aim was to compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner, respectively). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR (p<0.0001, both for the beginner and the expert). The mean background noise was significantly lower with MBIR than with ASiR (p<0.0001). Mean CT attenuation values of the RAV, right adrenal gland, IVC, and hepatic vein were comparable between the two techniques (p=0.12-0.91). Mean CT attenuation values of the bilateral renal veins were significantly higher with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to improved RAV visualisation compared with ASiR.
Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert. A.
2016-01-01
Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425
SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space
Lustig, Michael; Pauly, John M.
2010-01-01
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.
2016-01-15
Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore, this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter images from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further, it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin
2014-06-15
Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed a larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to the non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For typical reconstruction protocols used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For a large number of iterations, TOF+PSF yields the best observer performance.
High-resolution reconstruction for terahertz imaging.
Xu, Li-Min; Fan, Wen-Hui; Liu, Jia
2014-11-20
We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images acquired with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential of HR reconstruction methods for terahertz imaging with pulsed and continuous-wave terahertz sources.
Inverse imaging of the breast with a material classification technique.
Manry, C W; Broschat, S L
1998-03-01
In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
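As a concrete illustration of this class of solvers, the sketch below runs a plain iterative soft-thresholding (ISTA/POCS-style) loop on a toy undersampled Cartesian k-space problem. It assumes image-domain sparsity and a unit step size for brevity; the wavelet-domain edge-correlation prior and the adaptive threshold described above are not reproduced, and the names (soft_threshold, ista_recon) are illustrative.

```python
# Minimal ISTA-style sketch for undersampled Cartesian MRI, assuming
# image-domain sparsity (not the paper's wavelet edge-correlation prior).
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding operator."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0.0)

def ista_recon(kspace, mask, lam=0.05, n_iter=50):
    """Reconstruct an image from masked k-space by iterative shrinkage.

    kspace : full-size k-space array with unsampled entries set to 0
    mask   : boolean array, True where k-space was actually sampled
    """
    x = np.fft.ifft2(kspace)                            # zero-filled start
    for _ in range(n_iter):
        residual = mask * (np.fft.fft2(x) - kspace)     # data-consistency residual
        x = x - np.fft.ifft2(residual)                  # gradient/projection step (unit step)
        x = soft_threshold(x, lam)                      # sparsity-promoting shrinkage
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[20:40, 25:35] = 1.0   # sparse toy object
    mask = rng.random((64, 64)) < 0.4                       # 40% random sampling
    kspace = mask * np.fft.fft2(truth)
    recon = ista_recon(kspace, mask)
    print("NRMSE:", np.linalg.norm(recon.real - truth) / np.linalg.norm(truth))
```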
Iterative reconstruction of volumetric particle distribution
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard
2013-02-01
For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART) followed by cross-correlation of sub-volumes, computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similar to MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, which may differ for different positions in the volume and for each camera. Using synthetic data it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy as Tomo-PIV. Finally, the method is validated with experimental data.
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP) and quadratic-prior OSMAPOSL algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
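For readers unfamiliar with the MTF figure of merit used above, the snippet below sketches one common way to turn a measured line-spread function into an MTF curve. The Gaussian LSF, the 0.78 mm pixel size and the MTF10 readout are illustrative assumptions, not parameters of the simulated scanner.

```python
# Minimal sketch: estimate an MTF curve from a 1-D line-spread function (LSF).
import numpy as np

def mtf_from_lsf(lsf, pixel_size_mm):
    """Return (spatial frequencies in cycles/mm, normalized MTF)."""
    lsf = lsf - lsf.min()                  # remove baseline offset
    lsf = lsf / lsf.sum()                  # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))         # modulus of the 1-D Fourier transform
    mtf = mtf / mtf[0]                     # MTF(0) = 1 by convention
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)
    return freqs, mtf

if __name__ == "__main__":
    x = np.arange(-32, 32) * 0.78          # 0.78 mm pixels (illustrative)
    lsf = np.exp(-0.5 * (x / 2.0) ** 2)    # synthetic Gaussian LSF, sigma = 2 mm
    f, mtf = mtf_from_lsf(lsf, 0.78)
    f10 = f[np.argmax(mtf < 0.1)]          # frequency where MTF drops to 10%
    print(f"MTF10 ~ {f10:.2f} cycles/mm")
```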
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposed selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate than the line integral model (LIM) and yields better reconstruction quality. However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.
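The per-ray ART update that a system matrix (whether LIM-, DDM- or AIM-based) feeds into is simple enough to sketch; the toy below uses a small dense random matrix in place of a real sparse projection matrix, so it illustrates only the Kaczmarz-style iteration, not the fast area-integral computation proposed above.

```python
# Minimal Kaczmarz-style ART sketch with an illustrative dense system matrix.
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    """Algebraic reconstruction technique for A @ x ~= b."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)              # ||a_i||^2 per ray
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]  # project onto the ray hyperplane
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x_true = rng.random(16)                  # 4x4 "image", flattened
    A = rng.random((40, 16))                 # 40 synthetic ray sums
    b = A @ x_true
    x_rec = art(A, b)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```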
A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.
Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing
2007-01-01
Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising approaches for noninvasive molecular imaging. Many reconstruction approaches utilize iterative methods for data inversion; however, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only a single matrix-vector multiplication online, because the iteration process is executed offline. In the preiteration process, a second-order iterative scheme is employed to exponentially accelerate convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and the reconstruction speed is remarkably increased.
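The offline/online split described above can be sketched with a second-order (Newton-Schulz) iteration toward a generalized inverse, computed offline, followed by a single online matrix-vector product. The small random forward matrix and iteration count below are illustrative assumptions, not the authors' diffusion-model implementation.

```python
# Minimal sketch: build a generalized inverse offline by second-order
# (quadratically convergent) Newton-Schulz iteration, then reconstruct
# online with one matrix-vector multiply.
import numpy as np

def newton_schulz_pinv(A, n_iter=30):
    """Iterate toward the Moore-Penrose pseudoinverse of A."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # safe initial scaling
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(n_iter):                      # X_{k+1} = X_k (2I - A X_k)
        X = X @ (2.0 * I - A @ X)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.random((30, 20))           # forward model: 30 measurements, 20 unknowns
    x_true = rng.random(20)
    y = A @ x_true
    A_pinv = newton_schulz_pinv(A)     # offline pre-iteration
    x_rec = A_pinv @ y                 # online: a single matrix-vector multiply
    print("max |X - pinv(A)|:", np.abs(A_pinv - np.linalg.pinv(A)).max())
    print("reconstruction error:", np.linalg.norm(x_rec - x_true))
```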
Baumueller, Stephan; Hilty, Regina; Nguyen, Thi Dan Linh; Weder, Walter; Alkadhi, Hatem; Frauenfelder, Thomas
2016-01-01
The purpose of this study was to evaluate the influence of sinogram-affirmed iterative reconstruction (SAFIRE) on quantification of lung volume and pulmonary emphysema in low-dose chest computed tomography compared with filtered back projection (FBP). Enhanced or nonenhanced low-dose chest computed tomography was performed in 20 patients with chronic obstructive pulmonary disease (group A) and in 20 patients without lung disease (group B). Data sets were reconstructed with FBP and SAFIRE strength levels 3 to 5. Two readers semiautomatically evaluated lung volumes and automatically quantified pulmonary emphysema, and another assessed image quality. Radiation dose parameters were recorded. Lung volume did not differ significantly between FBP and SAFIRE 3 to 5 in either group (all P > 0.05). Compared with FBP, total emphysema volume was significantly lower for reconstructions with SAFIRE 4 and 5 (mean difference, 0.56 and 0.79 L; all P < 0.001). No examination had nondiagnostic image quality. Sinogram-affirmed iterative reconstruction does not alter lung volume measurements, although quantification of lung emphysema is affected at higher strength levels.
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in fixed, small increments. Next, grey-consistency fusion and logarithm demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise-suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide higher reconstruction quality relative to the fusion reconstruction method.
Kaasalainen, Touko; Palmu, Kirsi; Lampinen, Anniina; Reijonen, Vappu; Leikola, Junnu; Kivisaari, Riku; Kortesniemi, Mika
2015-09-01
Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased by up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality.
Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide
2018-06-01
Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly, by up to 50% and 100%, respectively, with increasing blending level of reconstruction. ASIR was also shown to modify the NPS curve shape. The MTF of ASIR reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP, in terms of spatial resolution, for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
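A minimal version of the ROI-based noise and CNR measurements used in such phantom studies is sketched below; the synthetic image, HU values and ROI placement are illustrative only.

```python
# Minimal sketch of ROI-based noise and contrast-to-noise ratio (CNR) measurement.
import numpy as np

def roi_stats(image, center, half_size):
    r, c = center
    roi = image[r - half_size:r + half_size, c - half_size:c + half_size]
    return roi.mean(), roi.std(ddof=1)

def cnr(image, insert_center, background_center, half_size=8):
    mu_i, _ = roi_stats(image, insert_center, half_size)
    mu_b, sigma_b = roi_stats(image, background_center, half_size)
    return abs(mu_i - mu_b) / sigma_b

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.normal(40.0, 5.0, (256, 256))     # background: 40 HU, noise SD 5 HU
    img[100:140, 100:140] += 10.0               # low-contrast insert (+10 HU)
    print("CNR:", round(cnr(img, (120, 120), (40, 40)), 2))
```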
Parallel Reconstruction Using Null Operations (PRUNO)
Zhang, Jian; Liu, Chunlei; Moseley, Michael E.
2011-01-01
A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using Singular Value Decomposition (SVD). An iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, highly accelerated parallel imaging can be performed with good image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
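For orientation, the sketch below implements plain Richardson-Lucy (Lucy-Richardson) deconvolution as a post-reconstruction step on a toy image; the paper's actual contribution, applying the deconvolution with wavelet denoising inside each OSEM update, is not reproduced, and the Gaussian PSF is an assumption.

```python
# Minimal Richardson-Lucy deconvolution sketch (post-reconstruction form).
import numpy as np

def gaussian_psf(size=11, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def convolve2d_same(img, k):
    """FFT-based, same-size circular convolution (adequate for this toy)."""
    pad = np.zeros_like(img)
    ks = k.shape[0]
    pad[:ks, :ks] = k
    pad = np.roll(pad, (-(ks // 2), -(ks // 2)), axis=(0, 1))   # center kernel at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(blurred, psf, n_iter=30):
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        ratio = blurred / np.maximum(convolve2d_same(estimate, psf), 1e-8)
        estimate *= convolve2d_same(ratio, psf_flip)            # multiplicative EM update
    return estimate

if __name__ == "__main__":
    truth = np.zeros((64, 64)); truth[28:36, 28:36] = 100.0     # small hot "lesion"
    psf = gaussian_psf()
    blurred = convolve2d_same(truth, psf)
    recovered = richardson_lucy(blurred, psf)
    print("peak before/after:", blurred.max().round(1), recovered.max().round(1))
```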
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M
2014-01-01
Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346
An iterative reconstruction of cosmological initial density fields
NASA Astrophysics Data System (ADS)
Hada, Ryuichiro; Eisenstein, Daniel J.
2018-05-01
We present an iterative method to reconstruct the linear-theory initial conditions from the late-time cosmological matter density field, with the intent of improving the recovery of the cosmic distance scale from the baryon acoustic oscillations (BAOs). We present tests using the dark matter density field in both real and redshift space generated from an N-body simulation. In redshift space at z = 0.5, we find that the displacement field reconstructed using our iterative method is more than 80% correlated with the true displacement field of the dark matter particles on scales k < 0.10 h Mpc⁻¹. Furthermore, we show that the two-point correlation function of our reconstructed density field matches that of the initial density field substantially better, especially on small scales (<40 h⁻¹ Mpc). Our redshift-space results are improved if we use an anisotropic smoothing so as to account for the reduced small-scale information along the line of sight in redshift space.
WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
2015-06-15
Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), requiring only a short acquisition time and a low radiation dose during treatment. The DTS is estimated as a deformation of a prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub-kernels. Conclusion: Composable projection operators constitute a versatile research tool which can greatly accelerate iterative registration algorithms and may be conducive to the clinical applicability of LIVE. National Institutes of Health Grant No. R01-CA184173; GPU donation by NVIDIA Corporation.
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our measurements are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
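The penalized weighted least-squares objective mentioned above can be illustrated with a tiny dense toy problem solved by gradient descent; the forward matrix, statistical weights and quadratic roughness penalty below are illustrative stand-ins, and the q-GGMRF and CS penalties of the study are not shown.

```python
# Minimal PWLS sketch: gradient descent on (y - Ax)^T W (y - Ax) + beta ||Dx||^2.
import numpy as np

def pwls(A, y, w, beta=0.1, n_iter=500):
    n = A.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]               # first-difference roughness operator
    H = A.T @ (w[:, None] * A) + beta * D.T @ D         # Hessian of the quadratic cost
    step = 1.0 / np.linalg.norm(H, 2)                   # safe step size
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
        x -= step * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x_true = np.repeat([0.0, 1.0, 0.4], 10)             # piecewise-constant object
    A = rng.random((60, x_true.size))
    var = 0.05 + 0.1 * rng.random(60)                   # per-measurement noise variance
    y = A @ x_true + rng.normal(0, np.sqrt(var))
    x_rec = pwls(A, y, w=1.0 / var)                     # statistical weights = inverse variance
    print("RMSE:", np.sqrt(np.mean((x_rec - x_true) ** 2)).round(3))
```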
NASA Astrophysics Data System (ADS)
Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping
2016-01-01
The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is used to reconstruct special objects with complex structure. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is often unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, where an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework: at each iteration the problem is reduced to solving a cubic equation through paraboloidal surrogates. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, with different norms studied in detail, including l2, l1, l0.5, and an l2-l0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work in which only a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of the FTO, in which the inner air hole and the tungsten shell are visible.
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% added noise, and successful image reconstruction was demonstrated, implying its robustness.
Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2016-01-01
Background Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). Purpose To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). Material and Methods This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Results Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). Conclusion In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity. PMID:27110389
Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui
2017-03-01
To propose a novel reconstruction method using parallel imaging with a low-rank constraint to accelerate high-resolution multishot spiral diffusion imaging. The undersampled high-resolution diffusion data were reconstructed based on a low-rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. Self-navigated phase compensation using the low-resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low-rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantifications and compared with an ℓ1-regularized compressed sensing method and the conventional iterative SENSE method using the same datasets. It was shown that, for the same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion-weighted images and DTI-derived index maps, when evaluated at different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high-resolution diffusion-weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
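The low-rank building block of such methods can be sketched by stacking the per-shot images into a Casorati matrix and truncating its singular values; the SENSE coupling, spiral sampling and phase compensation of the actual LR-SENSE pipeline are not shown, and the rank-1 toy below is illustrative.

```python
# Minimal low-rank projection sketch: rank-truncate a Casorati matrix of shots.
import numpy as np

def low_rank_project(shots, rank):
    """shots: array (n_shots, ny, nx). Returns the rank-truncated stack."""
    n_shots, ny, nx = shots.shape
    casorati = shots.reshape(n_shots, ny * nx).T            # pixels x shots
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s[rank:] = 0.0                                           # keep only the leading components
    return (U @ np.diag(s) @ Vt).T.reshape(n_shots, ny, nx)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    base = rng.random((32, 32))                              # shared underlying image
    shots = base[None] + 0.2 * rng.standard_normal((8, 32, 32))   # 8 noisy "shots"
    cleaned = low_rank_project(shots, rank=1)
    print("mean abs error before:", np.abs(shots - base).mean().round(3))
    print("mean abs error after: ", np.abs(cleaned - base).mean().round(3))
```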
Wu, Yan; Jiang, Yaojun; Han, Xueli; Wang, Mingyue; Gao, Jianbo
2018-02-01
To investigate the repeatability and accuracy of quantitative CT (QCT) measurement of bone mineral density (BMD) at low mAs using the iterative model reconstruction (IMR) technique, based on a phantom model. A European spine phantom (ESP) was measured 10 times on a Philips Brilliance iCT Elite FHD scanner. Data were transmitted to the QCT PRO workstation to measure the BMD (mg/cm3) of the ESP (L1, L2, L3). Scanning was performed at a tube voltage of 120 kV, with tube outputs of 20, 30, 40, 50 and 60 mAs in groups A-E, respectively. All data were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (iDose4, levels 1-6) and IMR (levels 1-3). ROIs were placed in the middle of the L1, L2 and L3 phantom sections in each group. CT values, noise and contrast-to-noise ratio (CNR) were measured and calculated. One-way analysis of variance (ANOVA) was used to compare BMD values across different mAs settings and different IMR levels. Radiation dose [volume CT dose index (CTDIvol) and dose-length product (DLP)] was positively correlated with tube current. In L1, which has low BMD, different mAs settings with FBP showed P<0.05, indicating statistically significant differences in measured BMD. For the iterative algorithms, different mAs settings under the same algorithm showed P>0.05, indicating no difference in BMD. P>0.05 was also observed among the BMD values of L1, L2 and L3 at the same mAs combined with the various iterative reconstructions. The BMD of L1 varied greatly with FBP reconstruction, with less variation observed for IMR [1] and IMR [2]. The BMD of L2 varied more with FBP reconstruction and less with IMR [2]. The BMD of L3 varied greatly with FBP reconstruction, and less with all levels of iDose4 and with IMR [2]. In addition, as mAs increased, the CNRs of the various algorithms continued to increase; CNR was lowest with FBP and highest with IMR [3]. Repeated QCT measurements of BMD in the ESP showed that BMD values in L1-L3 varied least with the IMR [2] algorithm. It is recommended to scan at 120 kV with 20 mAs combined with the IMR [2] algorithm. In this way, spinal BMD can be accurately measured by QCT while the radiation dose is significantly reduced and image quality improved.
Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.
Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A
2007-01-01
CT images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data-driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods
Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908
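A CPU/NumPy sketch of the split Bregman kernel that such a GPU implementation accelerates is shown below for anisotropic TV denoising; the CS data-consistency step and the GPU port are omitted, and mu, lam and the test image are illustrative.

```python
# Minimal split Bregman sketch for anisotropic TV denoising with periodic boundaries.
import numpy as np

def grad_x(u): return np.roll(u, -1, axis=1) - u       # periodic forward differences
def grad_y(u): return np.roll(u, -1, axis=0) - u
def grad_xT(v): return np.roll(v, 1, axis=1) - v       # adjoints of the gradients
def grad_yT(v): return np.roll(v, 1, axis=0) - v

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sb_tv_denoise(f, mu=5.0, lam=2.0, n_iter=60):
    ny, nx = f.shape
    ky = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    kx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    denom = mu + lam * (ky[:, None] + kx[None, :])      # Fourier symbol of mu + lam*(D^T D)
    u = f.copy()
    dx = dy = bx = by = np.zeros_like(f)
    for _ in range(n_iter):
        rhs = mu * f + lam * (grad_xT(dx - bx) + grad_yT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))        # exact u-subproblem solve
        ux, uy = grad_x(u), grad_y(u)
        dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
        bx, by = bx + ux - dx, by + uy - dy             # Bregman parameter updates
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    clean = np.zeros((128, 128)); clean[32:96, 48:80] = 1.0
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    denoised = sb_tv_denoise(noisy)
    print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)).round(3))
    print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)).round(3))
```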
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.
Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John
2016-09-08
The purpose of this study was to evaluate the performance of the third-generation model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user to choose the balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and, in general, improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low dose levels. © 2016 The Authors.
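The core of a 2-D NPS measurement like the one above can be sketched in a few lines: repeated uniform ROIs are detrended, Fourier transformed and ensemble averaged. The white-noise realizations and 0.5 mm pixel size below are illustrative, not the scanner's values.

```python
# Minimal 2-D noise power spectrum (NPS) estimation sketch from repeated uniform ROIs.
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """noise_rois: array (n_realizations, N, N) of uniform ROIs (mean removed internally)."""
    n, N, _ = noise_rois.shape
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)   # remove DC / detrend
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    nps = spectra.mean(axis=0) * pixel_size_mm**2 / (N * N)           # units: HU^2 mm^2
    freqs = np.fft.fftfreq(N, d=pixel_size_mm)                        # cycles/mm
    return np.fft.fftshift(freqs), np.fft.fftshift(nps)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    rois = rng.normal(0.0, 10.0, (50, 64, 64))     # 50 realizations, noise SD 10 HU
    f, nps = nps_2d(rois, pixel_size_mm=0.5)
    # for white noise, integrating the NPS over frequency recovers the variance
    var_from_nps = nps.sum() * (1.0 / (64 * 0.5)) ** 2
    print("variance from NPS:", round(var_from_nps, 1), " (expected ~100)")
```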
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation- and normalization-based schemes have been introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.
Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C
2016-10-01
The authors are currently developing a dual-resolution multiple-pinhole microSPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space, and can provide estimates of image uncertainty, along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The convergence of the OE algorithm was evaluated by calculating the total-image entropy and by measuring the counts in a volume-of-interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE. A combination of the two methods showed decreased reconstruction time and no loss of performance when compared to conventional OE alone. OE-reconstructed images were found to be quantitatively and qualitatively similar to MLEM, yet OE also provided estimates of image uncertainty. Some acceleration of the reconstruction can be gained through the use of parallel computing. The OE algorithm is useful for reconstructing multiple-pinhole SPECT data and can be easily modified for real-time reconstruction.
Assessment of chest CT at CTDIvol less than 1 mGy with iterative reconstruction techniques.
Padole, Atul; Digumarthy, Subba; Flores, Efren; Madan, Rachna; Mishra, Shelly; Sharma, Amita; Kalra, Mannudeep K
2017-03-01
To assess the image quality of chest CT reconstructed with image-based iterative reconstruction (SafeCT; MedicVision®, Tirat Carmel, Israel), adaptive statistical iterative reconstruction (ASIR; GE Healthcare, Waukesha, WI) and model-based iterative reconstruction (MBIR; GE Healthcare, Waukesha, WI) techniques at a CT dose index volume (CTDIvol) <1 mGy. In an institutional review board-approved study, 25 patients gave written informed consent for acquisition of three reduced-dose (0.25-, 0.4- and 0.8-mGy) chest CT scans after standard-of-care CT (8 mGy) on a 64-channel multidetector CT (MDCT), reconstructed with SafeCT, ASIR and MBIR. Two board-certified thoracic radiologists evaluated images from the lowest to the highest dose of the reduced-dose CT series and subsequently the standard-of-care CT. Of the 182 detected lesions, the missed lesions numbered 35 at 0.25, 24 at 0.4 and 9 at 0.8 mGy with SafeCT, ASIR and MBIR, respectively. The most frequently missed lesions were non-calcified lung nodules (NCLNs): 25/112 (<5 mm) at 0.25, 18/112 (<5 mm) at 0.4 and 3/112 (<4 mm) at 0.8 mGy. Lung nodule detection rates were 78%, 84% and 97% at 0.25, 0.4 and 0.8 mGy, respectively, regardless of iterative reconstruction technique (IRT). Most mediastinal structures were not sufficiently seen at 0.25-0.8 mGy. NCLNs can be missed in chest CT at a CTDIvol of <1 mGy (0.25, 0.4 and 0.8 mGy) regardless of IRT. Most lung nodules (97%) were detected at a CTDIvol of 0.8 mGy, and most mediastinal structures were not sufficiently seen at 0.25-0.8 mGy. Advances in knowledge: NCLNs can be missed regardless of IRT in chest CT at a CTDIvol of <1 mGy. The performance of ASIR, SafeCT and MBIR was similar for lung nodule detection at 0.25, 0.4 and 0.8 mGy.
SU-E-P-49: Evaluation of Image Quality and Radiation Dose of Various Unenhanced Head CT Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Khan, M; Alapati, K
2015-06-15
Purpose: To evaluate the diagnostic value of various unenhanced head CT protocols and to determine an acceptable radiation dose level for head CT examinations. Methods: Our retrospective analysis included 3 groups, 20 patients per group, who underwent clinical routine unenhanced adult head CT examination. All exams were performed axially at 120 kVp. Three protocols, 380 mAs without iterative reconstruction and without automAs, 340 mAs with iterative reconstruction without automAs, and 340 mAs with iterative reconstruction and automAs, were applied to the three groups, respectively. The images were reconstructed with H30 and J30 kernels for the brain window and H60 and J70 for the bone window. Images acquired with the three protocols were randomized and blindly reviewed by three radiologists. A 5-point scale was used to rate each exam. The percentage of exams scored above 3 and the average scores of each protocol were calculated for each reviewer and tissue type. Results: For protocols without automAs, the average bone-window scores with iterative reconstruction were higher than those without iterative reconstruction for each reviewer, although the radiation dose was 10% lower. 100% of exams were scored 3 or higher, and the average scores were above 4 for both brain and bone reconstructions. The CTDIvol values were 64.4 and 57.8 mGy for 380 and 340 mAs, respectively. With automAs, the radiation dose varied with head size, resulting in an average CTDIvol of 47.5 mGy (range 39.5-56.5 mGy). 93% and 98% of exams were scored greater than 3 for brain and bone windows, respectively. The diagnostic confidence level and image quality of exams with automAs were lower than those without automAs for each reviewer. Conclusion: Based on these results, the mAs was reduced to 300 with automAs off for head CT exams. The radiation dose was 20% lower than that of the original protocol, and the CTDIvol was reduced to 51.2 mGy.
Sauter, Andreas P; Kopp, Felix K; Münzel, Daniela; Dangelmaier, Julia; Renz, Martin; Renger, Bernhard; Braren, Rickmer; Fingerle, Alexander A; Rummeny, Ernst J; Noël, Peter B
2018-05-01
Evaluation of the influence of iterative reconstruction, tube settings and patient habitus on the accuracy of iodine quantification with dual-layer spectral CT (DL-CT). A CT abdomen phantom with different extension rings and four iodine inserts (1, 2, 5 and 10 mg/ml) was scanned on a DL-CT. The phantom was scanned with tube voltages of 120 and 140 kVp and CTDIvol of 2.5, 5, 10 and 20 mGy. Reconstructions were performed for eight levels of iterative reconstruction (i0-i7). Diagnostic dose levels are classified depending on patient size and radiation dose. Measurements of iodine concentration showed accurate and reliable results. Taking all CTDIvol levels into account, the mean absolute percentage difference (MAPD) showed less accuracy for low CTDIvol levels (2.5 mGy: 34.72%) than for high CTDIvol levels (20 mGy: 5.89%). At diagnostic dose levels, accurate quantification of iodine was possible (MAPD 3.38%). Level of iterative reconstruction did not significantly influence iodine measurements. Iodine quantification worked more accurately at a tube voltage of 140 kVp. Phantom size had a considerable effect only at low dose levels; at diagnostic dose levels the effect of phantom size decreased (MAPD <5% for all phantom sizes). With DL-CT, even low iodine concentrations can be accurately quantified. Accuracies are higher when diagnostic radiation doses are employed. Copyright © 2018 Elsevier B.V. All rights reserved.
ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.
Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L
2011-08-01
In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved; this reduction is especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, both in clean and noisy environments.
SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyu, Q; Li, B; Southern Medical University, Guangzhou
2015-06-15
Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we proposed a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance of two neighboring pixels was used to weight the smoothing parameter in PWLS. For pixels with a large BRIEF distance, a weaker smoothness constraint is enforced. Image features are better preserved through such a design. The performance of the PWLS algorithm with the BRIEF penalty was evaluated with a CatPhan 600 phantom. Results: The image quality reconstructed by the proposed PWLS-BRIEF algorithm is superior to that by the conventional PWLS method and the standard FDK method. At matched noise level, edges in the PWLS-BRIEF reconstructed image are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.
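As a rough illustration of the penalty design described above, the sketch below builds a simplified BRIEF-like binary descriptor (random intensity comparisons inside a local patch) and maps the Hamming distance between neighboring pixels' descriptors to a smoothing weight; the patch size, number of bits, and exponential weight scale h are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def brief_descriptors(img, n_bits=64, patch=5, seed=0):
    """Simplified BRIEF-like binary descriptor for every pixel.

    Each bit is the result of comparing two fixed random offsets inside a
    (patch x patch) neighborhood around the pixel.
    """
    rng = np.random.default_rng(seed)
    half = patch // 2
    offs = rng.integers(-half, half + 1, size=(n_bits, 2, 2))  # random offset pairs, reused for all pixels
    padded = np.pad(img, half, mode="edge")
    H, W = img.shape
    desc = np.zeros((H, W, n_bits), dtype=bool)
    for b, ((dy1, dx1), (dy2, dx2)) in enumerate(offs):
        a = padded[half + dy1:half + dy1 + H, half + dx1:half + dx1 + W]
        c = padded[half + dy2:half + dy2 + H, half + dx2:half + dx2 + W]
        desc[..., b] = a > c
    return desc

def penalty_weights(img, h=8.0):
    """Weight of the smoothness penalty between horizontally adjacent pixels.

    Large BRIEF (Hamming) distance -> small weight -> weaker smoothing,
    so edges and features are preserved.
    """
    d = brief_descriptors(img)
    hamming = np.count_nonzero(d[:, :-1] ^ d[:, 1:], axis=-1)  # distance to right neighbor
    return np.exp(-hamming / h)

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[:, 32:] = 100.0          # image with one vertical edge
    w = penalty_weights(img + np.random.randn(64, 64))
    print("weight across the edge:", w[32, 31], "weight inside a flat region:", w[32, 5])
```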
Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings
NASA Astrophysics Data System (ADS)
Hussein Maibed, Zena
2018-05-01
In this paper, we introduce a concept of general extended mappings, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive mappings and of general extended mappings. The existence of common fixed points is also studied for these processes in Hilbert spaces.
NASA Astrophysics Data System (ADS)
Kupke, Renate; Gavel, Don; Johnson, Jess; Reinig, Marc
2008-07-01
We investigate the non-modulating pyramid wave-front sensor's (P-WFS) implementation in the context of Lick Observatory's Villages visible light AO system on the Nickel 1-meter telescope. A complete adaptive optics correction, using a non-modulated P-WFS in slope sensing mode as a bootstrap to a regime in which the P-WFS can act as a direct phase sensor, is explored. An iterative approach to reconstructing the wave-front phase, given the pyramid wave-front sensor's non-linear signal, is developed. Using Monte Carlo simulations, the iterative reconstruction method's photon noise propagation behavior is compared to both the pyramid sensor used in slope-sensing mode and the traditional Shack-Hartmann sensor's theoretical performance limits. We determine that bootstrapping using the P-WFS as a slope sensor does not offer enough correction to bring the phase residuals into a regime in which the iterative algorithm can provide much improvement in phase measurement. It is found that both the iterative phase reconstructor and the slope reconstruction methods offer an advantage in noise propagation over Shack-Hartmann sensors.
NASA Astrophysics Data System (ADS)
Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun
2017-06-01
We report an improved technique for diffuse foreground minimization from Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the old and usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, now, to nullify the leakage, during each iteration, some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense, are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results with some differences at different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
Ziegler, Susanne; Jakoby, Bjoern W; Braun, Harald; Paulus, Daniel H; Quick, Harald H
2015-12-01
In integrated PET/MR hybrid imaging the evaluation of PET performance characteristics according to the NEMA standard NU 2-2007 is challenging because of incomplete MR-based attenuation correction (AC) for phantom imaging. In this study, a strategy for CT-based AC of the NEMA image quality (IQ) phantom is assessed. The method is systematically evaluated in NEMA IQ phantom measurements on an integrated PET/MR system. NEMA IQ measurements were performed on the integrated 3.0 Tesla PET/MR hybrid system (Biograph mMR, Siemens Healthcare). AC of the NEMA IQ phantom was realized by an MR-based and by a CT-based method. The suggested CT-based AC uses a template μ-map of the NEMA IQ phantom and a phantom holder for exact repositioning of the phantom on the system's patient table. The PET image quality parameters contrast recovery, background variability, and signal-to-noise ratio (SNR) were determined and compared for both phantom AC methods. Reconstruction parameters of an iterative 3D OP-OSEM reconstruction were optimized for the highest lesion SNR in NEMA IQ phantom imaging. Using a CT-based NEMA IQ phantom μ-map on the PET/MR system is straightforward and allowed performing accurate NEMA IQ measurements on the hybrid system. MR-based AC was determined to be insufficient for PET quantification in the tested NEMA IQ phantom because only photon attenuation caused by the MR-visible phantom filling, but not the phantom housing, is considered. Using the suggested CT-based AC, the highest SNR in this phantom experiment for small lesions (≤13 mm) was obtained with 3 iterations, 21 subsets and 4 mm Gaussian filtering. This study suggests CT-based AC for the NEMA IQ phantom when performing PET NEMA IQ measurements on an integrated PET/MR hybrid system. The superiority of CT-based AC for this phantom is demonstrated by comparison to measurements using MR-based AC. Furthermore, optimized PET image reconstruction parameters are provided for the highest lesion SNR in NEMA IQ phantom measurements.
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Tie
2017-11-01
In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT), based on the radiative transfer equation (RTE), which models light propagation more accurately than diffusion approximation (DA). We investigate the reconstruction of absorption coefficient and scattering coefficient of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its cheap computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of original and adjoint RTEs in the framework of optimization, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
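For reference, the Barzilai-Borwein (BB) step sizes used in gradient schemes of this kind are commonly written as follows (a standard textbook form; the exact variant used by the authors is not specified in the abstract):

```latex
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\qquad
s_{k-1} = x_k - x_{k-1},\quad
y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1}),
```

with the update $x_{k+1} = x_k - \alpha_k \nabla f(x_k)$ applied to the coefficient estimates.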
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CS-MRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
Cardiac-gated parametric images from 82Rb PET from dynamic frames and direct 4D reconstruction.
Germino, Mary; Carson, Richard E
2018-02-01
Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82Rb dataset, including a perfusion defect, as well as a human 82Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of the iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performance are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
Bindu, G.; Semenov, S.
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown, implying its robustness. PMID:24058889
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. Sparse-view CT is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, with advanced iterative image reconstructions yielding varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), which is one of the widely used deep-learning methods, to find missing projection data, and compared its performance with that of other interpolation techniques.
Nguyen, Van-Giang; Lee, Soo-Jin
2016-07-01
Iterative reconstruction from Compton scattered data is known to be computationally more challenging than that from conventional line-projection based emission data, in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximate methods that use an unmatched pair of a ray-tracing forward projector and a voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction which is more accurate because it uses an exactly matched pair of projector and backprojector. To calculate the conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For conic backprojection, to obtain the true adjoint of the conic forward projection while retaining the computational efficiency of the GPU, we use a voxel-driven RTM which is essentially the same as the standard RTM used for the conic forward projector. Our simulation results show that, while the new method is about 3 times slower than the approximate method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector-backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
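The "exactly matched" pair means the backprojector is the true adjoint (transpose) of the forward projector. A minimal numerical sketch of how such a pairing can be verified is shown below, using a random sparse matrix as a stand-in for the conic-projection operator (the matrix is illustrative, not the authors' ray-traced model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in system matrix: rows = conic projections, columns = image voxels.
A = rng.random((200, 500)) * (rng.random((200, 500)) < 0.05)

def forward(x):            # conic forward projection (here: A @ x)
    return A @ x

def backproject(p):        # matched backprojector = true adjoint (A.T @ p)
    return A.T @ p

x = rng.random(500)        # arbitrary image
p = rng.random(200)        # arbitrary projection data

# Adjoint test: <A x, p> must equal <x, A^T p> for a matched pair.
lhs = np.dot(forward(x), p)
rhs = np.dot(x, backproject(p))
print(f"<Ax,p> = {lhs:.6f}, <x,A^T p> = {rhs:.6f}, relative error = {abs(lhs - rhs) / abs(lhs):.2e}")
```

For an unmatched pair the two inner products differ, and such mismatches are what can degrade the accuracy of iterative methods over many iterations, which is the motivation given in the abstract for the matched design.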
Nam, S B; Jeong, D W; Choo, K S; Nam, K J; Hwang, J-Y; Lee, J W; Kim, J Y; Lim, S J
2017-12-01
To compare the image quality of computed tomography angiography (CTA) reconstructed by sinogram-affirmed iterative reconstruction (SAFIRE) with that of advanced modelled iterative reconstruction (ADMIRE) in children with congenital heart disease (CHD). Thirty-one children (8.23±13.92 months) with CHD who underwent CTA were enrolled. Images were reconstructed using SAFIRE (strength 5) and ADMIRE (strength 5). Objective image qualities (attenuation, noise) were measured in the great vessels and heart chambers. Two radiologists independently calculated the contrast-to-noise ratio (CNR) by measuring the intensity and noise of the myocardial walls. Subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery were also graded by the two radiologists independently. The objective image noise of ADMIRE was significantly lower than that of SAFIRE in the right atrium, right ventricle, and myocardial wall (p<0.05); however, there were no significant differences observed in the attenuations among the four chambers and great vessels, except in the pulmonary arteries (p>0.05). The mean CNR values were 21.56±10.80 for ADMIRE and 18.21±6.98 for SAFIRE, which were significantly different (p<0.05). In addition, the diagnostic confidence of ADMIRE was significantly lower than that of SAFIRE (p<0.05), while the subjective image noise and sharpness of ADMIRE were not significantly different (p>0.05). CTA using ADMIRE was superior to SAFIRE when comparing the objective and subjective image quality in children with CHD. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Lee, Sangyun; Kwon, Heejin; Cho, Jihan
2016-12-01
To investigate the image quality characteristics of abdominal computed tomography (CT) scans reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) vs the currently applied adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved study included 35 consecutive patients who underwent CT of the abdomen. Among these 35 patients, 27 with focal liver lesions underwent abdomen CT with a 128-slice multidetector unit using the following parameters: fixed noise index of 30, 1.25 mm slice thickness, 120 kVp, and a gantry rotation time of 0.5 seconds. CT images were analyzed depending on the method of reconstruction: ASIR (30%, 50%, and 70%) vs ASIR-V (30%, 50%, and 70%). Three radiologists independently assessed randomized images in a blinded manner. Imaging sets were compared with respect to focal lesion detection numbers, overall image quality, and objective noise using a paired-sample t test. Interobserver agreement was assessed with the intraclass correlation coefficient. The detection of small focal liver lesions (<10 mm) was significantly higher when ASIR-V was used compared to ASIR (P <0.001). Subjective image noise, artifact, and objective image noise in the liver were generally significantly better for ASIR-V compared to ASIR, especially for 50% ASIR-V. Image sharpness and diagnostic acceptability were significantly worse for 70% ASIR-V compared to various levels of ASIR. Images analyzed using 50% ASIR-V were significantly better than the three different series of ASIR or other ASIR-V conditions at providing diagnostically acceptable CT scans without compromising image quality and in the detection of focal liver lesions. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.
2014-03-01
Model observers were created and compared to human observers for the detection of low contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss Hotelling Observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV-noise with standard deviation (std) proportional to DV std; (k2) DV-noise with constant std; (k3) CO-noise with constant std across channels; and (k4) CO-noise in each channel with std proportional to CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated using maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) as compared to filtered back projection (FBP). Detection in IMR was better than FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across independent variables (dose, size, contrast, and reconstruction) at fixed display window. However, Model(k1) performed better when considering model complexity using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.
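A hedged sketch of the channelized Hotelling observer with internal noise added to the decision variable, i.e. the Model(k1) variant described above; the Laguerre-Gauss channel width, image size, target, and noise parameters are illustrative assumptions, and the final pairwise comparison is only a simple performance summary rather than a full 4AFC simulation.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(n_channels=5, size=64, a=15.0):
    """Laguerre-Gauss channel profiles on a size x size grid (channel width a, in pixels)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for p in range(n_channels):
        c = np.sqrt(2.0) / a * np.exp(-np.pi * r2 / a**2) * eval_laguerre(p, 2.0 * np.pi * r2 / a**2)
        chans.append(c.ravel())
    return np.stack(chans, axis=1)                              # (size*size, n_channels)

def cho_decision_variables(imgs_absent, imgs_present, k1=0.5, seed=0):
    """Train a 5-channel CHO and return decision variables with DV internal noise (Model k1)."""
    U = lg_channels()
    v0 = imgs_absent.reshape(len(imgs_absent), -1) @ U          # channel outputs, signal absent
    v1 = imgs_present.reshape(len(imgs_present), -1) @ U        # channel outputs, signal present
    K = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    w = np.linalg.solve(K, v1.mean(0) - v0.mean(0))             # Hotelling template in channel space
    t0, t1 = v0 @ w, v1 @ w                                     # decision variables
    rng = np.random.default_rng(seed)
    sigma = k1 * np.std(np.concatenate([t0, t1]))               # internal noise std proportional to DV std
    return t0 + rng.normal(0, sigma, t0.shape), t1 + rng.normal(0, sigma, t1.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    signal = np.zeros((64, 64)); signal[28:36, 28:36] = 2.0     # low-contrast square target
    absent = rng.normal(0, 5, (300, 64, 64))
    present = absent + signal
    t0, t1 = cho_decision_variables(absent, present)
    pc = np.mean(t1 > t0)                                       # simple paired-comparison summary
    print(f"fraction of pairs where signal-present DV exceeds signal-absent DV: {pc:.2f}")
```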
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, and then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
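A minimal sketch of the MLEM update named in the abstract, written for a generic nonnegative system matrix; the random matrix below is an illustrative stand-in for the PEM ray-traced system model.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood expectation maximization for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])                 # uniform nonnegative start
    sens = A.sum(axis=0) + eps              # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps                  # forward projection of the current estimate
        x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((400, 100)) * (rng.random((400, 100)) < 0.1)   # toy sparse system matrix
    x_true = rng.random(100) * 10
    y = rng.poisson(A @ x_true)                                    # noisy coincidence data
    x_hat = mlem(A, y.astype(float))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```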
NASA Astrophysics Data System (ADS)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
Benz, Dominik C; Gräni, Christoph; Mikulicic, Fran; Vontobel, Jan; Fuchs, Tobias A; Possner, Mathias; Clerc, Olivier F; Stehli, Julia; Gaemperli, Oliver; Pazhenkottil, Aju P; Buechel, Ronny R; Kaufmann, Philipp A
The clinical utility of a latest generation iterative reconstruction algorithm (adaptive statistical iterative reconstruction [ASiR-V]) has yet to be elucidated for coronary computed tomography angiography (CCTA). This study evaluates the impact of ASiR-V on signal, noise and image quality in CCTA. Sixty-five patients underwent clinically indicated CCTA on a 256-slice CT scanner using an ultralow-dose protocol. Data sets from each patient were reconstructed at 6 different levels of ASiR-V. Signal intensity was measured by placing a region of interest in the aortic root, LMA, and RCA. Similarly, noise was measured in the aortic root. Image quality was visually assessed by 2 readers. Median radiation dose was 0.49 mSv. Image noise decreased with increasing levels of ASiR-V resulting in a significant increase in signal-to-noise ratio in the RCA and LMA (P < 0.001). Correspondingly, image quality significantly increased with higher levels of ASiR-V (P < 0.001). ASiR-V yields substantial noise reduction and improved image quality enabling introduction of ultralow-dose CCTA.
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
NASA Astrophysics Data System (ADS)
Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan
2018-03-01
In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.
Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi
2016-01-01
Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, the change of image parameters by ASIR as compared to filtered back projection (FBP) may influence the quantification of coronary calcium. To investigate the influence of ASIR on calcium quantification in comparison to FBP. In 352 patients, CT images were reconstructed using FBP alone, FBP combined with ASIR 30%, 50%, 70%, and ASIR 100% based on the same raw data. Image noise, plaque density, Agatston scores and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP (each P < 0.001). Use of ASIR reduced the Agatston score by 10.5% to 31.0%. In calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, adaptive statistical iterative reconstruction (ASIR) may significantly decrease Agatston scores and calcium volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
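For context, a simplified sketch of the standard per-slice Agatston scoring rule referred to above (lesion area times a density weight derived from the peak HU, with a 130-HU calcium threshold); clinical implementations add rules such as slice-thickness normalization and vendor-specific lesion filtering that are omitted here.

```python
import numpy as np
from scipy import ndimage

def agatston_weight(peak_hu):
    """Density weight from the peak HU of a lesion (standard 130/200/300/400 bands)."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0):
    """Agatston score of a single axial slice of HU values (no slice-thickness scaling)."""
    mask = slice_hu >= 130                              # calcium threshold
    labels, n = ndimage.label(mask)
    score = 0.0
    for lab in range(1, n + 1):
        region = labels == lab
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                                    # too small to count as a lesion
        score += area * agatston_weight(slice_hu[region].max())
    return score

if __name__ == "__main__":
    sl = np.full((128, 128), -50.0)                     # soft-tissue background
    sl[40:44, 40:44] = 250.0                            # small, moderately dense plaque
    sl[80:90, 80:90] = 450.0                            # larger, dense plaque
    print("slice Agatston score:", agatston_score(sl, pixel_area_mm2=0.25))
```

Because the weight depends on the peak HU and the area on a fixed 130-HU threshold, any reconstruction that lowers maximum CT values or shrinks the thresholded plaque, as reported for ASIR above, directly lowers the score.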
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example, an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as the percentage standard deviation in the uniform phantom region (%STDunif), activity recovery coefficients for the FDG-filled rods (RCrod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SORwat and SORair). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNRrod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D the number of iterations, was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNRrod for the small-diameter rods was obtained using MAP with uniform variance and β = 0.4. This setting led to RCrod,1mm = 0.21, RCrod,2mm = 0.57, %STDunif = 1.38, SORwat = 0.0011, and SORair = 0.00086. However, the highest activity recovery for the smallest rods with still very small %STDunif was obtained using β = 0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). The highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
Impact of view reduction in CT on radiation dose for patients
NASA Astrophysics Data System (ADS)
Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.
2017-08-01
Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to solve the reconstruction problem from a limited number of projections. This allows a reduction of the radiation exposure of patients during data acquisition. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks in CT. To address them effectively, we adapted the method for sparse linear equations and sparse least squares (LSQR) with soft threshold filtering (STF) and the fast iterative shrinkage-thresholding algorithm (FISTA) to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
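A compact sketch of the standard FISTA iteration with soft-threshold filtering for an L1-regularized least-squares problem, the algorithmic family named above; the dense random operator, step size, and regularization weight are illustrative stand-ins for a CT projector and tuned parameters.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding (the proximal operator of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum/extrapolation step
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))        # underdetermined system (few "views")
    x_true = np.zeros(200); x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
    b = A @ x_true
    x_hat = fista(A, b, lam=0.05)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the CT setting, A would be the sparse projection operator and the L1 term would typically act on a sparsifying transform of the image rather than on the image itself.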
NASA Astrophysics Data System (ADS)
Wells, R. G.; Gifford, H. C.; Pretorius, P. H.; Famcombe, T. H.; Narayanan, M. V.; King, M. A.
2002-06-01
We have demonstrated an improvement due to attenuation correction (AC) at the task of lesion detection in thoracic SPECT images. However, increased noise in the transmission data due to aging sources or very large patients, and misregistration of the emission and transmission maps, can reduce the accuracy of the AC and may result in a loss of lesion detectability. We investigated the impact of noise in and misregistration of transmission data, on the detection of simulated Ga-67 thoracic lesions. Human-observer localization-receiver-operating-characteristic (LROC) methodology was used to assess performance. Both emission and transmission data were simulated using the MCAT computer phantom. Emission data were reconstructed using OSEM incorporating AC and detector resolution compensation. Clinical noise levels were used in the emission data. The transmission-data noise levels ranged from zero (noise-free) to 32 times the measured clinical levels. Transaxial misregistrations of 0.32, 0.63, and 1.27 cm between emission and transmission data were also examined. Three different algorithms were considered for creating the attenuation maps: filtered backprojection (FBP), unbounded maximum-likelihood (ML), and block-iterative transmission AB (BITAB). Results indicate that a 16-fold increase in the noise was required to eliminate the benefit afforded by AC, when using FBP or ML to reconstruct the attenuation maps. When using BITAB, no significant loss in performance was observed for a 32-fold increase in noise. Misregistration errors are also a concern as even small errors here reduce the performance gains of AC.
Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo
2014-05-01
To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules in each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics of the CT images were superior for IMR compared with FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with use of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Penalized weighted least-squares approach for low-dose x-ray computed tomography
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on the above observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and the optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses the Gauss-Seidel iterative calculation to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show a comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in the reconstructed images. Computer simulation concurs with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low contrast environment. The KL-PWLS noise reduction may have an advantage in computation for low-dose CT imaging, especially for dynamic high-resolution studies.
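For reference, a PWLS objective of the kind described above can be written generically as follows (notation ours, with Σ the diagonal covariance built from the estimated mean-variance relationship and R a spatial smoothness penalty with weight β):

```latex
\Phi(q) \;=\; \bigl(\hat{y} - q\bigr)^{\top} \Sigma^{-1} \bigl(\hat{y} - q\bigr) \;+\; \beta\, R(q),
\qquad
\Sigma = \operatorname{diag}\{\sigma_i^{2}\},
```

where $\hat{y}$ is the measured (or KL-transformed) sinogram component and $q$ is the ideal sinogram to be estimated before FBP reconstruction.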
Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl
2015-05-01
To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled for the study and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image analyses, and polyp detection were assessed. Objective image noise analysis demonstrates significant noise reduction using MBIR technique (P < .05) despite being acquired at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of colonic wall (three-dimensional) where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7), STD (dose-length product, 483.6). LD MBIR CTC objectively shows improved image noise using parameters in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference but because of small numbers needs further validation. Average dose reduction of 47% can be achieved. This study confirms feasibility of using MBIR in this context of CTC in symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Debatin, Maurice; Hesser, Jürgen
2015-01-01
Reducing the amount of time for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases sales. This can be achieved by reducing the dose and the pulse length of the CT system and by reducing the number of projections used for reconstruction. In this paper, a novel generalized Anisotropic Total Variation regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: Total Variation, Adaptive weighted Total Variation and Filtered Backprojection. The novel regularization function uses a priori information about the Gradient Magnitude Distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a Relative Root Mean Square Error that is up to 11 times lower than Total Variation-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g. for an SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by Total Variation, the projection number and the pulse length, and hence the acquisition time and the dose, can be reduced by a factor of approximately 3.5 when AwTV is used and a factor of approximately 6.7 when our proposed algorithm is used.
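As a small illustration of the weighted anisotropic TV idea discussed above, the sketch below weights each finite-difference term by an exponential function of the local intensity jump so that strong edges are penalized less; the exponential weight and scale δ follow a common AwTV convention and are assumptions here, not the authors' exact GATV formulation.

```python
import numpy as np

def weighted_anisotropic_tv(img, delta=5.0):
    """Weighted anisotropic TV: sum of |finite differences|, each scaled by
    w = exp(-(difference/delta)^2) so that large jumps (edges) are smoothed less."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    wx = np.exp(-(dx / delta) ** 2)
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(wx * np.abs(dx)) + np.sum(wy * np.abs(dy))

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[:, 32:] = 100.0        # piecewise-constant image with one edge
    noisy = img + np.random.default_rng(0).normal(0, 2, img.shape)
    # The edge contributes almost nothing to the weighted penalty, while noise does,
    # which is why such penalties favor noise removal over edge blurring.
    print("weighted TV (clean):", weighted_anisotropic_tv(img))
    print("weighted TV (noisy):", weighted_anisotropic_tv(noisy))
```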
Accurate tissue characterization in low-dose CT imaging with pure iterative reconstruction.
Murphy, Kevin P; McLaughlin, Patrick D; Twomey, Maria; Chan, Vincent E; Moloney, Fiachra; Fung, Adrian J; Chan, Faimee E; Kao, Tafline; O'Neill, Siobhan B; Watson, Benjamin; O'Connor, Owen J; Maher, Michael M
2017-04-01
We assess the ability of low-dose hybrid iterative reconstruction (IR) and 'pure' model-based IR (MBIR) images to maintain accurate Hounsfield unit (HU)-determined tissue characterization. Standard-protocol (SP) and low-dose modified-protocol (MP) CTs were contemporaneously acquired in 34 Crohn's disease patients referred for CT. SP image reconstruction was via the manufacturer's recommendations (60% FBP, filtered back projection; 40% ASiR, Adaptive Statistical Iterative Reconstruction; SP-ASiR40). MP data sets underwent four reconstructions (100% FBP; 40% ASiR; 70% ASiR; MBIR). Three observers measured tissue volumes using HU thresholds for fat, soft tissue and bone/contrast on each data set. Analysis was via SPSS. Inter-observer agreement was strong for 1530 datapoints (rs > 0.9). MP-MBIR tissue volume measurement was superior to the other MP reconstructions and closely correlated with the reference SP-ASiR40 images for all tissue types. MP-MBIR superiority was most marked for fat volume calculation: close SP-ASiR40 and MP-MBIR Bland-Altman plot correlation was seen with the lowest average difference (336 cm³) when compared with the other MP reconstructions. Hounsfield unit-determined tissue volume calculations from MP-MBIR images resulted in values comparable to SP-ASiR40 calculations and values that are superior to MP-ASiR images. Accuracy of estimation of the volume of tissues (e.g. fat) using segmentation software on low-dose CT images appears optimal when reconstructed with pure IR. © 2016 The Royal Australian and New Zealand College of Radiologists.
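A minimal sketch of HU-threshold-based tissue volume measurement of the kind described above; the threshold ranges below are typical illustrative values, not the thresholds used in the study.

```python
import numpy as np

# Illustrative HU ranges (assumed, not from the study): fat, soft tissue, bone/contrast.
HU_RANGES = {
    "fat": (-190, -30),
    "soft_tissue": (-29, 150),
    "bone_or_contrast": (151, 3000),
}

def tissue_volumes_cm3(volume_hu, voxel_volume_mm3):
    """Return the volume (cm^3) of each tissue class from an HU volume by simple thresholding."""
    volumes = {}
    for name, (lo, hi) in HU_RANGES.items():
        n_voxels = np.count_nonzero((volume_hu >= lo) & (volume_hu <= hi))
        volumes[name] = n_voxels * voxel_volume_mm3 / 1000.0   # mm^3 -> cm^3
    return volumes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.normal(-100, 80, (40, 256, 256))                 # toy CT volume in HU
    print(tissue_volumes_cm3(vol, voxel_volume_mm3=0.7 * 0.7 * 2.5))
```

Because the classification is a hard threshold on HU values, any reconstruction-dependent shift or broadening of the HU distribution changes the measured volumes directly, which is why noisier low-dose reconstructions can bias such measurements.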
Low-dose 4D cardiac imaging in small animals using dual source micro-CT
NASA Astrophysics Data System (ADS)
Holbrook, M.; Clark, D. P.; Badea, C. T.
2018-01-01
Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies combined with a sophisticated iterative image reconstruction algorithm provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D + Time) cardiac micro-CT. Fast PG is performed under continuous subject rotation which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies in conjunction with our iterative reconstruction method are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only 1000-projection based reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. These methods promise low-dose imaging with a wide range of preclinical applications in cardiac imaging.
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Reconstructing an SR image at a factor of two with the best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution images from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
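A small sketch of the projection step described above: each registered LR image is placed onto a 2x HR grid according to its (assumed known) subpixel shift using nearest-neighbor assignment; inverse-distance or RBF weighting would follow the same accumulate-and-normalize pattern.

```python
import numpy as np

def project_to_hr_grid(lr_images, shifts, factor=2):
    """Project registered LR images onto an HR grid using nearest-neighbor assignment.

    lr_images : list of (h, w) arrays
    shifts    : list of (dy, dx) subpixel shifts of each LR image, in LR pixel units
    """
    h, w = lr_images[0].shape
    H, W = h * factor, w * factor
    acc = np.zeros((H, W)); cnt = np.zeros((H, W))
    ii, jj = np.mgrid[:h, :w]
    for img, (dy, dx) in zip(lr_images, shifts):
        # HR coordinates of each LR sample after accounting for its subpixel shift.
        r = np.rint((ii + dy) * factor).astype(int) % H
        c = np.rint((jj + dx) * factor).astype(int) % W
        np.add.at(acc, (r, c), img)
        np.add.at(cnt, (r, c), 1.0)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr_true = rng.random((64, 64))
    # Simulate four LR images by shifting and 2x subsampling the HR image.
    shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
    lr_images = [np.roll(hr_true, (int(2 * dy), int(2 * dx)), axis=(0, 1))[::2, ::2]
                 for dy, dx in shifts]
    hr_est = project_to_hr_grid(lr_images, [(-dy, -dx) for dy, dx in shifts])
    print("reconstruction RMSE:", np.sqrt(np.mean((hr_est - hr_true) ** 2)))
```

With four LR images whose shifts cover all four HR pixel parities, every HR grid point receives at least one sample, which is the intuition behind the four-image requirement stated above.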
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, L; Han, Y; Jin, M
Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with a much reduced number of projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) the data fidelity (DF) set - the L2 norm of the difference between the observed projections and those from the reconstructed image is no greater than an error bound; 2) the non-negativity of image voxels (NN) set; and 3) the piecewise constant (PC) set - the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS), and it is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images as a function of the number of iterations is used as the convergence measure. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction method for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
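A condensed sketch of one FS-POCS-style outer iteration combining the three projections described above: a relaxed ART/Kaczmarz pass for the data-fidelity set, clamping for non-negativity, and a TV denoising step (skimage's Chambolle solver is used here as a stand-in for the exact ROF projection onto the piecewise-constant set); the relaxation factor and TV weight are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def art_sweep(A, b, x, relax=0.5):
    """One relaxed Kaczmarz pass over all rows: move x toward each hyperplane a_i.x = b_i."""
    for a_i, b_i in zip(A, b):
        nrm2 = a_i @ a_i
        if nrm2 > 0:
            x = x + relax * (b_i - a_i @ x) / nrm2 * a_i
    return x

def fs_pocs(A, b, shape, n_outer=20, tv_weight=0.05):
    """Sequential POCS over data-fidelity (ART), non-negativity, and a TV-based set."""
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = art_sweep(A, b, x)                               # move toward the DF set
        x = np.clip(x, 0.0, None)                            # projection onto the NN set
        x = denoise_tv_chambolle(x.reshape(shape),           # surrogate for the ROF/PC projection
                                 weight=tv_weight).ravel()
    return x.reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0          # piecewise-constant phantom
    A = rng.random((300, 32 * 32)) * (rng.random((300, 32 * 32)) < 0.05)   # sparse toy "views"
    b = A @ img.ravel()
    rec = fs_pocs(A, b, (32, 32))
    print("RMSE:", np.sqrt(np.mean((rec - img) ** 2)))
```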
An iterative hyperelastic parameters reconstruction for breast cancer assessment
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Samani, Abbas
2008-03-01
In breast elastography, breast tissues usually undergo large compressions, resulting in significant geometric and structural changes and, consequently, nonlinear mechanical behavior. In this study, an elastography technique is presented in which parameters characterizing tissue nonlinear behavior are reconstructed. Such parameters can be used for tumor tissue classification. To model the nonlinear behavior, tissues are treated as hyperelastic materials. The proposed technique uses a constrained iterative inversion method to reconstruct the tissue hyperelastic parameters. The reconstruction technique uses a nonlinear finite element (FE) model for solving the forward problem. In this research, we applied the Yeoh and polynomial models to describe the tissue hyperelasticity. To mimic the breast geometry, we used a computational phantom consisting of a hemisphere connected to a cylinder. This phantom contains two types of soft tissue, mimicking adipose and fibroglandular tissues, and a tumor. Simulation results show the feasibility of the proposed method in reconstructing the hyperelastic parameters of the tumor tissue.
SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring
NASA Astrophysics Data System (ADS)
Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.
2015-12-01
We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative new features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
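As a rough illustration of a Richardson-Lucy scheme with regularization applied between iterations, the sketch below deconvolves a 2D emission image with a known point-spread function; the Gaussian smoothing stands in for the paper's pseudo-diffusion regularization, and the function and parameter names (rl_regularized, diffusion_sigma) are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def rl_regularized(observed, psf, n_iter=50, diffusion_sigma=0.5):
    """Richardson-Lucy deconvolution with a smoothing step between iterations.

    observed : measured 2D emission image (non-negative counts)
    psf      : known, normalized point-spread function
    The Gaussian smoothing is only a stand-in for the pseudo-diffusion
    regularization described in the abstract.
    """
    observed = observed.astype(float)
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)          # Poisson-likelihood ratio
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
        estimate = gaussian_filter(estimate, diffusion_sigma)  # regularize between iterations
    return estimate
```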
Sub-aperture switching based ptychographic iterative engine (sasPIE) method for quantitative imaging
NASA Astrophysics Data System (ADS)
Sun, Aihui; Kong, Yan; Jiang, Zhilong; Yu, Wei; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-03-01
Although the ptychographic iterative engine (PIE) has been widely adopted for quantitative micro-imaging with various illuminations such as visible light, X-rays, and electron beams, mechanical inaccuracy in the raster scanning of the sample relative to the illumination always degrades the reconstruction quality seriously and keeps the achieved resolution much lower than that determined by the numerical aperture of the optical system. To overcome this disadvantage, the sub-aperture switching based PIE method is proposed: the mechanical scanning of conventional PIE is replaced by sub-aperture switching, and the reconstruction error related to positioning inaccuracy is completely avoided. The proposed technique remarkably improves the reconstruction quality, reduces the complexity of the experimental setup, and fundamentally accelerates data acquisition and reconstruction.
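For orientation, the object-update rule shared by PIE-type algorithms can be sketched as below for a single illumination position. This is a generic (e)PIE-style update, not the authors' sasPIE code; in the sub-aperture-switching picture, what changes between exposures is which aperture forms the probe rather than a mechanical shift of the sample. Names such as pie_object_update and alpha are illustrative.

```python
import numpy as np

def pie_object_update(obj, probe, diffraction_amplitude, alpha=1.0):
    """One PIE-style object update for a single (sub-)aperture illumination.

    obj, probe            : complex 2D arrays of equal shape (object patch, illumination)
    diffraction_amplitude : measured far-field modulus for this illumination
    """
    exit_wave = obj * probe
    far_field = np.fft.fft2(exit_wave)
    # Enforce the measured modulus while keeping the computed phase.
    corrected = diffraction_amplitude * np.exp(1j * np.angle(far_field))
    exit_new = np.fft.ifft2(corrected)
    # Standard (e)PIE-type update weight.
    weight = np.conj(probe) / (np.max(np.abs(probe)) ** 2 + 1e-12)
    return obj + alpha * weight * (exit_new - exit_wave)
```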
Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.
Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B
2016-01-01
We investigated the effects of low-dose multi detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.
Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng
2013-01-01
Objectives To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00–35.03 HU) and Group 3 (34.99–35.02 HU), while it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions in effective dose of 20% and 51% were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444
Reducing the latency of the Fractal Iterative Method to half an iteration
NASA Astrophysics Data System (ADS)
Béchet, Clémentine; Tallon, Michel
2013-12-01
The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way of using the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to no more than half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, performed with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. In practice, however, they are subject to physical constraints, and their structure seldom follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the back-projected measurement Φ^T y, where y is the compressive measurement vector. We show that the filtered algorithm converges to better image quality than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
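A minimal sketch of the filtered-gradient idea, assuming a generic dense sensing matrix Phi as a stand-in for the structured CSI matrix and a simple median filter as the per-iteration filtering step (the paper's filter is designed around the structure of Φ, which is not reproduced here); all names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter  # stand-in for the paper's structure-aware filter

def filtered_gradient_recon(Phi, y, shape, n_iter=100, step=None):
    """Gradient iterations of the form x <- filter(x - step * Phi^T (Phi x - y)).

    Phi : (m, n) sensing matrix (a dense stand-in for the structured CSI matrix)
    y   : (m,) compressive measurements
    """
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative step size
    x = Phi.T @ y                                  # start from the back-projection
    for _ in range(n_iter):
        x = x - step * (Phi.T @ (Phi @ x - y))     # gradient step on the quadratic term
        x = median_filter(x.reshape(shape), size=3).ravel()  # per-iteration filtering
    return x.reshape(shape)
```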
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
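A toy sketch of the alternation described above, using a linear forward model as a stand-in for the nonlinear DOT problem: a Tikhonov step with pixel-wise prior mean and variance is followed by a Gaussian-mixture fit (scikit-learn) that redefines those priors. The symbols A, y, and base_reg are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def recon_classify(A, y, n_classes=3, n_outer=10, base_reg=1e-2):
    """Alternate a variable mean/variance Tikhonov step with a Gaussian-mixture fit.

    A, y : toy linear stand-in for the nonlinear DOT forward model, y = A x + noise
    """
    n = A.shape[1]
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    prior_mean = np.zeros(n)
    prior_var = np.ones(n)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_outer):
        # Reconstruction step: Tikhonov with pixel-wise prior mean and variance.
        W = np.diag(base_reg / prior_var)
        x = np.linalg.solve(A.T @ A + W, A.T @ y + W @ prior_mean)
        # Classification step: fit a mixture model to the current pixel values.
        gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x.reshape(-1, 1))
        labels = gmm.predict(x.reshape(-1, 1))
        prior_mean = gmm.means_[labels, 0]
        prior_var = gmm.covariances_[labels, 0, 0]
    return x, labels
```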
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2018-03-01
Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.
Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun
2015-10-01
To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all the reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, with lower noise and artefacts as well as good sharpness, when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option.
Charm: Cosmic history agnostic reconstruction method
NASA Astrophysics Data System (ADS)
Porqueres, Natalia; Ensslin, Torsten A.
2017-03-01
Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Paysan, P; Brehm, M
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.
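For context, the correction functional at the heart of variational iteration schemes of this kind is usually written as follows (a textbook form, not quoted from the paper):

```latex
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\,\bigl[\,L u_n(s) + N \tilde{u}_n(s) - g(s)\,\bigr]\,\mathrm{d}s
```

Here L and N are the linear and nonlinear parts of the operator equation L u + N u = g, λ is the Lagrange multiplier identified via variational theory, and ũ_n denotes a restricted variation; VIM-II type reformulations aim to avoid recomputing λ and repeated terms at every iteration.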
Assessment of acquisition protocols for routine imaging of Y-90 using PET/CT
2013-01-01
Background Despite the early theoretical prediction of the 0+-0+ transition of 90Zr, interest in 90Y-PET for imaging radioembolization of liver tumors has grown only recently. The aim of this work was to determine the minimum detectable activity (MDA) of 90Y by PET imaging and the impact of time-of-flight (TOF) reconstruction on detectability and quantitative accuracy according to the lesion size. Methods The study was conducted using a Siemens Biograph® mCT with a 22 cm large axial field of view. An IEC torso-shaped phantom containing five coplanar spheres was uniformly filled to achieve sphere-to-background ratios of 40:1. The phantom was imaged nine times in 14 days over 30 min. Sinograms were reconstructed with and without TOF information. A contrast-to-noise ratio (CNR) index was calculated using the Rose criterion, taking partial volume effects into account. The impact of reconstruction parameters on quantification accuracy, detectability, and spatial localization of the signal was investigated. Finally, six patients with hepatocellular carcinoma and four patients included in different 90Y-based radioimmunotherapy protocols were enrolled for the evaluation of the imaging parameters in a clinical situation. Results The highest CNR was achieved with one iteration for both TOF and non-TOF reconstructions. The MDA, however, was found to be lower with TOF than with non-TOF reconstruction. There was no gain from adding TOF information in terms of CNR for concentrations higher than 2 to 3 MBq mL−1, except for infra-centimetric lesions. Recovered activity was highly underestimated when a single iteration or non-TOF reconstruction was used (10% to 150% less depending on the lesion size). The MDA was estimated at 1 MBq mL−1 for a TOF reconstruction and infra-centimetric lesions. Images from patients treated with microspheres were clinically relevant, unlike those of patients who received systemic injections of 90Y. Conclusions Only one iteration and TOF were necessary to achieve an MDA around 1 MBq mL−1 and the most accurate localization of lesions. For precise quantification, at least three iterations gave the best performance, using TOF reconstruction and keeping an MDA of roughly 1 MBq mL−1. One and three iterations were mandatory to prevent false positive results for quantitative analysis of clinical data. Trial registration IDRCB 2011-A00043-38 P101103 PMID:23414629
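As a reminder of how such a detectability index is typically formed, a minimal CNR computation over a sphere ROI and a background ROI might look like the sketch below; the recovery_coeff argument is a hypothetical stand-in for the partial-volume correction, and a Rose-type criterion is then applied by requiring the CNR to exceed a threshold of roughly 4-5.

```python
import numpy as np

def cnr_rose(sphere_roi, background_roi, recovery_coeff=1.0, threshold=4.0):
    """Contrast-to-noise ratio of a hot sphere against background, with a
    partial-volume correction factor (recovery_coeff = 1 means no correction).

    Returns the CNR and whether it exceeds a Rose-type detectability threshold.
    """
    contrast = sphere_roi.mean() / recovery_coeff - background_roi.mean()
    cnr = contrast / background_roi.std()
    return cnr, cnr > threshold
```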
Sparse-view proton computed tomography using modulated proton beams.
Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong
2015-02-01
Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to imaging approaches based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The model of geometric straight rays assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square-error (RMSE). Objects of higher electron density were reconstructed more accurately than lower density objects. The bone, for example, was reconstructed within 1% error. EM-based algorithms produced increased image noise and RMSE as the iteration number reached about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the regions of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although the images still need to be improved for practical application to treatment planning, proton CT imaging using modulated beams with sparse-view sampling has demonstrated its feasibility.
NASA Astrophysics Data System (ADS)
Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric
2017-08-01
The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
NASA Astrophysics Data System (ADS)
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this, we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and achieves relatively fast convergence. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations to reconstruct the chest phantom images when the number of projections was reduced by 50% relative to the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts of CBCT reconstruction.
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-07-01
X-ray micro- and nanotomography has evolved into a quantitative analysis tool, rather than a mere qualitative visualization technique, for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but rather correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of the use of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise constant signals.
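A compact sketch of the iterative weak-denoising idea using scikit-image's nonlocal-means filter and noise estimator; the number of passes and the fraction of the estimated noise removed per pass (strength) are illustrative knobs, not the paper's calibrated values.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def iterative_nlm(image, n_steps=4, strength=0.6):
    """Split one strong denoising task into several weak nonlocal-means passes.

    Each pass re-estimates the residual noise level and removes only part of it,
    giving a controlled amount of texture removal per step.
    """
    out = image.astype(float)
    for _ in range(n_steps):
        sigma = float(estimate_sigma(out))          # noise level estimate
        out = denoise_nl_means(out, h=strength * sigma, sigma=sigma,
                               patch_size=5, patch_distance=6, fast_mode=True)
    return out
```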
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
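For reference, a generic preconditioned conjugate gradient loop with a diagonal preconditioner looks like the sketch below; in the paper the diagonal entries are the ratio of each parameter to its sensitivity, whereas here M_diag is simply any positive vector supplied by the caller, and apply_A is an assumed callable rather than the authors' system model.

```python
import numpy as np

def pcg(apply_A, b, M_diag, x0=None, n_iter=50, tol=1e-8):
    """Preconditioned conjugate gradient for A x = b with a diagonal preconditioner.

    apply_A : callable returning A @ x (A symmetric positive definite)
    M_diag  : diagonal of the preconditioner (any positive vector in this sketch)
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - apply_A(x)
    z = r / M_diag                       # apply the inverse of the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / M_diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```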
Yang, Wen Jie; Yan, Fu Hua; Liu, Bo; Pang, Li Fang; Hou, Liang; Zhang, Huan; Pan, Zi Lai; Chen, Ke Min
2013-01-01
To evaluate the performance of sinogram-affirmed iterative reconstruction (SAFIRE) on the image quality of low-dose lung computed tomography (CT) screening compared with filtered back projection (FBP). Three hundred four patients undergoing annual low-dose lung CT screening were examined by a dual-source CT system at 120 kilovolt (peak) with a reference tube current of 40 mA·s. Six image series were reconstructed, including one data set with FBP and 5 data sets with SAFIRE at different reconstruction strengths from 1 to 5. Image noise was recorded, and subjective scores of image noise, image artifacts, and overall image quality were assessed by 2 radiologists. The mean ± SD weight for all patients was 66.3 ± 12.8 kg, and the body mass index was 23.4 ± 3.2. The mean ± SD dose-length product was 95.2 ± 30.6 mGy cm, and the mean ± SD effective dose was 1.6 ± 0.5 mSv. The observer agreements for image noise grade, artifact grade, and overall image quality were 0.785, 0.595 and 0.512, respectively. Among the 6 data sets, both the measured mean objective image noise and the subjective image noise of FBP were the highest, and the image noise decreased with increasing SAFIRE reconstruction strength. The S3 data set obtained the best image quality scores. Sinogram-affirmed iterative reconstruction can significantly improve the image quality of low-dose lung CT screening compared with FBP, and SAFIRE with reconstruction strength 3 was a pertinent choice for low-dose lung CT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with a first-order derivative, a higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
Yamaguchi, Tomohiro; Fujii, Takashi; Abe, Yoshito; Hirai, Teruhisa; Kang, Dongchon; Namba, Keiichi; Hamasaki, Naotaka; Mitsuoka, Kaoru
2010-03-01
The C-terminal membrane domain of erythrocyte band 3 functions as an anion exchanger. Here, we report the three-dimensional (3D) structure of the membrane domain in an inhibitor-stabilized, outward-open conformation at 18 Å resolution. Unstained, frozen-hydrated tubular crystals containing the membrane domain of band 3 purified from human red blood cells (hB3MD) were examined using cryo-electron microscopy and iterative helical real-space reconstruction (IHRSR). The 3D image reconstruction of the tubular crystals showed the molecular packing of hB3MD dimers with dimensions of 60 × 110 Å in the membrane plane and a thickness of 70 Å across the membrane. Immunoelectron microscopy and carboxyl-terminal digestion demonstrated that the intracellular surface of hB3MD was exposed on the outer surface of the tubular crystal. A 3D density map revealed that hB3MD consists of at least two subdomains and that the outward-open form is characterized by a large hollow area on the extracellular surface and continuous density on the intracellular surface.
Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben
2016-01-01
Objective: The aim of this study was to evaluate whether the application of ultralow dose protocols and iterative reconstruction technology (IRT) influences quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In
2016-01-01
Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm3 uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated after a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while keeping the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542
A constrained modulus reconstruction technique for breast cancer assessment.
Samani, A; Bishop, J; Plewes, D B
2001-09-01
A reconstruction technique for breast tissue elasticity modulus is described. This technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image. Furthermore, it is assumed that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative where each iteration involves modulus updating followed by stress calculation. Breast mechanical stimulation is assumed to be done by two compressional rigid plates. As a result, stress is calculated using the finite element method based on the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, modulus updating is done element-by-element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.
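The element-by-element update can be illustrated with a toy version of the iteration, where a user-supplied stress solver stands in for the finite element forward problem with compression-plate boundary conditions; the serial-element example below is deliberately simple (uniform stress), so it only sketches the update logic, not the full 3D reconstruction, and all names are illustrative.

```python
import numpy as np

def reconstruct_modulus(measured_strain, solve_stress, E0, n_iter=20):
    """Iterative element-by-element modulus update based on Hooke's law.

    measured_strain : per-element axial strain from imaging (n_elems,)
    solve_stress    : callable E -> per-element axial stress; stands in for the
                      finite element forward solve with plate boundary conditions
    E0              : initial modulus guess (n_elems,)
    """
    E = np.asarray(E0, dtype=float).copy()
    for _ in range(n_iter):
        stress = solve_stress(E)                            # forward problem
        E = stress / np.maximum(measured_strain, 1e-12)     # Hooke's law update
    return E

# Toy usage: elements stacked in series under a fixed applied pressure, so the
# stress is uniform and independent of E (a crude stand-in for a real FE model).
applied_pressure = 2.0e3                                # Pa
true_E = np.array([5e3, 5e3, 20e3, 5e3])                # stiff inclusion in soft tissue
measured_strain = applied_pressure / true_E
E_rec = reconstruct_modulus(measured_strain,
                            lambda E: np.full_like(E, applied_pressure),
                            E0=np.full(4, 1e4))
```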
Tan, T J; Lau, Kenneth K; Jackson, Dana; Ardley, Nicholas; Borasu, Adina
2017-04-01
The purpose of this study was to assess the efficacy of model-based iterative reconstruction (MBIR), statistical iterative reconstruction (SIR), and filtered back projection (FBP) image reconstruction algorithms in the delineation of ureters and overall image quality on non-enhanced computed tomography of the renal tracts (NECT-KUB). This was a prospective study of 40 adult patients who underwent NECT-KUB for investigation of ureteric colic. Images were reconstructed using FBP, SIR, and MBIR techniques and individually and randomly assessed by two blinded radiologists. Parameters measured were overall image quality, presence of ureteric calculus, presence of hydronephrosis or hydroureters, image quality of each ureteric segment, total length of ureters unable to be visualized, attenuation values of image noise, and retroperitoneal fat content for each patient. There were no diagnostic discrepancies between image reconstruction modalities for urolithiasis. Overall image qualities and for each ureteric segment were superior using MBIR (67.5 % rated as 'Good to Excellent' vs. 25 % in SIR and 2.5 % in FBP). The lengths of non-visualized ureteric segments were shortest using MBIR (55.0 % measured 'less than 5 cm' vs. ASIR 33.8 % and FBP 10 %). MBIR was able to reduce overall image noise by up to 49.36 % over SIR and 71.02 % over FBP. MBIR technique improves overall image quality and visualization of ureters over FBP and SIR.
A noise power spectrum study of a new model‐based iterative reconstruction system: Veo 3.0
Li, Guang; Liu, Xinming; Dodge, Cristina T.; Jensen, Corey T.
2016-01-01
The purpose of this study was to evaluate performance of the third generation of model‐based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat‐equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparisons of those images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with less image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low‐dose levels. PACS number(s): 87.57.‐s, 87.57.Q‐, 87.57.C‐, 87.57.nf, 87.57.C‐, 87.57.cm PMID:27685118
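A minimal sketch of how a 2D NPS is commonly estimated from repeated scans of a uniform region, assuming identically located, detrended ROIs; ROI size, detrending order, and radial averaging (omitted here) all affect the result, and this is a generic formulation rather than the exact pipeline used in the study.

```python
import numpy as np

def noise_power_spectrum(rois, pixel_size_mm):
    """2D noise power spectrum from repeated, detrended ROIs of a uniform region.

    rois : array (n_rois, Nx, Ny) of identically located ROIs from repeated scans
    NPS(u, v) = (dx * dy / (Nx * Ny)) * mean over ROIs of |FFT2(ROI - mean)|^2
    """
    n_rois, Nx, Ny = rois.shape
    spectra = []
    for roi in rois:
        noise = roi - roi.mean()                     # zeroth-order detrend
        spectra.append(np.abs(np.fft.fft2(noise)) ** 2)
    nps = np.mean(spectra, axis=0) * (pixel_size_mm ** 2) / (Nx * Ny)
    return np.fft.fftshift(nps)                      # zero frequency at the center
```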
NASA Astrophysics Data System (ADS)
Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) under variations in key imaging parameters, quality metrics were evaluated using a LUNGMAN phantom including a ground-glass opacity (GGO) tumor. Reconstructed images were obtained from a total of 41 projection images over an angular range of ±20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, the proper relaxation parameter values for the zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to these results, the BP initial guess for the ART method could provide better image quality than the ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range of the CDT system.
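A compact ART (Kaczmarz) sketch showing where the relaxation parameter and the choice of initial guess enter, with "zi" and "bp" mirroring the zero-image and back-projection initializations compared above; the normalization of the back-projection and all parameter values are illustrative, not those used in the study.

```python
import numpy as np

def art(A, p, n_iter=20, relax=0.4, init="bp"):
    """ART (Kaczmarz) reconstruction with a relaxation parameter and initial guess.

    init = "zi" starts from a zero image; "bp" starts from a crudely normalized
    back-projection of the measured projections p through the system matrix A.
    """
    n = A.shape[1]
    if init == "bp":
        x = A.T @ p
        x *= p.sum() / max(x.sum(), 1e-12)
    else:
        x = np.zeros(n)
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x = x + relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```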
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of breast to detect the breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct 3D image of breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed in tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging system using C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates breast tomosynthesis imaging problem. Results obtained with various methods including algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM) values.
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously ensuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
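As an illustration of the idea behind PICCS (not the published implementation), the sketch below takes one gradient step on an unconstrained surrogate objective that mixes the TV of the image with the TV of its difference from the FBP prior; the smoothed TV, the quadratic data term, and all parameter values are simplifying assumptions.

```python
import numpy as np

def smoothed_tv_grad(u, eps=1e-6):
    """Gradient of a smoothed isotropic TV of a 2D image.
    Forward differences with periodic boundaries, kept simple for the sketch."""
    dx = np.diff(u, axis=0, append=u[:1, :])
    dy = np.diff(u, axis=1, append=u[:, :1])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    # The TV gradient is minus the divergence of the normalized gradient field.
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def piccs_step(x, x_prior, A, At, y, alpha=0.5, lam=0.1, step=1e-3):
    """One gradient step on a PICCS-like surrogate objective:
       ||A(x) - y||^2 + lam * [alpha * TV(x - x_prior) + (1 - alpha) * TV(x)],
    where A/At are hypothetical forward- and back-projection callables."""
    grad = 2.0 * At(A(x) - y)
    grad = grad + lam * (alpha * smoothed_tv_grad(x - x_prior)
                         + (1.0 - alpha) * smoothed_tv_grad(x))
    return x - step * grad
```

In practice x_prior would come from an FBP reconstruction of the composite data set, and piccs_step would be applied repeatedly (possibly with a line search or a constrained solver instead of the simple penalty used here).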
Low-dose CT image reconstruction using gain intervention-based dictionary learning
NASA Astrophysics Data System (ADS)
Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra
2018-05-01
The computed tomography (CT) approach is extensively utilized in clinical diagnosis. However, the X-ray dose absorbed by the human body may cause somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations, and low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is defined adaptively, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing step for removing artifacts from the low-dose CT reconstructed images. Experiments were performed comparing the proposed technique with other low-dose CT reconstruction techniques on well-known benchmark CT images. Extensive experiments show that the proposed technique outperforms the available approaches.
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum (FWHM) of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
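A minimal sketch of how a specific uptake ratio and the subsequent linear standardization might be computed from a reconstructed volume; the ROI definitions (striatal target, non-specific reference region) and the direction of the regression mapping are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def specific_uptake_ratio(image, target_mask, reference_mask):
    """SUR = (mean counts in target ROI - mean counts in reference ROI)
             / mean counts in reference ROI."""
    c_target = image[target_mask].mean()
    c_ref = image[reference_mask].mean()
    return (c_target - c_ref) / c_ref

def standardize(sur_values, slope, intercept):
    """Apply a linear regression obtained from the whole dataset to map
    measured SURs onto standardized values (y = slope * x + intercept)."""
    return slope * np.asarray(sur_values) + intercept
```

Here target_mask would typically cover the caudate or putamen and reference_mask a non-specific region such as the occipital cortex; both are hypothetical placeholders.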
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, E; Lasio, G; Yi, B
2014-06-01
Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into the various phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; reconstruction of the difference between the measured projections and the forward projections yields CTSub1, which diminishes the desired phase component. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4D CBCT technique was implemented using a Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.
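The five steps above can be sketched schematically as follows. This is only one possible reading of the workflow: reconstruct and forward_project are hypothetical callables (e.g. FDK and a ray-driven projector), and the sign conventions and the volume that is reprojected at each pass are assumptions rather than the authors' implementation.

```python
import numpy as np

def isa_phase_recon(proj_all, angles_all, phase_mask, reconstruct, forward_project,
                    n_iter=3):
    """Iterative Subtraction Algorithm (ISA) sketch.

    proj_all        : stack of projections from the full (all-phase) acquisition
    angles_all      : corresponding projection angles
    phase_mask      : boolean mask selecting the pre-selected phase
    reconstruct     : callable (projections, angles) -> volume, e.g. FDK
    forward_project : callable (volume, angles) -> projections
    """
    # Step 1: full reconstruction from all phase-sorted projections (CTM).
    ct_m = reconstruct(proj_all, angles_all)
    ct_s = ct_m
    desired_angles = angles_all[phase_mask]
    for _ in range(n_iter):
        # Step 2: forward-project the current estimate at the desired-phase angles
        # and reconstruct the projection difference (CTSub).
        fp = forward_project(ct_s, desired_angles)
        ct_sub = reconstruct(proj_all[phase_mask] - fp, desired_angles)
        # Step 3: add CTSub back to CTM to obtain the de-blurred estimate (CTS).
        # Steps 4-5: repeating the loop reduces the residual motion component.
        ct_s = ct_m + ct_sub
    return ct_s
```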
Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian
2017-03-01
Background Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective metal artifact reduction algorithms that are integrated into the clinical workflow are crucial to obtain higher diagnostic image quality in patients with metallic hardware. Purpose To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods Thirty consecutive patients scheduled for CT imaging and with dental fillings were included in the analysis. All patients underwent CT imaging using a second generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV in dual-energy, 219 mAs, gantry rotation time 0.28–1 s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included a standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results All 30 patients were included in the analysis, with comparable reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR as compared with standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001, respectively) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of the degree of artifacts were significantly lower in the iMAR reconstructions as compared with the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05, respectively). Conclusion The tested iterative, raw-data-based MAR reconstruction algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.
Yamada, Yoshitake; Yamada, Minoru; Sugisawa, Koichi; Akita, Hirotaka; Shiomi, Eisuke; Abe, Takayuki; Okuda, Shigeo; Jinzaki, Masahiro
2015-01-01
The purpose of this study was to compare renal cyst pseudoenhancement between virtual monochromatic spectral (VMS) and conventional polychromatic 120-kVp images obtained during the same abdominal computed tomography (CT) examination and among images reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Our institutional review board approved this prospective study; each participant provided written informed consent. Thirty-one patients (19 men, 12 women; age range, 59–85 years; mean age, 73.2 ± 5.5 years) with renal cysts underwent unenhanced 120-kVp CT followed by sequential fast kVp-switching dual-energy (80/140 kVp) and 120-kVp abdominal enhanced CT in the nephrographic phase over a 10-cm scan length with a random acquisition order and 4.5-second intervals. Fifty-one renal cysts (maximal diameter, 18.0 ± 14.7 mm [range, 4–61 mm]) were identified. The CT attenuation values of the cysts as well as of the kidneys were measured on the unenhanced images, enhanced VMS images (at 70 keV) reconstructed using FBP and ASIR from dual-energy data, and enhanced 120-kVp images reconstructed using FBP, ASIR, and MBIR. The results were analyzed using the mixed-effects model and paired t test with Bonferroni correction. The attenuation increases (pseudoenhancement) of the renal cysts on the VMS images reconstructed using FBP/ASIR (least square mean, 5.0/6.0 Hounsfield units [HU]; 95% confidence interval, 2.6–7.4/3.6–8.4 HU) were significantly lower than those on the conventional 120-kVp images reconstructed using FBP/ASIR/MBIR (least square mean, 12.1/12.8/11.8 HU; 95% confidence interval, 9.8–14.5/10.4–15.1/9.4–14.2 HU) (all P < .001); on the other hand, the CT attenuation values of the kidneys on the VMS images were comparable to those on the 120-kVp images. Regardless of the reconstruction algorithm, 70-keV VMS images showed a lower degree of pseudoenhancement of renal cysts than 120-kVp images, while maintaining kidney contrast enhancement comparable to that on 120-kVp images. PMID:25881852
Spillover Compensation in the Presence of Respiratory Motion Embedded in SPECT Perfusion Data
NASA Astrophysics Data System (ADS)
Pretorius, P. Hendrik; King, Michael A.
2008-02-01
Spillover from adjacent significant accumulations of extra-cardiac activity decreases the diagnostic accuracy of SPECT perfusion imaging, especially in the inferior/septal cardiac region. One method of compensating for the spillover at some location outside of a structure is to estimate it as the counts blurred into this location when a template (3D model) of the structure undergoes simulated imaging followed by reconstruction. The objective of this study was to determine what impact uncorrected respiratory motion has on such spillover compensation of extra-cardiac activity in the right coronary artery (RCA) territory, and whether it is possible to use manual segmentation to define the extra-cardiac activity template(s) used in spillover correction. Two separate MCAT phantoms (128³ matrices) were simulated to represent the source and attenuation distributions of patients with and without respiratory motion. For each phantom the heart was modeled: 1) with a normal perfusion pattern and 2) with an RCA defect equal to 50% of the normal myocardium count level. After Monte Carlo simulation of 64×64×120 projections with appropriate noise, data were reconstructed using the rescaled block iterative (RBI) algorithm with 30 subsets and 5 iterations with compensation for attenuation, scatter and resolution. A 3D Gaussian post-filter with a sigma of 0.476 cm was used to suppress noise. Manual segmentation of the liver in filtered emission slices was used to create 3D binary templates. The true liver distribution (with and without respiratory motion included) was also used as binary templates. These templates were projected using a ray-driven projector simulating the imaging system with the exclusion of Compton scatter and reconstructed using the same protocol as for the emission data, excluding scatter compensation. Reconstructed templates were scaled using reconstructed emission count levels from the liver, and the spillover was subtracted outside the template. It was evident from the polar maps that the manually segmented template reconstructions were unable to remove all the spillover originating in the liver from the inferior wall. This was especially noticeable when a perfusion defect was present. Templates based on the true liver distribution appreciably improved spillover correction. Thus the emerging combined SPECT/CT technology may play a vital role in identifying and segmenting extra-cardiac structures more reliably, thereby facilitating spillover correction. This study also indicates that compensation for respiratory motion might play an important role in spillover compensation.
A methodology for image quality evaluation of advanced CT systems.
Wilson, Joshua M; Christianson, Olav I; Richard, Samuel; Samei, Ehsan
2013-03-01
This work involved the development of a phantom-based method to quantify the performance of tube current modulation and iterative reconstruction in modern computed tomography (CT) systems. The quantification included resolution, HU accuracy, noise, and noise texture, accounting for the impact of contrast, prescribed dose, reconstruction algorithm, and body size. A 42-cm-long, 22.5-kg polyethylene phantom was designed to model four body sizes. Each size was represented by a uniform section, for the measurement of the noise-power spectrum (NPS), and a feature section containing various rods, for the measurement of HU and the task-based modulation transfer function (TTF). The phantom was scanned on a clinical CT system (GE, 750HD) using a range of tube current modulation settings (NI levels) and reconstruction methods (FBP and ASIR30). An image quality analysis program was developed to process the phantom data and calculate the targeted image quality metrics as a function of contrast, prescribed dose, and body size. The phantom fabrication closely followed the design specifications. In terms of tube current modulation, the tube current and resulting image noise varied as a function of phantom size as expected based on the manufacturer specification: from the 16- to 37-cm section, the HU contrast for each rod was inversely related to phantom size, and noise was relatively constant (<5% change). With iterative reconstruction, the TTF exhibited a contrast dependency with better performance for higher contrast objects. At low noise levels, TTFs of iterative reconstruction were better than those of FBP, but at higher noise, that superiority was not maintained at all contrast levels. Relative to FBP, the NPS of iterative reconstruction exhibited an ~30% decrease in magnitude and a 0.1 mm⁻¹ shift in the peak frequency. Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes. The testing platform enabled robust NPS, TTF, HU, and pixel noise measurements as a function of body size, capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.
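For reference, a common way to estimate a 2D noise-power spectrum from an ensemble of uniform-region ROIs is sketched below; the mean-subtraction detrending and the normalization follow the standard definition, but this is a generic sketch and not the analysis program described in the abstract.

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Estimate the 2D noise-power spectrum from an ensemble of uniform ROIs.

    rois          : array (n_rois, ny, nx) of noise-only ROIs (e.g. mean-subtracted
                    uniform-phantom patches or halves of difference images)
    pixel_size_mm : pixel spacing in mm (assumed isotropic)
    """
    n, ny, nx = rois.shape
    # Simple detrending: remove each ROI's mean before the Fourier transform.
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    nps = spectra.mean(axis=0) * pixel_size_mm ** 2 / (nx * ny)
    freqs = np.fft.fftfreq(nx, d=pixel_size_mm)   # spatial frequencies in cycles/mm
    return np.fft.fftshift(nps), np.fft.fftshift(freqs)
```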
Projection matrix acquisition for cone-beam computed tomography iterative reconstruction
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao
2017-02-01
The projection matrix is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The volume to be reconstructed is considered as consisting of three orthogonal sets of equally spaced and parallel planes, rather than individual voxels. After obtaining the intersections of the rays with the surfaces of the voxels, the coordinates of the intersection points are compared with the voxel vertices to obtain the index values of the voxels that the ray traversed. Without considering the slope of the ray with respect to each voxel, only the positions of two points need to be compared. Finally, computer simulations are used to verify the effectiveness of the algorithm.
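A minimal sketch of the plane-intersection idea (closely related to Siddon's method): the ray is intersected with the three orthogonal sets of voxel boundary planes, and each segment midpoint is mapped to a voxel index by a simple comparison of positions. The function name and geometry conventions are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def traversed_voxels(src, det, n_vox, vox_size, origin):
    """Voxel indices and intersection lengths for the ray from src to det.

    The volume is treated as three orthogonal sets of equally spaced parallel
    planes; ray/plane intersection parameters are collected, sorted, and each
    segment midpoint is mapped to a voxel index by comparing positions.
    """
    src = np.asarray(src, float)
    det = np.asarray(det, float)
    n_vox = np.asarray(n_vox, int)
    vox_size = np.asarray(vox_size, float)
    origin = np.asarray(origin, float)

    direction = det - src
    alphas = [0.0, 1.0]
    for axis in range(3):
        if abs(direction[axis]) < 1e-12:
            continue  # ray parallel to this set of planes
        planes = origin[axis] + vox_size[axis] * np.arange(n_vox[axis] + 1)
        a = (planes - src[axis]) / direction[axis]
        alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))

    mids = src + (0.5 * (alphas[:-1] + alphas[1:]))[:, None] * direction
    idx = np.floor((mids - origin) / vox_size).astype(int)
    inside = np.all((idx >= 0) & (idx < n_vox), axis=1)
    lengths = np.diff(alphas) * np.linalg.norm(direction)
    return idx[inside], lengths[inside]
```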
Born iterative reconstruction using perturbed-phase field estimates.
Astheimer, Jeffrey P; Waag, Robert C
2008-10-01
A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.
Three-dimensional focus of attention for iterative cone-beam micro-CT reconstruction
NASA Astrophysics Data System (ADS)
Benson, T. M.; Gregor, J.
2006-09-01
Three-dimensional iterative reconstruction of high-resolution, circular orbit cone-beam x-ray CT data is often considered impractical due to the demand for vast amounts of computer cycles and associated memory. In this paper, we show that the computational burden can be reduced by limiting the reconstruction to a small, well-defined portion of the image volume. We first discuss using the support region defined by the set of voxels covered by all of the projection views. We then present a data-driven preprocessing technique called focus of attention that heuristically separates both image and projection data into object and background before reconstruction, thereby further reducing the reconstruction region of interest. We present experimental results for both methods based on mouse data and a parallelized implementation of the SIRT algorithm. The computational savings associated with the support region are substantial. However, the results for focus of attention are even more impressive in that only about one quarter of the computer cycles and memory are needed compared with reconstruction of the entire image volume. The image quality is not compromised by either method.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.
Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
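A schematic sketch of such a joint scheme, alternating between reconstruction and per-projection registration against reprojections, is given below; reconstruct and reproject are hypothetical callables, the integer phase-correlation shift estimate and the periodic roll are simplifications, and this is not the authors' algorithm.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer (row, col) shift that, applied to `mov` with np.roll, best aligns
    it to `ref`, estimated by phase correlation."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape, dtype=float)
    peak[peak > shape / 2] -= shape[peak > shape / 2]   # map to signed shifts
    return peak

def joint_align_reconstruct(projections, angles, reconstruct, reproject, n_iter=5):
    """Alternate between tomographic reconstruction and per-projection alignment.

    projections : list/stack of measured 2D projections
    reconstruct : callable (projections, angles) -> volume
    reproject   : callable (volume, angles) -> synthetic projections
    """
    aligned = [p.copy() for p in projections]
    offsets = np.zeros((len(projections), 2))
    vol = None
    for _ in range(n_iter):
        vol = reconstruct(aligned, angles)
        synth = reproject(vol, angles)
        for k, (meas, model) in enumerate(zip(projections, synth)):
            d = phase_correlation_shift(model, meas)
            offsets[k] = d
            # Periodic shift keeps the sketch simple; real data would be padded.
            aligned[k] = np.roll(np.roll(meas, int(d[0]), axis=0), int(d[1]), axis=1)
    return vol, offsets
```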
Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish
Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon
2015-01-01
Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086
Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju
2015-01-01
The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing its image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). The image noise was measured in the first study using a body phantom, the CNR was measured in the second study using a contrast phantom, and the spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. The quantitative analysis of the first and second studies showed that the images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). The qualitative analysis of the third study also showed that the images reconstructed using ASIR-V had significantly improved spatial resolution compared with those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to further reduce the radiation dose without compromising image quality.
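For reference, the image-noise and CNR figures of merit used in phantom studies of this kind are typically computed as below; the exact ROI placement and the CNR definition used in the paper may differ.

```python
import numpy as np

def image_noise(image, uniform_mask):
    """Image noise as the standard deviation inside a uniform ROI."""
    return image[uniform_mask].std(ddof=1)

def contrast_to_noise_ratio(image, object_mask, background_mask):
    """CNR = |mean(object) - mean(background)| / std(background)."""
    mu_obj = image[object_mask].mean()
    mu_bkg = image[background_mask].mean()
    return abs(mu_obj - mu_bkg) / image[background_mask].std(ddof=1)
```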
The motional Stark effect diagnostic for ITER using a line-shift approach.
Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E
2008-10-01
The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.
Nonlinearity response correction in phase-shifting deflectometry
NASA Astrophysics Data System (ADS)
Nguyen, Manh The; Kang, Pilseong; Ghim, Young-Sik; Rhee, Hyug-Gyo
2018-04-01
Owing to the nonlinearity response of digital devices such as screens and cameras in phase-shifting deflectometry, non-sinusoidal phase-shifted fringe patterns are generated and additional measurement errors are introduced. In this paper, a new deflectometry technique is described for overcoming these problems using a pre-distorted pattern combined with an advanced iterative algorithm. The experiment results show that this method can reconstruct the 3D surface map of a sample without fringe print-through caused by the nonlinearity response of digital devices. The proposed technique is verified by measuring the surface height variations in a deformable mirror and comparing them with the measurement result obtained using a coordinate measuring machine. The difference between the two measurement results is estimated to be less than 13 µm.
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge parts of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given a small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
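A minimal sketch of an edge-preserving (weighted) TV term of the kind described above: pixels with strong gradients receive a small penalty weight so that smoothing acts mainly on non-edge regions. The weight formula and the constant delta are assumptions for illustration, not the paper's exact weighting scheme.

```python
import numpy as np

def edge_weights(u, delta=0.01):
    """Penalty weights close to 0 at strong edges and close to 1 in flat regions."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    gmag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (1.0 + (gmag / delta) ** 2)

def edge_preserving_tv(u, w, eps=1e-8):
    """Weighted (edge-preserving) total variation: sum of w * |grad u|."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return float(np.sum(w * np.sqrt(gx ** 2 + gy ** 2 + eps)))
```

During the iterations the weights would be recomputed from the current image estimate, so that edge pixels are gradually identified as the reconstruction converges.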
Kumar, Dushyant; Hariharan, Hari; Faizy, Tobias D; Borchert, Patrick; Siemonsen, Susanne; Fiehler, Jens; Reddy, Ravinder; Sedlacik, Jan
2018-05-12
We present a computationally feasible and iterative multi-voxel spatially regularized algorithm for myelin water fraction (MWF) reconstruction. This method utilizes 3D spatial correlations present in anatomical/pathological tissues and the underlying B1+ (flip angle) inhomogeneity to enhance the noise robustness of the reconstruction, while intrinsically accounting for stimulated echo contributions using T2-distribution data alone. Simulated data and in vivo data acquired using 3D non-selective multi-echo spin echo (3DNS-MESE) were used to compare the reconstruction quality of the proposed approach against those of the popular algorithm (the method by Prasloski et al.) and our previously proposed 2D multi-slice spatial regularization approach. We also investigated whether the inter-sequence correlations and agreements improved as a result of the proposed approach. MWF-quantifications from two sequences, 3DNS-MESE vs 3DNS-gradient and spin echo (3DNS-GRASE), were compared for both reconstruction approaches to assess correlations and agreements between inter-sequence MWF-value pairs. MWF values from whole-brain data of six volunteers and two multiple sclerosis patients are reported as well. In comparison with competing approaches such as Prasloski's method or our previously proposed 2D multi-slice spatial regularization method, the proposed method showed better agreement with simulated truths using regression analyses and Bland-Altman analyses. For 3DNS-MESE data, MWF-maps reconstructed using the proposed algorithm provided better depictions of white matter structures in subcortical areas adjoining gray matter, which agreed more closely with corresponding contrasts on T2-weighted images than MWF-maps reconstructed with the method by Prasloski et al. We also achieved a higher level of correlations and agreements between inter-sequence (3DNS-MESE vs 3DNS-GRASE) MWF-value pairs. The proposed algorithm provides more noise-robust fits to T2-decay data and improves MWF-quantifications in white matter structures, especially in the sub-cortical white matter and major white matter tract regions. Copyright © 2018 Elsevier Inc. All rights reserved.
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
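As a small illustration of a divergence penalty on a velocity field (in the spirit of the finite-difference variant above, though not the authors' exact formulation), the finite-difference divergence and its l1-norm can be computed as follows.

```python
import numpy as np

def divergence(vx, vy, vz, spacing=1.0):
    """Finite-difference divergence of a 3D velocity field (one array per component)."""
    dvx = np.gradient(vx, spacing, axis=0)
    dvy = np.gradient(vy, spacing, axis=1)
    dvz = np.gradient(vz, spacing, axis=2)
    return dvx + dvy + dvz

def divergence_l1(vx, vy, vz, spacing=1.0):
    """l1-norm of the divergence, usable as a regularization term for flow data."""
    return float(np.abs(divergence(vx, vy, vz, spacing)).sum())
```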
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
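A generic greedy-recovery sketch in the spirit of RMP is given below: several of the largest correlation values are selected per iteration, a least-squares fit is done on the support, and the estimate is then pruned back to the target sparsity. The selection size, pruning rule, and stopping criterion are simplifying assumptions, not the published RMP algorithm.

```python
import numpy as np

def greedy_sparse_recover(A, y, sparsity, n_select=4, n_iter=20):
    """Greedy sparse recovery: per iteration, add the n_select columns most
    correlated with the residual, solve least squares on the support, then
    prune the support back to the target sparsity."""
    m, n = A.shape
    support = np.array([], dtype=int)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        corr = np.abs(A.T @ residual)
        new = np.argsort(corr)[-n_select:]
        support = np.union1d(support, new).astype(int)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        keep = support[np.argsort(np.abs(coef))[-sparsity:]]   # prune the estimate
        coef_k, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        x = np.zeros(n)
        x[keep] = coef_k
        support = keep
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x
```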
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.
de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica
2018-05-15
Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ voxels) using partitioning strategies in the forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular spans, and projection sizes. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare it systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization (MLEM) iterations. The choice of subset size was not crucial as long as there were at least 2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
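For reference, the basic OSEM update whose subset size and iteration number are being tuned in studies like this one can be sketched as follows; attenuation, scatter, and collimator modelling are omitted, so this is a bare-bones illustration rather than the reconstruction used in the paper.

```python
import numpy as np

def osem(A, counts, n_subsets=8, n_iter=5, eps=1e-12):
    """Ordered-subsets expectation maximization for emission data.

    A        : (n_bins, n_voxels) system matrix
    counts   : (n_bins,) measured projection counts
    One full iteration sweeps all subsets, so the number of 'equivalent MLEM
    iterations' is roughly n_iter * n_subsets.
    """
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            a_s = A[idx]
            expected = a_s @ x + eps            # forward projection over the subset
            ratio = counts[idx] / expected
            x *= (a_s.T @ ratio) / (a_s.sum(axis=0) + eps)   # multiplicative update
    return x
```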
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint (TVIC) method, which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, the TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions present the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object at the same time. A comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging
Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.
2017-01-01
Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to the ground truth than traditional beamforming. However, the computing capabilities of current platforms need to evolve before the frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
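A minimal sketch of one standard solver for the ℓ1-regularized least-squares problem mentioned above (ISTA, one of the well-known iterative shrinkage approaches); the acquisition matrix A and the regularization weight are placeholders, and the surveyed algorithms in the paper include other variants.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```

Increasing lam makes the solution sparser, which is the "solution sparsity may be adjusted as desired" knob referred to in the abstract.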
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
Iterative wave-front reconstruction in the Fourier domain.
Bond, Charlotte Z; Correia, Carlos M; Sauvage, Jean-François; Neichel, Benoit; Fusco, Thierry
2017-05-15
The use of Fourier methods in wave-front reconstruction can significantly reduce the computation time for large telescopes with a high number of degrees of freedom. However, Fourier algorithms for discrete data require a rectangular data set which conforms to specific boundary requirements, whereas wave-front sensor data is typically defined over a circular domain (the telescope pupil). Here we present an iterative Gerchberg routine modified for the purposes of discrete wave-front reconstruction which adapts the measurement data (wave-front sensor slopes) for Fourier analysis, fulfilling the requirements of the fast Fourier transform (FFT) and providing accurate reconstruction. The routine is used in the adaptation step only and can be coupled to any other Wiener-like or least-squares method. We compare simulations using this method with previous Fourier methods and show an increase in performance in terms of Strehl ratio and a reduction in noise propagation for a 40×40 SPHERE-like adaptive optics system. For closed-loop operation with minimal iterations the Gerchberg method provides an improvement in Strehl ratio, from 95.4% to 96.9% in K-band. This corresponds to an ~40 nm improvement in rms error, and avoids the high spatial frequency errors present in other methods, providing an increase in contrast towards the edge of the correctable band.
Gaitanis, Anastasios; Kastis, George A; Vlastou, Elena; Bouziotis, Penelope; Verginis, Panayotis; Anagnostopoulos, Constantinos D
2017-08-01
The Tera-Tomo 3D image reconstruction algorithm (a version of OSEM), provided with the Mediso nanoScan® PC (PET8/2) small-animal positron emission tomograph (PET)/x-ray computed tomography (CT) scanner, has various parameter options such as the total level of regularization, subsets, and iterations. The acquisition time in PET also plays an important role. This study aims to assess the performance of this new small-animal PET/CT scanner for different acquisition times and reconstruction parameters, for 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and Ga-68, under the NEMA NU 4-2008 standards. Various image quality metrics were calculated for different realizations of [18F]FDG and Ga-68 filled image quality (IQ) phantoms. [18F]FDG imaging produced improved images over Ga-68. The best compromise for the optimization of all image quality factors is achieved for at least 30 min acquisition and image reconstruction with 52 iteration updates combined with a high regularization level. A high regularization level at 52 iteration updates and 30 min acquisition time were found to optimize most of the figures of merit investigated.
Solving ill-posed inverse problems using iterative deep neural networks
NASA Astrophysics Data System (ADS)
Adler, Jonas; Öktem, Ozan
2017-12-01
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction, and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
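To make the iterative correction scheme above concrete, the sketch below applies one Tikhonov-regularized gradient step, filters the intermediate result with a 3-D discrete cosine transform, and then applies an L1 soft-threshold. The flattened system matrix A, the step size, the crude low-frequency mask, and all parameter values are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fmt_iteration(A, y, x, alpha=1e-2, step=1e-1, keep=0.2, lam=1e-3):
    """One illustrative iteration: Tikhonov-regularized update, 3-D DCT
    low-pass filtering of the intermediate result, then L1 soft-thresholding.

    A : (n_meas, n_voxels) system matrix acting on the flattened volume
    y : measured fluorescence data
    x : current 3-D estimate of the fluorescence distribution
    """
    # gradient step on 0.5*||Ax - y||^2 + 0.5*alpha*||x||^2
    grad = (A.T @ (A @ x.ravel() - y) + alpha * x.ravel()).reshape(x.shape)
    x = x - step * grad
    # filter the intermediate result in the 3-D DCT domain
    c = dctn(x, norm='ortho')
    nz, ny, nx = x.shape
    mask = np.zeros_like(c)
    mask[:int(keep * nz), :int(keep * ny), :int(keep * nx)] = 1.0
    x = idctn(c * mask, norm='ortho')
    # sparsity constraint step (L1 regularization via soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```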
The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.
Dias, José M B; Leitao, José M N
2002-01-01
This paper presents an effective algorithm for absolute phase (not simply modulo-2-pi) estimation from incomplete, noisy and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2-pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques and an iterative conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
Cube Kohonen self-organizing map (CKSOM) model with new equations in organizing unstructured data.
Lim, Seng Poh; Haron, Habibollah
2013-09-01
Surface reconstruction by using 3-D data is used to represent the surface of an object and perform important tasks. The type of data used is important and can be described as either structured or unstructured. For unstructured data, there is no connectivity information between data points. As a result, incorrect shapes will be obtained during the imaging process. Therefore, the data should be reorganized by finding the correct topology so that the correct shape can be obtained. Previous studies have shown that the Kohonen self-organizing map (KSOM) could be used to solve data organizing problems. However, 2-D Kohonen maps are limited because they are unable to cover the whole surface of closed 3-D surface data. Furthermore, the neurons inside the 3-D KSOM structure should be removed in order to create a correct wireframe model. This is because only the outside neurons are used to represent the surface of an object. The aim of this paper is to use KSOM to organize unstructured data for closed surfaces. KSOM is tested here on its ability to organize medical image data, since it has mostly been applied to engineering data. Enhancements are added to the model by introducing a class number and an index vector, and new equations are created. Various grid sizes and maximum iterations are tested in the experiments. Based on the results, the number of redundancies is found to be directly proportional to the grid size. When we increase the maximum iterations, the surface of the image becomes smoother. An area formula is used and manual calculations are performed to validate the results. This model is implemented and images are created using Dev C++ and GNUPlot.
Computer-aided diagnosis of early knee osteoarthritis based on MRI T2 mapping.
Wu, Yixiao; Yang, Ran; Jia, Sen; Li, Zhanjun; Zhou, Zhiyang; Lou, Ting
2014-01-01
This work was aimed at studying the method of computer-aided diagnosis of early knee OA (OA: osteoarthritis). Based on the technique of MRI (MRI: Magnetic Resonance Imaging) T2 Mapping, through computer image processing, feature extraction, calculation and analysis via constructing a classifier, an effective computer-aided diagnosis method for knee OA was created to assist doctors in their accurate, timely and convenient detection of potential risk of OA. In order to evaluate this method, a total of 1380 data from the MRI images of 46 samples of knee joints were collected. These data were then modeled through linear regression on an offline general platform by the use of the ImageJ software, and a map of the physical parameter T2 was reconstructed. After the image processing, the T2 values of ten regions in the WORMS (WORMS: Whole-organ Magnetic Resonance Imaging Score) areas of the articular cartilage were extracted to be used as the eigenvalues in data mining. Then, an RBF (RBF: Radial Basis Function) network classifier was built to classify and identify the collected data. The classifier exhibited a final identification accuracy of 75%, indicating a good result for assisting diagnosis. Since the knee OA classifier constituted by a weights-directly-determined RBF neural network did not require any iteration, our results demonstrated that the optimal weights, appropriate center and variance could be yielded through simple procedures. Furthermore, the accuracy for both the training samples and the testing samples from the normal group could reach 100%. Finally, the classifier was superior both in time efficiency and classification performance to the frequently used classifiers based on iterative learning. Thus it was suitable to be used as an aid to computer-aided diagnosis of early knee OA.
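A weights-directly-determined RBF classifier of the kind described above can be sketched as a single linear solve once the centers and the variance are fixed, with no iterative learning. The function names, and the way centers and sigma are supplied, are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def rbf_fit(X, y, centers, sigma):
    """Direct weight determination for an RBF network: with fixed centers
    and variance, the output weights follow from one least-squares solve.

    X       : (n_samples, n_features) training eigenvalues
    y       : (n_samples,) or (n_samples, n_classes) target labels
    centers : (n_centers, n_features) RBF centers
    """
    G = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
               / (2.0 * sigma ** 2))             # design matrix
    w, *_ = np.linalg.lstsq(G, y, rcond=None)    # no iterative training
    return w

def rbf_predict(X, centers, sigma, w):
    G = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
               / (2.0 * sigma ** 2))
    return G @ w
```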
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
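The expectation-maximization iteration referred to above (maximum likelihood with a positivity constraint) has the familiar multiplicative form sketched below. The system matrix, the fixed number of iterations standing in for the paper's statistically motivated stopping rule, and the small floor values are assumptions for illustration.

```python
import numpy as np

def mlem(A, counts, n_iter=50, x0=None):
    """Classical MLEM update for count data.

    A      : (n_counts, n_pixels) system matrix mapping image to expected counts
    counts : measured count (modulation) profile
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.copy()
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        expected = A @ x
        ratio = counts / np.maximum(expected, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)   # keeps x >= 0
    return x
```

In practice the quality of the result depends strongly on when the iteration is stopped, which is exactly the role of the regularizing stopping rule emphasized in the abstract.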
Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess
2016-12-01
Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR ( P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR ( P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.
An iterative reconstruction method for high-pitch helical luggage CT
NASA Astrophysics Data System (ADS)
Xue, Hui; Zhang, Li; Chen, Zhiqiang; Jin, Xin
2012-10-01
X-ray luggage CT is widely used in airports and railway stations to detect contraband and dangerous goods that may pose a potential threat to public safety, playing an important role in homeland security. An X-ray luggage CT scanner usually follows a helical trajectory with a high pitch to achieve a high passing speed of the luggage. The disadvantage of a high pitch is that conventional filtered back-projection (FBP) requires a very large slice thickness, leading to poor axial resolution and helical artifacts. Especially when severe data inconsistencies are present in the z-direction, as at the ends of a scanned object, the partial volume effect leads to inaccurate values and may cause misidentification. In this paper, an iterative reconstruction method is developed to improve the image quality and accuracy for a large-spacing multi-detector high-pitch helical luggage CT system. In this method, the slice thickness is set to be much smaller than the pitch. Each slice involves projection data collected in a rather small angular range, making this an ill-conditioned limited-angle problem. First, a low-resolution reconstruction is employed to obtain images, which are used as prior images in the following process. Then iterative reconstruction is performed to obtain high-resolution images. This method enables a high volume coverage speed and a thin reconstruction slice for the helical luggage CT. We validate this method with data collected on a commercial X-ray luggage CT scanner.
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.
2017-03-01
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-02-01
Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
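The preconditioned conjugate gradient core of such an approach is sketched below with the system operator and the preconditioner passed as callables. In the paper the preconditioner is a layer-oriented multigrid cycle with block symmetric Gauss-Seidel smoothing, which is represented here only by the placeholder apply_M; the iteration count and tolerance are illustrative assumptions.

```python
import numpy as np

def pcg(apply_A, b, apply_M, x0=None, n_iter=30, tol=1e-8):
    """Standard preconditioned conjugate gradient for A x = b, with A and the
    preconditioner M given as matrix-free callables acting on 1-D vectors."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Because only matrix-vector products are needed, the sparse structure of the MCAO problem is exploited without ever forming or inverting a dense reconstruction matrix.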
Born iterative reconstruction using perturbed-phase field estimates
Astheimer, Jeffrey P.; Waag, Robert C.
2008-01-01
A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873
Modelling the physics in iterative reconstruction for transmission computed tomography
Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.
2013-01-01
There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries, and it allows detailed models of photon transport and detection physics to be included so that a wide variety of image-degrading effects can be accurately corrected. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
Road detection in SAR images using a tensor voting algorithm
NASA Astrophysics Data System (ADS)
Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian
2007-11-01
In this paper, the problem of the detection of road networks in Synthetic Aperture Radar (SAR) images is addressed. Most previous methods extract roads by detecting lines and reconstructing the network. Traditional algorithms used in the reconstruction stage, such as MRFs, GA, and Level Set, are iterative. The tensor voting methodology we propose is non-iterative and insensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, which is related to the scale. The algorithm we present is verified to be effective when applied to road extraction from real Radarsat images.
NASA Astrophysics Data System (ADS)
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory without reducing accuracy. This has been confirmed by real experimental results.
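A textbook successive over-relaxation sweep of the kind accelerated above is sketched here for a linear system Ax = b; the dense-matrix form, the fixed relaxation factor, and the iteration count are illustrative assumptions rather than the modified SOR solver of the paper.

```python
import numpy as np

def sor(A, b, omega=1.5, n_iter=200, x0=None):
    """Successive over-relaxation: a Gauss-Seidel sweep accelerated by the
    relaxation factor omega (typically 1 < omega < 2)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x
```

Setting omega = 1 recovers plain Gauss-Seidel; the speed-up reported above comes from choosing the relaxation factor close to its optimal value for the problem at hand.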
Variable aperture-based ptychographical iterative engine method
NASA Astrophysics Data System (ADS)
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a variety of scientific studies.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection
Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
NASA Astrophysics Data System (ADS)
Thomas, L.; Tremblais, B.; David, L.
2014-03-01
Optimization of multiplicative algebraic reconstruction technique (MART), simultaneous MART and block iterative MART reconstruction techniques was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowledge of how each reconstruction parameter influences the quality of particle volume reconstruction and computing time is the key in Tomo-PIV. These criteria were applied to a real case, a jet in cross flow, and were validated.
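A minimal multiplicative ART (MART) update of the kind tuned in the study above is sketched here: each ray correction multiplies the voxels it touches by the ratio of the measured to the reprojected value, raised to a relaxed, weight-dependent power. The weight matrix, the relaxation factor, and the normalization by the maximum ray weight are common choices assumed for the sketch, not necessarily the implementation evaluated in the paper.

```python
import numpy as np

def mart(W, p, n_iter=5, mu=1.0, x0=None):
    """Multiplicative algebraic reconstruction technique (sketch).

    W : (n_rays, n_voxels) ray-weight matrix
    p : (n_rays,) measured projections (pixel intensities)
    mu: relaxation factor in (0, 1]
    """
    x = np.ones(W.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        for j in range(W.shape[0]):          # loop over rays
            wj = W[j]
            proj = wj @ x                    # current reprojection of ray j
            if proj <= 0:
                continue
            ratio = p[j] / proj
            # multiplicative, weight-dependent correction of the voxels
            x *= ratio ** (mu * wj / max(wj.max(), 1e-12))
    return x
```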
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B.; Zeng, G. L.
2006-09-15
A rotating slat collimator can be used to acquire planar-integral data. It achieves higher geometric efficiency than a parallel-hole collimator by accepting more photons, but the planar-integral data contain less tomographic information that may result in larger noise amplification in the reconstruction. Lodge evaluated the rotating slat system and the parallel-hole system based on noise behavior for an FBP reconstruction. Here, we evaluate the noise propagation properties of the two collimation systems for iterative reconstruction. We extend Huesman's noise propagation analysis of the line-integral system to the planar-integral case, and show that approximately 2.0(D/dp) SPECT angles, 2.5(D/dp) self-spinning angles at each detector position, and a 0.5dp detector sampling interval are required in order for the planar-integral data to be efficiently utilized. Here, D is the diameter of the object and dp is the linear dimension of the voxels that subdivide the object. The noise propagation behaviors of the two systems are then compared based on a least-square reconstruction using the ratio of the SNR in the image reconstructed using a planar-integral system to that reconstructed using a line-integral system. The ratio is found to be proportional to √(F/D), where F is a geometric efficiency factor. This result has been verified by computer simulations. It confirms that for an iterative reconstruction, the noise tradeoff of the two systems is not only dependent on the increase of the geometric efficiency afforded by the planar projection method, but also dependent on the size of the object. The planar-integral system works better for small objects, while the line-integral system performs better for large ones. This result is consistent with Lodge's results based on the FBP method.
Sowa-Staszczak, Anna; Lenda-Tracz, Wioletta; Tomaszuk, Monika; Głowa, Bogusław; Hubalewska-Dydejczyk, Alicja
2013-01-01
Somatostatin receptor scintigraphy (SRS) is a useful tool in the assessment of GEP-NET (gastroenteropancreatic neuroendocrine tumor) patients. The choice of appropriate image reconstruction parameter settings is crucial for the interpretation of these images. The aim of the study was to investigate how the GEP-NET lesion signal-to-noise ratio (TCS/TCB) depends on different reconstruction settings for the Flash 3D software (Siemens). SRS results of 76 randomly selected patients with confirmed GEP-NET were analyzed. For SPECT studies the data were acquired using standard clinical settings 3-4 h after the injection of 740 MBq 99mTc-[EDDA/HYNIC] octreotate. To obtain final images, the OSEM 3D Flash reconstruction with different settings and FBP reconstruction were used. First, the TCS/TCB ratio in voxels was analyzed for different combinations of the number of subsets and the number of iterations of the OSEM 3D Flash reconstruction. Secondly, the same ratio was analyzed for different parameters of the Gaussian filter (with FWHM 2-4 times greater than the pixel size). The influence of scatter correction on the TCS/TCB ratio was also investigated. With an increasing number of subsets and iterations, an increase of the TCS/TCB ratio was observed. With increasing Gaussian filter FWHM, a decrease of the TCS/TCB ratio was observed. The use of scatter correction slightly decreases the values of this ratio. The OSEM algorithm provides a meaningfully better reconstruction of the SRS SPECT study as compared to the FBP technique. A high number of subsets improves image quality (images are smoother). An increasing number of iterations gives better contrast, and the shapes of lesions and organs are sharper. The choice of reconstruction parameters is a compromise between the qualitative appearance of the image and its quantitative accuracy and should not be modified when comparing multiple studies of the same patient.
Vachha, Behroze; Brodoefel, Harald; Wilcox, Carol; Hackney, David B; Moonis, Gul
2013-12-01
To compare objective and subjective image quality in neck CT images acquired at different tube current-time products (275 mAs and 340 mAs) and reconstructed with filtered-back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR). HIPAA-compliant study with IRB approval and waiver of informed consent. 66 consecutive patients were randomly assigned to undergo contrast-enhanced neck CT at a standard tube-current-time-product (340 mAs; n = 33) or reduced tube-current-time-product (275 mAs, n = 33). Data sets were reconstructed with FBP and 2 levels (30%, 40%) of ASIR-FBP blending at 340 mAs and 275 mAs. Two neuroradiologists assessed subjective image quality in a blinded and randomized manner. Volume CT dose index (CTDIvol), dose-length-product (DLP), effective dose, and objective image noise were recorded. Signal-to-noise ratio (SNR) was computed as mean attenuation in a region of interest in the sternocleidomastoid muscle divided by image noise. Compared with FBP, ASIR resulted in a reduction of image noise at both 340 mAs and 275 mAs. Reduction of tube current from 340 mAs to 275 mAs resulted in an increase in mean objective image noise (p=0.02) and a decrease in SNR (p = 0.03) when images were reconstructed with FBP. However, when the 275 mAs images were reconstructed using ASIR, the mean objective image noise and SNR were similar to those of the standard 340 mAs CT images reconstructed with FBP (p>0.05). Subjective image noise was ranked by both raters as either average or less-than-average irrespective of the tube current and iterative reconstruction technique. Adapting ASIR into neck CT protocols reduced effective dose by 17% without compromising image quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Progressive Stochastic Reconstruction Technique (PSRT) for cryo electron tomography.
Turoňová, Beata; Marsalek, Lukas; Davidovič, Tomáš; Slusallek, Philipp
2015-03-01
Cryo Electron Tomography (cryoET) plays an essential role in Structural Biology, as it is the only technique that allows the structure of large macromolecular complexes to be studied in situ, in their close-to-native environment. The reconstruction methods currently in use, such as Weighted Back Projection (WBP) or the Simultaneous Iterative Reconstruction Technique (SIRT), deliver noisy and low-contrast reconstructions, which complicates the application of high-resolution protocols such as Subtomogram Averaging (SA). We propose a Progressive Stochastic Reconstruction Technique (PSRT) - a novel iterative approach to tomographic reconstruction in cryoET based on Monte Carlo random walks guided by a Metropolis-Hastings sampling strategy. We design a progressive reconstruction scheme to suit the conditions present in cryoET and apply it successfully to reconstructions of macromolecular complexes from both synthetic and experimental datasets. We show how to integrate PSRT into SA, where it provides an elegant solution to the region-of-interest problem and delivers high-contrast reconstructions that significantly improve template-based localization without any loss of high-resolution structural information. Furthermore, the locality of SA is exploited to design an importance sampling scheme which significantly speeds up the otherwise slow Monte Carlo approach. Finally, we design a new memory-efficient solution for the specimen-level interior problem of cryoET, removing all associated artifacts.
[High resolution reconstruction of PET images using the iterative OSEM algorithm].
Doll, J; Henze, M; Bublitz, O; Werling, A; Adam, L E; Haberkorn, U; Semmler, W; Brix, G
2004-06-01
Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. All measurements were performed on the whole-body PET system ECAT EXACT HR(+) in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the use of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes of PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed on a cylinder phantom, the hot-spot Jaszczak phantom, and the 3D Hoffman brain phantom as well as different patient examinations were analyzed. The scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without amplifying image noise or introducing image artifacts. The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals.
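A minimal ordered-subsets EM loop with a simple resolution model in the forward and adjoint steps is sketched below; the blur callable stands in for a measured line-spread function and is assumed symmetric, and the subset handling, floors, and iteration count are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def osem_psf(A_subsets, y_subsets, blur, n_iter=4, x0=None):
    """Ordered-subsets EM with a resolution (PSF) model inside the loop.

    A_subsets : list of (n_bins_s, n_voxels) system-matrix blocks, one per subset
    y_subsets : list of corresponding measured sinogram blocks
    blur      : callable applying the (assumed symmetric) PSF to an image vector
    """
    n_vox = A_subsets[0].shape[1]
    x = np.ones(n_vox) if x0 is None else x0.copy()
    for _ in range(n_iter):
        for A, y in zip(A_subsets, y_subsets):
            fwd = A @ blur(x)                           # blurred forward projection
            ratio = y / np.maximum(fwd, 1e-12)
            sens = blur(A.T @ np.ones_like(y))          # subset sensitivity image
            x = x * blur(A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```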
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noid, G; Tai, A; Li, X
2015-06-15
Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. The CT IQ for patients with a high Body Mass Index (BMI) can suffer from increased noise due to photon starvation. The purpose of this study is to investigate and quantify the IQ enhancement for high BMI patients through the application of IR algorithms. Methods: CT raw data collected for 6 radiotherapy (RT) patients with BMI greater than or equal to 30 were retrospectively analyzed. All CT data were acquired using a CT scanner (Somatom Definition AS Open, Siemens) installed in a linac room (CT-on-rails) using standard imaging protocols. The CT data were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared and correlated with patient depth and BMI. The patient depth was defined as the largest distance from anterior to posterior along the bilateral symmetry axis. Results: IR techniques are demonstrated to preserve contrast and reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is roughly doubled by adopting the highest SAFIRE strength. A significant correlation was observed between patient depth and IR noise reduction through Pearson's correlation test (R = 0.9429/P = 0.0167). The mean patient depth was 30.4 cm and the average relative noise reduction for the strongest iterative reconstruction was 55%. Conclusion: The IR techniques produce a measurable enhancement of CT IQ by reducing the noise. Dramatic noise reduction is evident for the high BMI patients. The improved CT IQ enables more accurate delineation of tumors and organs at risk and more accurate dose calculations for RT planning and delivery guidance. Supported by Siemens.
SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, R; Pan, T; Ahmad, M
2016-06-15
Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but are costly time-wise and can degrade the image quality of bony anatomy for alignment with regularization. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long scan (8–10 min) CBCT data with corresponding RPM data were collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data were reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low and high regularization TV minimization (λ=2,10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70 mm for FDK, 2.50 mm for low regularization TV, 1.48 mm for high regularization TV, and 2.34 mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK and images were less blurred than TV reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high regularization TV minimization. These results, along with the reconstruction time and outside-VOI image quality advantages, suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.
Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.
Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M
2015-10-01
Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated identical image quality across methods. Of the sections containing metal, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Thirty-three percent of the metal-containing sections had poor image quality with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction; 13% with filtered back-projection, 17% with LIMAR, and 22% with iterative metal artifact reduction were of moderate image quality; 16% with filtered back-projection, 5% with LIMAR, and 30% with iterative metal artifact reduction were of good image quality; and 1% with LIMAR and 31% with iterative metal artifact reduction were of excellent image quality. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area.
Truong, Trong-Kha; Guidon, Arnaud
2014-01-01
Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its efficiency and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
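The generalized p-shrinkage mapping mentioned above has a commonly used closed form, sketched below; for p = 1 it reduces to ordinary soft-thresholding. This is shown as an assumption about the usual form of the operator, not as the paper's exact implementation.

```python
import numpy as np

def p_shrink(v, lam, p=0.5):
    """Generalized p-shrinkage: elementwise proximal-type mapping used to
    solve the decoupled l_p (0 < p <= 1) subproblems; p = 1 gives the
    standard soft-threshold."""
    mag = np.abs(v)
    safe = np.maximum(mag, 1e-12)                       # avoid 0**negative
    shrunk = np.maximum(mag - lam ** (2 - p) * safe ** (p - 1), 0.0)
    return np.sign(v) * shrunk
```

Within an augmented Lagrangian / variable-splitting scheme, this mapping is applied to the split gradient variables at each outer iteration, while the image update is handled by the FFT-based approximate solves described in the abstract.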
Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost
Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor
2009-01-01
Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321
A fractional-order accumulative regularization filter for force reconstruction
NASA Astrophysics Data System (ADS)
Wensong, Jiang; Zhongyu, Wang; Jing, Lv
2018-02-01
The ill-posedness of the inverse problem of force reconstruction comes from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the noisy measured responses are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtered measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the practical advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other settings of the FARF method and to traditional regularization methods for dynamic force reconstruction.
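The two main ingredients described above can be sketched as a fractional-order (r-order) accumulation of the measured response, using one common grey-system definition of the accumulation weights, followed by an iterated Tikhonov refinement of the force estimate. The matrix form, the fixed regularization parameter (which the paper selects by GCV), and the iteration count are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammaln

def frac_accumulate(x, r=0.1):
    """Fractional-order (r-order) accumulation of a 1-D signal, using the
    common binomial-coefficient weights w_j = Gamma(j + r) / (Gamma(j + 1) Gamma(r))."""
    n = len(x)
    j = np.arange(n)
    w = np.exp(gammaln(j + r) - gammaln(j + 1) - gammaln(r))
    return np.array([w[:k + 1][::-1] @ x[:k + 1] for k in range(n)])

def iterated_tikhonov(H, y, alpha=1e-2, n_iter=20):
    """Iterated Tikhonov refinement (a Landweber-like filtered iteration)
    for the linear force model y = H f; alpha is fixed here for simplicity."""
    n = H.shape[1]
    f = np.zeros(n)
    HtH = H.T @ H
    for _ in range(n_iter):
        f = f + np.linalg.solve(HtH + alpha * np.eye(n), H.T @ (y - H @ f))
    return f
```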
Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard; Neal, Douglas
2011-11-01
A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments that have lower seeding densities, 3D-PTV uses recorded images from typically 3-4 cameras and then tracks the individual particles in space and time. This technique is effective in flows that have lower seeding densities. For flows that have a higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume followed by the cross-correlation of subvolumes to provide the instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D-particle distribution directly using particles with certain imaging properties instead of voxels as base functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, this new method is validated using experimental data on a turbulent jet.
Meng, Yuguang; Lei, Hao
2010-06-01
An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, which is especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in that the transformation matrix for gridding (T) was constructed as the convolution of two sparse matrices, among which the former is determined by the sampling interval and the spatial distribution of the off-resonance frequencies and the latter by the sampling trajectory and the target grid in the Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More important, the method proposed allows tradeoff between the accuracy and the computation time of reconstruction, making customization of the use of such a method in different applications possible. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging on rat brain at 4.7 T. (c) 2010 Wiley-Liss, Inc.
WE-G-BRF-07: Non-Circular Scanning Trajectories with Varian Developer Mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A; Pearson, E; Pan, X
2014-06-15
Purpose: Cone-beam CT (CBCT) in image-guided radiation therapy (IGRT) typically acquires scan data via the circular trajectory of the linear accelerator's (linac) gantry rotation. Though this lends itself to analytic reconstruction algorithms like FDK, iterative reconstruction algorithms allow for a broader range of scanning trajectories. We implemented a non-circular scanning trajectory with Varian's TrueBeam Developer Mode and performed some preliminary reconstructions to verify the geometry. Methods: We used TrueBeam Developer Mode to program a new scanning trajectory that increases the field of view (FOV) along the gantry rotation axis without moving the patient. This trajectory consisted of moving the gantry in a circle, then translating the source and detector along the axial direction before acquiring another circular scan 19 cm away from the first. The linear portion of the trajectory includes an additional 4.5 cm above and below the axial planes of the source's circular rotation. We scanned a calibration phantom consisting of a lucite tube with a spiral pattern of CT spots and used the maximum-likelihood algorithm to iteratively reconstruct the CBCT volume. Results: With the TrueBeam trajectory definition, we acquired projection data of the calibration phantom using the previously described trajectory. We obtained a scan of the treatment couch for log normalization by scanning with the same trajectory but without the phantom present. Using the nominal geometric parameters reported in the projection headers with our iterative reconstruction algorithm, we obtained a correct reconstruction of the calibration phantom. Conclusion: The ability to implement new scanning trajectories with the TrueBeam Developer Mode gives us access to a new parameter space for imaging with CBCT for IGRT. Previous simulations and simple dual-circle scans have shown that iterative reconstruction with non-circular trajectories can increase the axial FOV with CBCT. Use of Developer Mode allows experimental testing of these and other new scanning trajectories. Support was provided in part by the University of Chicago Research Computing Center, Varian Medical Systems, and NIH Grants 1RO1CA120540, T32EB002103, S10 RR021039 and P30 CA14599. The contents of this work are solely the responsibility of the authors and do not necessarily represent the official views of the supporting organizations.
Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis
2015-07-01
To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging by using a new generation model-based iterative reconstruction (MBIR) to diagnose acute renal colic compared with a standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent the following two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and t test were used for statistical analysis. Intraobserver correlation was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnosis was performed (τ = 98.87% κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or increase in noise (reduced-dose MBIR vs ASIR 50% mean, respectively, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42). Its main drawback remains the long time required for reconstruction (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without an conspicuous deterioration in image quality in patients suspected of having renal colic.
Widmann, Gerlig; Schullian, Peter; Gassner, Eva-Maria; Hoermann, Romed; Bale, Reto; Puelacher, Wolfgang
2015-03-01
OBJECTIVE. The purpose of this article is to evaluate 2D and 3D image quality of high-resolution ultralow-dose CT images of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard filtered backprojection (FBP). MATERIALS AND METHODS. A formalin-fixed human cadaver head was scanned using a clinical reference protocol at a CT dose index volume of 30.48 mGy and a series of five ultralow-dose protocols at 3.48, 2.19, 0.82, 0.44, and 0.22 mGy using FBP and ASIR at 50% (ASIR-50), ASIR at 100% (ASIR-100), and MBIR. Blinded 2D axial and 3D volume-rendered images were compared with each other by three readers using top-down scoring. Scores were analyzed per protocol or dose and reconstruction. All images were compared with the FBP reference at 30.48 mGy. A nonparametric Mann-Whitney U test was used. Statistical significance was set at p < 0.05. RESULTS. For 2D images, the FBP reference at 30.48 mGy did not statistically significantly differ from ASIR-100 at 3.48 mGy, ASIR-100 at 2.19 mGy, and MBIR at 0.82 mGy. MBIR at 2.19 and 3.48 mGy scored statistically significantly better than the FBP reference (p = 0.032 and 0.001, respectively). For 3D images, the FBP reference at 30.48 mGy did not statistically significantly differ from all reconstructions at 3.48 mGy; FBP and ASIR-100 at 2.19 mGy; FBP, ASIR-100, and MBIR at 0.82 mGy; MBIR at 0.44 mGy; and MBIR at 0.22 mGy. CONCLUSION. MBIR (2D and 3D) and ASIR-100 (2D) may significantly improve subjective image quality of ultralow-dose images and may allow more than 90% dose reductions.
Effects of ray profile modeling on resolution recovery in clinical CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofmann, Christian; Knaup, Michael; Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz-heidelberg.de
2014-02-15
Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. However, among vendors and researchers, there is no consensus of how to best achieve these goals. The authors are focusing on the aspect of geometric ray profile modeling, which is realized by some algorithms, while others model the ray as a straight line. The authors incorporate ray-modeling (RM) in nonregularized iterative reconstruction. That means, instead of using one simple single needle beam to represent the x-ray, the authors evaluatemore » the double integral of attenuation path length over the finite source distribution and the finite detector element size in the numerical forward projection. Our investigations aim at analyzing the resolution recovery (RR) effects of RM. Resolution recovery means that frequencies can be recovered beyond the resolution limit of the imaging system. In order to evaluate, whether clinical CT images can benefit from modeling the geometrical properties of each x-ray, the authors performed a 2D simulation study of a clinical CT fan-beam geometry that includes the precise modeling of these geometrical properties. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and a Forbild thorax phantom with circular resolution patterns representing calcifications in the heart region are simulated. An FBP reconstruction with a Ram–Lak kernel is used as a reference reconstruction. The FBP is compared to iterative reconstruction techniques with and without RM: An ordered subsets convex (OSC) algorithm without any RM (OSC), an OSC where the forward projection is modeled concerning the finite focal spot and detector size (OSC-RM) and an OSC with RM and with a matched forward and backprojection pair (OSC-T-RM, T for transpose). In all cases, noise was matched to be able to focus on comparing spatial resolution. The authors use two different simulation settings. Both are based on the geometry of a typical clinical CT system (0.7 mm detector element size at isocenter, 1024 projections per rotation). Setting one has an exaggerated source width of 5.0 mm. Setting two has a realistically small source width of 0.5 mm. The authors also investigate the transition from setting one to two. To quantify image quality, the authors analyze line profiles through the resolution patterns to define a contrast factor (CF) for contrast-resolution plots, and the authors compare the normalized cross-correlation (NCC) with respect to the ground truth of the circular resolution patterns. To independently analyze whether RM is of advantage, the authors implemented several iterative reconstruction algorithms: The statistical iterative reconstruction algorithm OSC, the ordered subsets simultaneous algebraic reconstruction technique (OSSART) and another statistical iterative reconstruction algorithm, denoted with ordered subsets maximum likelihood (OSML) algorithm. All algorithms were implemented both without RM (denoted as OSC, OSSART, and OSML) and with RM (denoted as OSC-RM, OSSART-RM, and OSML-RM). Results: For the unrealistic case of a 5.0 mm focal spot the CF can be improved by a factor of two due to RM: the 4.2 LP/cm bar pattern, which is the first bar pattern that cannot be resolved without RM, can be easily resolved with RM. 
For the realistic case of a 0.5 mm focus, all results show approximately the same CF. The NCC shows no significant dependency on RM when the source width is smaller than 2.0 mm (as in clinical CT). From 2.0 mm to 5.0 mm focal spot size, increasing improvements can be observed with RM. Conclusions: Geometric RM in iterative reconstruction helps improve spatial resolution if the ray cross-section is significantly larger than the ray sampling distance. In clinical CT, however, the ray is not much thicker than the distance between neighboring ray centers, as the focal spot size is small and detector crosstalk is negligible, due to reflective coatings between detector elements. Therefore, RM appears not to be necessary in clinical CT to achieve resolution recovery.
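The ray-profile modeling evaluated above boils down to replacing each needle-beam line integral with an average of line integrals taken over the finite focal spot and the finite detector element. The sketch below illustrates this double averaging for a single 2D ray; the geometry, sub-sampling counts, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def line_integral(img, p0, p1, n_samples=512):
    """Approximate the attenuation line integral from p0 to p1 (row, col in pixel units)."""
    t = np.linspace(0.0, 1.0, n_samples)
    rows = p0[0] + t * (p1[0] - p0[0])
    cols = p0[1] + t * (p1[1] - p0[1])
    vals = map_coordinates(img, [rows, cols], order=1, mode='constant')
    step = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) / (n_samples - 1)
    return vals.sum() * step

def forward_project_ray(img, src, det, src_width=0.0, det_width=0.0, n_sub=1):
    """Needle beam (n_sub=1) or ray-profile model: average line integrals over
    sub-positions spread across the focal spot and the detector element
    (both offset along the row direction here, purely for illustration)."""
    offsets = np.linspace(-0.5, 0.5, n_sub) if n_sub > 1 else np.array([0.0])
    total = 0.0
    for ds in offsets:                       # focal-spot sub-positions
        for dd in offsets:                   # detector-element sub-positions
            p0 = (src[0] + ds * src_width, src[1])
            p1 = (det[0] + dd * det_width, det[1])
            total += line_integral(img, p0, p1)
    return total / offsets.size ** 2

# toy example: a water disc, one detector reading with and without ray modeling
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 1.0
src, det = (64.0, -200.0), (64.0, 328.0)
p_needle = forward_project_ray(img, src, det)                                        # straight-line model
p_rm = forward_project_ray(img, src, det, src_width=10.0, det_width=4.0, n_sub=7)    # with ray modeling
print(p_needle, p_rm)
```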
Efficient radial tagging CMR exam: A coherent k-space reading and image reconstruction approach.
Golshani, Shokoufeh; Nasiraei-Moghaddam, Abbas
2017-04-01
Cardiac MR tagging techniques, which facilitate the strain evaluation, have not yet been widely adopted in clinics due to inefficiencies in acquisition and postprocessing. This problem may be alleviated by exploiting the coherency in the three steps of tagging: preparation, acquisition, and reconstruction. Herein, we propose a fully polar-based tagging approach that may lead to real-time strain mapping. Radial readout trajectories were used to acquire radial tagging images and a Hankel-based algorithm, referred to as Polar Fourier Transform (PFT), has been adapted for reconstruction of the acquired raw data. In both phantom and human subjects, the overall performance of the method was investigated against radial undersampling and compared with the conventional reconstruction methods. Radially tagged images were reconstructed by the proposed PFT method from as few as 24 spokes with normalized root-mean-square-error of less than 3%. The reconstructed images showed a central focusing behavior, where the undersampling effects were pushed to the peripheral areas out of the central region of interest. Comparing the results with the re-gridding reconstruction technique, superior image quality and high robustness of the method were further established. In addition, a relative increase of 68 ± 2.5% in tagline sharpness was achieved for the PFT images and also higher tagging contrast (72 ± 5.6%), resulting from the well-tolerated undersampling artifacts, was observed in all reconstructions. The proposed approach led to the acceleration of the acquisition process, which was evaluated for up to eight-fold retrospectively from the fully sampled data. This is promising toward real-time imaging, and in contrast to iterative techniques, the method is consistent with online reconstruction. Magn Reson Med 77:1459-1472, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan
2015-01-01
Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682
Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy
NASA Astrophysics Data System (ADS)
Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges
2016-05-01
It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view, due to the potentially reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping
2015-09-15
Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
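The projection-space variant (Method 1) follows a simple loop: reconstruct from the current projections, reproject the estimate, align each measured view to its reprojection with some cost function, and repeat. The sketch below shows that loop for 2D parallel-beam data, using a squared-difference cost and integer shifts for brevity (the study found mutual information to work best); the phantom, shift model, and iteration counts are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon, iradon

def estimate_shift(measured, reprojected, max_shift=10):
    """Integer detector-bin shift minimizing the squared difference;
    the study favours mutual information, squared difference keeps the sketch short."""
    shifts = range(-max_shift, max_shift + 1)
    costs = [np.sum((np.roll(measured, -s) - reprojected) ** 2) for s in shifts]
    return list(shifts)[int(np.argmin(costs))]

def motion_correct(sino, theta, n_iter=3, max_shift=10):
    """Reconstruction-reprojection motion correction applied in projection space."""
    corrected = sino.copy()
    for _ in range(n_iter):
        recon = iradon(corrected, theta=theta)        # reconstruct current estimate
        reproj = radon(recon, theta=theta)            # reproject it
        reproj = reproj[:corrected.shape[0], :]       # match detector length if needed
        for v in range(corrected.shape[1]):           # per-view correction
            s = estimate_shift(corrected[:, v], reproj[:, v], max_shift)
            corrected[:, v] = np.roll(corrected[:, v], -s)
    return corrected

# toy demonstration: shift a block of views of a clean sinogram, then correct them
phantom = np.zeros((128, 128)); phantom[40:90, 50:80] = 1.0
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sino = radon(phantom, theta=theta)
corrupted = sino.copy()
corrupted[:, 60:] = np.roll(corrupted[:, 60:], 6, axis=0)   # abrupt patient motion
fixed = motion_correct(corrupted, theta)
```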
Østerås, Bjørn Helge; Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T
2016-08-01
Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Fourteen patients undergoing follow-up head CT were included. All patients underwent full dose (FD) exam and subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, peripheral and central gray matter. Additionally, quantitative image quality was measured in Catphan and vendor's water phantom. There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF, and -7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28% with image content. There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality.
Kuya, Keita; Shinohara, Yuki; Kato, Ayumi; Sakamoto, Makoto; Kurosaki, Masamichi; Ogawa, Toshihide
2017-03-01
The aim of this study is to assess the value of adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) for reduction of metal artifacts due to dental hardware in carotid CT angiography (CTA). Thirty-seven patients with dental hardware who underwent carotid CTA were included. CTA was performed with a GE Discovery CT750 HD scanner and reconstructed with filtered back projection (FBP), ASIR, and MBIR. We measured the standard deviation at the cervical segment of the internal carotid artery that was affected most by dental metal artifacts (SD1) and the standard deviation at the common carotid artery that was not affected by the artifact (SD2). We calculated the artifact index (AI) as follows: AI = [(SD1)² - (SD2)²]^(1/2), and compared each AI for FBP, ASIR, and MBIR. Visual assessment of the internal carotid artery was also performed by two neuroradiologists using a five-point scale for each axial and reconstructed sagittal image. The inter-observer agreement was analyzed using weighted kappa analysis. MBIR significantly improved AI compared with FBP and ASIR (p < 0.001, each). We found no significant difference in AI between FBP and ASIR (p = 0.502). The visual score of MBIR was significantly better than those of FBP and ASIR (p < 0.001, each), whereas the scores of ASIR were the same as those of FBP. Kappa values indicated good inter-observer agreements in all reconstructed images (0.747-0.778). MBIR resulted in a significant reduction in artifact from dental hardware in carotid CTA.
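The artifact index above is a single subtraction-in-quadrature of two ROI standard deviations, as in the following minimal sketch (the synthetic ROI values and the clamp at zero are illustrative additions, not part of the study):

```python
import numpy as np

def artifact_index(roi_artifact, roi_clean):
    """AI = sqrt(SD1^2 - SD2^2), with SD1 measured in the artifact-affected ROI
    and SD2 in an unaffected reference ROI (clamped at 0 if SD2 > SD1)."""
    sd1 = np.std(roi_artifact)
    sd2 = np.std(roi_clean)
    return float(np.sqrt(max(sd1 ** 2 - sd2 ** 2, 0.0)))

# example with synthetic ROI pixel values (HU)
rng = np.random.default_rng(0)
roi_ica = 60 + 35 * rng.standard_normal(400)   # near dental hardware
roi_cca = 60 + 12 * rng.standard_normal(400)   # reference vessel segment
print(round(artifact_index(roi_ica, roi_cca), 1))
```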
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, Samuel L., E-mail: samuel.brady@stjude.org; Shulkin, Barry L.
2015-02-15
Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% from the nondose reduced CTAC image for 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with CTAC protocols was found down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from predose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.
Wang, Tao; Gong, Yi; Shi, Yibing; Hua, Rong; Zhang, Qingshan
2017-07-01
The feasibility of applying a low-concentration contrast agent and low tube voltage combined with iterative reconstruction in whole brain computed tomography perfusion (CTP) imaging of patients with acute cerebral infarction was investigated. Fifty-nine patients who underwent whole brain CTP examination and were diagnosed with acute cerebral infarction from September 2014 to March 2016 were selected. Patients were randomly divided into groups A and B. There were 28 cases in group A [tube voltage, 100 kV; contrast agent, iohexol (350 mg I/ml), reconstructed by filtered back projection] and 31 cases in group B [tube voltage, 80 kV; contrast agent, iodixanol (270 mg I/ml), reconstructed by algebraic reconstruction technique]. The artery CT value, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), dose length product, effective dose (ED) of radiation and brain iodine intake of both groups were measured and statistically analyzed. Two physicians carried out kappa (κ) analysis on the consistency of image quality evaluation. The difference in subjective image quality evaluation between the groups was tested by the χ² test. The differences in CT value, SNR, CNR, CTP and CT angiography subjective image quality evaluation between both groups were not statistically significant (P>0.05); the diagnosis rate of the acute infarcts between the two groups was not significantly different; while the ED and iodine intake in group B (dual low-dose group) were lower than those in group A. In conclusion, the combination of low tube voltage and iterative reconstruction, together with a low-concentration contrast agent (270 mg I/ml), in whole brain CTP examination reduced ED and iodine intake without compromising image quality, thereby reducing the risk of contrast-induced nephropathy.
Remote inspection with multi-copters, radiological sensors and SLAM techniques
NASA Astrophysics Data System (ADS)
Carvalho, Henrique; Vale, Alberto; Marques, Rúben; Ventura, Rodrigo; Brouwer, Yoeri; Gonçalves, Bruno
2018-01-01
Activated material can be found in different scenarios, such as in nuclear reactor facilities or medical facilities (e.g. in positron emission tomography, commonly known as PET scanning). In addition, there are unexpected scenarios resulting from possible accidents, or where dangerous material is hidden for terrorist attacks involving nuclear weapons. Thus, a technological solution for fast and reliable remote inspection is important. The multi-copter is a common type of Unmanned Aerial Vehicle (UAV) that provides the ability to perform a first radiological inspection in the described scenarios. The paper proposes a solution with a multi-copter equipped with on-board sensors to perform a 3D reconstruction and a radiological mapping of the scenario. The sensors used are a depth camera and a Geiger-Müller counter. The inspection is performed in two steps: i) a 3D reconstruction of the environment and ii) radiation activity inference to localise and quantify sources of radiation. Experimental results were achieved with real 3D data and simulated radiation activity. Experimental tests with real sources of radiation are planned in the next iteration of the work.
Mangold, Stefanie; De Cecco, Carlo N; Wichmann, Julian L; Canstein, Christian; Varga-Szemes, Akos; Caruso, Damiano; Fuller, Stephen R; Bamberg, Fabian; Nikolaou, Konstantin; Schoepf, U Joseph
2016-05-01
To compare, on an intra-individual basis, the effect of automated tube voltage selection (ATVS), an integrated circuit detector and advanced iterative reconstruction on radiation dose and image quality of aortic CTA studies using 2nd and 3rd generation dual-source CT (DSCT). We retrospectively evaluated 32 patients who had undergone CTA of the entire aorta with both 2nd generation DSCT at 120 kV using filtered back projection (FBP) (protocol 1) and 3rd generation DSCT using ATVS, an integrated circuit detector and advanced iterative reconstruction (protocol 2). Contrast-to-noise ratio (CNR) was calculated. Image quality was subjectively evaluated using a five-point scale. Radiation dose parameters were recorded. All studies were considered of diagnostic image quality. CNR was significantly higher with protocol 2 (15.0±5.2 vs 11.0±4.2; p<.0001). Subjective image quality analysis revealed no significant differences for evaluation of attenuation (p=0.08501) but image noise was rated significantly lower with protocol 2 (p=0.0005). Mean tube voltage and effective dose were 94.7±14.1 kV and 6.7±3.9 mSv with protocol 2; 120±0 kV and 11.5±5.2 mSv with protocol 1 (p<0.0001, respectively). Aortic CTA performed with 3rd generation DSCT, ATVS, an integrated circuit detector, and advanced iterative reconstruction allows a substantial reduction of radiation exposure while improving image quality in comparison to 120 kV imaging with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping
2016-12-01
Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely, the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as reference. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the results of SART + OS + TV finally converges to that of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions-of-interest (ROIs) of SART + OS + TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in SART + OS + TV results are still visible, but have been much suppressed with little spatial gradient loss. Based on the assessment, we can conclude that this hybrid SART + OS + TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and its effectiveness and efficiency are platform independent.
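The SART + OS + TV scheme evaluated above interleaves ordered-subsets SART data updates with a total-variation regularization step. The following is a minimal dense-matrix sketch of that pattern, assuming a projection matrix A (rays × voxels); the subset split, step sizes, and TV solver (plain gradient descent on a smoothed TV) are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def tv_gradient(x2d, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2D image."""
    gx = np.diff(x2d, axis=0, append=x2d[-1:, :])
    gy = np.diff(x2d, axis=1, append=x2d[:, -1:])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / mag, gy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def os_sart_tv(A, y, shape, n_subsets=10, n_outer=20, relax=1.0,
               tv_steps=5, tv_step_size=0.02):
    """Ordered-subsets SART data updates interleaved with TV gradient descent."""
    x = np.zeros(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_outer):
        for idx in subsets:                                # OS-SART pass
            As, ys = A[idx], y[idx]
            row_sum = As.sum(axis=1) + 1e-12
            col_sum = As.sum(axis=0) + 1e-12
            x += relax * (As.T @ ((ys - As @ x) / row_sum)) / col_sum
            x = np.clip(x, 0.0, None)                      # positivity constraint
        for _ in range(tv_steps):                          # TV regularization pass
            x -= tv_step_size * tv_gradient(x.reshape(shape)).ravel()
    return x.reshape(shape)
```

In practice the projectors would be matrix-free and the TV step size tied to the magnitude of the data update, but the alternation of a data-consistency pass and a TV pass is the essential structure.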
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting between a point-source and the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2016-02-01
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. One example application is the imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
Zhuge, Xiaodong; Palenstijn, Willem Jan; Batenburg, Kees Joost
2016-01-01
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.
He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui
2015-08-13
In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping (SLAM) problem, which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well.
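The core idea of iterating the measurement update to reduce linearization error can be illustrated with a plain iterated EKF update in covariance form, shown below for a single range-bearing observation; the information-form bookkeeping and sparsification that define SEIF/ISEIF are omitted, and the landmark, noise levels, and state values are illustrative assumptions.

```python
import numpy as np

def iterated_ekf_update(x, P, z, h, H_jac, R, n_iter=5, tol=1e-6):
    """Iterated EKF measurement update: relinearize h() around the current
    posterior estimate until it stops moving (Gauss-Newton on the MAP problem)."""
    xi = x.copy()
    for _ in range(n_iter):
        H = H_jac(xi)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_new = x + K @ (z - h(xi) - H @ (x - xi))
        if np.linalg.norm(x_new - xi) < tol:
            xi = x_new
            break
        xi = x_new
    H = H_jac(xi)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return xi, P_new

# toy example: range-bearing measurement of a known landmark from a 2D position
landmark = np.array([5.0, 3.0])
def h(s):
    d = landmark - s
    return np.array([np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])])
def H_jac(s):
    d = landmark - s
    r = np.hypot(d[0], d[1])
    return np.array([[-d[0] / r,      -d[1] / r],
                     [ d[1] / r ** 2, -d[0] / r ** 2]])
x0, P0 = np.array([0.2, -0.1]), 0.5 * np.eye(2)
R = np.diag([0.05 ** 2, 0.01 ** 2])
z = h(np.zeros(2)) + np.array([0.05, 0.01])     # noisy observation from the true position
x_post, P_post = iterated_ekf_update(x0, P0, z, h, H_jac, R)
```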
NASA Astrophysics Data System (ADS)
Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang
2017-11-01
Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of an acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements, benefiting from the introduction of a propagation based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is next illustrated with an industrial case.
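FISTA itself is the standard accelerated proximal-gradient iteration; a minimal real-valued sketch for an l1-regularized least-squares source problem min_q 0.5*||Gq - p||^2 + lam*||q||_1 is given below, with G standing in for a propagation (transfer) matrix. The matrix, regularization weight, and toy data are assumptions for illustration; the paper's propagation-based basis construction is not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_l1(G, p, lam, n_iter=200):
    """FISTA for min_q 0.5*||G q - p||^2 + lam*||q||_1 (real-valued sketch)."""
    L = np.linalg.norm(G, 2) ** 2              # Lipschitz constant of the data-term gradient
    q = np.zeros(G.shape[1])
    z, t = q.copy(), 1.0
    for _ in range(n_iter):
        grad = G.T @ (G @ z - p)
        q_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = q_new + ((t - 1.0) / t_new) * (q_new - q)   # momentum extrapolation
        q, t = q_new, t_new
    return q

# toy usage: recover a sparse source vector from noisy linear measurements
rng = np.random.default_rng(1)
G = rng.standard_normal((60, 200))
q_true = np.zeros(200); q_true[[10, 75, 150]] = [2.0, -1.5, 1.0]
p = G @ q_true + 0.01 * rng.standard_normal(60)
q_hat = fista_l1(G, p, lam=0.1)
```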
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
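The "shrink the ML update" pattern described above can be sketched generically: take one step of an unregularized update, then apply a shrinkage operator derived from the prior. The stand-in below uses a Landweber step and soft-thresholding (corresponding to a plain Laplacian prior), not the paper's inverse quadratic/cubic shrinkage functions; the matrix, step size, and threshold are illustrative.

```python
import numpy as np

def shrink(v, t):
    """Stand-in shrinkage operator (soft threshold); the paper derives inverse
    quadratic / inverse cubic shrinkage functions for its specific priors."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_by_shrinkage(x0, ml_update, beta, n_iter=50):
    """Iterative shrinkage: at each iteration, take the unregularized ML-style
    update and shrink it toward the prior to obtain the MAP-style update."""
    x = x0.copy()
    for _ in range(n_iter):
        x_ml = ml_update(x)          # one step of the unregularized algorithm
        x = shrink(x_ml, beta)       # regularized update obtained by shrinking it
        x = np.maximum(x, 0.0)       # scattering densities are non-negative
    return x

# toy usage with a Landweber step standing in for the ML update
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 120)) / 10
x_true = np.zeros(120); x_true[[30, 90]] = [5.0, 3.0]     # two high-Z voxels
y = A @ x_true + 0.05 * rng.standard_normal(80)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = map_by_shrinkage(np.zeros(120),
                         lambda x: x + alpha * (A.T @ (y - A @ x)),
                         beta=0.01)
```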
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite its reduced radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
NASA Astrophysics Data System (ADS)
Gu, Chengwei; Zeng, Dong; Lin, Jiahui; Li, Sui; He, Ji; Zhang, Hao; Bian, Zhaoying; Niu, Shanzhou; Zhang, Zhang; Huang, Jing; Chen, Bo; Zhao, Dazhe; Chen, Wufan; Ma, Jianhua
2018-06-01
Myocardial perfusion computed tomography (MPCT) imaging is commonly used to detect myocardial ischemia quantitatively. A limitation in MPCT is that an additional radiation dose is required compared to unenhanced CT due to its repeated dynamic data acquisition. Meanwhile, noise and streak artifacts in low-dose cases are the main factors that degrade the accuracy of quantifying myocardial ischemia and hamper the diagnostic utility of the filtered backprojection reconstructed MPCT images. Moreover, it is noted that the MPCT images are composed of a series of 2/3D images, which can naturally be regarded as a third- or fourth-order tensor, and the MPCT images are globally correlated along time and are sparse across space. To obtain higher fidelity ischemia from low-dose MPCT acquisitions quantitatively, we propose a robust statistical iterative MPCT image reconstruction algorithm by incorporating tensor total generalized variation (TTGV) regularization into a penalized weighted least-squares framework. Specifically, the TTGV regularization fuses the spatial correlation of the myocardial structure and the temporal continuation of the contrast agent intake during the perfusion. Then, an efficient iterative strategy is developed for the objective function optimization. Comprehensive evaluations have been conducted on a digital XCAT phantom and a preclinical porcine dataset regarding the accuracy of the reconstructed MPCT images, the quantitative differentiation of ischemia and the algorithm’s robustness and efficiency.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the problems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper will demonstrate the effectiveness of the Split Bregman method on sonar images.
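As a concrete illustration of the split variables and Bregman updates mentioned above, here is a minimal anisotropic TV denoising example solved with split Bregman; a CS sonar reconstruction would replace the identity data term with the sampling operator. Periodic boundaries, the FFT-based u-subproblem, and all parameter values are illustrative assumptions.

```python
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_tv_denoise(f, mu=10.0, lam=5.0, n_iter=50):
    """Anisotropic TV denoising, min_u |grad u|_1 + mu/2*||u - f||^2, via split Bregman.
    Periodic boundaries so the u-subproblem is solved exactly with FFTs."""
    Dx  = lambda u: np.roll(u, -1, axis=0) - u
    Dy  = lambda u: np.roll(u, -1, axis=1) - u
    DxT = lambda v: np.roll(v,  1, axis=0) - v
    DyT = lambda v: np.roll(v,  1, axis=1) - v
    n1, n2 = f.shape
    ex = np.abs(np.exp(2j * np.pi * np.fft.fftfreq(n1)) - 1.0) ** 2
    ey = np.abs(np.exp(2j * np.pi * np.fft.fftfreq(n2)) - 1.0) ** 2
    denom = mu + lam * (ex[:, None] + ey[None, :])
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    for _ in range(n_iter):
        rhs = mu * f + lam * (DxT(dx - bx) + DyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))   # exact u-subproblem
        gx, gy = Dx(u), Dy(u)
        dx = shrink(gx + bx, 1.0 / lam)                       # decoupled l1 subproblems
        dy = shrink(gy + by, 1.0 / lam)
        bx = bx + gx - dx                                     # Bregman updates
        by = by + gy - dy
    return u

# noisy test image
rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[16:48, 20:44] = 1.0
u_hat = split_bregman_tv_denoise(clean + 0.2 * rng.standard_normal((64, 64)))
```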
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
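The final combination step described above is literally a weighted sum of two reconstructions, one carrying gray-level information and one carrying enhanced high-frequency content. The sketch below uses a Gaussian-smoothed image and its Laplacian as cheap surrogates for the two IIR outputs; the weight and the surrogates are illustrative, not the authors' algorithms.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def combine_reconstructions(img_gray, img_highfreq, w_edge=0.5):
    """Final image as a linear combination of a gray-level reconstruction and a
    high-frequency-enhanced reconstruction; w_edge is one of the two tunable knobs."""
    return img_gray + w_edge * img_highfreq

# surrogate inputs: a smoothed reconstruction and its high-frequency residual
rng = np.random.default_rng(4)
recon = gaussian_filter(rng.random((128, 128)), 3.0)   # stands in for IIR output 1
high_freq = -laplace(recon)                            # stands in for IIR output 2
final = combine_reconstructions(recon, high_freq, w_edge=0.5)
```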
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
NASA Astrophysics Data System (ADS)
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change on the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1993-10-01
This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to reconstruct successfully images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
NASA Astrophysics Data System (ADS)
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M.; Samala, Ravi K.
2017-10-01
In digital breast tomosynthesis (DBT), the high-attenuation metallic clips marking a previous biopsy site in the breast cause errors in the estimation of attenuation along the ray paths intersecting the markers during reconstruction, which result in interplane and inplane artifacts obscuring the visibility of subtle lesions. We proposed a new metal artifact reduction (MAR) method to improve image quality. Our method uses automatic detection and segmentation to generate a marker location map for each projection (PV). A voting technique based on the geometric correlation among different PVs is designed to reduce false positives (FPs) and to label the pixels on the PVs and the voxels in the imaged volume that represent the location and shape of the markers. An iterative diffusion method replaces the labeled pixels on the PVs with estimated tissue intensity from the neighboring regions while preserving the original pixel values in the neighboring regions. The inpainted PVs are then used for DBT reconstruction. The markers are repainted on the reconstructed DBT slices for radiologists’ information. The MAR method is independent of reconstruction techniques or acquisition geometry. For the training set, the method achieved 100% success rate with one FP in 19 views. For the test set, the success rate by view was 97.2% for core biopsy microclips and 66.7% for clusters of large post-lumpectomy markers with a total of 10 FPs in 58 views. All FPs were large dense benign calcifications that also generated artifacts if they were not corrected by MAR. For the views with successful detection, the metal artifacts were reduced to a level that was not visually apparent in the reconstructed slices. The visibility of breast lesions obscured by the reconstruction artifacts from the metallic markers was restored.
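The replacement of labeled marker pixels by "estimated tissue intensity from the neighboring regions" can be sketched as a simple constrained diffusion: masked pixels are repeatedly set to the average of their neighbours while all other pixels keep their original values. The mask, iteration count, and toy projection below are illustrative assumptions, not the authors' detection or inpainting implementation.

```python
import numpy as np

def diffusion_inpaint(pv, mask, n_iter=200):
    """Replace masked (metal-marker) pixels by iteratively diffusing intensities
    in from the surrounding tissue; unmasked pixels keep their original values."""
    out = pv.copy()
    out[mask] = np.mean(pv[~mask])              # rough initial fill
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]                   # diffuse only inside the marker mask
    return out

# toy projection view with a bright clip
pv = np.full((100, 100), 0.3); pv[30:70, 30:70] += 0.1
mask = np.zeros_like(pv, dtype=bool); mask[48:53, 48:56] = True
pv_metal = pv.copy(); pv_metal[mask] = 3.0
pv_inpainted = diffusion_inpaint(pv_metal, mask)
```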
NASA Astrophysics Data System (ADS)
Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas
2017-03-01
The trabecular bone microstructure is a key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible solution to reduce exposure to patients is sampling fewer projection angles. This approach can be supported by advanced reconstruction algorithms, with their ability to achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.
Variable aperture-based ptychographical iterative engine method.
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various areas of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
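For reference, the object-update step at the heart of PIE-type algorithms combines a Fourier-amplitude constraint with a feedback step weighted by the illumination function. The sketch below shows one ePIE-style object update for a single pattern; the aperture, object, and constants are illustrative, and the vaPIE-specific handling of the variable aperture is not reproduced.

```python
import numpy as np

def pie_object_update(obj, probe, intensity_meas, beta=0.9, eps=1e-8):
    """One ePIE-style object update for a single diffraction pattern:
    apply the measured Fourier amplitude, then feed the correction back
    through the (assumed known) illumination/aperture function."""
    psi = probe * obj                                                  # exit wave
    Psi = np.fft.fft2(psi)
    Psi_corr = np.sqrt(intensity_meas) * Psi / (np.abs(Psi) + eps)     # amplitude constraint
    psi_corr = np.fft.ifft2(Psi_corr)
    obj_new = obj + beta * np.conj(probe) / (np.max(np.abs(probe) ** 2) + eps) \
                    * (psi_corr - psi)
    return obj_new

# toy usage: one update with a circular aperture as the illumination
n = 128
yy, xx = np.mgrid[:n, :n]
probe = ((yy - n / 2) ** 2 + (xx - n / 2) ** 2 < 20 ** 2).astype(complex)
obj_true = np.exp(1j * 0.5 * np.sin(2 * np.pi * xx / n))               # weak phase object
intensity = np.abs(np.fft.fft2(probe * obj_true)) ** 2                 # recorded pattern
obj_est = pie_object_update(np.ones((n, n), dtype=complex), probe, intensity)
```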
Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess
2016-01-01
Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR. PMID:28405477
Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Mise, Yoko; Sumida, Kaoru; Abe, Osamu
2017-08-01
To compare image quality characteristics of high-resolution computed tomography (HRCT) in the evaluation of interstitial lung disease using three different reconstruction methods: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Eighty-nine consecutive patients with interstitial lung disease underwent standard-of-care chest CT with 64-row multi-detector CT. HRCT images were reconstructed in 0.625-mm contiguous axial slices using FBP, ASIR, and MBIR. Two radiologists independently assessed the images in a blinded manner for subjective image noise, streak artifacts, and visualization of normal and pathologic structures. Objective image noise was measured in the lung parenchyma. Spatial resolution was assessed by measuring the modulation transfer function (MTF). MBIR offered significantly lower objective image noise (22.24±4.53, P<0.01 among all pairs, Student's t-test) compared with ASIR (39.76±7.41) and FBP (51.91±9.71). MTF (spatial resolution) was increased using MBIR compared with ASIR and FBP. MBIR showed improvements in visualization of normal and pathologic structures over ASIR and FBP, while ASIR was rated quite similarly to FBP. MBIR significantly improved subjective image noise (P<0.01 among all pairs, the sign test), and streak artifacts (P<0.01 each for MBIR vs. the other 2 image data sets). MBIR provides high-quality HRCT images for interstitial lung disease by reducing image noise and streak artifacts and improving spatial resolution compared with ASIR and FBP. Copyright © 2017 Elsevier B.V. All rights reserved.
Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing
2014-04-01
Owing to the high degree of scattering of light through tissues, the ill-posedness of the fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located at different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.
de Barros, Pietro Paolo; Metello, Luis F.; Camozzato, Tatiane Sabriela Cagol; Vieira, Domingos Manuel da Silva
2015-01-01
Objective The present study aims to help identify the most appropriate OSEM parameters for generating myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients’ body mass index. Materials and Methods The present study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstruction with six different combinations of iteration and subset numbers. The images were analyzed by nuclear cardiology specialists taking their diagnostic value into consideration and indicating the most appropriate images in terms of diagnostic quality. Results An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all the classes of body mass index; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, supporting the diagnosis and, consequently, appropriate and effective treatment for the patient. PMID:26543282
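The iteration and subset numbers compared above enter the reconstruction through the OSEM update, which cycles multiplicative EM updates over projection subsets. Below is a minimal dense-matrix sketch of that update; the system matrix, toy data, and the absence of attenuation, scatter, and resolution modeling are all simplifying assumptions.

```python
import numpy as np

def osem(A, y, n_iter=4, n_subsets=4, eps=1e-12):
    """Ordered-subsets expectation maximization with a dense system matrix A
    (projection bins x voxels) and measured counts y."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens = As.sum(axis=0) + eps              # subset sensitivity image
            ratio = ys / (As @ x + eps)              # measured / estimated projections
            x = x * (As.T @ ratio) / sens            # multiplicative EM update
    return x

# toy usage mirroring the "four iterations and four subsets" setting discussed above
rng = np.random.default_rng(5)
A = rng.random((64, 32))
x_true = rng.random(32) * 10
y = rng.poisson(A @ x_true).astype(float)
x_hat = osem(A, y, n_iter=4, n_subsets=4)
```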
NASA Astrophysics Data System (ADS)
Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming
2017-07-01
Over the years, X-ray computed tomography (CT) has been used successfully in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is contaminated by severe metal artifacts, which can affect the diagnosis. In this work, we proposed a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides, the DRWTV can suppress artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset, which are a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, which are SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
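As a rough illustration of the re-weighting idea underlying RWTV (and, with weights updated during the SIR iterations, DRWTV), the following Python sketch computes gradient-based TV weights for a 2-D image; the weighting form and the smoothing constant delta are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2-D image."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return np.sqrt(gx**2 + gy**2)

def reweighted_tv_weights(img, delta=1e-3):
    """Re-weighted TV weights: small gradients (smooth tissue) are
    penalized strongly, large gradients (edges) only weakly."""
    return 1.0 / (gradient_magnitude(img) + delta)

def weighted_tv(img, weights):
    """Weighted total variation value used as the regularizer."""
    return np.sum(weights * gradient_magnitude(img))
```

In a re-weighted scheme the weights are recomputed from the current image estimate between (or during) reconstruction iterations, which is what makes the penalty "dynamic" in the sense described above.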
NASA Astrophysics Data System (ADS)
Yoon, Hyun Jin; Jeong, Young Jin; Son, Hye Joo; Kang, Do-Young; Hyun, Kyung-Yae; Lee, Min-Kyung
2015-01-01
The spatial resolution in positron emission tomography (PET) is fundamentally limited by the geometry of the detector element, the positron range before annihilation with an electron, the acollinearity of the annihilation photons, the crystal decoding error, the penetration into the detector ring, and the reconstruction algorithms. In this paper, optimized parameters are suggested to produce high-resolution PET images by using an iterative reconstruction algorithm. A phantom with three point sources structured from three capillary tubes was prepared with an axial extension of less than 1 mm and was filled with 18F-fluorodeoxyglucose (18F-FDG) at concentrations above 200 MBq/cc. The performance measures of all PET images were acquired according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard procedures. The parameters for the iterative reconstruction were adjusted around the values recommended by General Electric (GE), and the optimized values of the spatial resolution and the full width at half maximum (FWHM) or full width at tenth maximum (FWTM) were found for the best PET resolution. The axial and transverse spatial resolutions, according to the filtered back-projection (FBP) at 1 cm off-axis, were 4.81 and 4.48 mm, respectively. The axial and transaxial spatial resolutions at 10 cm off-axis were 5.63 mm and 5.08 mm, respectively, where the transaxial resolution at 10 cm was evaluated as the average of the radial and tangential measurements. The recommended optimized parameters of the spatial resolution according to the NEMA phantom for the number of subsets, the number of iterations, and the Gaussian post-filter are 12, 3, and 3 mm for the iterative reconstruction VUE Point HD without the SharpIR algorithm (HD), and 12, 12, and 5.2 mm with SharpIR (HD.S), respectively, according to the Advantage Workstation Volume Share 5 (AW4.6). The performance measurements for the GE Discovery PET/CT 710 using the NEMA NU 2-2007 standards from our results will be helpful in the quantitative analysis of PET scanner images. The spatial resolution improved more with an advanced algorithm such as HD.S than with HD or FBP. The use of the optimized parameters for iterative reconstruction is strongly recommended for qualitative images from the GE Discovery PET/CT 710 scanner.
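A minimal Python sketch of the NEMA-style width measurement used above, estimating FWHM and FWTM of a 1-D point-source profile by linear interpolation; the function and parameter names are illustrative and the profile is assumed to be background-free and unimodal.

```python
import numpy as np

def width_at_fraction(profile, pixel_size_mm, fraction):
    """Width of a 1-D point-source profile at `fraction` of its maximum,
    using linear interpolation between samples (NEMA-style analysis)."""
    profile = np.asarray(profile, dtype=float)
    level = fraction * profile.max()
    above = np.where(profile >= level)[0]
    left, right = above[0], above[-1]

    def crossing(i_in, i_out):
        # interpolate where the profile crosses `level` between two samples
        y_in, y_out = profile[i_in], profile[i_out]
        return i_out + (level - y_out) / (y_in - y_out) * (i_in - i_out)

    x_left = crossing(left, left - 1) if left > 0 else float(left)
    x_right = crossing(right, right + 1) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_size_mm

# usage with a hypothetical sampled profile and 0.5 mm pixels:
# fwhm = width_at_fraction(profile, pixel_size_mm=0.5, fraction=0.5)
# fwtm = width_at_fraction(profile, pixel_size_mm=0.5, fraction=0.1)
```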
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized Weighted Least Squares (PWLS) algorithm in which the volume is parameterized as a union of fine and coarse voxel grids, combined with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels, and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either region for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling corresponds to a reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field of view. PMID:27694701
Development and Evaluation of Real-Time Volumetric Compton Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Barnowski, Ross Wegner
An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. The real-time tracking allows the imager to be moved throughout the environment or around a particular object of interest, obtaining the multiple perspectives necessary for standoff 3D imaging. A 3D model of the scene, provided in real time by a simultaneous localization and mapping (SLAM) algorithm, can be incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and two different mobile gamma-ray imaging platforms. The first is a cart-based imaging platform known as the Volumetric Compton Imager (VCI), comprising two 3D position-sensitive high purity germanium (HPGe) detectors, exhibiting excellent gamma-ray imaging characteristics, but with limited mobility due to the size and weight of the cart. The second system is the High Efficiency Multimodal Imager (HEMI), a hand-portable gamma-ray imager comprising 96 individual cm³-sized CdZnTe crystals arranged in a two-plane, active-mask configuration. The HEMI instrument has poorer energy and angular resolution than the VCI, but is truly hand-portable, allowing the SDF concept to be tested in multiple environments and for more challenging imaging scenarios. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. Each of the two mobile imaging systems is used to demonstrate SDF for a variety of scenarios, including general search and mapping scenarios with several point gamma-ray sources over the range of energies relevant for Compton imaging. More specific imaging scenarios are also addressed, including directed search and object interrogation scenarios. Finally, the volumetric image quality is quantitatively investigated with respect to the number of Compton events acquired during a measurement, the list-mode uncertainty of the Compton cone data, and the uncertainty in the pose estimate from the real-time tracking algorithm. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real time.
Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context, formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required. An incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated in an iterative way using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated. It is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, the geometrical, semantic and topological consistency is ensured.
Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware
NASA Astrophysics Data System (ADS)
Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc
2007-02-01
Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forwardprojection on the CBE which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.
Integrable mappings and the notion of anticonfinement
NASA Astrophysics Data System (ADS)
Mase, T.; Willox, R.; Ramani, A.; Grammaticos, B.
2018-06-01
We examine the notion of anticonfinement and the role it has to play in the singularity analysis of discrete systems. A singularity is said to be anticonfined if singular values continue to arise indefinitely for the forward and backward iterations of a mapping, with only a finite number of iterates taking regular values in between. We show through several concrete examples that the behaviour of some anticonfined singularities is strongly related to the integrability properties of the discrete mappings in which they arise, and we explain how to use this information to decide on the integrability or non-integrability of the mapping.
He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui
2015-01-01
In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF as well. PMID:26287194
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate the partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10⁻⁷. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
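The map/reduce decomposition described above can be illustrated on a single machine with a simplified 2-D parallel-beam analogue (not the authors' Hadoop/FDK code): a "map" step filters and backprojects one subset of projections, and a "reduce" step sums the partial images. All names and the normalization are assumptions of this sketch.

```python
import numpy as np
from functools import reduce

def ramp_filter(sino):
    """Apply a simple ramp filter to each projection (rows = angles)."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

def backproject(filtered, angles, size):
    """Unnormalized parallel-beam backprojection onto a size x size grid,
    so that partial images from different subsets can simply be summed."""
    coords = np.arange(size) - size / 2.0
    xx, yy = np.meshgrid(coords, coords)
    n_det = filtered.shape[1]
    image = np.zeros((size, size))
    for proj, theta in zip(filtered, angles):
        t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2.0
        image += np.interp(t, np.arange(n_det), proj)
    return image

def map_task(chunk, size):
    """'Map' step: filter and backproject one subset of projections."""
    sino_chunk, angle_chunk = chunk
    return backproject(ramp_filter(sino_chunk), angle_chunk, size)

def reduce_task(partial_a, partial_b):
    """'Reduce' step: aggregate partial backprojections into the whole image."""
    return partial_a + partial_b

# usage with a hypothetical sinogram (n_angles x n_detectors) and angles in radians:
# chunks = list(zip(np.array_split(sinogram, 8), np.array_split(angles, 8)))
# image = reduce(reduce_task, (map_task(c, 256) for c in chunks)) * np.pi / (2 * len(angles))
```

In an actual MapReduce deployment the chunks would be distributed to worker nodes and the framework would perform the aggregation; the point of the sketch is only the commutative, associative structure that makes that distribution possible.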
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-01-01
Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate the partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10−7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842
Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward
2016-01-01
Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture than the GD implementation. On average, the GD-based images had an ∼18% higher blur metric than the SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to rapidly reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging
Pryor, Alan; Yang, Yongsoo; Rana, Arjun; ...
2017-09-05
Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
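The core alternation described above, enforcing the measured Fourier-grid samples and then the real-space constraints, can be sketched generically in Python as follows; this is not the GENFIRE implementation and omits oversampled gridding, resolution-dependent weighting and angular refinement. The input names are assumptions of the sketch.

```python
import numpy as np

def fourier_iterative_reconstruction(measured_k, known_mask, n_iter=200):
    """Generic sketch of iterating between reciprocal and real space.

    measured_k : 3-D Fourier grid with measured values filled in
    known_mask : boolean grid, True where a measured value exists
    Enforces the measured Fourier samples and real-space positivity.
    """
    obj = np.zeros(measured_k.shape)
    for _ in range(n_iter):
        k = np.fft.fftn(obj)
        k[known_mask] = measured_k[known_mask]   # data consistency in k-space
        obj = np.real(np.fft.ifftn(k))
        obj[obj < 0] = 0                         # physical constraint: positivity
    return obj
```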
Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A
2013-03-01
To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis as well as on stenosis quantification in high definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0 %) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100 % (no FBP). Plaque analysis (volume, components and stenosis degree) was performed using previously validated automated software. Mean values for minimal diameter and minimal area as well as degree of stenosis did not change significantly using different ASIR reconstructions. There was virtually no impact of reconstruction algorithms on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified components). However, with increasing ASIR contribution, the percentage of plaque volume with a component between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which have been developed for noise reduction in the latest high resolution CCTA scans, can be used reliably without interfering with plaque analysis and stenosis severity assessment.
LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI
Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A
2016-01-01
Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view-shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. The method is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5-10 seconds using MATLAB) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
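The key modification in LCAMP, replacing the thresholding stage with a known-support (location) constraint, can be illustrated with a plain iterative projection scheme for undersampled Fourier data. Note that this sketch omits the Onsager correction term of a true AMP recursion, and all names are illustrative assumptions.

```python
import numpy as np

def support_constrained_recovery(y, sample_mask, support, n_iter=100):
    """Recover a sparse 1-D signal from undersampled Fourier data by
    replacing the usual thresholding step with a known-support projection
    (the location constraint), as a simplified stand-in for LCAMP.

    y           : measured Fourier samples (full-length array, zeros elsewhere)
    sample_mask : boolean array, True at acquired Fourier locations
    support     : boolean array, True where the signal may be nonzero
    """
    n = len(y)
    x = np.zeros(n, dtype=complex)
    for _ in range(n_iter):
        # residual in k-space under a unitary (1/sqrt(n)-scaled) DFT model
        residual_k = (y - np.fft.fft(x) / np.sqrt(n)) * sample_mask
        x = x + np.fft.ifft(residual_k) * np.sqrt(n)   # gradient step, unit step size
        x = x * support                                # location constraint
    return x
```

Because the projection onto the known support replaces any thresholding operator, no regularization parameter or threshold level has to be tuned, which is the practical advantage highlighted in the abstract.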
Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction
Gregor, Jens; Fessler, Jeffrey A.
2015-01-01
Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
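As a concrete reference point for the SIRT side of the comparison, the following Python sketch writes the classical (unweighted, unregularized) SIRT update in the preconditioned gradient descent form discussed above; the RWLS weighting, the regularizer, and the near-optimal relaxation of the paper are omitted, and a dense nonnegative system matrix is assumed.

```python
import numpy as np

def sirt(A, b, n_iter=100, relaxation=1.0):
    """SIRT as preconditioned gradient descent:
        x <- x + relaxation * C A^T R (b - A x)
    with R = diag(1 / row sums of A) and C = diag(1 / column sums of A)."""
    row_sums = np.maximum(A.sum(axis=1), 1e-12)
    col_sums = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x
        x = x + relaxation * (A.T @ (residual / row_sums)) / col_sums
    return x
```

Written this way, the diagonal matrices R and C play exactly the role of the preconditioners in the gradient descent interpretation that the paper uses to relate SIRT and SQS.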
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
NASA Astrophysics Data System (ADS)
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of the noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of the normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of the high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.
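For readers unfamiliar with L-moments, a short Python sketch of the standard sample estimators (via probability-weighted moments) is given below; the noise-patch extraction itself is not shown, and the function names are illustrative.

```python
import numpy as np
from math import comb

def l_moments(sample):
    """First four sample L-moments (l1..l4) via probability-weighted moments b_r."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)

    def b(r):
        # b_r = (1/n) * sum_j [C(j, r) / C(n-1, r)] * x_(j), over 0-based ranks j
        weights = np.array([comb(j, r) for j in range(n)]) / comb(n - 1, r)
        return np.mean(weights * x)

    b0, b1, b2, b3 = b(0), b(1), b(2), b(3)
    l1 = b0
    l2 = 2.0 * b1 - b0
    l3 = 6.0 * b2 - 6.0 * b1 + b0
    l4 = 20.0 * b3 - 30.0 * b2 + 12.0 * b1 - b0
    return l1, l2, l3, l4

# ratios often reported alongside: L-skewness = l3 / l2, L-kurtosis = l4 / l2
```

Unlike conventional moments, these estimators are linear in the order statistics, which is what makes them robust summaries of non-Gaussian noise distributions.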
Stokes-Doppler coherence imaging for ITER boundary tomography.
Howard, J; Kocan, M; Lisgo, S; Reichle, R
2016-11-01
An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.
Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D
2002-08-07
Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) data is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speed up reconstruction time we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently, using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C. Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for our dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.
Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansouri, Hani; Clayton, Dwight A; Kisner, Roger A
Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogeneous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogeneous thick objects. One example application is the imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
Muon reconstruction in the Daya Bay water pools
Hackenburg, R. W.
2017-08-12
Muon reconstruction in the Daya Bay water pools would serve to verify the simulated muon fluxes and offer the possibility of studying cosmic muons in general. This reconstruction is, however, complicated by many optical obstacles and the small coverage of photomultiplier tubes (PMTs) as compared to other large water Cherenkov detectors. The PMTs' timing information is useful only in the case of direct, unreflected Cherenkov light. This requires PMTs to be added and removed as a hypothesized muon trajectory is iteratively improved, to account for the changing effects of obstacles and direction of light. Therefore, muon reconstruction in the Daya Bay water pools does not lend itself to a general fitting procedure employing smoothly varying functions with continuous derivatives. Here, we describe an algorithm which overcomes these complications. It employs the method of Least Mean Squares to determine a hypothesized trajectory from the PMTs' charge-weighted positions. This initially hypothesized trajectory is then iteratively refined using the PMTs' timing information. Reconstructions with simulated data reproduce the simulated trajectory to within about 5° in direction and about 45 cm in position at the pool surface, with a bias that tends to pull tracks away from the vertical by about 3°.
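One plausible realization of the initial least-squares trajectory hypothesis from charge-weighted PMT positions is a weighted principal-axis fit, sketched below in Python; this is an illustration consistent with the description above, not the Daya Bay code, and the names are hypothetical.

```python
import numpy as np

def initial_trajectory(pmt_positions, pmt_charges):
    """Hypothesize a straight muon track from charge-weighted PMT positions.

    pmt_positions : (n, 3) array of PMT coordinates
    pmt_charges   : (n,) array of observed charges (used as weights)
    Returns a point on the track (weighted centroid) and a unit direction
    (principal axis of the weighted position covariance).
    """
    pos = np.asarray(pmt_positions, dtype=float)
    w = np.asarray(pmt_charges, dtype=float)
    w = w / w.sum()
    centroid = w @ pos                            # charge-weighted centroid
    centered = pos - centroid
    cov = (centered * w[:, None]).T @ centered    # weighted covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]    # principal axis = track direction
    return centroid, direction
```

The subsequent timing-based refinement, with PMTs added and removed as the hypothesis changes, is the part that makes the full algorithm non-smooth and is not captured by this sketch.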
Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.
Liu, Li; Lin, Weikai; Jin, Mingwu
2015-01-01
In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
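The alternating two-stage structure described above can be sketched in Python as follows, with an ART sweep for data fidelity, a non-negativity projection, and TV steepest descent whose step size is tied to the size of the POCS update. The specific adaptive rules and the error-bound test of the paper are simplified away, a square image and a dense system matrix are assumed, and the boundary handling of the TV gradient is approximate.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the smoothed isotropic TV of a 2-D image (approximate
    boundary handling; forward differences and a backward-difference divergence)."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=0, prepend=(gx / mag)[:1, :])
    div_y = np.diff(gy / mag, axis=1, prepend=(gy / mag)[:, :1])
    return -(div_x + div_y)

def pocs_tv(A, b, n_iter=20, n_tv_steps=10, tv_scale=0.2):
    """Alternating two-stage sketch: ART-type POCS for data fidelity and
    non-negativity, then TV steepest descent with a step size proportional
    to the size of the preceding POCS update."""
    x = np.zeros(A.shape[1])
    side = int(np.sqrt(A.shape[1]))            # assumes a square image
    row_norms = np.maximum((A**2).sum(axis=1), 1e-12)
    for _ in range(n_iter):
        x_prev = x.copy()
        for i in range(A.shape[0]):            # ART sweep (POCS stage)
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.maximum(x, 0)                   # non-negativity constraint
        step = tv_scale * np.linalg.norm(x - x_prev)
        for _ in range(n_tv_steps):            # TV steepest descent stage
            g = tv_gradient(x.reshape(side, side)).ravel()
            x -= step * g / max(np.linalg.norm(g), 1e-12)
    return x
```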
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.
Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E
2006-08-01
A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on 2x undersampled phantom and numerical-model data sets. When applied to the 2x undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (≤3.9%) relative to the reference images obtained from the fully sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 x 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.
Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan
2016-04-01
To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different than those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR throughout lesion types; P < .002, for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.
Sun, Jihang; Yu, Tong; Liu, Jinrong; Duan, Xiaomin; Hu, Di; Liu, Yong; Peng, Yun
2017-03-16
Model-based iterative reconstruction (MBIR) is a promising reconstruction method that can improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of using MBIR over the adaptive statistical iterative reconstruction (ASIR) and conventional filtered back-projection (FBP) techniques for noise reduction and image quality improvement in low dose chest CT for children with necrotizing pneumonia. Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard-of-care low dose CT scans were included. Thin-slice (0.625 mm) images were retrospectively reconstructed using the MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using a Friedman's test. The radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in the detection of more necrotic lesions and better lesion identification (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images). The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. The image noise decreased by 58.9% and 26.3% in MBIR images as compared to FBP and ASIR images, respectively. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images. The quality of chest CT images in children with necrotizing pneumonia was significantly improved by the MBIR technique as compared to ASIR and FBP reconstruction, providing a more confident and accurate diagnosis of necrotizing pneumonia.
Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel
2016-05-01
The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software package that allows CT scans acquired with reduced radiation dose to be simulated from the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Thereafter, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed either with a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing to vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose reduction.
Ghaderi, Parviz; Marateb, Hamid R
2017-07-01
The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDEs outperformed the other methods (average RMSE 8.7 ± 6.1 μVrms and 7.5 ± 5.9 μVrms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running times of the proposed Poisson and wave PDE methods, implemented using a vectorization package, were 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
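As a simple example of PDE-based inpainting of outlier channels, the sketch below fills the masked channels of a 2-D activity map by solving Laplace's equation with the good channels as fixed boundary values (a harmonic fill); the Poisson and wave PDE variants proposed in the paper differ in their source terms and dynamics, so this is only a generic stand-in.

```python
import numpy as np

def harmonic_inpaint(channel_map, bad_mask, n_iter=2000):
    """Fill outlier channels of a 2-D HD-sEMG activity map by iteratively
    solving Laplace's equation over the masked cells; good channels act as
    fixed boundary values and replicate padding approximates the array border."""
    img = np.asarray(channel_map, dtype=float).copy()
    img[bad_mask] = img[~bad_mask].mean()        # crude initial guess for the gaps
    for _ in range(n_iter):
        padded = np.pad(img, 1, mode='edge')
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[bad_mask] = neighbours[bad_mask]     # Jacobi update only on bad cells
    return img
```

For instantaneous maps the same fill would be applied sample by sample, which is why the per-epoch running times quoted above matter for real-time use.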
X-ray scanning of overhead aurorae from rockets
NASA Technical Reports Server (NTRS)
Barcus, J. R.; Goldberg, R. A.; Gesell, L. H.
1981-01-01
Two Nike Tomahawk rocket payloads were launched into energetic auroral events in September, 1976 to investigate the structure of these events, as well as their effects on the atmosphere. X-ray scintillation detectors with energy discrimination in four ranges were used to measure the deposition of bremsstrahlung produced X-rays within the stratosphere and mesosphere. Iterative computer techniques were used to reconstruct X-ray source maps at 100 km, taking atmospheric absorption effects into account. Payload 18.178 was launched on September 21st into an aurora having two distinct azimuthal regions of optical brightness. The X-ray scanner detected the same features, and overlays of the X-ray source maps on all-sky photographs showed spatial coincidence of the X-ray with optical features at the lower energies (below 40 keV). Payload 18.179 was launched September 23rd into an aurora with a more diffuse character. The optical structure did not coincide as well with the measured X-ray structure. There was also an indication of a two-component spectrum for each event, with the hard component originating in the more diffuse, optically faint regions.
Ryska, Pavel; Kvasnicka, Tomas; Jandura, Jiri; Klzo, Ludovit; Grepl, Jakub; Zizka, Jan
2014-06-01
To compare the effective dose and eye lens radiation dose in helical MDCT brain examinations using automatic tube current modulation in conjunction with either the standard filtered back projection (FBP) technique or iterative reconstruction in image space (IRIS). Of 400 adult brain MDCT examinations, 200 were performed using FBP and 200 using IRIS with the following parameters: tube voltage 120 kV, rotation period 1 second, pitch factor 0.55, automatic tube current modulation in both transverse and longitudinal planes with reference mAs 300 (FBP) and 200 (IRIS). Doses were calculated from CT dose index and dose length product values utilising ImPACT software; the organ dose to the lens was derived from the actual tube current-time product value applied to the lens. Image quality was assessed by two independent readers blinded to the type of image reconstruction technique. The average effective scan dose was 1.47±0.26 mSv (FBP) and 0.98±0.15 mSv (IRIS), respectively (a 33.3% decrease). The average organ dose to the eye lens decreased from 40.0±3.3 mGy (FBP) to 26.6±2.0 mGy (IRIS; a 33.5% decrease). No significant change in diagnostic image quality was noted between IRIS and FBP scans (P=0.17). Iterative reconstruction of cerebral MDCT examinations enables reduction of both the effective dose and the eye lens organ dose by one third without significant loss of image quality.
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for the determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure run to convergence for every fixed EOF in order to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some of the reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed by using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
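The basic DINEOF idea referenced above, iteratively replacing missing entries with a truncated-SVD (leading-EOF) reconstruction, can be sketched in Python as follows; the mode-selection, cross-validation and variable-EOF refinements of I-DINEOF and VE-DINEOF are not included, and the argument names are illustrative.

```python
import numpy as np

def dineof_fill(data, missing_mask, n_modes=5, n_iter=50, tol=1e-6):
    """Basic DINEOF-style gap filling: iteratively reconstruct missing
    entries from a truncated SVD (the leading EOF modes).

    data         : (space, time) matrix with arbitrary values at the gaps
    missing_mask : boolean matrix, True where data are missing
    """
    X = data.astype(float).copy()
    X[missing_mask] = 0.0                      # initial guess for the gaps
    prev = X[missing_mask].copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[missing_mask] = recon[missing_mask]  # update only the missing entries
        change = np.linalg.norm(X[missing_mask] - prev)
        if change < tol * max(np.linalg.norm(prev), 1.0):
            break
        prev = X[missing_mask].copy()
    return X
```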
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.
2017-04-01
A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.
Lee, Ho; Fahimian, Benjamin P; Xing, Lei
2017-03-21
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
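To make the scatter-estimation step concrete, the sketch below interpolates a smooth scatter profile across one detector row from the samples measured under the blocker strips, using SciPy's cubic B-spline. The detector size, strip positions, and synthetic signal shapes are illustrative assumptions, not the authors' implementation or geometry.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def estimate_scatter_row(row, blocked_cols):
    """Estimate scatter across one detector row of a blocked projection.

    The signal measured inside the blocked (shaded) columns is assumed to be
    scatter only; a cubic B-spline through those samples gives a smooth
    scatter estimate over the whole row (extrapolated at the edges)."""
    x_known = np.asarray(blocked_cols)
    y_known = row[x_known]
    spline = make_interp_spline(x_known, y_known, k=3)
    return spline(np.arange(row.size))

# toy projection row: smooth primary signal plus low-frequency scatter
cols = np.arange(512)
scatter_true = 40 + 10 * np.sin(cols / 512 * np.pi)
primary = 200 * np.exp(-((cols - 256) / 120) ** 2)
blocked = np.arange(16, 512, 64)             # hypothetical lead-strip centres
row = primary + scatter_true
row[blocked] = scatter_true[blocked]         # under the strips, only scatter is detected
scatter_est = estimate_scatter_row(row, blocked)
print("max scatter estimation error:", np.max(np.abs(scatter_est - scatter_true)))
```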
A new art code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least-squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was a minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only, whereas the convergence rate was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
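The abstract does not reproduce the algorithm itself; as a reminder of how a row-action ART solver for a linear projection model Ax = b works, the sketch below implements the classical Kaczmarz update in NumPy. This is a generic stand-in, not the authors' iterative-refinement least-squares code, and the relaxation factor and sweep count are illustrative.

```python
import numpy as np

def art_kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Classical ART (Kaczmarz) for Ax = b: cycle through the rows and
    project the current estimate onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A ** 2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norm2[i] * A[i]
    return x

# small consistent toy system
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_rec = art_kaczmarz(A, b, n_sweeps=200)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```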
NASA Astrophysics Data System (ADS)
Yu, Jie; Liu, Yikan; Yamamoto, Masahiro
2018-04-01
In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.
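As a concrete illustration of the gradient-penalizing Tikhonov functional described above (a generic form, not necessarily the exact functional of the paper), with $u(f)$ the solution of the hyperbolic equation for the unknown coefficient $f$, $u^{\delta}$ the partial boundary or interior measurement, and $\alpha > 0$ the regularization parameter, one minimizes

\[
J_\alpha(f) = \tfrac{1}{2}\,\lVert u(f) - u^{\delta}\rVert^{2} + \tfrac{\alpha}{2}\,\lVert \nabla f \rVert_{L^2(\Omega)}^{2}.
\]

Its variational equation contains the term $-\alpha\,\Delta f$, which is why each update of the iteration amounts to solving a Poisson equation for the correction.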
Invariants, Attractors and Bifurcation in Two Dimensional Maps with Polynomial Interaction
NASA Astrophysics Data System (ADS)
Hacinliyan, Avadis Simon; Aybar, Orhan Ozgur; Aybar, Ilknur Kusbeyzi
This work presents an extended discrete-time analysis of maps and their generalizations, including iteration, in order to better understand the resulting enrichment of the bifurcation properties. The standard concepts of stability analysis and bifurcation theory for maps are used. Both iterated maps and flows are used as models for chaotic behavior. It is well known that when flows are converted to maps by discretization, the equilibrium points remain the same but a richer bifurcation scheme is observed. For example, the logistic map has very simple behavior as a differential equation, but as a map it exhibits fold and period-doubling bifurcations. A way to gain information about the global structure of the state space of a dynamical system is to investigate the invariant manifolds of saddle equilibrium points. Studying the intersections of the stable and unstable manifolds is essential for understanding the structure of a dynamical system. It is known that the Lotka-Volterra map, and systems that can be reduced to it or its generalizations in special cases involving local and polynomial interactions, admit invariant manifolds. Bifurcation analysis of this map and its higher iterates can be carried out to understand the global structure of the system and the artifacts of the discretization by comparing with the corresponding results from the differential equation on which they are based.
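The logistic map mentioned above is the standard illustration of how discretization enriches the dynamics: the continuous logistic equation has only simple equilibria, whereas its iterated-map counterpart exhibits period doubling and chaos. The short sketch below computes a crude bifurcation diagram; the parameter values and iteration counts are illustrative.

```python
import numpy as np

def logistic_bifurcation(r_values, n_transient=500, n_keep=100, x0=0.5):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and keep the late iterates
    for each parameter r, giving the points of a bifurcation diagram."""
    points = []
    for r in r_values:
        x = x0
        for _ in range(n_transient):          # discard the transient
            x = r * x * (1.0 - x)
        orbit = []
        for _ in range(n_keep):               # record the attractor
            x = r * x * (1.0 - x)
            orbit.append(x)
        points.append((r, orbit))
    return points

for r, orbit in logistic_bifurcation(np.array([2.8, 3.2, 3.5, 3.9]), n_keep=8):
    print(f"r = {r:.1f} ->", np.round(np.unique(np.round(orbit, 4)), 4))
```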
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, a significantly improved detectability of the liver lesion can be reported compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL reconstructs the image more accurately.
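The weighting strategy referred to above is the standard IRLS trick of replacing an L1 penalty by a sequence of weighted L2 problems. The toy sketch below applies it to a generic L1-regularized least-squares problem rather than the authors' dictionary-learning formulation; the ridge initialisation, damping constant eps, and iteration count are illustrative assumptions.

```python
import numpy as np

def irls_l1(A, b, lam=0.1, n_iter=50, eps=1e-6):
    """Approximately solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iteratively
    reweighted least squares: each iteration solves a weighted ridge problem
    whose diagonal weights majorize the L1 penalty around the current iterate."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.solve(AtA + lam * np.eye(n), Atb)      # ridge initialisation
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(x) + eps))             # |x_i| <= x_i^2/(2|x_i^(k)|) + const
        x = np.linalg.solve(AtA + lam * W, Atb)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 0.8]                   # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = irls_l1(A, b, lam=0.05)
print("largest-magnitude indices:", sorted(np.argsort(-np.abs(x_hat))[:3]))
```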
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noid, G; Chen, G; Tai, A
2014-06-01
Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study is to investigate and to quantify the IQ enhancement as a result of increasing imaging doses and using IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and the Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared. Results: IR techniques are demonstrated to preserve spatial resolution as measured by the point spread function and reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast to noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast to noise ratio increases from 3 to 5 when the dose is increased by a factor of 4. Similar IQ improvement was observed on the CTs for selected patients with pancreas and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Increasing imaging dose further reduces noise independent of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.
NASA Astrophysics Data System (ADS)
Kadrmas, Dan J.; Frey, Eric C.; Karimi, Seemeen S.; Tsui, Benjamin M. W.
1998-04-01
Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with tracer, and also using experimentally acquired data with tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for image reconstruction).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z
Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinical practice. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem with all the constraints considered rigorously using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with a NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projections. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases respectively, achieving a speedup factor of ∼3.0 compared with single GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method that enables reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
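The consensus structure described above (each worker reconstructs from its own data subset while a consistency constraint ties the local solutions together) can be illustrated on a toy distributed least-squares problem. The sketch below is a schematic NumPy illustration of consensus ADMM, not the authors' TV-regularized GPU implementation; the quadratic local objective, penalty parameter rho, and number of workers are assumptions.

```python
import numpy as np

def consensus_admm(A_blocks, b_blocks, rho=1.0, n_iter=100):
    """Consensus ADMM: each 'worker' k minimizes 0.5*||A_k x_k - b_k||^2 while
    the constraint x_k = z forces all local solutions to agree."""
    n = A_blocks[0].shape[1]
    K = len(A_blocks)
    x = [np.zeros(n) for _ in range(K)]
    u = [np.zeros(n) for _ in range(K)]         # scaled dual variables
    z = np.zeros(n)
    lhs = [a.T @ a + rho * np.eye(n) for a in A_blocks]   # local normal matrices
    atb = [a.T @ b for a, b in zip(A_blocks, b_blocks)]
    for _ in range(n_iter):
        for k in range(K):                      # local solves (in parallel in practice)
            x[k] = np.linalg.solve(lhs[k], atb[k] + rho * (z - u[k]))
        z = np.mean([x[k] + u[k] for k in range(K)], axis=0)   # consensus step
        for k in range(K):                      # dual updates
            u[k] += x[k] - z
    return z

rng = np.random.default_rng(3)
x_true = rng.standard_normal(30)
A_blocks = [rng.standard_normal((50, 30)) for _ in range(4)]
b_blocks = [A @ x_true + 0.01 * rng.standard_normal(50) for A in A_blocks]
z = consensus_admm(A_blocks, b_blocks)
print("consensus error:", np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```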
A novel color image encryption scheme using alternate chaotic mapping structure
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang
2016-07-01
This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, we use the R, G and B components to form a matrix. One-dimensional and two-dimensional logistic maps are then used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. In every iteration, an XOR operation is adopted to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results have shown that the cryptosystem is secure and practical, and that it is suitable for encrypting color images.
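To make the XOR-based diffusion step concrete, the sketch below derives a keystream from an iterated logistic map and XORs it with the image bytes. This is a generic, simplified illustration of chaos-based XOR encryption, not the authors' full permutation-plus-diffusion scheme; the map parameter, initial value (key), and byte quantization are assumptions.

```python
import numpy as np

def logistic_keystream(length, x0=0.3141592, r=3.99):
    """Generate a pseudo-random byte stream by iterating the logistic map
    x_{n+1} = r x_n (1 - x_n) and quantizing each state to 8 bits."""
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def xor_cipher(image_bytes, x0, r=3.99):
    """XOR-based diffusion; applying it twice with the same key recovers the data."""
    ks = logistic_keystream(image_bytes.size, x0, r)
    return image_bytes ^ ks

rng = np.random.default_rng(4)
plain = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)    # tiny RGB 'image'
cipher = xor_cipher(plain.ravel(), x0=0.3141592).reshape(plain.shape)
restored = xor_cipher(cipher.ravel(), x0=0.3141592).reshape(plain.shape)
print("lossless decryption:", np.array_equal(restored, plain))
```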
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography such as capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on the unknown pixels that exceed the reasonable range of permittivity. This strategy not only stabilizes the converging process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
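A heavily simplified sketch of the constrained Tikhonov iteration just described is given below: a linearized forward model J maps the permittivity image to capacitances, each regularized update reduces the capacitance mismatch, and pixels that drift outside the known two-phase permittivity range are clipped back. The linear forward model, regularization weight, and permittivity bounds are illustrative assumptions; the actual algorithm uses an FEM forward solver at every iteration.

```python
import numpy as np

def intac_like(J, c_meas, eps_bounds=(1.0, 80.0), alpha=1e-2, n_iter=30):
    """Iterative Tikhonov-regularized update with a hard physical constraint:
    after each update, pixel permittivities are clipped to the known range of
    the two phases (e.g. gas ~1, liquid ~80)."""
    n = J.shape[1]
    x = np.full(n, np.mean(eps_bounds))             # start from a uniform mixture
    H = J.T @ J + alpha * np.eye(n)                 # Tikhonov-regularized normal matrix
    for _ in range(n_iter):
        residual = c_meas - J @ x                   # capacitance mismatch
        x = x + np.linalg.solve(H, J.T @ residual)  # regularized correction step
        x = np.clip(x, *eps_bounds)                 # enforce the permittivity constraint
    return x

rng = np.random.default_rng(5)
n_pix, n_meas = 64, 28                              # hypothetical 8x8 image, 28 electrode pairs
J = rng.standard_normal((n_meas, n_pix)) / n_pix
x_true = np.where(rng.random(n_pix) < 0.3, 80.0, 1.0)     # two-phase ground truth
c_meas = J @ x_true + 1e-3 * rng.standard_normal(n_meas)
x_rec = intac_like(J, c_meas)
print("fraction of pixels classified correctly:",
      np.mean((x_rec > 40) == (x_true > 40)))
```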
Strong Convergence for a Finite Family of Generalized Asymptotically Nonexpansive Mappings
NASA Astrophysics Data System (ADS)
Ma, Zhi-Hong; Chen, Ru-Dond
The purpose of this paper is to establish convergence theorems for generalized asymptotically nonexpansive mappings and asymptotically nonexpansive mappings in Banach spaces by using a new iteration which is a natural generalization of the implicit iteration. We also give necessary and sufficient conditions for strong convergence to a common fixed point and correct a flaw in the results of Thakur [11]. As one will see, the results presented in this paper extend the corresponding results of [8,11].
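For orientation only (the paper's exact scheme is not reproduced here), the classical implicit iteration for a finite family $\{T_1,\dots,T_N\}$ of such mappings, of which the authors' iteration is described as a natural generalization, reads

\[
x_n = \alpha_n x_{n-1} + (1-\alpha_n)\, T_{i(n)}^{\,k(n)} x_n, \qquad n \ge 1,
\]

where $n = (k(n)-1)N + i(n)$ with $i(n) \in \{1,\dots,N\}$ and $\{\alpha_n\} \subset (0,1)$; the scheme is called implicit because the unknown $x_n$ appears on both sides.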
Task Equivalence for Model and Human-Observer Comparisons in SPECT Localization Studies
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2016-06-01
While mathematical model observers are intended for efficient assessment of medical imaging systems, their findings should be relevant for human observers as the primary clinical end users. We have investigated whether pursuing equivalence between the model and human-observer tasks can help ensure this goal. A localization receiver operating characteristic (LROC) study tested prostate lesion detection in simulated In-111 SPECT imaging with anthropomorphic phantoms. The test images were 2D slices extracted from reconstructed volumes. The iterative ordered-subsets expectation-maximization (OSEM) reconstruction algorithm was used with Gaussian postsmoothing. Variations in the number of iterations and the level of postfiltering defined the test strategies in the study. Human-observer performance was compared with that of a visual-search (VS) observer, a scanning channelized Hotelling observer, and a scanning channelized nonprewhitening (CNPW) observer. These model observers were applied with precise information about the target regions of interest (ROIs). ROI knowledge was a study variable for the human observers. In one study format, the humans read the SPECT image alone. With a dual-modality format, the SPECT image was presented alongside an anatomical image slice extracted from the density map of the phantom. Performance was scored by area under the LROC curve. The human observers performed significantly better with the dual-modality format, and correlation with the model observers was also improved. Given the human-observer data from the SPECT study format, the Pearson correlation coefficients for the model observers were 0.58 (VS), -0.12 (CH), and -0.23 (CNPW). The respective coefficients based on the human-observer data from the dual-modality study were 0.72, 0.27, and -0.11. These results point towards the continued development of the VS observer for enhancing task equivalence in model-observer studies.
Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime
2013-08-09
The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts, compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate the lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiograms. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent and score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better compared to low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores of visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among the low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR (21.1 ± 2.6 HU, p < 0.0005), low-dose FBP CT (30.9 ± 3.9 HU, p < 0.0005), and standard-dose FBP CT (16.6 ± 2.3 HU, p < 0.0005). MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT without compromising image quality. With a radiation dose reduction of >70%, MBIR can provide lesion detectability equivalent to that of standard-dose FBP CT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.
1996-12-31
We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and were more accurate than that using the FBP. However, the TCT images obtained with support show fewer image artifacts than without support. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.
Rapid anatomical brain imaging using spiral acquisition and an expanded signal model.
Kasper, Lars; Engel, Maria; Barmet, Christoph; Haeberlin, Maximilian; Wilm, Bertram J; Dietrich, Benjamin E; Schmid, Thomas; Gross, Simon; Brunner, David O; Stephan, Klaas E; Pruessmann, Klaas P
2018-03-01
We report the deployment of spiral acquisition for high-resolution structural imaging at 7T. Long spiral readouts are rendered manageable by an expanded signal model including static off-resonance and B0 dynamics along with k-space trajectories and coil sensitivity maps. Image reconstruction is accomplished by inversion of the signal model using an extension of the iterative non-Cartesian SENSE algorithm. Spiral readouts up to 25 ms are shown to permit whole-brain 2D imaging at 0.5 mm in-plane resolution in less than a minute. A range of options is explored, including proton-density and T2* contrast, acceleration by parallel imaging, different readout orientations, and the extraction of phase images. Results are shown to exhibit competitive image quality along with high geometric consistency. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its efficiency and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
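The generalized p-shrinkage mapping mentioned above has a simple closed form (Chartrand's p-shrinkage), which replaces ordinary soft-thresholding in the subproblem solutions. The values of p and the threshold in the sketch below are illustrative.

```python
import numpy as np

def p_shrink(t, lam, p):
    """Generalized p-shrinkage (Chartrand): reduces to soft-thresholding at
    p = 1 and approaches hard-thresholding behaviour as p -> 0."""
    mag = np.abs(t)
    safe = np.where(mag > 0, mag, 1.0)                 # avoid 0**(p-1) warnings
    shrunk = np.maximum(mag - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
    return np.sign(t) * np.where(mag > 0, shrunk, 0.0)

t = np.linspace(-3, 3, 7)
print("input      :", t)
print("soft (p=1) :", np.round(p_shrink(t, lam=1.0, p=1.0), 3))
print("p = 0.5    :", np.round(p_shrink(t, lam=1.0, p=0.5), 3))
```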
Schmidt, Rita; Webb, Andrew
2016-01-01
Electrical Properties Tomography (EPT) using MRI is a technique that has been developed to provide a new contrast mechanism for in vivo imaging. Currently the most common method relies on the solution of the homogeneous Helmholtz equation, which has limitations in accurate estimation at tissue interfaces. A new method proposed in this work combines a Maxwell's integral equation representation of the problem, and the use of high permittivity materials (HPM) to control the RF field, in order to reconstruct the electrical properties image. The magnetic field is represented by an integral equation considering each point as a contrast source. This equation can be solved in an inverse method. In this study we use a reference simulation or scout scan of a uniform phantom to provide an initial estimate for the inverse solution, which allows the estimation of the complex permittivity within a single iteration. Incorporating two setups with and without the HPM improves the reconstructed result, especially with respect to the very low electric field in the center of the sample. Electromagnetic simulations of the brain were performed at 3T to generate the B1(+) field maps and reconstruct the electric properties images. The standard deviations of the relative permittivity and conductivity were within 14% and 18%, respectively for a volume consisting of white matter, gray matter and cerebellum. Copyright © 2015 Elsevier Inc. All rights reserved.
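For reference, the homogeneous-Helmholtz estimate that the proposed integral-equation method is contrasted with can be written (under the usual assumption of locally uniform properties, which is precisely what breaks down at tissue interfaces) as

\[
\varepsilon_r - i\,\frac{\sigma}{\omega \varepsilon_0} \;\approx\; -\,\frac{\nabla^2 B_1^{+}}{\mu_0 \varepsilon_0 \omega^2\, B_1^{+}},
\]

where $B_1^{+}$ is the measured transmit field, $\omega$ the Larmor angular frequency, and $\varepsilon_0$, $\mu_0$ the vacuum permittivity and permeability.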
NASA Astrophysics Data System (ADS)
Fahmi, Rachid; Eck, Brendan L.; Vembar, Mani; Bezerra, Hiram G.; Wilson, David L.
2014-03-01
We investigated the use of an advanced hybrid iterative reconstruction (IR) technique (iDose4, Philips Healthcare) for low-dose dynamic myocardial CT perfusion (CTP) imaging. A porcine model was created to mimic coronary stenosis through partial occlusion of the left anterior descending (LAD) artery with a balloon catheter. The severity of LAD occlusion was adjusted with FFR measurements. Dynamic CT images were acquired at end-systole (45% R-R) using a multi-detector CT (MDCT) scanner. Various corrections were applied to the acquired scans to reduce motion and imaging artifacts. Absolute myocardial blood flow (MBF) was computed with a deconvolution-based approach using singular value decomposition (SVD). We compared a high and a low dose radiation protocol corresponding to two different tube-voltage/tube-current combinations (80 kVp/100 mAs and 120 kVp/150 mAs). The corresponding radiation doses for these protocols are 7.8 mSv and 34.3 mSv, respectively. The images were reconstructed using conventional FBP and three noise-reduction strengths of the IR method, iDose. The flow contrast-to-noise ratio, CNRf, as obtained from MBF maps, was used to quantitatively evaluate the effect of reconstruction on the contrast between normal and ischemic myocardial tissue. Preliminary results showed that using iDose to reconstruct low-dose images provides a CNRf better than or comparable to that of high-dose images reconstructed with FBP, suggesting significant dose savings. CNRf was improved with the three iDose levels used compared to FBP for both protocols. When using the entire 4D dynamic sequence for MBF computation, a 77% dose reduction was achieved, while considering only half the scans (i.e., every other heart cycle) allowed even further dose reduction while maintaining a relatively high CNRf.
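The deconvolution step referred to above has a standard form: the tissue time-attenuation curve is modelled as the arterial input function (AIF) convolved with a flow-scaled residue function, the convolution is written as a matrix, and a truncated-SVD pseudo-inverse recovers the scaled residue, whose peak gives MBF. The sketch below is a generic illustration with synthetic curves; sampling interval, truncation threshold, and curve shapes are assumptions, not the study's processing chain.

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, sv_thresh=0.2):
    """Recover flow * residue-function by truncated-SVD deconvolution of
    tissue(t) = MBF * (AIF (*) R)(t); MBF is the peak of the recovered curve."""
    n = len(aif)
    A = dt * np.tril(np.array([[aif[i - j] if i >= j else 0.0
                                for j in range(n)] for i in range(n)]))
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > sv_thresh * s.max(), 1.0 / s, 0.0)   # truncate small singular values
    k_scaled = Vt.T @ (s_inv * (U.T @ tissue))
    return k_scaled.max()                                      # MBF in the same units as flow

# synthetic example: gamma-variate AIF, exponential residue, known flow
dt, t = 1.0, np.arange(40, dtype=float)
aif = (t ** 3) * np.exp(-t / 2.0); aif /= aif.max()
flow_true, mtt = 0.02, 6.0                                     # arbitrary illustrative units
residue = np.exp(-t / mtt)
tissue = flow_true * dt * np.convolve(aif, residue)[:len(t)]
print("recovered MBF:", round(svd_deconvolve(aif, tissue, dt), 4), "true:", flow_true)
```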
May, Matthias S; Wüst, Wolfgang; Brand, Michael; Stahl, Christian; Allmendinger, Thomas; Schmidt, Bernhard; Uder, Michael; Lell, Michael M
2011-07-01
We sought to evaluate the image quality of iterative reconstruction in image space (IRIS) in half-dose (HD) datasets compared with full-dose (FD) and HD filtered back projection (FBP) reconstruction in abdominal computed tomography (CT). To acquire data with FD and HD simultaneously, contrast-enhanced abdominal CT was performed with a dual-source CT system, both tubes operating at 120 kV, 100 ref.mAs, and pitch 0.8. Three different image datasets were reconstructed from the raw data: standard FD images applying FBP, which served as reference, HD images applying FBP, and HD images applying IRIS. For the HD data sets, only data from 1 tube detector-system was used. Quantitative image quality analysis was performed by measuring image noise in tissue and air. Qualitative image quality was evaluated according to the European Guidelines on Quality criteria for CT. Additional assessment of artifacts, lesion conspicuity, and edge sharpness was performed. Image noise in soft tissue was substantially decreased in HD-IRIS (-3.4 HU, -22%) and increased in HD-FBP (+6.2 HU, +39%) images when compared with the reference (mean noise, 15.9 HU). No significant differences between the FD-FBP and HD-IRIS images were found for the visually sharp anatomic reproduction, overall diagnostic acceptability (P = 0.923), lesion conspicuity (P = 0.592), and edge sharpness (P = 0.589), while HD-FBP was rated inferior. Streak artifacts and beam hardening were significantly more prominent in HD-FBP, while HD-IRIS images exhibited a slightly different noise pattern. Direct intrapatient comparison of standard FD body protocols and HD-IRIS reconstruction suggests that the latest iterative reconstruction algorithms allow for approximately 50% dose reduction without deterioration of the high image quality necessary for confident diagnosis.
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, images with ring artifacts are reconstructed when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, which aims to estimate the missing projection data accurately and thus remove the ring artifacts from the CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel lines in the projection sinogram; 2) linear interpolation within those pixel lines; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image with a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We studied the impact of the number of dead detector bins on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256-by-256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the fraction of dead bins is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the fraction of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
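The ten-step loop above can be sketched compactly with scikit-image's parallel-beam radon/iradon pair as a stand-in projector (the actual scanner geometry will differ). The dead-bin positions, mean-filter size, and iteration count below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

def interpolate_bad_rows(sino, bad_rows):
    """Linearly interpolate dead detector bins (sinogram rows) from their neighbours."""
    out = sino.copy()
    rows = np.arange(sino.shape[0])
    good = np.setdiff1d(rows, bad_rows)
    for col in range(sino.shape[1]):
        out[bad_rows, col] = np.interp(bad_rows, good, sino[good, col])
    return out

def ring_correction(sino, bad_rows, theta, n_iter=3, filt=5):
    """Prior-image-based ring artifact reduction: iteratively re-estimate the
    missing sinogram rows from a smoothed FBP reconstruction."""
    corrected = interpolate_bad_rows(sino, bad_rows)
    for _ in range(n_iter):
        img = iradon(corrected, theta=theta, circle=False)     # FBP reconstruction
        img = uniform_filter(img, size=filt)                   # mean-filtered prior image
        fwd = radon(img, theta=theta, circle=False)            # forward projection
        diff = interpolate_bad_rows(sino - fwd, bad_rows)      # residual with bad rows interpolated
        corrected = fwd + diff                                 # corrected projection data
    return iradon(corrected, theta=theta, circle=False)

phantom = rescale(shepp_logan_phantom(), 0.5)                  # 200 x 200 test image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=theta, circle=False)
bad_rows = np.array([140, 141, 200])                           # hypothetical dead bins
sino[bad_rows, :] = 0.0                                        # simulate the dead detector bins
recon = ring_correction(sino, bad_rows, theta)
print("reconstruction shape:", recon.shape)
```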
Mueck, F G; Körner, M; Scherr, M K; Geyer, L L; Deak, Z; Linsenmaier, U; Reiser, M; Wirth, S
2012-03-01
To compare the image quality of dose-reduced 64-row abdominal CT reconstructed at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed with filtered back-projection (FBP) in a clinical setting and upgrade situation. Abdominal baseline examinations (noise index NI = 29; LightSpeed VCT XT, GE) were intra-individually compared to follow-up studies on a CT with an ASIR option (NI = 43; Discovery HD750, GE), n = 42. Standard-kernel images were calculated with ASIR blendings of 0-100% in slice and volume mode, respectively. Three experienced radiologists compared the image quality of these 567 sets to their corresponding full-dose baseline examination (-2: diagnostically inferior, -1: inferior, 0: equal, +1: superior, +2: diagnostically superior). Furthermore, a phantom was scanned. Statistical analysis used the Wilcoxon test, the Mann-Whitney U-test, and the intra-class correlation (ICC). The mean CTDIvol decreased from 19.7 ± 5.5 to 12.2 ± 4.7 mGy (p < 0.001). The ICC was 0.861. The total image quality of the dose-reduced ASIR studies was comparable to the baseline at ASIR 50% in slice mode (p = 0.18) and ASIR 50-100% in volume mode (p > 0.10). Volume mode performed 73% slower than slice mode (p < 0.01). After the system upgrade, the vendor recommendation of ASIR 50% in slice mode allowed for a dose reduction of 38% in abdominal CT with comparable image quality and time expenditure. However, there is still further dose reduction potential for more complex reconstruction settings. © Georg Thieme Verlag KG Stuttgart · New York.
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described through the scattering process in an inhomogeneous medium. Because of multiple scattering, an inherent nonlinearity relates the scattering medium to the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption breaks down as the sample becomes more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of BPM are similar to neural networks; therefore we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
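The BPM forward model at the heart of learning tomography can be summarized in a few lines: the field is propagated slice by slice, alternating a free-space (Fourier-domain) diffraction step with a phase screen that accounts for the local refractive-index contrast. The paraxial 1D sketch below is a generic illustration; wavelength, grid, and index contrast are assumptions and no multiple-scattering inversion is attempted.

```python
import numpy as np

def bpm_propagate(field0, n_slices, dz, dx, wavelength, n0, delta_n):
    """Paraxial split-step beam propagation: for each slice, apply a phase
    screen exp(i k0 dz * delta_n) and then diffract over dz in Fourier space."""
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(field0.size, d=dx)
    diffraction = np.exp(-1j * kx ** 2 * dz / (2 * k0 * n0))    # paraxial propagator
    field = field0.astype(complex)
    for s in range(n_slices):
        field = field * np.exp(1j * k0 * dz * delta_n[s])       # refraction (phase screen)
        field = np.fft.ifft(np.fft.fft(field) * diffraction)    # free-space step
    return field

# toy 1D example: a Gaussian beam crossing a weak phase object
x = np.linspace(-50e-6, 50e-6, 256)
field0 = np.exp(-(x / 10e-6) ** 2)
delta_n = np.zeros((40, x.size))
delta_n[15:25, 100:156] = 0.02                                  # hypothetical object slab
out = bpm_propagate(field0, n_slices=40, dz=1e-6, dx=x[1] - x[0],
                    wavelength=0.633e-6, n0=1.33, delta_n=delta_n)
print("output intensity peak:", float(np.max(np.abs(out) ** 2)))
```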
Ultra-Low-Dose Fetal CT With Model-Based Iterative Reconstruction: A Prospective Pilot Study.
Imai, Rumi; Miyazaki, Osamu; Horiuchi, Tetsuya; Asano, Keisuke; Nishimura, Gen; Sago, Haruhiko; Nosaka, Shunsuke
2017-06-01
Prenatal diagnosis of skeletal dysplasia by means of 3D skeletal CT examination is highly accurate. However, it carries a risk of fetal exposure to radiation. Model-based iterative reconstruction (MBIR) technology can reduce radiation exposure; however, to our knowledge, the lower limit of an optimal dose is currently unknown. The objectives of this study are to establish ultra-low-dose fetal CT as a method for prenatal diagnosis of skeletal dysplasia and to evaluate the appropriate radiation dose for ultra-low-dose fetal CT. Relationships between tube current and image noise in adaptive statistical iterative reconstruction and MBIR were examined using a 32-cm CT dose index (CTDI) phantom. On the basis of the results of this examination and the recommended methods for the MBIR option and the known relationship between noise and tube current for filtered back projection, as represented by the expression SD ∝ (mAs)^(-0.5), the lower limit of the optimal dose in ultra-low-dose fetal CT with MBIR was set. The diagnostic power of the CT images obtained using the aforementioned scanning conditions was evaluated, and the radiation exposure associated with ultra-low-dose fetal CT was compared with that noted in previous reports. Noise increased in nearly inverse proportion to the square root of the dose in adaptive statistical iterative reconstruction and in inverse proportion to the fourth root of the dose in MBIR. Ultra-low-dose fetal CT was found to have a volume CTDI of 0.5 mGy. Prenatal diagnosis was accurately performed on the basis of ultra-low-dose fetal CT images that were obtained using this protocol. The level of fetal exposure to radiation was 0.7 mSv. The use of ultra-low-dose fetal CT with MBIR led to a substantial reduction in radiation exposure, compared with the CT imaging method currently used at our institution, but it still enabled diagnosis of skeletal dysplasia without reducing diagnostic power.
NASA Astrophysics Data System (ADS)
Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.
2014-08-01
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we tuned the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
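One common way of writing the figure of merit used here, with the TTF substituted for the MTF as described, $W(f)$ the task (signal) spectrum, and $\mathrm{NPS}(f)$ the noise power spectrum, is

\[
\mathrm{SNR}_{\mathrm{NPW}}^{2} \;=\; \frac{\left[\int \lvert W(f)\rvert^{2}\,\mathrm{TTF}^{2}(f)\,df\right]^{2}}{\int \lvert W(f)\rvert^{2}\,\mathrm{TTF}^{2}(f)\,\mathrm{NPS}(f)\,df},
\]

so that the contrast- and noise-dependence of the iterative algorithms enters through both the TTF and the NPS.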
Yasaka, Koichiro; Kamiya, Kouhei; Irie, Ryusuke; Maeda, Eriko; Sato, Jiro; Ohtomo, Kuni
To compare the differences in metal artefact degree and the depiction of structures in helical neck CT, in patients with metallic dental fillings, among adaptive iterative dose reduction three-dimensional (AIDR 3D), forward-projected model-based iterative reconstruction solution (FIRST) and AIDR 3D with single-energy metal artefact reduction (SEMAR-A). In this retrospective clinical study, 22 patients (males, 13; females, 9; mean age, 64.6 ± 12.6 years) with metallic dental fillings who underwent contrast-enhanced helical CT involving the oropharyngeal region were included. Neck axial images were reconstructed with AIDR 3D, FIRST and SEMAR-A. Metal artefact degree and depiction of structures (the apex and root of the tongue, parapharyngeal space, superior portion of the internal jugular chain and parotid gland) were evaluated on a four-point scale by two radiologists. Regions of interest were placed to measure the standard deviations of the oral cavity and of the nuchal muscle (at a slice where no metal exists), and metal artefact indices were calculated as the square root of the difference between the two squared standard deviations. In SEMAR-A, metal artefact was significantly reduced and the depiction of all structures was significantly improved compared with FIRST and AIDR 3D (p ≤ 0.001, sign test). The metal artefact index for the oral cavity in AIDR 3D/FIRST/SEMAR-A was 572.0/477.7/88.4, and significant differences were seen between each pair of reconstruction algorithms (p < 0.0001, Wilcoxon signed-rank test). SEMAR-A could provide images with less metal artefact and better depiction of structures than AIDR 3D and FIRST.
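In other words, with $\mathrm{SD}_{\mathrm{oral}}$ measured in the artefact-affected oral cavity and $\mathrm{SD}_{\mathrm{muscle}}$ in artefact-free nuchal muscle, the metal artefact index used here is

\[
\mathrm{MAI} = \sqrt{\mathrm{SD}_{\mathrm{oral}}^{2} - \mathrm{SD}_{\mathrm{muscle}}^{2}}.
\]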
Memory-induced nonlinear dynamics of excitation in cardiac diseases.
Landaw, Julian; Qu, Zhilin
2018-04-01
Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.
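A minimal way to see the mechanism described above is an action-potential-duration (APD) restitution map augmented with a slow memory variable. The toy iterated map below is only a schematic stand-in for the models analysed in the paper; all parameter values and functional forms are illustrative, and no specific bifurcation behaviour is claimed for these particular numbers.

```python
import numpy as np

def iterate_apd_map(cycle_length, n_beats=300, tau_m=60.0, gamma=40.0,
                    a_max=200.0, b=120.0, tau_r=30.0):
    """Iterate a toy APD restitution map with memory:
      M_{n+1}   = exp(-DI_n/tau_m) * (1 - (1 - M_n)*exp(-APD_n/tau_m))
      APD_{n+1} = (a_max - gamma*M_{n+1}) - b*exp(-DI_n/tau_r)
    where DI_n = CL - APD_n is the diastolic interval (clipped to stay positive)."""
    apd, m = 150.0, 0.3
    history = []
    for _ in range(n_beats):
        di = max(cycle_length - apd, 1.0)
        m = np.exp(-di / tau_m) * (1.0 - (1.0 - m) * np.exp(-apd / tau_m))
        apd = (a_max - gamma * m) - b * np.exp(-di / tau_r)
        history.append(apd)
    return np.array(history)

# scanning the pacing cycle length plays the role of a bifurcation parameter
for cl in (350.0, 300.0, 250.0):
    tail = iterate_apd_map(cl)[-6:]
    print(f"CL = {cl:.0f} ms, last APDs:", np.round(tail, 1))
```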
Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B
2010-04-01
Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head-and-neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved with the fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably. The high computation efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
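Forward-backward splitting alternates a gradient step on the data-fidelity term with a proximal step on the regularizer. The toy sketch below applies the same splitting to a generic sparsity-regularized least-squares problem, using soft-thresholding as the proximal operator in place of the TV proximal step of the actual CBCT algorithm; step size, regularization weight, and problem size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, lam=0.05, n_iter=300):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient (forward) step on the quadratic term followed by a proximal
    (backward) step on the non-smooth regularizer."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)   # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = forward_backward(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```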
Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.
Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas
2013-03-01
The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
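As a point of reference for these data-based measures: neglecting the fan angle, a single-source short-scan reconstruction uses half a rotation of data, while a dual-source system with two tube-detector pairs offset by roughly 90° needs only a quarter rotation per image, so the nominal temporal resolutions are approximately

\[
\Delta t_{\mathrm{single}} \approx \frac{T_{\mathrm{rot}}}{2}, \qquad \Delta t_{\mathrm{dual}} \approx \frac{T_{\mathrm{rot}}}{4}.
\]

It is exactly this kind of simple statement that becomes ambiguous once nonlinear iterative reconstructions weight the data non-uniformly.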
A fast multi-resolution approach to tomographic PIV
NASA Astrophysics Data System (ADS)
Discetti, Stefano; Astarita, Tommaso
2012-03-01
Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed into the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handle the problem of limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in the memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line of sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performances in terms of accuracy.
Development of a GNSS water vapour tomography system using algebraic reconstruction techniques
NASA Astrophysics Data System (ADS)
Bender, Michael; Dick, Galina; Ge, Maorong; Deng, Zhiguo; Wickert, Jens; Kahle, Hans-Gert; Raabe, Armin; Tetzlaff, Gerd
2011-05-01
A GNSS water vapour tomography system developed to reconstruct spatially resolved humidity fields in the troposphere is described. The tomography system was designed to process the slant path delays of about 270 German GNSS stations in near real-time with a temporal resolution of 30 min, a horizontal resolution of 40 km and a vertical resolution of 500 m or better. After a short introduction to the GPS slant delay processing, the framework of the GNSS tomography is described in detail. Different implementations of the iterative algebraic reconstruction techniques (ART) used to invert the linear inverse problem are discussed. It was found that the multiplicative technique (MART) provides the best results with the least processing time, i.e., a tomographic reconstruction of about 26,000 slant delays on an 8280-cell grid can be obtained in less than 10 min. Different iterative reconstruction techniques are compared with respect to their convergence behaviour and some numerical parameters. The inversion can be considerably stabilized by using additional non-GNSS observations and implementing various constraints. Different strategies for initialising the tomography and utilizing extra information are discussed. Finally, an example of a reconstructed wet-refractivity field is presented and compared to the corresponding distribution of the integrated water vapour, to an analysis of a numerical weather model (COSMO-DE) and to some radiosonde profiles.
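To complement the MART sketch given after the Tomo-PIV entry above, the following is a minimal sketch of the additive ART (Kaczmarz) sweep, one of the iterative algebraic reconstruction techniques compared in this work; the toy grid, path lengths and refractivity values are illustrative only.

# Sketch: one additive ART (Kaczmarz) sweep over slant-delay observations p = A x
import numpy as np

def art_sweep(A, p, x, relax=1.0):
    """One Kaczmarz sweep over all slant-delay observations."""
    for i in range(A.shape[0]):
        a = A[i]
        residual = p[i] - a @ x
        x = x + relax * residual / (a @ a) * a    # project onto the hyperplane a.x = p_i
    return x

# Toy wet-refractivity example: 6 slant paths through 4 grid cells
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 2.0, size=(6, 4))            # path lengths through each cell
x_true = np.array([30.0, 25.0, 15.0, 5.0])        # wet refractivity per cell (illustrative units)
p = A @ x_true                                    # simulated slant observations
x = np.full(4, 20.0)                              # a priori initialisation
for _ in range(30):
    x = art_sweep(A, p, x)
print(x)                                          # moves toward x_true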
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Bin; Lyu, Qingwen; Ma, Jianhua
2016-04-15
Purpose: In computed tomography perfusion (CTP) imaging, an initial phase CT acquired with a high-dose protocol can be used to improve the image quality of later phase CT acquired with a low-dose protocol. For dynamic regions, signals in the later low-dose CT may not be completely recovered if the initial CT heavily regularizes the iterative reconstruction process. The authors propose a hybrid nonlocal means (hNLM) regularization model for iterative reconstruction of low-dose CTP to overcome the limitation of the conventional prior-image induced penalty. Methods: The hybrid penalty was constructed by combining the NLM of the initial phase high-dose CT in the stationary region and later phase low-dose CT in the dynamic region. The stationary and dynamic regions were determined by the similarity between the initial high-dose scan and later low-dose scan. The similarity was defined as a Gaussian kernel-based distance between the patch-window of the same pixel in the two scans, and its measurement was then used to weigh the influence of the initial high-dose CT. For regions with high similarity (e.g., stationary region), initial high-dose CT played a dominant role for regularizing the solution. For regions with low similarity (e.g., dynamic region), the regularization relied on the low-dose scan itself. This new hNLM penalty was incorporated into the penalized weighted least-squares (PWLS) framework for CTP reconstruction. Digital and physical phantom studies were performed to evaluate the PWLS-hNLM algorithm. Results: Both phantom studies showed that the PWLS-hNLM algorithm is superior to the conventional prior-image induced penalty term without considering the signal changes within the dynamic region. In the dynamic region of the Catphan phantom, the reconstruction error measured by root mean square error was reduced by 42.9% in the PWLS-hNLM reconstructed image. Conclusions: The PWLS-hNLM algorithm can effectively use the initial high-dose CT to reconstruct low-dose CTP in the stationary region while reducing its influence in the dynamic region.
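The following is a minimal sketch, with assumed parameter values, of the Gaussian kernel-based patch similarity that weighs the influence of the initial high-dose CT in the hNLM penalty; it illustrates the idea, not the authors' implementation.

# Sketch: similarity weight between the high-dose prior and the low-dose scan at one pixel
import numpy as np

def similarity_weight(high_dose, low_dose, i, j, half=2, h=20.0):
    """Similarity in [0, 1] between the patch windows around pixel (i, j) of the two scans."""
    ph = high_dose[i - half:i + half + 1, j - half:j + half + 1]
    pl = low_dose[i - half:i + half + 1, j - half:j + half + 1]
    dist2 = np.mean((ph - pl) ** 2)               # patch-window distance
    return np.exp(-dist2 / (h ** 2))              # Gaussian kernel: near 1 = stationary, near 0 = dynamic

# High similarity -> the high-dose prior dominates the penalty at that pixel;
# low similarity (dynamic region) -> the penalty falls back on the low-dose scan itself.
rng = np.random.default_rng(2)
hd = rng.normal(40.0, 3.0, size=(16, 16))         # illustrative HU values
ld = hd + rng.normal(0.0, 8.0, size=(16, 16))     # noisier later-phase scan
print(similarity_weight(hd, ld, 8, 8))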
Wang, Jin; Zhang, Chen; Wang, Yuanyuan
2017-05-30
In photoacoustic tomography (PAT), total variation (TV)-based iterative algorithms are reported to perform well in image reconstruction. However, the classical TV-based algorithm fails to preserve edges and texture details because it is not sensitive to the directional structure of the image. Therefore, it is of great significance to develop a new PAT reconstruction algorithm that overcomes this drawback of TV. In this paper, a PAT image reconstruction algorithm based on a directional total variation model with adaptive directivity (DDTV), which computes a weighted sum of the image gradients according to the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and a parameter that evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. The results show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in reconstructed image quality, with peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality. In vitro experiments are performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which gives better results than TV, with sharper image edges and clearer texture details. Both the numerical simulations and the in vitro experiments confirm that DDTV provides a significant quality improvement of reconstructed PAT images for various directivity patterns.
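The sketch below illustrates one plausible form of a directional TV term of the kind described above: a local orientation field is estimated from a smoothed structure tensor, and the gradient component along the dominant gradient orientation (i.e. across edges and stripes) is down-weighted, so directional texture is penalised less than under plain TV. It is not the authors' exact DDTV formulation, and the weighting parameter is an assumption.

# Sketch: a directional TV value with an orientation field estimated from the structure tensor
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_tv(img, alpha=0.3, sigma=2.0, eps=1e-8):
    gy, gx = np.gradient(img)
    # structure tensor, smoothed to obtain a stable orientation field
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)   # dominant local gradient orientation
    c, s = np.cos(theta), np.sin(theta)
    g_par = c * gx + s * gy                        # gradient component along that orientation
    g_perp = -s * gx + c * gy                      # component across it
    # anisotropic weighting: alpha < 1 relaxes the penalty on the dominant (texture) gradients
    return np.sum(np.sqrt(alpha * g_par**2 + g_perp**2 + eps))

# Example: a stripe pattern has a smaller directional-TV value than its plain TV value
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
stripes = np.sin(40 * x)
print(directional_tv(stripes), np.sum(np.hypot(*np.gradient(stripes))))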
Iterative methods for dose reduction and image enhancement in tomography
Miao, Jianwei; Fahimian, Benjamin Pooya
2012-09-18
A system and method is disclosed for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections, which are iteratively refined through modification in object space and Fourier space. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods.
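As a generic illustration of iterating between object-space and Fourier-space constraints (not the patented algorithm itself), the sketch below alternately re-imposes measured Fourier samples and applies assumed support and non-negativity constraints in object space.

# Sketch: a generic alternating-constraint refinement loop (illustrative assumptions throughout)
import numpy as np

def refine(measured_ft, known_mask, support, n_iter=200):
    img = np.zeros(measured_ft.shape)
    for _ in range(n_iter):
        ft = np.fft.fft2(img)
        ft[known_mask] = measured_ft[known_mask]  # Fourier-space constraint: keep measured data
        img = np.real(np.fft.ifft2(ft))
        img[~support] = 0.0                       # object-space constraints:
        img[img < 0] = 0.0                        # support and non-negativity
    return img

# Toy demo: refine a small non-negative object from 60% of its Fourier samples
rng = np.random.default_rng(3)
obj = np.zeros((32, 32)); obj[12:20, 10:22] = rng.uniform(0.5, 1.0, (8, 12))
support = obj > 0
mask = rng.random((32, 32)) < 0.6
rec = refine(np.fft.fft2(obj), mask, support)
print(np.abs(rec - obj).max())                    # residual mismatch; it typically shrinks with iterations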
Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference
NASA Astrophysics Data System (ADS)
Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2018-06-01
Multi-image iterative phase retrieval methods have been successfully applied in many research fields owing to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the sequential interval. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It eliminates the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.
Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel
2016-10-01
This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries were either reconstructed with filtered back projection (FBP) at the full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34% to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.
Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F
2012-08-01
The authors evaluated the diagnostic accuracy of second-generation dual-source computed tomography (DSCT) coronary angiography (CTCA) with iterative reconstruction for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy using thresholds for significant stenosis of ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in a real-world clinical setting shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.
Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano
2017-01-01
The aim of this work was to evaluate the detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find appropriate settings to reduce the dose to the patient without any detriment to image quality. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameters of the minimum detectable detail were around 2 mm (up to 100 mAs). Below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. Model observers can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
Radiation dose reduction for CT lung cancer screening using ASIR and MBIR: a phantom study.
Mathieu, Kelsey B; Ai, Hua; Fox, Patricia S; Godoy, Myrna Cobos Barco; Munden, Reginald F; de Groot, Patricia M; Pan, Tinsu
2014-03-06
The purpose of this study was to reduce the radiation dosage associated with computed tomography (CT) lung cancer screening while maintaining overall diagnostic image quality and definition of ground-glass opacities (GGOs). A lung screening phantom and a multipurpose chest phantom were used to quantitatively assess the performance of two iterative image reconstruction algorithms (adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR)) used in conjunction with reduced tube currents relative to a standard clinical lung cancer screening protocol (51 effective mAs (3.9 mGy) and filtered back-projection (FBP) reconstruction). To further assess the algorithms' performance, qualitative image analysis was conducted (in the form of a reader study) using the multipurpose chest phantom, which was implanted with GGOs of two densities. Our quantitative image analysis indicated that tube current, and thus radiation dose, could be reduced by 40% with ASIR or 80% with MBIR, compared with conventional FBP, while maintaining similar image noise magnitude and contrast-to-noise ratio. The qualitative portion of our study, which assessed reader preference, yielded similar results, indicating that dose could be reduced by 60% (to 20 effective mAs (1.6 mGy)) with either ASIR or MBIR, while maintaining GGO definition. Additionally, the readers' preferences (as indicated by their ratings) regarding overall image quality were equal or better (for a given dose) when using ASIR or MBIR, compared with FBP. In conclusion, combining ASIR or MBIR with reduced tube current may allow for lower doses while maintaining overall diagnostic image quality, as well as GGO definition, during CT lung cancer screening.
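A minimal sketch of the quantitative measures referred to above, image noise and contrast-to-noise ratio in phantom regions of interest, is given below; the ROI geometry and HU values are illustrative, not those of the phantoms used in this study.

# Sketch: noise and CNR measured from ROIs of a toy phantom image
import numpy as np

def noise_and_cnr(img, roi_obj, roi_bkg):
    """roi_obj / roi_bkg are boolean masks for the insert and a uniform background."""
    noise = img[roi_bkg].std()
    cnr = abs(img[roi_obj].mean() - img[roi_bkg].mean()) / noise
    return noise, cnr

# Toy phantom: a ground-glass-like insert (-650 HU) in lung-like background (-800 HU)
rng = np.random.default_rng(4)
img = rng.normal(-800.0, 15.0, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
insert = (yy - 64) ** 2 + (xx - 64) ** 2 < 10 ** 2
img[insert] += 150.0
bkg = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
print(noise_and_cnr(img, insert, bkg))            # roughly: noise ~15 HU, CNR ~10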
Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps
NASA Astrophysics Data System (ADS)
Yi, Taishan; Chen, Yuming
2017-12-01
In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is NOT invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under the travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These results enable us to develop a unified method to obtain results on heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problems of monostable reaction-diffusion equations on R+ as well as for monostable/bistable reaction-diffusion equations on R.
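For orientation, the standard setting behind the terms above can be written as follows (a textbook form, not quoted from the paper): the scalar reaction-diffusion equation is

    u_t = u_{xx} + f(u),

where f is monostable if f(0) = f(1) = 0 with f > 0 on (0, 1), and bistable if f(0) = f(a) = f(1) = 0 for some a in (0, 1) with f < 0 on (0, a) and f > 0 on (a, 1); a travelling wave is a solution of the form u(t, x) = \phi(x + ct) with profile \phi and speed c, and the travelling wave transformation refers to passing to the moving coordinate \xi = x + ct.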
NASA Astrophysics Data System (ADS)
Dunbar, John A.; Cook, Richard W.
2003-07-01
Existing methods for the palinspastic reconstruction of structure maps do not adequately account for heterogeneous rock strain and hence cannot accurately treat features such as fault terminations and non-cylindrical folds. We propose a new finite element formulation of the map reconstruction problem that treats such features explicitly. In this approach, a model of the map surface, with internal openings that honor the topology of the fault-gap network, is constructed of triangular finite elements. Both the model building and reconstruction algorithms are guided by rules relating fault-gap topology to the kinematics of fault motion and are fully automated. We represent the total strain as the sum of a prescribed component of locally homogeneous simple shear and a minimum amount of heterogeneous residual strain. The region within which a particular orientation of simple shear is treated as homogeneous can be as small as an individual element or as large as the entire map. For the residual strain calculations, we treat the map surface as a hyperelastic membrane. A globally optimum reconstruction is found that unfolds the map while faithfully honoring the assigned strain mechanisms, closes fault gaps without overlaps or voids, and imparts the least possible residual strain to the restored surface. The amount and distribution of the residual strain serve as a diagnostic tool for identifying mapping errors. The method can be used to reconstruct maps offset by any number of faults that terminate, branch and offset each other in arbitrarily complex ways.
Influence of reconstruction algorithms on image quality in SPECT myocardial perfusion imaging.
Davidsson, Anette; Olsson, Eva; Engvall, Jan; Gustafsson, Agnetha
2017-11-01
We investigated whether image and diagnostic quality in SPECT MPI could be maintained despite a reduced acquisition time by adding Depth Dependent Resolution Recovery (DDRR) for image reconstruction. Images were compared with filtered back projection (FBP) and iterative reconstruction using Ordered Subsets Expectation Maximization with (IRAC) and without (IRNC) attenuation correction (AC). Stress and rest imaging for 15 min was performed on 21 subjects with a dual-head gamma camera (Infinia Hawkeye; GE Healthcare), ECG-gating with 8 frames/cardiac cycle and a low-dose CT scan. A 9 min acquisition was generated using five instead of eight gated frames and was reconstructed with DDRR, with (IRACRR) and without AC (IRNCRR), as well as with FBP. Three experienced nuclear medicine specialists visually assessed anonymized images according to eight criteria on a four-point scale, three related to image quality and five to diagnostic confidence. Statistical analysis was performed using Visual Grading Regression (VGR). Observer confidence in statements on image quality was highest for the images that were reconstructed using DDRR (P<0.01 compared to FBP). Iterative reconstruction without DDRR was not superior to FBP. Interobserver variability was significant for statements on image quality (P<0.05) but lower for the diagnostic statements on ischemia and scar. The confidence in assessing ischemia and scar was not different between the reconstruction techniques (P = n.s.). SPECT MPI collected in 9 min, reconstructed with DDRR and AC, produced better image quality than the standard procedure. The observers expressed the highest diagnostic confidence in the DDRR reconstruction.
Iterative framework radiation hybrid mapping
USDA-ARS's Scientific Manuscript database
Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...
NASA Astrophysics Data System (ADS)
Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi
2017-03-01
Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin-Osher-Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, a simplified augmented Lagrangian (SAL) and an accelerated alternating direction method of multipliers (AADMM), are introduced to address these problems in ECT. The effects of the algorithm parameters, the number of iterations and the noise level in the capacitance data are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, in comparison with the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise, and that AADMM outperforms the other algorithms in identifying the object against its background.
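For reference, a minimal sketch of the Landweber iteration (LI) used as the baseline in this comparison is given below; the normalised sensitivity matrix, step size and toy data are illustrative.

# Sketch: Landweber iteration for a linearised ECT problem S g = c
import numpy as np

def landweber(S, c, n_iter=500, alpha=None):
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # step size from the largest singular value
    g = np.zeros(S.shape[1])                      # permittivity distribution (grey levels)
    for _ in range(n_iter):
        g = g + alpha * S.T @ (c - S @ g)         # gradient step on ||S g - c||^2 / 2
        g = np.clip(g, 0.0, 1.0)                  # keep grey levels physical
    return g

# Toy example: 12 measurements, 16-pixel image
rng = np.random.default_rng(5)
S = rng.uniform(0.0, 1.0, size=(12, 16))
g_true = np.zeros(16); g_true[5:8] = 1.0
c = S @ g_true
print(landweber(S, c).round(2))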
Anisotropic elastic moduli reconstruction in transversely isotropic model using MRE
NASA Astrophysics Data System (ADS)
Song, Jiah; In Kwon, Oh; Seo, Jin Keun
2012-11-01
Magnetic resonance elastography (MRE) is an elastic tissue property imaging modality in which the phase-contrast based MRI imaging technique is used to measure internal displacement induced by a harmonically oscillating mechanical vibration. MRE has made rapid technological progress in the past decade and has now reached the stage of clinical use. Most of the research outcomes are based on the assumption of isotropy. Since soft tissues like skeletal muscles show anisotropic behavior, the MRE technique should be extended to anisotropic elastic property imaging. This paper considers reconstruction in a transversely isotropic model, which is the simplest case of anisotropy, and develops a new non-iterative reconstruction method for visualizing the elastic moduli distribution. This new method is based on an explicit representation formula using the Newtonian potential of measured displacement. Hence, the proposed method does not require iterations since it directly recovers the anisotropic elastic moduli. We perform numerical simulations in order to demonstrate the feasibility of the proposed method in recovering a two-dimensional anisotropic tensor.
Nakajima, Nobuharu
2010-07-20
When a very intense beam is used for illuminating an object in coherent x-ray diffraction imaging, the intensities at the center of the diffraction pattern for the object are cut off by a beam stop that is utilized to block the intense beam. Until now, only iterative phase-retrieval methods have been applied to object reconstruction from a single diffraction pattern with a deficiency of central data due to a beam stop. As an alternative method, I present a noniterative solution in which an interpolation method based on the sampling theorem for the missing data is used for object reconstruction with our previously proposed phase-retrieval method using an aperture-array filter. Computer simulations demonstrate the reconstruction of a complex-amplitude object from a single diffraction pattern with a missing data area, which is generally difficult to treat with the iterative methods because a nonnegativity constraint cannot be used for such an object.
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging using inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out by using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
Numerical solution of Euler's equation by perturbed functionals
NASA Technical Reports Server (NTRS)
Dey, S. K.
1985-01-01
A perturbed functional iteration has been developed to solve nonlinear systems. It adds, at each iteration level, unique perturbation parameters to the nonlinear Gauss-Seidel iterates, which enhances the convergence properties of the iteration. As convergence is approached, these parameters are damped out. Local linearization along the diagonal has been used to compute these parameters. The method requires no computation of the Jacobian or factorization of matrices. Analysis of convergence depends on the properties of certain contraction-type mappings, known as D-mappings. In this article, the application of this method to solve an implicit finite difference approximation of Euler's equation is studied. Some representative results for the well-known shock tube problem and compressible flows in a nozzle are given.
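The sketch below is only one possible reading of the description above: a nonlinear Gauss-Seidel sweep in which a perturbation parameter is added to the local diagonal linearization and damped out over the iterations. It illustrates the idea, not the paper's scheme.

# Sketch (interpretive): nonlinear Gauss-Seidel with a damped diagonal perturbation
import numpy as np

def perturbed_gauss_seidel(F, x, n_sweeps=50, eps0=1.0, damp=0.7, h=1e-6):
    """Solve F(x) = 0 component-wise; F returns a vector the same length as x."""
    eps = eps0
    for _ in range(n_sweeps):
        for i in range(len(x)):
            fi = F(x)[i]
            xp = x.copy(); xp[i] += h
            dfi = (F(xp)[i] - fi) / h             # local diagonal derivative dF_i/dx_i
            x[i] -= fi / (dfi + eps)              # perturbed local Newton/Gauss-Seidel step
        eps *= damp                               # damp the perturbation out
    return x

# Toy diagonally dominant nonlinear system with solution (1, 1)
F = lambda x: np.array([5 * x[0] + x[1]**2 - 6.0, x[0]**2 + 5 * x[1] - 6.0])
print(perturbed_gauss_seidel(F, np.array([0.0, 0.0])))   # converges toward (1, 1)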
Quantitative susceptibility mapping of human brain at 3T: a multisite reproducibility study.
Lin, P-Y; Chao, T-C; Wu, M-L
2015-03-01
Quantitative susceptibility mapping of the human brain has demonstrated strong potential in examining iron deposition, which may help in investigating possible brain pathology. This study assesses the reproducibility of quantitative susceptibility mapping across different imaging sites. In this study, the susceptibility values of 5 regions of interest in the human brain were measured on 9 healthy subjects following calibration by using phantom experiments. Each of the subjects was imaged 5 times on 1 scanner with the same procedure repeated on 3 different 3T systems so that both within-site and cross-site quantitative susceptibility mapping precision levels could be assessed. Two quantitative susceptibility mapping algorithms, similar in principle, one by using iterative regularization (iterative quantitative susceptibility mapping) and the other with analytic optimal solutions (deterministic quantitative susceptibility mapping), were implemented, and their performances were compared. Results show that while deterministic quantitative susceptibility mapping had nearly 700 times faster computation speed, residual streaking artifacts seem to be more prominent compared with iterative quantitative susceptibility mapping. With quantitative susceptibility mapping, the putamen, globus pallidus, and caudate nucleus showed smaller imprecision on the order of 0.005 ppm, whereas the red nucleus and substantia nigra, closer to the skull base, had a somewhat larger imprecision of approximately 0.01 ppm. Cross-site errors were not significantly larger than within-site errors. Possible sources of estimation errors are discussed. The reproducibility of quantitative susceptibility mapping in the human brain in vivo is regionally dependent, and the precision levels achieved with quantitative susceptibility mapping should allow longitudinal and multisite studies such as aging-related changes in brain tissue magnetic susceptibility.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
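The sketch below illustrates, under assumed details, the feedback step described above: LLRs of bits that the JPEG2000 error-resilience checks flag as correct or erroneous are reweighted before the next LDPC iteration, with a stronger factor at lower channel SNR. The specific weighting rule here is a placeholder, not the paper's function.

# Sketch (placeholder weighting rule): reweight LLRs of bits classified by the source decoder
import numpy as np

def reweight_llr(llr, hard_bits, known_correct, known_error, snr_db):
    """Scale the LLRs of classified bits, more aggressively at low SNR (placeholder rule)."""
    w = np.clip(3.0 - 0.2 * snr_db, 1.0, 3.0)     # larger factor at lower SNR (assumed form)
    sign = 1.0 - 2.0 * hard_bits                  # bit 0 -> +1, bit 1 -> -1 (BPSK-style LLR sign)
    llr = llr.copy()
    llr[known_correct] = w * np.abs(llr[known_correct]) * sign[known_correct]  # reinforce correct bits
    llr[known_error] = -llr[known_error] / w                                   # weaken / flip erroneous bits
    return llr

# Usage with illustrative arrays
rng = np.random.default_rng(6)
llr = rng.normal(0.0, 2.0, 8)
bits = (llr < 0).astype(int)
print(reweight_llr(llr, bits, known_correct=np.array([0, 3]), known_error=np.array([5]), snr_db=2.0))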
Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim
2009-01-01
This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078
Interferometric tomography of continuous fields with incomplete projections
NASA Technical Reports Server (NTRS)
Cha, Soyoung S.; Sun, Hogwei
1988-01-01
Interferometric tomography in the presence of an opaque object is investigated. The developed iterative algorithm does not need to augment the missing information. It is based on the successive reconstruction of the difference field, the difference between the object field to be reconstructed and its estimate, only in the defined region. The application of the algorithm results in stable convergence.
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D reconstruction is compared to conventional (non-4D) maximum a posteriori (MAP) reconstruction, and to 4D reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetic parameters for which parametric maps were created and the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
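A minimal sketch of the spline-residue TAC model described above follows: each TAC is modelled as a weighted sum of convolutions of the arterial input function with cubic B-spline basis functions. The toy input function, knot placement and weights are all illustrative.

# Sketch: building a spline-residue time-activity curve from convolved B-spline basis functions
import numpy as np
from scipy.interpolate import BSpline

dt = 2.0                                          # frame spacing (s), illustrative
t = np.arange(0.0, 600.0, dt)                     # imaged time course
aif = (t / 60.0) * np.exp(-t / 90.0)              # toy arterial input function

knots = np.concatenate(([0.0] * 3, np.linspace(0.0, 600.0, 8), [600.0] * 3))
n_basis = len(knots) - 4                          # number of cubic B-spline basis functions
basis = [BSpline(knots, np.eye(n_basis)[k], 3)(t) for k in range(n_basis)]

# convolution with the input function constrains the model at early time points
conv_basis = np.stack([np.convolve(aif, b)[:len(t)] * dt for b in basis])

weights = np.abs(np.random.default_rng(7).normal(size=n_basis))  # illustrative weights
tac = weights @ conv_basis                        # modelled time-activity curve
print(tac.shape)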
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
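The sketch below shows the structure of a list-mode EM update of the kind whose per-event back- and forward line projections are accelerated on the GPU in this work; it runs on a toy dense system on the CPU, and the sensitivity term and event weights are illustrative stand-ins.

# Sketch: list-mode EM update with one back-/forward projection per recorded event
import numpy as np

def list_mode_em(event_rows, sensitivity, lam, n_iter=20, eps=1e-12):
    """event_rows[i] holds the projection weights of recorded event i (one LOR each)."""
    for _ in range(n_iter):
        back = np.zeros_like(lam)
        for a in event_rows:                      # one back-/forward projection per event
            back += a / (a @ lam + eps)
        lam = lam * back / (sensitivity + eps)    # EM multiplicative update
    return lam

# Toy example: 5 voxels, 40 recorded events with random LOR weights
rng = np.random.default_rng(8)
lam_true = np.array([1.0, 4.0, 2.0, 0.5, 3.0])
event_rows = rng.uniform(0.0, 1.0, size=(40, 5))
sensitivity = event_rows.sum(axis=0) / 40.0       # crude stand-in for the true sensitivity image
lam = np.ones(5)
print(list_mode_em(event_rows, sensitivity, lam))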