GPU-accelerated image reconstruction for optical and infrared interferometry
NASA Astrophysics Data System (ADS)
Baron, Fabien; Kloppenborg, Brian
2010-07-01
The advent of GPU hardware and associated software libraries for scientific computing makes it possible to accelerate parallelisable problems by a typical factor of 10-100. We present the first GPU-accelerated and open-source image reconstruction software for optical/infrared interferometry, making use of the OpenCL library. Finally, we evaluate how this improvement in speed may translate into improved image reconstruction quality for currently computationally intensive algorithms.
Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.
2013-01-01
Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
Prakash, Jaya; Chandrasekharan, Venkittarayan; Upendra, Vishwajith; Yalavarthy, Phaneendra K
2010-01-01
Diffuse optical tomographic image reconstruction uses advanced numerical models that are computationally too costly to run in real time. Graphics processing units (GPUs) offer desktop massive parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to a factor of 40 using GPUs compared with traditional CPUs for three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making the GPUs attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts reconstruction to no more than 13,377 optical parameters.
Chou, Chia-Chu; Chandramouli, Gadisetti V R; Shin, Taehoon; Devasahayam, Nallathamby; McMillan, Alan; Babadi, Behtash; Gullapalli, Rao; Krishna, Murali C; Zhuo, Jiachen
2017-04-01
Electron paramagnetic resonance (EPR) imaging has evolved as a promising tool to provide non-invasive assessment of tissue oxygenation levels. Due to the extremely short T2 relaxation time of electrons, single point imaging (SPI) is used in EPRI, limiting achievable spatial and temporal resolution. This presents a problem when attempting to measure changes in hypoxic state. In order to capture oxygen variation in hypoxic tissues and localize cycling hypoxia regions, an accelerated EPRI method with minimal loss of information is needed. We present an image acceleration technique, partial Fourier compressed sensing (PFCS), that combines compressed sensing (CS) and partial Fourier reconstruction. PFCS augments the original CS equation using conjugate symmetry information for missing measurements. To further improve image quality when reconstructing low-resolution EPRI images, a projection onto convex sets (POCS)-based phase map and a spherical-sampling mask are used in the reconstruction process. The PFCS technique was used in phantoms and in vivo SCC7 tumor mice to evaluate image quality and accuracy in estimating O2 concentration. In both phantom and in vivo experiments, PFCS demonstrated the ability to reconstruct images more accurately with at least a 4-fold acceleration compared to traditional CS. Meanwhile, PFCS is able to better preserve the distinct spatial pattern in a phantom with a spatial resolution of 0.6 mm. On phantoms containing Oxo63 solution with different oxygen concentrations, PFCS reconstructed linewidth maps that were discriminative of different O2 concentrations. Moreover, PFCS reconstruction of partially sampled data provided a better discrimination of hypoxic and oxygenated regions in a leg tumor compared to traditional CS-reconstructed images. EPR images with an acceleration factor of four are feasible using PFCS with reasonable assessment of tissue oxygenation. The technique can greatly enhance EPR applications and improve our
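The conjugate-symmetry relation that partial Fourier methods such as PFCS exploit can be shown in a few lines of NumPy: for a real-valued image, the unmeasured half of k-space is the mirrored complex conjugate of the acquired half. This is a toy sketch of the principle only, not the authors' POCS/CS pipeline; the 8x8 image and sampling pattern are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                 # real-valued toy image
k_full = np.fft.fft2(img)                # fully sampled k-space (simulation only)

# Partial Fourier acquisition: keep slightly more than half of the rows.
acquired = np.zeros_like(k_full)
acquired[:5, :] = k_full[:5, :]

# For a real image, F(-kx, -ky) = conj(F(kx, ky)): synthesize the missing
# rows from the acquired ones instead of measuring them.
n = 8
filled = acquired.copy()
for i in range(5, n):
    for j in range(n):
        filled[i, j] = np.conj(acquired[(-i) % n, (-j) % n])

recon = np.fft.ifft2(filled).real        # exact here because the data are noise-free
```

With noisy, undersampled data the symmetry only supplies extra constraints, which is why PFCS folds it into the CS recovery problem rather than using it as a direct fill-in.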
NASA Astrophysics Data System (ADS)
Wang, Qi; Lian, Zhijie; Wang, Jianming; Chen, Qingliang; Sun, Yukuan; Li, Xiuyan; Duan, Xiaojie; Cui, Ziqiang; Wang, Huaxiang
2016-11-01
Electrical impedance tomography (EIT) reconstruction is a nonlinear and ill-posed problem. Exact reconstruction of an EIT image inverts a high-dimensional mathematical model to calculate the conductivity field, and this computational complexity reduces the achievable frame rate, which is considered a major advantage of EIT imaging. The single-step method, state estimation method, and projection method have commonly been used to accelerate the reconstruction process; the basic principle of these methods is to reduce computational complexity. However, maintaining high spatial resolution at modest computational cost is still challenging, especially for complex conductivity distributions. This study proposes an approach to accelerate EIT image reconstruction based on compressive sensing (CS) theory, termed the CSEIT method. CSEIT reduces the sampling rate by minimizing redundancy in the measurements, so that detailed information is not lost from the reconstruction. To obtain a sparse solution, which is the prior condition for signal recovery required by CS theory, a novel image reconstruction algorithm based on patch-based sparse representation is proposed. By applying the CSEIT framework, the data acquisition time, or sampling rate, is reduced by more than a factor of two, while the accuracy of reconstruction is significantly improved.
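The sparse-recovery step that CS-based reconstructions rely on can be sketched with plain ISTA (iterative soft-thresholding) on a toy underdetermined system. The random sensing matrix, problem sizes, and regularization weight below are illustrative assumptions, not the patch-based dictionary the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32                            # signal length, number of measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)  # 4-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                           # undersampled measurements

def soft(v, t):                          # soft-thresholding: prox of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
lam = 1e-3                               # l1 weight (illustrative)
x = np.zeros(n)
for _ in range(3000):                    # ISTA: gradient step, then shrinkage
    x = soft(x - step * (A.T @ (A @ x - y)), step * lam)
```

Despite having only 32 measurements for 64 unknowns, the l1 prior recovers the 4-sparse signal, which is the same redundancy-reduction argument CSEIT makes for its measurement scheme.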
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun
2016-04-15
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves an O(1/k²) convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
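The momentum step that distinguishes FISTA from plain proximal-gradient descent is compact enough to show directly. This is standard Beck-Teboulle FISTA on a toy least-squares/l1 problem standing in for the CBCT data-fidelity term; the paper's OS-SART subproblem and weighted proximal operators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 80)) / np.sqrt(40)  # toy forward operator
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]           # sparse ground truth
b = A @ x_true

def soft(v, t):                                  # prox of the l1 regularizer
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 1e-3
x = np.zeros(80)
z, t = x.copy(), 1.0
for _ in range(1000):
    x_next = soft(z - step * (A.T @ (A @ z - b)), step * lam)  # proximal gradient step
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0          # Nesterov momentum schedule
    z = x_next + ((t - 1.0) / t_next) * (x_next - x)           # extrapolation (the "fast" in FISTA)
    x, t = x_next, t_next
```

Dropping the extrapolation line (i.e., always taking z = x_next) recovers ISTA with its slower O(1/k) rate; the paper's contribution is to further replace the gradient step itself with an OS-SART solve.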
Accelerating Image Reconstruction in Dual-Head PET System by GPU and Symmetry Properties
Chou, Cheng-Ying; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu
2012-01-01
Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), the NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach integrates the symmetry properties of the imaging system with the features of the GPU architecture, including block/warp/thread assignment and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithms were implemented in both CPU-based and GPU-based codes, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus can largely expand the applicability of the dual-head PET system. PMID:23300527
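With all the data in a single subset, the OSEM update the authors accelerate on the GPU reduces to the classic MLEM multiplicative step. A serial NumPy sketch on a toy system matrix (nothing like their 224-million-LOR geometry) looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((30, 10))              # toy system matrix: 30 LORs x 10 voxels
x_true = rng.random(10) + 0.5
y = A @ x_true                        # noiseless projection data

sens = A.T @ np.ones(30)              # sensitivity image, A^T 1
x = np.ones(10)                       # uniform initial estimate
for _ in range(2000):                 # MLEM: forward project, ratio, back project
    x = x / sens * (A.T @ (y / (A @ x)))
# OSEM applies the same update per ordered subset of LORs, giving roughly
# (number of subsets)-fold faster early convergence.
```

The forward projection A @ x and backprojection A.T @ (...) dominate the cost at realistic sizes, which is why the paper maps exactly those two operators onto GPU threads.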
Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng
2012-04-01
As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which is, on the whole, completed without consideration of the underlying physiology. In this paper, we propose a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both the statistical model and the physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints into a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments with Zubal thorax phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application.
Acceleration of iterative image reconstruction for x-ray imaging for security applications
NASA Astrophysics Data System (ADS)
Degirmenci, Soysal; Politte, David G.; Bosch, Carl; Tricha, Nawfel; O'Sullivan, Joseph A.
2015-03-01
Three-dimensional image reconstruction for scanning baggage in security applications is becoming increasingly important. Compared to medical x-ray imaging, security imaging systems must be designed for a greater variety of objects. There is wide variation in attenuation, and nearly every bag scanned has metal present, potentially yielding significant artifacts. Statistical iterative reconstruction algorithms are known to reduce metal artifacts and yield quantitatively more accurate estimates of attenuation than linear methods. For iterative image reconstruction algorithms to be deployed at security checkpoints, the images must be quantitatively accurate and the convergence speed must be increased dramatically. There are many approaches for increasing convergence; two approaches are described in detail in this paper. The first approach includes a scheduled change in the number of ordered subsets over iterations and a reformulation of convergent ordered subsets that was originally proposed by Ahn, Fessler et al. The second approach is based on varying the multiplication factor in front of the additive step in the alternating minimization (AM) algorithm, resulting in more aggressive updates across iterations. Each approach is implemented on real data from a SureScan™ x1000 Explosive Detection System and compared to straightforward implementations of the alternating minimization algorithm of O'Sullivan and Benac with a Huber-type edge-preserving penalty, originally proposed by Lange.
Basha, Tamer A; Akçakaya, Mehmet; Goddu, Beth; Berg, Sophie; Nezafat, Reza
2015-01-01
The aim of this study was to implement and evaluate an accelerated three-dimensional (3D) cine phase contrast MRI sequence by combining a randomly sampled 3D k-space acquisition sequence with an echo planar imaging (EPI) readout. An accelerated 3D cine phase contrast MRI sequence was implemented by combining an EPI readout with randomly undersampled 3D k-space data suitable for compressed sensing (CS) reconstruction. The undersampled data were then reconstructed using low-dimensional structural self-learning and thresholding (LOST). 3D phase contrast MRI was acquired in 11 healthy adults using an overall acceleration of 7 (EPI factor of 3 and CS rate of 3). For comparison, a single two-dimensional (2D) cine phase contrast scan was also performed with sensitivity encoding (SENSE) rate 2, acquired approximately at the level of the pulmonary artery bifurcation. The stroke volume and mean velocity in both the ascending and descending aorta were measured and compared between the two sequences using Bland-Altman plots. An average scan time of 3 min and 30 s, corresponding to an acceleration rate of 7, was achieved for the 3D cine phase contrast scan with one-direction flow encoding, voxel size of 2 × 2 × 3 mm³, foot-head coverage of 6 cm, and temporal resolution of 30 ms. The mean velocity and stroke volume in both the ascending and descending aorta were statistically equivalent between the proposed 3D sequence and the standard 2D cine phase contrast sequence. The combination of EPI with a randomly undersampled 3D k-space sampling sequence using LOST reconstruction allows a seven-fold reduction in scan time of 3D cine phase contrast MRI without compromising blood flow quantification.
Acceleration of iterative tomographic image reconstruction by reference-based back projection
NASA Astrophysics Data System (ADS)
Cheng, Chang-Chieh; Li, Ping-Hui; Ching, Yu-Tai
2016-03-01
The purpose of this paper is to design and implement an efficient iterative reconstruction algorithm for computed tomography. We accelerate the reconstruction speed of the algebraic reconstruction technique (ART), an iterative reconstruction method, by using the result of filtered backprojection (FBP), a widely used analytical reconstruction algorithm, as the initial guess for the first iteration and as the reference for each back projection stage. Both improvements reduce the error between the forward projection of each iteration and the measurements. Using three quantitative metrics, root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural content (SC), we show that our method reduces the number of iterations by more than half and that the quality of the result is better than that of the original ART.
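The core of ART is the Kaczmarz row-by-row update, and seeding it with an FBP image corresponds to a warm start. A minimal sketch, with a random matrix standing in for the projector and a zero vector standing in for the FBP initial guess:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 20))     # stand-in for the projection operator
x_true = rng.standard_normal(20)
b = A @ x_true                        # consistent (noise-free) measurements

def art(A, b, x0, sweeps, relax=1.0):
    """Cyclic Kaczmarz: project the estimate onto one ray equation at a time."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

x_rec = art(A, b, np.zeros(20), sweeps=200)
```

Starting from a good analytical reconstruction instead of zeros typically reaches the same residual in far fewer sweeps, which is the kind of iteration saving the paper quantifies with RMSE/PSNR/SC.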
Accelerated 3D-OSEM image reconstruction using a Beowulf PC cluster for pinhole SPECT.
Zeniya, Tsutomu; Watabe, Hiroshi; Sohlberg, Antti; Iida, Hidehiro
2007-11-01
A conventional pinhole single-photon emission computed tomography (SPECT) system with a single circular orbit has limitations associated with non-uniform spatial resolution or axial blurring. Recently, we demonstrated that three-dimensional (3D) images with uniform spatial resolution and no blurring can be obtained from complete data acquired using two circular orbits, combined with the 3D ordered subsets expectation maximization (OSEM) reconstruction method. However, a long computation time is required to obtain the reconstructed image, because 3D-OSEM is an iterative method and two-orbit acquisition doubles the size of the projection data. To reduce the long reconstruction time, we parallelized the two-orbit pinhole 3D-OSEM reconstruction process using a Beowulf personal computer (PC) cluster. The Beowulf PC cluster consists of seven PCs connected to Gbit Ethernet switches. The message passing interface (MPI) protocol was utilized for parallelizing the reconstruction process. The projection data in a subset are distributed to each PC. The partial image forward- and back-projected in each PC is transferred to all PCs. The current image estimate on each PC is updated after summing the partial images. The performance of parallelization on the PC cluster was evaluated using two independent projection data sets acquired by a pinhole SPECT system with two different circular orbits. Parallelization on the PC cluster reduced the reconstruction time as the number of PCs increased. The reconstruction time of 54 min on a single PC decreased to 10 min when six or seven PCs were used, a speed-up factor of 5.4. The image reconstructed by the PC cluster was virtually identical to that from the single PC. Parallelization of 3D-OSEM reconstruction for pinhole SPECT using a PC cluster can significantly reduce the computation time, and its implementation is simple and inexpensive.
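The cluster parallelization rests on a simple identity: the backprojection of a partitioned data set equals the sum of the partial backprojections computed independently on each PC. A NumPy sketch of that reduce step, with a toy matrix in place of the pinhole projector:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((28, 12))                      # toy projector: 28 projection bins x 12 voxels
q = rng.random(28)                            # e.g. a ratio sinogram inside one OSEM update

# Each "PC" backprojects only its share of the projection rows ...
chunks = np.array_split(np.arange(28), 7)     # 7 workers, as in the Beowulf cluster
partials = [A[rows].T @ q[rows] for rows in chunks]

# ... and the partial images are summed, mirroring the MPI exchange in the paper.
combined = np.sum(partials, axis=0)
```

Because the summation is exact, the parallel result matches the single-PC result, consistent with the paper's observation that the cluster image was virtually identical to the serial one.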
GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.
Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H
2012-09-01
Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components which can be used to perform intensive calculations. The last decade of hardware performance development shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly for highly parallelisable algorithms, and this trend is predicted to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Given the advances of the GPU over the CPU, it is timely to exploit many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration, with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using an NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest that image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC hardware.
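The core of DRR generation is a per-ray line integral of attenuation, which is what maps so naturally to one GPU thread per output pixel. A minimal axis-aligned, parallel-beam sketch (toy volume and function name; real DRR code casts divergent rays through the CT grid):

```python
import numpy as np

def drr_parallel(volume, spacing, axis=0):
    """Parallel-beam DRR along one volume axis: line-integrate the
    attenuation map and convert to transmitted intensity
    I/I0 = exp(-integral of mu dl). A GPU version evaluates the same
    per-ray math with one thread per output pixel."""
    radiological_path = volume.sum(axis=axis) * spacing   # integral per ray
    return np.exp(-radiological_path)

vol = np.zeros((16, 32, 32), dtype=np.float32)   # toy attenuation volume
vol[:, 8:24, 8:24] = 0.02                        # denser inner block
drr = drr_parallel(vol, spacing=1.0, axis=0)     # 32x32 DRR
```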
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate the new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and then real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
NASA Astrophysics Data System (ADS)
Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung
2017-03-01
Compressed-sensing (CS) techniques have been rapidly adopted in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among many variant forms, CS based on a total-variation (TV) regularization strategy shows reasonable results in cone-beam geometry. In this study, we implemented a TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Because iteratively solving the cost function is time-consuming, we took advantage of parallel computing on graphics processing units (GPUs) with compute unified device architecture (CUDA) programming to accelerate our algorithm. To benchmark the proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) schemes were also studied. The results indicated that CS produced contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest) higher than those of FBP and SART by factors of 3.91 and 1.93, respectively. The resulting human chest phantom images, including lung nodules of different diameters, also showed better visual appearance with CS. Our GPU-accelerated CS reconstruction scheme produced volumetric data up to 80 times faster than CPU programming. The total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix from 41 projection views was 216.74 seconds with the proposed GPU implementation, which approaches the clinically feasible time (~3 min). Consequently, our results demonstrate that the proposed CS method has the potential for additional dose reduction in digital tomosynthesis with reasonable image quality in a short time.
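The TV regularizer at the heart of such CS reconstructions fits in a few lines; below is a smoothed isotropic TV with periodic boundaries together with its exact gradient (illustrative only; the paper's solver also includes the data-fidelity term and runs on the GPU):

```python
import numpy as np

def tv_and_grad(u, eps=1e-2):
    """Smoothed isotropic total variation (periodic boundaries) and its
    exact gradient -- the regularization term minimized in TV-based CS."""
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(dx**2 + dy**2 + eps)            # per-pixel gradient magnitude
    px, py = dx / mag, dy / mag
    grad = -(px + py) + np.roll(px, 1, axis=1) + np.roll(py, 1, axis=0)
    return mag.sum(), grad

rng = np.random.default_rng(1)
img = rng.random((32, 32))                        # noisy toy image
tv0, _ = tv_and_grad(img)
for _ in range(30):
    _, g = tv_and_grad(img)
    img -= 0.02 * g                               # pure TV descent (data term omitted)
tv1, _ = tv_and_grad(img)
```

Descent on the TV term alone smooths the noise; a full CS reconstruction alternates such steps with data-consistency updates.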
Combining ordered subsets and momentum for accelerated X-ray CT image reconstruction.
Kim, Donghwan; Ramani, Sathish; Fessler, Jeffrey A
2015-01-01
Statistical X-ray computed tomography (CT) reconstruction can improve image quality from reduced dose scans, but requires very long computation time. Ordered subsets (OS) methods have been widely used for research in X-ray CT statistical image reconstruction (and are used in clinical PET and SPECT reconstruction). In particular, OS methods based on separable quadratic surrogates (OS-SQS) are massively parallelizable and are well suited to modern computing architectures, but the number of iterations required for convergence should be reduced for better practical use. This paper introduces OS-SQS-momentum algorithms that combine Nesterov's momentum techniques with OS-SQS methods, greatly improving convergence speed in early iterations. If the number of subsets is too large, the OS-SQS-momentum methods can be unstable, so we propose diminishing step sizes that stabilize the method while preserving the very fast convergence behavior. Experiments with simulated and real 3D CT scan data illustrate the performance of the proposed algorithms.
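The momentum ingredient can be illustrated on a plain least-squares cost (no OS splitting and no CT system matrix here; a generic Nesterov/FISTA-style update of the kind the paper combines with OS-SQS):

```python
import numpy as np

def nesterov_ls(A, y, n_iter=400):
    """Nesterov-accelerated gradient descent on f(x) = 0.5*||Ax - y||^2,
    using the classic t_k momentum sequence."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = z - (A.T @ (A @ z - y)) / L       # gradient step at momentum point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))      # toy overdetermined system
x_true = rng.standard_normal(5)
x_hat = nesterov_ls(A, A @ x_true)
```

The diminishing step sizes proposed in the paper would replace the fixed 1/L step when too many subsets make the accelerated iteration unstable.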
An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction
Mundy, Daniel W.; Herman, Michael G.
2011-01-15
Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
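A per-slice version of the thresholding idea can be sketched as follows, with an angular tolerance standing in for the paper's distance-proportional solution matrix (toy geometry; the function name and sizes are illustrative). For the apex-on-axis geometry below, the cone meets the plane z = 0 in a circle of radius tan(30°) ≈ 0.58:

```python
import numpy as np

def cone_slice_mask(apex, axis, half_angle, grid_x, grid_y, z0, tol):
    """Binary intersection of a Compton cone with the plane z = z0:
    per pixel, compare the angle between (pixel - apex) and the cone
    axis to the Compton half-angle, and keep pixels within tol."""
    X, Y = np.meshgrid(grid_x, grid_y)
    V = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z0 - apex[2])])
    Vn = V / np.linalg.norm(V, axis=0)            # unit vectors apex -> pixel
    cos_t = (Vn * np.asarray(axis)[:, None, None]).sum(axis=0)
    ang = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return np.abs(ang - half_angle) < tol         # thresholded "solution matrix"

g = np.linspace(-1, 1, 64)
mask = cone_slice_mask(apex=(0.0, 0.0, -1.0), axis=(0.0, 0.0, 1.0),
                       half_angle=np.deg2rad(30), grid_x=g, grid_y=g,
                       z0=0.0, tol=np.deg2rad(2))
```

Repeating this over all slices and all detector events, and summing the masks, yields the back-projection image the filtered or iterative algorithms start from.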
Iterative image reconstruction with a single-board computer employing hardware acceleration
Mayans, R.; Rogers, W.L.; Clinthorne, N.H.; Atkins, D.; Chin, I.; Hanao, J.
1984-01-01
Iterative reconstruction of tomographic images offers much greater flexibility than filtered backprojection: finite ray width, spatially variant resolution, nonstandard ray geometry, missing angular samples, and irregular attenuation maps are all readily accommodated. In addition, various solution strategies such as least squares or maximum entropy can be implemented. The principal difficulty is that either a large computer must be used or the computation time is excessive. The authors have developed an image reconstructor based on the Intel 86/12 single-board computer. The design strategy was to first implement a family of reconstruction algorithms in PL/M-86 and identify the slowest common computation segments. Next, double precision arithmetic was recoded and extended addressing calls were replaced with in-line code. Finally, the inner loop was shortened by factoring the computation. Computation times for these versions were in the ratio 1:0.75:0.5. Using software only, a single iteration of the ART algorithm for finite beam geometry involving 300k pixel weights could be accomplished in 70 seconds, with high quality images obtained in three iterations. In addition, the authors examined Multibus-compatible hardware additions to further speed the computation. The simplest of these schemes, which performs only the forward projection, has been constructed and is being tested. With this addition, computation time is expected to be reduced by an additional 40%. With this approach the authors have combined flexible choice of algorithm with reasonable image reconstruction time.
Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao
2014-01-01
An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain reconstruction quality. A three-dimensional dictionary trains its atoms on blocks of data, which exploits the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first utilize NVIDIA's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations target the dictionary learning and image updating parts, in particular the orthogonal matching pursuit (OMP) and k-singular value decomposition (K-SVD) algorithms. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324 times over the CPU-only code is achieved when the number of MRI slices is 24.
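The OMP kernel that such CUDA code parallelizes across image blocks fits in a few lines of NumPy (toy dictionary; a GPU version runs many of these sparse-coding problems concurrently):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k dictionary atoms by
    correlation with the residual, refitting the coefficients by least
    squares after each pick."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef          # residual after refit
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x_true = np.zeros(64)
x_true[[5, 40]] = [1.5, -2.0]         # 2-sparse code to recover
x_hat = omp(D, D @ x_true, k=2)
```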
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose, but their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU clusters, GPUs, FPGAs) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders of magnitude more cost-effective: users pay only a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage, etc.) and completely avoid purchase, maintenance, and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost, but communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
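The projector split maps naturally onto the map/reduce primitives; in miniature (toy matrices, Python's map/reduce standing in for the cloud framework):

```python
import numpy as np
from functools import reduce

def backproject_mapreduce(A_chunks, sino_chunks):
    """Back-projection phrased as MapReduce: map emits one partial image
    per sinogram chunk, reduce sums the partials -- the same split a
    cloud implementation uses for the forward/back projector pair."""
    partials = map(lambda pair: pair[0].T @ pair[1],
                   zip(A_chunks, sino_chunks))    # map: chunk -> partial image
    return reduce(np.add, partials)               # reduce: sum the partials

rng = np.random.default_rng(2)
A = rng.random((30, 10))                 # toy system matrix: 30 rays, 10 voxels
y = rng.random(30)                       # toy sinogram
chunks = np.array_split(np.arange(30), 5)
img = backproject_mapreduce([A[c] for c in chunks], [y[c] for c in chunks])
```

Summing the partials reproduces the full back-projection A.T @ y exactly; only where the chunks are computed changes.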
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
An FDG-PET study also revealed that, for the same noise level, higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of R^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
Kim, Donghwan; Pal, Debashish; Thibault, Jean-Baptiste; Fessler, Jeffrey A.
2013-01-01
Statistical image reconstruction algorithms in X-ray CT provide improved image quality for reduced dose levels but require substantial computation time. Iterative algorithms that converge in few iterations and that are amenable to massive parallelization are favorable in multiprocessor implementations. The separable quadratic surrogate (SQS) algorithm is desirable as it is simple and updates all voxels simultaneously. However, the standard SQS algorithm requires many iterations to converge. This paper proposes an extension of the SQS algorithm that leads to spatially non-uniform updates. The non-uniform (NU) SQS encourages larger step sizes for the voxels that are expected to change more between the current and the final image, accelerating convergence, while the derivation of NU-SQS guarantees monotonic descent. Ordered subsets (OS) algorithms can also accelerate SQS, provided suitable “subset balance” conditions hold. These conditions can fail in 3D helical cone-beam CT due to incomplete sampling outside the axial region-of-interest (ROI). This paper proposes a modified OS algorithm that is more stable outside the ROI in helical CT. We use CT scans to demonstrate that the proposed NU-OS-SQS algorithm handles the helical geometry better than the conventional OS methods and “converges” in less than half the time of ordinary OS-SQS. PMID:23751959
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Li, Changqing
2016-01-01
Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm combined with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors, and the speed gain from a larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve the optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor, and further combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method accelerates conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying TV minimization to the power acceleration scheme further improves the image quality while preserving the fast convergence rate. PMID:27073853
Accelerated augmented Lagrangian method for few-view CT reconstruction
NASA Astrophysics Data System (ADS)
Wu, Junfeng; Mou, Xuanqin
2012-03-01
Iterative reconstruction algorithms with total variation (TV) regularization have recently shown tremendous power in image reconstruction from few-view projection data, but they are much more computationally demanding. In this paper, we propose an accelerated augmented Lagrangian method (ALM) for few-view CT reconstruction with total variation regularization. Experimental phantom results demonstrate that the proposed method not only reconstructs high quality images from few-view projection data but also converges quickly to the optimal solution.
Ramos, M; Ferrer, S; Verdu, G
2005-01-01
Mammography is a non-invasive technique used for the detection of breast lesions. The use of this technique in a breast screening program requires continuous quality control testing of mammography units to ensure a minimum absorbed glandular dose without degrading image quality. Digital mammography has been progressively introduced in screening centers following the recent evolution of photostimulable phosphor detectors. The aim of this work is the validation of a methodology for reconstructing digital images of a polymethyl-methacrylate (PMMA) phantom (P01 model) using pure Monte Carlo techniques. A reference image was acquired for this phantom under automatic exposure control (AEC) mode (28 kV and 14 mAs). Several variance reduction techniques (VRT) were applied to improve the efficiency of the simulations, defined as the number of particles reaching the imaging system per starting particle. All images were stored in DICOM format. The results prove that the signal-to-noise ratio (SNR) of the reconstructed images increased with the use of the VRT, showing similar values between the different tallies employed. In conclusion, these images could be used during quality control testing to reveal any deviation of the exposure parameters from the desired reference level.
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-21
As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate O(1/k^2). In practice, the CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
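The FBP-inspired weighting can be sketched for a single 1-D projection; the sqrt(|freq|) weight below is a stand-in chosen so that the weighted normal equations mimic the ramp filter (the paper derives the exact weights per scan geometry):

```python
import numpy as np

def fourier_ramp_weight(proj):
    """Weight one projection in Fourier space by sqrt(|freq|), a sketch of
    the FBP-inspired scaling: the weighted least-squares data term then
    behaves like ramp-filtered data, improving conditioning."""
    n = proj.shape[-1]
    freq = np.fft.fftfreq(n)                      # cycles per sample
    w = np.sqrt(np.abs(freq))                     # DC gets weight zero
    return np.fft.ifft(np.fft.fft(proj) * w).real
```

Because the DC weight is zero, a constant projection is mapped to zero, just as a ramp filter removes the DC component.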
Accelerating reconstruction of reference digital tomosynthesis using graphics hardware.
Yan, Hui; Ren, Lei; Godfrey, Devon J; Yin, Fang-Fang
2007-10-01
The successful implementation of digital tomosynthesis (DTS) for on-board image guided radiation therapy (IGRT) requires fast DTS image reconstruction. Both target and reference DTS image sets are required to support an image registration application for IGRT. Target images are usually DTS image sets reconstructed from on-board projections, which can be accomplished quickly using the conventional filtered backprojection algorithm. Reference images are DTS image sets reconstructed from digitally reconstructed radiographs (DRRs) previously generated from conventional planning CT data. Generating a set of DRRs from planning CT is relatively slow using the conventional ray-casting algorithm. In order to facilitate DTS reconstruction within a clinically acceptable period of time, we implemented a high performance DRR reconstruction algorithm on a graphics processing unit of commercial PC graphics hardware. The performance of this new algorithm was evaluated and compared with that which is achieved using the conventional software-based ray-casting algorithm. DTS images were reconstructed from DRRs previously generated by both hardware and software algorithms. On average, the DRR reconstruction efficiency using the hardware method is improved by a factor of 67 over the software method. The image quality of the DRRs was comparable to those generated using the software-based ray-casting algorithm. Accelerated DRR reconstruction significantly reduces the overall time required to produce a set of reference DTS images from planning CT and makes this technique clinically practical for target localization for radiation therapy.
Sparsity regularized image reconstruction
NASA Astrophysics Data System (ADS)
Hero, Alfred
2015-03-01
Most image reconstruction problems are under-determined: there are far more pixels to be resolved than there are measurements available. This means that the image space has more degrees of freedom than the measurement space. To make headway in such under-determined image reconstruction problems one must either incorporate domain knowledge or regularize. Domain knowledge restricts the size of the image space while regularization introduces bias, e.g., by forcing the reconstructed image to be smooth or have limited support. Both approaches are equivalent and can be interpreted as making the image sparse in some domain. This paper will provide a selective overview of some of the principal methods of sparsity regularized image reconstruction.
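A minimal instance of sparsity regularization is the l1-penalized least-squares problem solved by ISTA; the soft-threshold step is where the sparsity prior enters (toy sizes, illustrative only):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam*||.||_1 -- the workhorse of sparsity-
    regularized reconstruction."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, y, lam, n_iter=500):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1: the l1 term makes an
    under-determined reconstruction well-posed by favoring sparse images."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 50))     # under-determined: 20 measurements, 50 pixels
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

With far fewer measurements than unknowns, the unregularized problem has infinitely many solutions; the l1 penalty selects a sparse one.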
Nagarajan, Rajakumar; Iqbal, Zohaib; Burns, Brian; Wilson, Neil E; Sarma, Manoj K; Margolis, Daniel A; Reiter, Robert E; Raman, Steven S; Thomas, M Albert
2015-11-01
The overlap of metabolites is a major limitation in one-dimensional (1D) spectral-based single-voxel MRS and multivoxel-based MRSI. By combining echo planar spectroscopic imaging (EPSI) with a two-dimensional (2D) J-resolved spectroscopic (JPRESS) sequence, 2D spectra can be recorded in multiple locations in a single slice of prostate using four-dimensional (4D) echo planar J-resolved spectroscopic imaging (EP-JRESI). The goal of the present work was to validate two different non-linear reconstruction methods independently using compressed sensing-based 4D EP-JRESI in prostate cancer (PCa): maximum entropy (MaxEnt) and total variation (TV). Twenty-two patients with PCa with a mean age of 63.8 years (range, 46-79 years) were investigated in this study. A 4D non-uniformly undersampled (NUS) EP-JRESI sequence was implemented on a Siemens 3-T MRI scanner. The NUS data were reconstructed using two non-linear reconstruction methods, namely MaxEnt and TV. Using both TV and MaxEnt reconstruction methods, the following observations were made in cancerous compared with non-cancerous locations: (i) higher mean (choline + creatine)/citrate metabolite ratios; (ii) increased levels of (choline + creatine)/spermine and (choline + creatine)/myo-inositol; and (iii) decreased levels of (choline + creatine)/(glutamine + glutamate). We have shown that it is possible to accelerate the 4D EP-JRESI sequence by four times and that the data can be reliably reconstructed using the TV and MaxEnt methods. The total acquisition duration was less than 13 min and we were able to detect and quantify several metabolites.
Efficient holoscopy image reconstruction.
Hillmann, Dierck; Franke, Gesa; Lührs, Christian; Koch, Peter; Hüttmann, Gereon
2012-09-10
Holoscopy is a tomographic imaging technique that combines digital holography and Fourier-domain optical coherence tomography (OCT) to gain tomograms with diffraction limited resolution and uniform sensitivity over several Rayleigh lengths. The lateral image information is calculated from the spatial interference pattern formed by light scattered from the sample and a reference beam. The depth information is obtained from the spectral dependence of the recorded digital holograms. Numerous digital holograms are acquired at different wavelengths and then reconstructed for a common plane in the sample; afterwards, standard Fourier-domain OCT signal processing achieves depth discrimination. Here we describe and demonstrate an optimized data reconstruction algorithm for holoscopy which is related to the inverse scattering reconstruction of wavelength-scanned full-field optical coherence tomography data. Instead of calculating a regularized pseudoinverse of the forward operator, the recorded optical fields are propagated back into the sample volume. In one processing step the high frequency components of the scattering potential are reconstructed on a non-equidistant grid in three-dimensional spatial frequency space; a Fourier transform then yields an OCT-equivalent image of the object structure. In contrast to the original holoscopy reconstruction with backpropagation and Fourier transform with respect to the wavenumber, the required processing time depends neither on the confocal parameter nor on the depth of the volume. For an imaging NA of 0.14, the processing time was decreased by a factor of 15; at higher NA the gain in reconstruction speed may reach two orders of magnitude.
Budjan, Johannes; Haubenreisser, Holger; Henzler, Thomas; Sudarski, Sonja; Schmidt, Michaela; Doesch, Christina; Akin, Ibrahim; Borggrefe, Martin; Meßner, Nadja M.; Schoenberg, Stefan O.; Attenberger, Ulrike I.; Papavassiliu, Theano
2016-01-01
To generate a patient-friendly, time-efficient cardiac MRI examination protocol, a highly accelerated real-time CINE MR sequence (SSIR) was acquired in the idle time between contrast injection and the late gadolinium enhancement phase. 20 consecutive patients underwent a cardiac MRI examination including a multi-breath-hold sequence as gold standard (Ref) as well as SSIR sequences with (SSIR-BH) and without breath-hold (SSIR-nonBH). SSIR sequences were acquired 4 minutes after gadolinium injection. Right- (RV) and left-ventricular (LV) volumetric functional parameters were evaluated and compared between Ref and the SSIR sequences. Despite reduced contrast between myocardium and intra-ventricular blood, volumetric as well as regional wall movement assessment revealed high agreement between both SSIR sequences and Ref. Excellent correlation and narrow limits of agreement were found for both SSIR-BH and SSIR-nonBH when compared to Ref for both LV (mean LV ejection fraction [EF] Ref: 52.8 ± 12.6%, SSIR-BH 52.3 ± 12.9%, SSIR-nonBH 52.5 ± 12.6%) and RV (mean RV EF Ref: 52.7 ± 9.4%, SSIR-BH 52.0 ± 8.1%, SSIR-nonBH 52.2 ± 9.3%) analyses. Even when acquired in the idle time between gadolinium injection and LGE acquisition, the highly accelerated SSIR sequence delivers accurate volumetric and regional wall movement information. It thus seems ideal for very time-efficient and robust cardiac MR imaging protocols. PMID:27905543
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporarily appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
Cheng, Lishui; Hobbs, Robert F; Sgouros, George; Frey, Eric C
2014-11-01
Three-dimensional (3D) dosimetry has the potential to provide better prediction of response of normal tissues and tumors and is based on 3D estimates of the activity distribution in the patient obtained from emission tomography. Dose-volume histograms (DVHs) are an important summary measure of 3D dosimetry and a widely used tool for treatment planning in radiation therapy. Accurate estimates of the radioactivity distribution in space and time are desirable for accurate 3D dosimetry. The purpose of this work was to develop and demonstrate the potential of penalized SPECT image reconstruction methods to improve DVHs estimates obtained from 3D dosimetry methods. The authors developed penalized image reconstruction methods, using maximum a posteriori (MAP) formalism, which intrinsically incorporate regularization in order to control noise and, unlike linear filters, are designed to retain sharp edges. Two priors were studied: one is a 3D hyperbolic prior, termed single-time MAP (STMAP), and the second is a 4D hyperbolic prior, termed cross-time MAP (CTMAP), using both the spatial and temporal information to control noise. The CTMAP method assumed perfect registration between the estimated activity distributions and projection datasets from the different time points. Accelerated and convergent algorithms were derived and implemented. A modified NURBS-based cardiac-torso phantom with a multicompartment kidney model and organ activities and parameters derived from clinical studies were used in a Monte Carlo simulation study to evaluate the methods. Cumulative dose-rate volume histograms (CDRVHs) and cumulative DVHs (CDVHs) obtained from the phantom and from SPECT images reconstructed with both the penalized algorithms and OS-EM were calculated and compared both qualitatively and quantitatively. The STMAP method was applied to patient data and CDRVHs obtained with STMAP and OS-EM were compared qualitatively. The results showed that the penalized algorithms substantially
Twenty-fold acceleration of 3D projection reconstruction MPI
Goodwill, Patrick W.; Saritas, Emine Ulku; Zheng, Bo; Lu, Kuan; Conolly, Steven M.
2014-01-01
We experimentally demonstrate a 20-fold improvement in acquisition time in projection reconstruction (PR) magnetic particle imaging (MPI) relative to the state-of-the-art PR MPI imaging results. We achieve this acceleration in our imaging system by introducing an additional Helmholtz electromagnet pair, which creates a slow shift (focus) field. Because of magnetostimulation limits in humans, we show that scan time with three-dimensional (3D) PR MPI is theoretically within the same order of magnitude as 3D MPI with a field free point; however, PR MPI has an order of magnitude signal-to-noise ratio gain. PMID:23940058
Sparsity-Promoting Calibration for GRAPPA Accelerated Parallel MRI Reconstruction
Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K
2013-01-01
The amount of calibration data needed to produce images of adequate quality can prevent auto-calibrating parallel imaging reconstruction methods like Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) from achieving a high total acceleration factor. To improve the quality of calibration when the number of auto-calibration signal (ACS) lines is restricted, we propose a sparsity-promoting regularized calibration method that finds a GRAPPA kernel consistent with the ACS fit equations that yields jointly sparse reconstructed coil channel images. Several experiments evaluate the performance of the proposed method relative to un-regularized and existing regularized calibration methods for both low-quality and underdetermined fits from the ACS lines. These experiments demonstrate that the proposed method, like other regularization methods, is capable of mitigating noise amplification, and in addition, the proposed method is particularly effective at minimizing coherent aliasing artifacts caused by poor kernel calibration in real data. Using the proposed method, we can increase the total achievable acceleration while reducing degradation of the reconstructed image better than existing regularized calibration methods. PMID:23584259
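The calibration step described above is at heart a (possibly underdetermined) linear fit for the kernel weights. The toy numpy sketch below contrasts a plain least-squares fit with a simple l2 (ridge) penalty standing in for the paper's joint-sparsity regularizer; the matrix sizes, noise level, and weight lam are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Calibration as a linear fit: solve the "ACS equations" S w = t for kernel
# weights w. With too few equations the fit is underdetermined; a ridge (l2)
# penalty stands in here for the paper's joint-sparsity prior.
# All sizes, the noise level, and lam are illustrative assumptions.
rng = np.random.default_rng(3)
n_eq, n_w = 8, 12                         # underdetermined fit (short ACS)
S = rng.standard_normal((n_eq, n_w))      # source-point calibration matrix
w_ref = rng.standard_normal(n_w)
t = S @ w_ref + 0.1 * rng.standard_normal(n_eq)   # noisy target points

w_ls = np.linalg.lstsq(S, t, rcond=None)[0]       # minimum-norm LS solution
lam = 1.0
w_ridge = np.linalg.solve(S.T @ S + lam * np.eye(n_w), S.T @ t)

print(float(np.linalg.norm(w_ls)), float(np.linalg.norm(w_ridge)))
```

The regularized kernel has smaller norm, which is the mechanism by which regularized calibration limits noise amplification in the reconstructed channel images.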
Exercises in PET Image Reconstruction
NASA Astrophysics Data System (ADS)
Nix, Oliver
These exercises are complementary to the theoretical lectures about positron emission tomography (PET) image reconstruction. They aim at providing some hands-on experience in PET image reconstruction and focus on demonstrating the different data preprocessing steps and reconstruction algorithms needed to obtain high-quality PET images. Normalisation as well as geometric, attenuation and scatter corrections are introduced. To explain the necessity of these corrections, some basics about PET scanner hardware, data acquisition and organisation are reviewed. During the course the students use a software application based on the STIR (software for tomographic image reconstruction) library 1,2 which allows them to dynamically select or deselect corrections and reconstruction methods as well as to modify their most important parameters. Following the guided tutorial, the students get an impression of the effect the individual data precorrections have on image quality and what happens if they are forgotten. Several data sets in sinogram format are provided, such as line source data, Jaszczak phantom data sets with high and low statistics and NEMA whole body phantom data. The two most frequently used reconstruction algorithms in PET image reconstruction, filtered back projection (FBP) and the iterative OSEM (ordered subset expectation maximization) approach, are used to reconstruct images. The exercises should help the students gain an understanding of the causes of inferior image quality and artefacts and of how to improve quality by a clever choice of reconstruction parameters.
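With a single subset, the OSEM algorithm used in such exercises reduces to plain ML-EM, whose multiplicative update is easy to sketch on a toy system. The system matrix, activity values, and iteration budget below are illustrative assumptions, not STIR code.

```python
import numpy as np

# Toy ML-EM (OSEM with one subset) for emission tomography.
# A is a hypothetical system matrix; sizes and counts are illustrative.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(12, 4))   # 12 LORs, 4 image pixels
x_true = np.array([1.0, 4.0, 2.0, 0.5])
y = A @ x_true                            # noise-free data for clarity

x = np.ones(4)                            # uniform initial estimate
sens = A.sum(axis=0)                      # sensitivity image, A^T 1
for _ in range(2000):
    ratio = y / (A @ x)                   # measured / forward-projected data
    x *= (A.T @ ratio) / sens             # multiplicative EM update

print(np.round(x, 3))
```

The update preserves positivity and total counts by construction; OSEM accelerates it by cycling the same update over subsets of the projection data.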
Image Contrast in Holographic Reconstructions
ERIC Educational Resources Information Center
Russell, B. R.
1969-01-01
The fundamental concepts of holography are explained using elementary wave ideas. Discusses wavefront reconstruction and contrast in holographic images. The consequence of recording only the intensity at a given surface and using an oblique reference wave is shown to be an incomplete reconstruction, resulting in an image of low contrast. (LC)
[The study of associated reconstruction using MV linear accelerator and cone-beam CT].
Liu, Zun-gang; Zhao, Jun; Zhuang, Tian-ge
2006-07-01
In this paper, we propose a new scan mode and image reconstruction method that combines data from both the linear accelerator and the cone-beam CT to reconstruct the volume with a limited rotation angle and a low sampling rate. The classical filtered backprojection method and the iterative method are utilized to reconstruct the volume. The reconstruction results of the two methods are compared with each other, and a relevant analysis is given.
GPU-accelerated SART reconstruction using the CUDA programming environment
NASA Astrophysics Data System (ADS)
Keck, Benjamin; Hofmann, Hannes; Scherl, Holger; Kowarschik, Markus; Hornegger, Joachim
2009-02-01
The Compute Unified Device Architecture (CUDA) introduced in 2007 by NVIDIA is a recent programming model making use of the unified shader design of the most recent graphics processing units (GPUs). The programming interface allows algorithm implementation using standard C language along with a few extensions without any knowledge about graphics programming using OpenGL, DirectX, and shading languages. We apply this novel technology to the Simultaneous Algebraic Reconstruction Technique (SART), which is an advanced iterative image reconstruction method in cone-beam CT. So far, the computational complexity of this algorithm has prohibited its use in most medical applications. However, since today's GPUs provide a high level of parallelism and are highly cost-efficient processors, they are predestined for performing the iterative reconstruction according to medical requirements. In this paper we present an efficient implementation of the most time-consuming parts of the iterative reconstruction algorithm: forward- and back-projection. We also explain the strategy required to parallelize the algorithm for the CUDA 1.1 and CUDA 2.0 architectures. Furthermore, our implementation introduces an acceleration technique for the reconstruction compared to a standard SART implementation on the GPU using CUDA. Thus, we present an implementation that can be used in a time-critical clinical environment. Finally, we compare our results to current applications on multi-core workstations, with respect to both reconstruction speed and (dis-)advantages. Our implementation exhibits a speed-up of more than 64 compared to a state-of-the-art CPU using hardware-accelerated texture interpolation.
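The SART update the paper parallelizes is, per view, a relaxed, doubly normalized backprojection of ray residuals. A serial numpy sketch of that update, collapsing all rays into one block for brevity; the system matrix, relaxation factor, and sizes are illustrative assumptions, not the authors' CUDA implementation.

```python
import numpy as np

# One-block SART: relaxed backprojection of ray-sum-normalized residuals.
# A, lam, and problem sizes are toy assumptions for illustration.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(20, 5))   # 20 rays, 5 pixels
x_true = rng.uniform(0.0, 2.0, size=5)
y = A @ x_true                            # consistent projection data

x = np.zeros(5)
row_sums = A.sum(axis=1)                  # total weight along each ray
col_sums = A.sum(axis=0)                  # total weight hitting each pixel
lam = 1.0                                 # relaxation factor, 0 < lam < 2
for _ in range(3000):
    residual = (y - A @ x) / row_sums     # length-normalized ray residuals
    x += lam * (A.T @ residual) / col_sums

print(np.round(x, 3))
```

The forward projection (A @ x) and backprojection (A.T @ residual) in this loop are exactly the two operations the paper maps onto GPU kernels.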
Accelerated nonlinear multichannel ultrasonic tomographic imaging using target sparseness.
Chengdong Dong; Yuanwei Jin; Enyue Lu
2014-03-01
This paper presents an accelerated iterative Landweber method for nonlinear ultrasonic tomographic imaging in a multiple-input multiple-output (MIMO) configuration under a sparsity constraint on the image. The proposed method introduces emerging MIMO signal processing techniques and target-sparseness constraints into the traditional computational imaging field, thus significantly improving the speed of image reconstruction compared with the conventional imaging method while producing high-quality images. Using numerical examples, we demonstrate that incorporating prior knowledge about the imaging field, such as target sparseness, significantly accelerates the convergence of the iterative imaging method, which provides considerable benefits to real-time tomographic imaging applications.
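A sparsity-constrained Landweber iteration amounts to a gradient step on the data-fit term followed by soft-thresholding (the ISTA form). The sketch below substitutes a linear forward model for the paper's nonlinear MIMO ultrasound model; the matrix, sparsity level, and l1 weight mu are illustrative assumptions.

```python
import numpy as np

# ISTA: Landweber gradient step + soft-thresholding (the sparsity constraint).
# The linear A below stands in for the nonlinear MIMO ultrasound model,
# purely to illustrate the effect of a target-sparseness prior.
def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))         # underdetermined measurement model
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [3.0, -2.0, 1.5]    # a sparse "target scene"
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L for the gradient of ||y - Ax||^2
mu = 0.1                                  # l1 weight (illustrative)
x = np.zeros(60)
for _ in range(5000):
    x = soft(x + step * (A.T @ (y - A @ x)), step * mu)

print(np.flatnonzero(np.abs(x) > 0.5))
```

Without the thresholding step this is plain Landweber iteration, which cannot resolve an underdetermined system; the sparsity prior is what makes recovery from few measurements possible.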
Craniofacial reconstruction - series (image)
Patients requiring craniofacial reconstruction have: birth defects (such as hypertelorism, Crouzon's disease, or Apert's syndrome); injuries to the head, face, or jaws (maxillofacial); tumors; or deformities caused by treatment of tumors.
Li, Shu; Chan, Cheong; Stockmann, Jason P; Tagare, Hemant; Adluru, Ganesh; Tam, Leo K; Galiana, Gigi; Constable, R Todd; Kozerke, Sebastian; Peters, Dana C
2015-04-01
To investigate algebraic reconstruction technique (ART) for parallel imaging reconstruction of radial data, applied to accelerated cardiac cine. A graphics processing unit (GPU)-accelerated ART reconstruction was implemented and applied to simulations, point spread functions and in 12 subjects imaged with radial cardiac cine acquisitions. Cine images were reconstructed with radial ART at multiple undersampling levels (192 Nr × Np = 96 to 16). Images were qualitatively and quantitatively analyzed for sharpness and artifacts, and compared to filtered back-projection and conjugate gradient SENSE. Radial ART provided reduced artifacts and mainly preserved spatial resolution, for both simulations and in vivo data. Artifacts were qualitatively and quantitatively less with ART than filtered back-projection using 48, 32, and 24 Np, although filtered back-projection provided quantitatively sharper images at undersampling levels of 48-24 Np (all P < 0.05). Use of undersampled radial data for generating auto-calibrated coil-sensitivity profiles resulted in slightly reduced quality. ART was comparable to conjugate gradient SENSE. GPU-acceleration increased ART reconstruction speed 15-fold, with little impact on the images. GPU-accelerated ART is an alternative approach to image reconstruction for parallel radial MR imaging, providing reduced artifacts while mainly maintaining sharpness compared to filtered back-projection, as shown by its first application in cardiac studies. © 2014 Wiley Periodicals, Inc.
Accelerated Focused Ultrasound Imaging
White, P. Jason; Thomenius, Kai; Clement, Gregory T.
2010-01-01
One of the most basic trade-offs in ultrasound imaging involves frame rate, depth, and number of lines. Achieving good spatial resolution and coverage requires a large number of lines, leading to decreases in frame rate. An even more serious imaging challenge occurs with imaging modes involving spatial compounding and 3-D/4-D imaging, which are severely limited by the slow speed of sound in tissue. The present work can overcome these traditional limitations, making ultrasound imaging many-fold faster. By emitting several beams at once, and by separating the resulting overlapped signals through spatial and temporal processing, spatial resolution and/or coverage can be increased by many-fold while leaving frame rates unaffected. The proposed approach can also be extended to imaging strategies that do not involve transmit beamforming, such as synthetic aperture imaging. Simulated and experimental results are presented where imaging speed is improved by up to 32-fold, with little impact on image quality. Object complexity has little impact on the method’s performance, and data from biological systems can readily be handled. The present work may open the door to novel multiplexed and/or multidimensional protocols considered impractical today. PMID:20040398
Trigonometric Transforms for Image Reconstruction
1998-06-01
applying trigonometric transforms to image reconstruction problems. Many existing linear image reconstruction techniques rely on knowledge of...ancestors. The research performed for this dissertation represents the first time the symmetric convolution-multiplication property of trigonometric...Fourier domain. The traditional representation of these filters will be similar to new trigonometric transform versions derived in later chapters
Imaging using accelerated heavy ions
Chu, W.T.
1982-05-01
Several methods for imaging using accelerated heavy ion beams are being investigated at Lawrence Berkeley Laboratory. Using the HILAC (Heavy-Ion Linear Accelerator) as an injector, the Bevalac can accelerate fully stripped atomic nuclei from carbon (Z = 6) to krypton (Z = 36), and partly stripped ions up to uranium (Z = 92). Radiographic studies to date have been conducted with helium (from the 184-inch cyclotron), carbon, oxygen, and neon beams. Useful ranges in tissue of 40 cm or more are available. To investigate the potential of heavy-ion projection radiography and computed tomography (CT), several methods and instrumentation have been studied.
2016-01-01
Purpose Ghosting‐robust reconstruction of blipped‐CAIPI echo planar imaging simultaneous multislice data with low computational load. Methods To date, Slice‐GRAPPA, with “odd–even” kernels that improve ghosting performance, has been the framework of choice for such reconstructions due to its predecessor SENSE‐GRAPPA being deemed unsuitable for blipped‐CAIPI data. Modifications to SENSE‐GRAPPA are used to restore CAIPI compatibility and to make it robust against ghosting. Two implementations are tested, one where slice and in‐plane unaliasing are dealt with in the same serial manner as in Slice‐GRAPPA [referred to as one‐dimensional (1D)‐NGC‐SENSE‐GRAPPA, where NGC stands for Nyquist Ghost Corrected] and one where both are unaliased in a single step (2D‐NGC‐SENSE‐GRAPPA), which is analytically and experimentally shown to be computationally cheaper. Results The 1D‐NGC‐SENSE‐GRAPPA and odd‐even Slice‐GRAPPA methods perform identically, whereas 2D‐NGC‐SENSE‐GRAPPA shows reduced error propagation and less residual ghosting when reliable reference data were available. When the latter was not the case, error propagation was increased. Conclusion Unlike Slice‐GRAPPA, SENSE‐GRAPPA operates fully within the GRAPPA framework, for which improved reconstructions (e.g., iterative, nonlinear) have been developed over the past decade. It could, therefore, bring benefit to the reconstruction of SMS data as an attractive alternative to Slice‐GRAPPA. Magn Reson Med 77:998–1009, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:26932565
Modern methods of image reconstruction.
NASA Astrophysics Data System (ADS)
Puetter, R. C.
The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. Then, the author discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and is shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages and image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, the effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
Image processing and reconstruction
Chartrand, Rick
2012-06-15
This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
Computational methods for image reconstruction.
Chung, Julianne; Ruthotto, Lars
2017-04-01
Reconstructing images from indirect measurements is a central problem in many applications, including the subject of this special issue, quantitative susceptibility mapping (QSM). The process of image reconstruction typically requires solving an inverse problem that is ill-posed and large-scale and thus challenging to solve. Although the research field of inverse problems is thriving and very active with diverse applications, in this part of the special issue we will focus on recent advances in inverse problems that are specific to deconvolution problems, the class of problems to which QSM belongs. We will describe analytic tools that can be used to investigate underlying ill-posedness and apply them to the QSM reconstruction problem and the related extensively studied image deblurring problem. We will discuss state-of-the-art computational tools and methods for image reconstruction, including regularization approaches and regularization parameter selection methods. We finish by outlining some of the current trends and future challenges. Copyright © 2016 John Wiley & Sons, Ltd.
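For deconvolution problems of the kind this review discusses, the simplest regularization approach is Tikhonov filtering in the Fourier domain. A 1-D numpy sketch, where the kernel width, regularization weight alpha, and test signal are illustrative choices, not taken from the review:

```python
import numpy as np

# Tikhonov-regularized deconvolution in the Fourier domain: apply the
# regularized inverse filter conj(K) / (|K|^2 + alpha) to the blurred data.
n = 64
x = np.zeros(n)
x[20:30] = 1.0                            # piecewise-constant test "image"
k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2)  # Gaussian kernel, sigma = 1
k /= k.sum()
K = np.fft.fft(np.fft.ifftshift(k))       # transfer function (kernel centered)
y = np.real(np.fft.ifft(K * np.fft.fft(x)))      # blurred, noise-free data

alpha = 1e-6                              # regularization parameter
X = np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + alpha)
x_rec = np.real(np.fft.ifft(X))
print(float(np.max(np.abs(x_rec - x))))
```

As alpha grows the filter suppresses more of the ill-conditioned high frequencies, trading noise robustness for resolution; choosing alpha well is exactly the parameter selection problem the review surveys.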
An improved image reconstruction method for optical intensity correlation Imaging
NASA Astrophysics Data System (ADS)
Gao, Xin; Feng, Lingjie; Li, Xiyu
2016-12-01
The intensity correlation imaging method is a novel kind of interference imaging with favorable prospects in deep-space object recognition. However, restricted by the low detection signal-to-noise ratio (SNR), it is usually very difficult to obtain a high-quality image of a deep-space object such as a high-Earth-orbit (HEO) satellite with existing phase retrieval methods. In this paper, based on a priori knowledge of the intensity statistical distribution model of the object and the characteristics of the measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the number of ambiguous images and accelerate the phase retrieval procedure, thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method can acquire higher-resolution images with less error under low-SNR conditions.
Computational Imaging for VLBI Image Reconstruction
NASA Astrophysics Data System (ADS)
Bouman, Katherine L.; Johnson, Michael D.; Zoran, Daniel; Fish, Vincent L.; Doeleman, Sheperd S.; Freeman, William T.
2016-03-01
Very long baseline interferometry (VLBI) is a technique for imaging celestial radio emissions by simultaneously observing a source from telescopes distributed across Earth. The challenges in reconstructing images from fine angular resolution VLBI data are immense. The data is extremely sparse and noisy, thus requiring statistical image models such as those designed in the computer vision community. In this paper we present a novel Bayesian approach for VLBI image reconstruction. While other methods often require careful tuning and parameter selection for different types of data, our method (CHIRP) produces good results under different settings such as low SNR or extended emission. The success of our method is demonstrated on realistic synthetic experiments as well as publicly available real data. We present this problem in a way that is accessible to members of the community, and provide a dataset website (vlbiimaging.csail.mit.edu) that facilitates controlled comparisons across algorithms.
Accelerated MR diffusion tensor imaging using distributed compressed sensing.
Wu, Yin; Zhu, Yan-Jie; Tang, Qiu-Yang; Zou, Chao; Liu, Wei; Dai, Rui-Bin; Liu, Xin; Wu, Ed X; Ying, Leslie; Liang, Dong
2014-02-01
Diffusion tensor imaging (DTI) is known to suffer from long acquisition times on the order of several minutes or even hours. Therefore, a feasible way to accelerate DTI data acquisition is highly desirable. In this article, the feasibility and efficacy of applying distributed compressed sensing to fast DTI are investigated by exploiting the joint sparsity prior in diffusion-weighted images. Fully sampled DTI datasets were obtained from both a simulated phantom and an experimental heart sample, with diffusion gradients applied in six directions. The k-space data were undersampled retrospectively with acceleration factors from 2 to 6. Diffusion-weighted images were reconstructed by solving an l2-l1 norm minimization problem. Reconstruction performance with varied signal-to-noise ratios and acceleration factors was evaluated by root-mean-square error and maps of reconstructed DTI indices. The superiority of distributed compressed sensing over basic compressed sensing was confirmed with simulation, and the reconstruction accuracy was influenced by signal-to-noise ratio and acceleration factor. Experimental results demonstrate that DTI indices including fractional anisotropy, mean diffusivity, and the orientation of the primary eigenvector can be obtained with high accuracy at acceleration factors up to 4. Distributed compressed sensing is shown to be able to accelerate DTI and may be used to reduce DTI acquisition time in practice. Copyright © 2013 Wiley Periodicals, Inc.
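The joint-sparsity (l2-l1) prior across diffusion directions leads, in proximal form, to a group soft-thresholding step: coefficients are shrunk by their l2 norm across directions rather than individually, so the diffusion-weighted images keep a shared support. A toy sketch, with shapes and threshold as illustrative choices (two directions instead of six):

```python
import numpy as np

# Group soft-thresholding: the proximal step for the joint-sparsity penalty.
# Each row is one coefficient across the diffusion directions; rows are
# shrunk by their joint l2 norm, so all directions stay zero or nonzero together.
def group_soft(V, t):
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return V * scale

V = np.array([[3.0, -4.0],   # strong coefficient shared by both images
              [0.3, 0.4],    # weak coefficient: removed jointly
              [0.0, 0.0]])
W = group_soft(V, 1.0)
print(W)
```

Replacing this step with elementwise soft-thresholding recovers basic compressed sensing, which ignores the shared support and is what the article's distributed variant improves upon.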
MR Guided PET Image Reconstruction
Bai, Bing; Li, Quanzheng; Leahy, Richard M.
2013-01-01
The resolution of PET images is limited by the physics of positron-electron annihilation and instrumentation for photon coincidence detection. Model based methods that incorporate accurate physical and statistical models have produced significant improvements in reconstructed image quality when compared to filtered backprojection reconstruction methods. However, it has often been suggested that by incorporating anatomical information, the resolution and noise properties of PET images could be improved, leading to better quantitation or lesion detection. With the recent development of combined MR-PET scanners, it is possible to collect intrinsically co-registered MR images. It is therefore now possible to routinely make use of anatomical information in PET reconstruction, provided appropriate methods are available. In this paper we review research efforts over the past 20 years to develop these methods. We discuss approaches based on the use of both Markov random field priors and joint information or entropy measures. The general framework for these methods is described and their performance and longer term potential and limitations discussed. PMID:23178087
Towards inherently distortion-free MR images for image-guided radiotherapy on an MRI accelerator.
Crijns, S P M; Bakker, C J G; Seevinck, P R; de Leeuw, H; Lagendijk, J J W; Raaymakers, B W
2012-03-07
In MR-guided interventions, it is mandatory to establish a solid relationship between the imaging coordinate system and world coordinates. This is particularly important in image-guided radiotherapy (IGRT) on an MRI accelerator, as the interaction of matter with γ-radiation cannot be visualized. In conventional acquisitions, off-resonance effects cause discrepancies between coordinate systems. We propose to mitigate this by using only phase encoding and to reduce the longer acquisitions by under-sampling and regularized reconstruction. To illustrate the performance of this acquisition in the presence of off-resonance phenomena, phantom and in vivo images are acquired using spin-echo (SE) and purely phase-encoded sequences. Data are retrospectively under-sampled and reconstructed iteratively. We observe accurate geometries in purely phase-encoded images for all cases, whereas SE images of the same phantoms display image distortions. Regularized reconstruction yields accurate phantom images under high acceleration factors. In vivo images were reconstructed faithfully while using acceleration factors up to 4. With the proposed technique, inherently undistorted images with one-to-one correspondence to world coordinates can be obtained. It is a valuable tool in geometry quality assurance, treatment planning and online image guidance. Under-sampled acquisition combined with regularized reconstruction can be used to accelerate the acquisition while retaining geometrical accuracy.
Fast reconstruction of digital tomosynthesis using on-board images
Yan Hui; Godfrey, Devon J.; Yin Fangfang
2008-05-15
Digital tomosynthesis (DTS) is a method to reconstruct pseudo three-dimensional (3D) volume images from two-dimensional x-ray projections acquired over limited scan angles. Compared with cone-beam computed tomography, which is frequently used for 3D image guided radiation therapy, DTS requires less imaging time and dose. Successful implementation of DTS for fast target localization requires the reconstruction process to be accomplished within tight clinical time constraints (usually within 2 min). To achieve this goal, substantial improvement of reconstruction efficiency is necessary. In this study, a reconstruction process based upon the algorithm proposed by Feldkamp, Davis, and Kress was implemented on graphics hardware for the purpose of acceleration. The performance of the novel reconstruction implementation was tested for phantom and real patient cases. The efficiency of DTS reconstruction was improved by a factor of 13 on average, without compromising image quality. With acceleration of the reconstruction algorithm, the whole DTS generation process including data preprocessing, reconstruction, and DICOM conversion is accomplished within 1.5 min, which ultimately meets clinical requirement for on-line target localization.
Image reconstruction with analytical point spread functions
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; López Ariste, A.
2010-07-01
Context. The image degradation produced by atmospheric turbulence and optical aberrations is usually alleviated using post-facto image reconstruction techniques, even when observing with adaptive optics systems. Aims: These techniques rely on the development of the wavefront using Zernike functions and the non-linear optimization of a certain metric. The resulting optimization procedure is computationally heavy. Our aim is to alleviate this computational burden. Methods: We generalize the extended Zernike-Nijboer theory to carry out the analytical integration of the Fresnel integral and present a natural basis set for the development of the point spread function when the wavefront is described using Zernike functions. Results: We present a linear expansion of the point spread function in terms of analytic functions, which, in addition, takes defocusing into account in a natural way. This expansion is used to develop a very fast phase-diversity reconstruction technique, which is demonstrated in terms of some applications. Conclusions: We propose that the linear expansion of the point spread function can be applied to accelerate other reconstruction techniques in use that are based on blind deconvolution.
Wave-CAIPI for highly accelerated 3D imaging.
Bilgic, Berkin; Gagoski, Borjan A; Cauley, Stephen F; Fan, Audrey P; Polimeni, Jonathan R; Grant, P Ellen; Wald, Lawrence L; Setsompop, Kawin
2015-06-01
To introduce the wave-CAIPI (controlled aliasing in parallel imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. The wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line while modifying the 3D phase encoding strategy to incur interslice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high-quality image reconstruction with wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and quantitative susceptibility mapping. Wave-CAIPI enables full-brain gradient echo acquisition at 1 mm isotropic voxel size and R = 3 × 3 acceleration with maximum g-factors of 1.08 at 3T and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies (2D-CAIPI and bunched phase encoding) wave-CAIPI yields up to two-fold reduction in maximum g-factor for nine-fold acceleration at both field strengths. Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. © 2014 Wiley Periodicals, Inc.
A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI
Lyu, Mengye; Liu, Yilong; Xie, Victor B.; Feng, Yanqiu; Guo, Hua; Wu, Ed X.
2017-01-01
PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient. PMID:28205602
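The single-blade SENSE step described above reduces, for Cartesian sampling, to a small least-squares unfolding per aliased pixel. A minimal R = 2 sketch follows; the coil sensitivities and sizes are invented for illustration, and PROPELLER's rotated-blade geometry is not modeled:

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """Solve the per-pixel least-squares unfolding for R-fold Cartesian aliasing.

    aliased: (n_coils, N // R) folded coil images
    sens:    (n_coils, N) coil sensitivity profiles
    """
    n_coils, N = sens.shape
    M = N // R
    x = np.zeros(N, dtype=complex)
    for p in range(M):
        idx = [p + r * M for r in range(R)]          # pixels that fold onto p
        S = sens[:, idx]                             # (n_coils, R) encoding matrix
        x[idx] = np.linalg.lstsq(S, aliased[:, p], rcond=None)[0]
    return x

# Demo: simulate 2-coil, R = 2 folding of a 1D object and unfold it.
rng = np.random.default_rng(0)
N, R = 16, 2
M = N // R
x_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
sens = np.stack([np.linspace(0.2, 1.0, N),
                 np.linspace(1.0, 0.2, N)]).astype(complex)
aliased = np.stack([[sum(sens[c, p + r * M] * x_true[p + r * M] for r in range(R))
                     for p in range(M)] for c in range(2)])
x_hat = sense_unfold(aliased, sens, R)
```

The noise amplification the abstract targets comes from ill-conditioned encoding matrices S at some pixels; the proposed regularization with blade-combined images damps exactly that amplification.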
Compressed sensing sparse reconstruction for coherent field imaging
NASA Astrophysics Data System (ADS)
Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen
2016-04-01
Return signal processing and reconstruction play a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as few as 10% of the samples required traditionally, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from the Chinese Academy of Sciences, the Light of "Western" Talent Cultivation Plan "Dr. Western Fund Project" (Grant No. Y429621213).
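The recovery step described above (a signal sparse in the Fourier spectrum domain, measured by random subsampling) can be sketched with a basic iterative soft-thresholding (ISTA) solver. The sizes, sampling fraction, and regularization weight below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 256, 5, 96                       # signal length, sparsity, number of samples

# Ground truth: a signal that is sparse in the Fourier spectrum domain.
support = rng.choice(n, k, replace=False)
s_true = np.zeros(n, dtype=complex)
s_true[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
signal = np.fft.ifft(s_true, norm='ortho')

omega = rng.choice(n, m, replace=False)    # random subsampling (the "random projection")
y = signal[omega]

def A(s):                                  # spectrum -> subsampled signal
    return np.fft.ifft(s, norm='ortho')[omega]

def At(r):                                 # adjoint: zero-fill, then forward FFT
    z = np.zeros(n, dtype=complex)
    z[omega] = r
    return np.fft.fft(z, norm='ortho')

# ISTA for min_s 0.5 * ||A s - y||^2 + lam * ||s||_1 (step size 1, since ||A|| <= 1).
lam, s = 0.01, np.zeros(n, dtype=complex)
for _ in range(800):
    g = s - At(A(s) - y)                   # gradient step on the data term
    s = g * np.maximum(1.0 - lam / np.maximum(np.abs(g), 1e-30), 0.0)  # complex soft-threshold
```

Because the unitary FFT makes the forward operator's norm at most 1, a fixed unit step size is safe; the sampling rate here (37.5%) is higher than the paper's 10% to keep the demo small and robust.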
Filtering in SPECT Image Reconstruction
Lyra, Maria; Ploussi, Agapi
2011-01-01
Single photon emission computed tomography (SPECT) imaging is widely implemented in nuclear medicine, as its clinical role in the diagnosis and management of several diseases (e.g., myocardial perfusion imaging) is often very helpful. The quality of SPECT images is degraded by several factors, such as noise due to the limited number of counts, attenuation, and scatter of photons. Image filtering is necessary to compensate for these effects and, therefore, to improve image quality. The goal of filtering in tomographic images is to suppress statistical noise while simultaneously preserving spatial resolution and contrast. The aim of this work is to describe the most widely used filters in SPECT applications and how they affect image quality. The choice of the filter type, the cut-off frequency, and the order is a major problem in clinical routine. In many clinical cases, information for specific parameters is not provided, and findings cannot be extrapolated to other similar SPECT imaging applications. A literature review for the determination of the most commonly used filters in cardiac, brain, bone, liver, kidney, and thyroid applications is also presented. As the overview shows, no filter is perfect, and the selection of the proper filter is, most of the time, done empirically. Standardization of image processing may limit the filter types for each SPECT examination to a few filters and parameter ranges. Standardization also helps reduce image processing time, as the filters and their parameters must be fixed before clinical use, and it allows commercial reconstruction software to produce comparable results across departments. The manufacturers normally supply default filters/parameters, but these may not be appropriate in all clinical situations. After proper standardization, it is possible to use many suitable filters or one optimal filter. PMID:21760768
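For concreteness, the Butterworth low-pass filter commonly discussed in such reviews can be written H(f) = 1/(1 + (f/fc)^(2n)), with cut-off frequency fc and order n as the clinical tuning knobs. Conventions vary between vendors, and the cut-off and order used below are illustrative:

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order):
    """2D Butterworth low-pass: H(f) = 1 / (1 + (f / cutoff)^(2 * order))."""
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    f = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency, cycles/pixel
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))

def filter_image(img, cutoff=0.2, order=5):
    """Apply the filter to a reconstructed slice in the frequency domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                butterworth_lowpass(img.shape, cutoff, order)))

H = butterworth_lowpass((10, 10), cutoff=0.2, order=5)
```

With this convention the gain is exactly 0.5 at the cut-off frequency; some sites define the filter with the square root of this response instead, so cut-off and order values are not directly portable between conventions, which is part of the standardization problem the abstract raises.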
Image reconstruction in optical tomography.
Arridge, S R; Schweiger, M
1997-01-01
Optical tomography is a new medical imaging modality that is at the threshold of realization. A large amount of clinical work has shown the very real benefits that such a method could provide. At the same time, a considerable effort has been put into theoretical studies of its probable success. At present there exist gaps between these two realms. In this paper we review some general approaches to inverse problems to set the context for optical tomography, defining both the terms forward problem and inverse problem. An essential requirement is to treat the problem in a nonlinear fashion, by using an iterative method. This in turn requires a convenient method of evaluating the forward problem, and its derivatives and variance. Photon transport models are described, and methods for obtaining analytical and numerical solutions for the most commonly used models are reviewed. The inverse problem is approached by classical gradient-based solution methods. In order to develop practical implementations of these methods, we discuss the important topic of photon measurement density functions, which represent the derivative of the forward problem. We show some results that represent the most complex and realistic simulations of optical tomography yet developed. We suggest, in particular, that both time-resolved and intensity-modulated systems can reconstruct variations in both optical absorption and scattering, but that unmodulated, non-time-resolved systems are prone to severe artefact. We believe that optical tomography reconstruction methods can now be reliably applied to a wide variety of real clinical data. The expected resolution of the method is poor, meaning that it is unlikely that the type of high-resolution images seen in computed tomography or magnetic resonance imaging can ever be obtained. Nevertheless we strongly expect the functional nature of these images to have a high degree of clinical significance. PMID:9232860
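The forward/inverse split the review describes can be illustrated with a toy one-parameter model fitted by gradient descent. The exponential (Beer-Lambert-style) forward model below is only a stand-in for the diffusion-based photon transport models used in optical tomography, and its Jacobian plays the role the review assigns to photon measurement density functions:

```python
import numpy as np

def forward(mu, d):
    """Toy forward problem: predicted measurements for absorption mu and path lengths d."""
    return np.exp(-mu * d)

def jacobian(mu, d):
    """Derivative of the forward problem with respect to mu (the sensitivity)."""
    return -d * np.exp(-mu * d)

d = np.linspace(0.5, 3.0, 10)              # path lengths of the toy measurements
mu_true = 0.7
y = forward(mu_true, d)                    # noise-free synthetic data

mu = 0.1                                   # initial guess
for _ in range(2000):
    r = forward(mu, d) - y                 # residual of the forward model
    grad = jacobian(mu, d) @ r             # gradient of 0.5 * ||r||^2
    mu -= 0.02 * grad                      # gradient-descent update
```

Each iteration evaluates the forward problem and its derivative, exactly the two ingredients the review identifies as the computational core of nonlinear iterative reconstruction.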
Implementation of GPU-accelerated back projection for EPR imaging.
Qiao, Zhiwei; Redler, Gage; Epel, Boris; Qian, Yuhua; Halpern, Howard
2015-01-01
Electron paramagnetic resonance (EPR) imaging (EPRI) is a robust method for measuring in vivo oxygen concentration (pO2). For 3D pulse EPRI, a commonly used reconstruction algorithm is the filtered backprojection (FBP) algorithm, in which the backprojection process is computationally intensive and may be time consuming when implemented on a CPU. A multistage implementation of the backprojection can be used for acceleration; however, it is not flexible (it requires an equal-linear-angle projection distribution) and may still be time consuming. In this work, single-stage backprojection is implemented on a graphics processing unit (GPU) with 1152 cores to accelerate the process. The GPU implementation results in acceleration by over a factor of 200 overall, and by over a factor of 3500 if only the computing time is considered. Some important experiences regarding the implementation of GPU-accelerated backprojection for EPRI are summarized. The resulting accelerated image reconstruction is useful for real-time image reconstruction monitoring and other time-sensitive applications.
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data either from a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then allocated either by nearest-pixel interpolation or by an overlap method and corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid by Siddon ray tracing and to perform maximum likelihood expectation maximization (MLEM), computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
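The MLEM option in the claim has the standard multiplicative form x ← x · Aᵀ(y / Ax) / Aᵀ1, where A is the system matrix built by ray tracing between the detector heads. A small dense-matrix sketch (the matrix and sizes here are invented for illustration, not derived from the patent's geometry):

```python
import numpy as np

def mlem(A, y, n_iter):
    """Maximum-likelihood expectation maximization for y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])                      # positive initial estimate
    sens = A.sum(axis=0)                         # A^T 1, per-pixel sensitivity
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (20, 5))               # toy system matrix (LORs x pixels)
x_true = rng.uniform(0.5, 2.0, 5)
y = A @ x_true                                   # noise-free expected counts
x_hat = mlem(A, y, n_iter=2000)
```

The multiplicative update keeps the estimate nonnegative at every iteration, which is the main reason MLEM is preferred over unconstrained least squares for count data.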
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum likelihood estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
Reconstructing HST Images of Asteroids
NASA Astrophysics Data System (ADS)
Storrs, A. D.; Bank, S.; Gerhardt, H.; Makhoul, K.
2003-12-01
We present reconstructions of images of 22 large main belt asteroids that were observed by Hubble Space Telescope with the Wide-Field/Planetary cameras. All images were restored with the MISTRAL program (Mugnier, Fusco, and Conan 2003) at enhanced spatial resolution. This is possible thanks to the well-studied and stable point spread function (PSF) on HST. We present some modeling of this process and determine that the Strehl ratio for WF/PC (aberrated) images can be improved to 130%, from a ratio of 80%. We will report sizes, shapes, and albedos for these objects, as well as any surface features. Images taken with the WFPC-2 instrument were made in a variety of filters, so it should be possible to investigate changes in mineralogy across the surface of the larger asteroids in a manner similar to that done on 4 Vesta by Binzel et al. (1997). Of particular interest are a possible water of hydration feature on 1 Ceres, and the non-observation of a constriction or gap between the components of 216 Kleopatra. Reduction of this data was aided by grant HST-GO-08583.08A from the Space Telescope Science Institute. References: Mugnier, L.M., T. Fusco, and J.-M. Conan, 2003. JOSA A (submitted). Binzel, R.P., Gaffey, M.J., Thomas, P.C., Zellner, B.H., Storrs, A.D., and Wells, E.N. 1997. Icarus 128, pp. 95-103.
Heuristic reconstructions of neutron penumbral images
Nozaki, Shinya; Chen Yenwei
2004-10-01
Penumbral imaging is a coded-aperture technique proposed for imaging with highly penetrating radiation. To date, the penumbral imaging technique has been successfully applied to neutron imaging in laser fusion experiments. Since the reconstruction of penumbral images is based on linear deconvolution methods, such as the inverse filter and the Wiener filter, the point spread function of the aperture must be space invariant, and the reconstruction is sensitive to the noise contained in penumbral images. In this article, we propose a new heuristic reconstruction method for neutron penumbral imaging, which can be used with a space-variant imaging system and is also very tolerant of noise.
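For reference, the Wiener-filter deconvolution that the heuristic method is compared against amounts to a few lines of FFT arithmetic. The kernel and noise-to-signal ratio below are illustrative; a real penumbral aperture PSF is a large disk rather than this small, well-conditioned kernel:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-6):
    """Wiener filter: X = conj(H) * Y / (|H|^2 + NSR), all in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))           # PSF centered, shifted to the origin
    Y = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + nsr)))

# Demo: circularly blur a test image with a known, well-conditioned kernel, then invert.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[15:18, 15:18] = np.array([[0.0, 0.1, 0.0],
                              [0.1, 0.6, 0.1],
                              [0.0, 0.1, 0.0]])      # centered point spread function
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, psf)
```

The NSR term regularizes frequencies where |H| is small; this single global constant is exactly the noise-sensitivity compromise that motivates the article's heuristic alternative.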
Image reconstruction: an overview for clinicians
Hansen, Michael S.; Kellman, Peter
2014-01-01
Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging. The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal to noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise pre-whitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. PMID:24962650
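The noise pre-whitening step described in this review can be sketched as: estimate the coil noise covariance Ψ from noise-only samples, factor Ψ = LLᴴ, and apply L⁻¹ to the channel data. A real-valued illustrative sketch (sizes and the mixing matrix are invented):

```python
import numpy as np

def prewhiten(data, noise_samples):
    """Decorrelate phased-array channels using noise-only measurements."""
    psi = np.cov(noise_samples)              # coil noise covariance (n_coils x n_coils)
    L = np.linalg.cholesky(psi)              # psi = L @ L.T
    return np.linalg.solve(L, data)          # apply W = L^{-1} to the channel data

# Demo: manufacture correlated channel noise, then whiten it.
rng = np.random.default_rng(0)
mixing = rng.standard_normal((4, 4))         # induces inter-coil noise correlation
noise = mixing @ rng.standard_normal((4, 10000))
white = prewhiten(noise, noise)
```

After this transform the channels carry unit-variance, uncorrelated noise, so the downstream coil combination and parallel imaging steps discussed in the review can treat all channels uniformly.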
Three-Dimensional Reconstruction Of Ultrasound Images
NASA Astrophysics Data System (ADS)
Lalouche, Robert C.; Bickmore, Dan; Tessler, Franklin N.; Mankovich, Nicholas J.; Huang, H. K.; Kangarloo, Hooshang
1989-05-01
We have established a three-dimensional (3-D) imaging facility for reconstruction of serial two-dimensional (2-D) ultrasound images. In the facility, contiguous 2-D images are captured directly at the clinical site from the real-time video signals of a Labsonics serial ultrasound imager. The images are digitized and stored on an IBM PC. They are then transferred over an Ethernet communication network to the Image Processing Laboratory. Finally, the serial images are reformatted and the 3-D images are reconstructed on a Pixar image computer. The reconstruction method involves grey level remapping, slice interpolation, tissue classification, surface enhancement, illumination, projection, and display. We have demonstrated that 3-D ultrasound images can be created which bring out features difficult to discern in 2-D ultrasound images.
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.
Undersampling strategies for compressed sensing accelerated MR spectroscopic imaging
NASA Astrophysics Data System (ADS)
Vidya Shankar, Rohini; Hu, Houchun Harry; Bikkamane Jayadev, Nutandev; Chang, John C.; Kodibagkar, Vikram D.
2017-03-01
Compressed sensing (CS) can accelerate magnetic resonance spectroscopic imaging (MRSI), facilitating its widespread clinical integration. The objective of this study was to assess the effect of different undersampling strategies on CS-MRSI reconstruction quality. Phantom data were acquired on a Philips 3 T Ingenia scanner. Four types of undersampling masks, one for each strategy (low resolution, variable density, iterative design, and a priori), were simulated in Matlab and retrospectively applied to the fully sampled (1X) MRSI data to generate undersampled datasets corresponding to 2X-5X and 7X accelerations for each mask type. Reconstruction parameters were kept the same in each case (all masks and accelerations) to ensure that any resulting differences could be attributed to the type of mask employed. The reconstructed datasets from each mask were statistically compared with the reference 1X data and assessed using metrics such as the root mean square error and metabolite ratios. Simulation results indicate that both the a priori and variable density undersampling masks maintain high fidelity with the 1X data up to five-fold acceleration. The low resolution mask based reconstructions showed statistically significant differences from the 1X data, with the reconstruction failing at 3X, while the iterative design reconstructions maintained fidelity with the 1X data up to 4X acceleration. In summary, a pilot study was conducted to identify an optimal sampling mask in CS-MRSI. Simulation results demonstrate that the a priori and variable density masks can provide statistically similar results to the fully sampled reference. Future work would involve implementing these two masks prospectively on a clinical scanner.
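One common recipe for the variable density mask family studied above is a fully sampled k-space center plus random outer lines drawn with a polynomial density falloff. The density law and parameters below are illustrative, not the study's masks:

```python
import numpy as np

def variable_density_mask(n, accel, center_frac=0.08, power=3, seed=0):
    """1D phase-encode mask: full center, random outer lines with (1 - r)^power density."""
    rng = np.random.default_rng(seed)
    target = n / accel                             # lines to keep on average
    center = int(np.ceil(center_frac * n))
    mask = np.zeros(n, dtype=bool)
    c0 = n // 2 - center // 2
    mask[c0:c0 + center] = True                    # fully sampled k-space center
    r = np.abs(np.arange(n) - n / 2) / (n / 2)     # normalized distance from center
    density = (1.0 - r) ** power
    density[mask] = 0.0                            # center lines are already kept
    density *= (target - center) / density.sum()   # scale to hit the target rate
    mask |= rng.random(n) < density
    return mask

mask = variable_density_mask(256, accel=4)
```

Retrospective undersampling as in the study is then a single multiply of the acquired k-space by the mask, which makes it easy to compare mask families under identical reconstruction settings.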
Accelerating electron tomography reconstruction algorithm ICON with GPU.
Chen, Yu; Wang, Zihao; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Sun, Fei; Zhang, Fa
2017-01-01
Electron tomography (ET) plays an important role in studying in situ cell ultrastructure in three-dimensional space. Due to limited tilt angles, ET reconstruction always suffers from the "missing wedge" problem. With a validation procedure, iterative compressed-sensing optimized NUFFT reconstruction (ICON) demonstrates its power in the restoration of validated missing information for low-SNR biological ET datasets. However, the huge computational demand has become a major obstacle to the application of ICON. In this work, we analyzed the framework of ICON and classified the operations of the major steps of ICON reconstruction into three types. Accordingly, we designed parallel strategies and implemented them on graphics processing units (GPU) to generate a parallel program, ICON-GPU. With high accuracy, ICON-GPU achieves a speedup of up to 83.7× over its CPU version, greatly relieving ICON's dependence on computing resources.
An evaluation of GPU acceleration for sparse reconstruction
NASA Astrophysics Data System (ADS)
Braun, Thomas R.
2010-04-01
Image processing applications typically parallelize well. This gives a developer interested in data throughput several different implementation options, including multiprocessor machines, general purpose computation on the graphics processor, and custom gate-array designs. Herein, we investigate the first two options for dictionary learning and sparse reconstruction, specifically focusing on the K-SVD algorithm for dictionary learning and Batch Orthogonal Matching Pursuit for sparse reconstruction. These methods have been shown to provide state-of-the-art results for image denoising, classification, and object recognition. We explore the GPU implementation and show that GPUs are not significantly better or worse than CPUs for this application.
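Batch OMP speeds up plain Orthogonal Matching Pursuit by precomputing Gram-matrix products; the plain OMP at its core can be sketched as follows (the dictionary and sparse code here are synthetic, and this is the unbatched variant):

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding: pick k atoms, re-fitting the coefficients at each step."""
    residual, support = y.copy(), []
    coefs = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coefs = np.linalg.lstsq(D[:, support], y, rcond=None)[0] # orthogonal re-fit
        residual = y - D[:, support] @ coefs
    x = np.zeros(D.shape[1])
    x[support] = coefs
    return x

# Demo: recover a 3-sparse code over a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [3.0, -2.5, 2.0]
y = D @ x_true
x_hat = omp(D, y, k=3)
```

The inner correlation step D.T @ residual is a large dense matrix-vector product repeated for every signal, which is why both the GPU and multicore implementations evaluated in the paper target it first.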
Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar
2015-01-01
Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466
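The second method's closed form is attractive because the Tikhonov pseudoinverse can be precomputed once and then applied to every voxel with a single matrix-vector product. A sketch with invented sizes, where D stands in for a dictionary of training pdfs and U for the q-space undersampling:

```python
import numpy as np

def tikhonov_operator(A, lam):
    """Precompute P such that x = P @ y minimizes ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 40))                      # dictionary: 100-sample signals, 40 atoms
U = np.eye(100)[rng.choice(100, 60, replace=False)]     # keep 60 of 100 samples
A = U @ D                                               # undersampled dictionary
P = tikhonov_operator(A, lam=1e-6)                      # precomputed once...

coef_true = rng.standard_normal(40)
pdf_true = D @ coef_true                                # a signal in the dictionary's span
y = U @ pdf_true                                        # undersampled measurement
pdf_hat = D @ (P @ y)                                   # ...then applied per voxel
```

Because the per-voxel work collapses to one matrix multiply, this is the source of the two-orders-of-magnitude speedup over iterative CS solvers reported in the abstract.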
Improving Tritium Exposure Reconstructions Using Accelerator Mass Spectrometry
Love, A; Hunt, J; Knezovich, J
2003-06-01
Exposure reconstructions for radionuclides are inherently difficult. As a result, most reconstructions are based primarily on mathematical models of environmental fate and transport. These models can have large uncertainties, as important site-specific information is unknown, missing, or crudely estimated. Alternatively, surrogate environmental measurements of exposure can be used for site-specific reconstructions. In cases where environmental transport processes are complex, well-chosen environmental surrogates can have smaller exposure uncertainty than mathematical models. Because existing methodologies have significant limitations, the development or improvement of methodologies for reconstructing exposure from environmental measurements would provide important additional tools in assessing the health effects of chronic exposure. As an example, the direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples, which permit greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Tritium AMS was previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases.
Accelerated magnetic resonance imaging using the sparsity of multi-channel coil images.
Xie, Guoxi; Song, Yibiao; Shi, Caiyun; Feng, Xiang; Zheng, Hairong; Weng, Dehe; Qiu, Bensheng; Liu, Xin
2014-02-01
Joint estimation of coil sensitivities and output image (JSENSE) is a promising approach that improves the reconstruction of parallel magnetic resonance imaging (pMRI). However, as the acceleration factor increases, the signal to noise ratio (SNR) of JSENSE reconstruction decreases as quickly as that of the conventional pMRI. Although sparse constraints have been used to improve the JSENSE reconstruction in recent years, these constraints only use the sparsity of the output image, which cannot fully exploit the prior information of pMRI. In this paper, we use the sparsity of coil images, instead of the output image, to exploit more prior information for JSENSE. Numerical simulation, phantom and in vivo experiments demonstrate that the proposed method has better performance than the SparseSENSE method and the constrained JSENSE method using the sparsity of the output image only. © 2013 Elsevier Inc. All rights reserved.
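For context, the parallel-imaging problem underlying JSENSE can be illustrated with basic 1D SENSE unfolding: the coil images are the pixelwise product of coil sensitivities and the underlying image, and each aliased pixel mixes the true pixels that fold onto it. In this sketch the sensitivities are assumed known and all sizes are illustrative (JSENSE itself alternates between estimating the sensitivities and the image, which this sketch does not do):

```python
import numpy as np

rng = np.random.default_rng(5)
N, C, R = 16, 4, 2                       # image length, coils, acceleration factor
img = rng.random(N)
sens = rng.random((C, N)) + 0.5          # coil sensitivities (assumed known here)
coil_imgs = sens * img                   # coil images: sensitivity times image
ksp = np.fft.fft(coil_imgs, axis=1)[:, ::R]   # keep every R-th k-space line
alias = np.fft.ifft(ksp, axis=1).real    # aliased coil images of length N/R

rec = np.zeros(N)
for i in range(N // R):
    # each aliased pixel is the sum of the two true pixels that fold onto it
    S = sens[:, [i, i + N // R]]
    rec[[i, i + N // R]] = np.linalg.lstsq(S, alias[:, i], rcond=None)[0]
```

The per-pixel least-squares unfolding is what degrades with increasing R; the sparsity priors discussed in the abstract regularize exactly this step.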
Image Reconstruction Using Analysis Model Prior
Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping
2016-01-01
The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
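The cosparse analysis idea is easy to see with the 2D finite difference operator named above: applying it to a piecewise-constant image yields coefficients that are mostly zero, and those zero locations form the cosupport that ICD-style methods detect. A small illustration (the test image and sizes are arbitrary):

```python
import numpy as np

def finite_diff_2d(img):
    # 2D finite difference analysis operator: horizontal and vertical first differences
    return np.concatenate([np.diff(img, axis=1).ravel(), np.diff(img, axis=0).ravel()])

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                      # piecewise-constant image with one square
omega = finite_diff_2d(img)              # analysis coefficients
cosupport = int(np.sum(np.abs(omega) < 1e-12))   # zero coefficients (the cosupport)
# only differences crossing the square's boundary are nonzero
```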
Kim, Joshua; Ionascu, Dan; Zhang, Tiezhi
2013-01-01
Purpose: To accelerate iterative algebraic reconstruction algorithms using a cylindrical image grid. Methods: Tetrahedron beam computed tomography (TBCT) is designed to overcome the scatter and detector problems of cone beam computed tomography (CBCT). Iterative algebraic reconstruction algorithms have been shown to mitigate approximate reconstruction artifacts that appear at large cone angles, but clinical implementation is limited by their high computational cost. In this study, a cylindrical voxelization method on a cylindrical grid is developed in order to take advantage of the symmetries of the cylindrical geometry. The cylindrical geometry is a natural fit for the circular scanning trajectory employed in volumetric CT methods such as CBCT and TBCT. This method was implemented in combination with the simultaneous algebraic reconstruction technique (SART). Both two- and three-dimensional numerical phantoms as well as a patient CT image were utilized to generate the projection sets used for reconstruction. The reconstructed images were compared to the original phantoms using a set of three figures of merit (FOM). Results: The cylindrical voxelization on a cylindrical reconstruction grid was successfully implemented in combination with the SART reconstruction algorithm. The FOM results showed that the cylindrical reconstructions were able to maintain the accuracy of the Cartesian reconstructions. In three dimensions, the cylindrical method provided better accuracy than the Cartesian methods. At the same time, the cylindrical method was able to provide a speedup factor of approximately 40 while also reducing the system matrix storage size by 2 orders of magnitude. Conclusions: TBCT image reconstruction using a cylindrical image grid was able to provide a significant improvement in the reconstruction time and a more compact system matrix for storage on the hard drive and in memory while maintaining the image quality provided by the Cartesian voxelization on a Cartesian grid.
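Independent of the grid geometry, one SART iteration backprojects the residual normalized by the row sums of the system matrix, then normalizes per voxel by the column sums. A toy dense-matrix sketch (a real TBCT system matrix would be sparse and, as above, would exploit the cylindrical symmetry; sizes and relaxation factor here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((40, 16))                 # system matrix: 40 rays x 16 voxels (toy)
x_true = rng.random(16)
b = A @ x_true                           # noiseless projections

row_sums = A.sum(axis=1)                 # per-ray normalization
col_sums = A.sum(axis=0)                 # per-voxel normalization
x = np.zeros(16)
for _ in range(500):
    # SART update: backproject the row-normalized residual, normalize per voxel
    x += 0.5 * (A.T @ ((b - A @ x) / row_sums)) / col_sums
```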
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of object background compared to alternative reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantitate the possible improvement in spatial resolution and signal to noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
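The compensation step can be written as an ordinary EM update with the scatter estimate entering as an additive term in the forward projection. A toy sketch, where a random dense system matrix and a constant scatter vector stand in for the SPECT system model and the accelerated MC down-scatter estimate (not the paper's Sim-OSEM implementation):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((40, 10))                  # system matrix (toy stand-in)
x_true = rng.random(10) + 0.1
scatter = 0.2 * np.ones(40)               # down-scatter estimate (stand-in for MC)
b = A @ x_true + scatter                  # projections contaminated by crosstalk

sens = A.sum(axis=0)                      # sensitivity image, A^T 1
x = np.ones(10)
for _ in range(1000):
    # EM update with the scatter estimate added to the forward projection
    x *= (A.T @ (b / (A @ x + scatter))) / sens
```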
Similarity-regulation of OS-EM for accelerated SPECT reconstruction
NASA Astrophysics Data System (ADS)
Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.
2016-06-01
Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speedup over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similarly the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover, our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
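For reference, the OS-EM baseline that both CR-OS-EM and SR-OS-EM regulate cycles the EM update over subsets of projections; with S subsets each full pass applies S image updates, which is the source of the roughly S-fold speedup. A minimal sketch with illustrative sizes and noiseless data:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((60, 12))                 # system matrix: 60 projection bins x 12 voxels
x_true = rng.random(12) + 0.1
b = A @ x_true                           # noiseless projection data

subsets = np.array_split(np.arange(60), 6)   # 6 ordered subsets of projection bins
x = np.ones(12)
for _ in range(50):                      # each iteration cycles through all subsets
    for s in subsets:
        As = A[s]
        # EM update restricted to this subset's rows
        x *= (As.T @ (b[s] / (As @ x))) / As.sum(axis=0)
```

The artifacts mentioned above arise when a subset's rows contain too few counts for its update to be reliable, which is exactly what the voxel-wise subset regulation counteracts.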
Joint MR-PET Reconstruction Using a Multi-Channel Image Regularizer.
Knoll, Florian; Holler, Martin; Koesters, Thomas; Otazo, Ricardo; Bredies, Kristian; Sodickson, Daniel K
2017-01-01
While current state of the art MR-PET scanners enable simultaneous MR and PET measurements, the acquired data sets are still usually reconstructed separately. We propose a new multi-modality reconstruction framework using second order Total Generalized Variation (TGV) as a dedicated multi-channel regularization functional that jointly reconstructs images from both modalities. In this way, information about the underlying anatomy is shared during the image reconstruction process while unique differences are preserved. Results from numerical simulations and in-vivo experiments using a range of accelerated MR acquisitions and different MR image contrasts demonstrate improved PET image quality, resolution, and quantitative accuracy.
Image reconstruction for robot assisted ultrasound tomography
NASA Astrophysics Data System (ADS)
Aalamifar, Fereshteh; Zhang, Haichong K.; Rahmim, Arman; Boctor, Emad M.
2016-04-01
An investigation of several image reconstruction methods for robot-assisted ultrasound (US) tomography setup is presented. In the robot-assisted setup, an expert moves the US probe to the location of interest, and a robotic arm automatically aligns another US probe with it. The two aligned probes can then transmit and receive US signals which are subsequently used for tomographic reconstruction. This study focuses on reconstruction of the speed of sound. In various simulation evaluations as well as in an experiment with a millimeter-range inaccuracy, we demonstrate that the limited data provided by two probes can be used to reconstruct pixel-wise images differentiating between media with different speeds of sound. Combining the results of this investigation with the developed robot-assisted US tomography setup, we envision feasibility of this setup for tomographic imaging in applications beyond breast imaging, with potentially significant efficacy in cancer diagnosis.
Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.
Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J
2013-04-01
We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical.
Image Reconstruction for Prostate Specific Nuclear Medicine imagers
Mark Smith
2007-01-11
There is increasing interest in the design and construction of nuclear medicine detectors for dedicated prostate imaging. These include detectors designed for imaging the biodistribution of radiopharmaceuticals labeled with single gamma as well as positron-emitting radionuclides. New detectors and acquisition geometries present challenges and opportunities for image reconstruction. In this contribution various strategies for image reconstruction for these special purpose imagers are reviewed. Iterative statistical algorithms provide a framework for reconstructing prostate images from a wide variety of detectors and acquisition geometries for PET and SPECT. The key to their success is modeling the physics of photon transport and data acquisition and the Poisson statistics of nuclear decay. Analytic image reconstruction methods can be fast and are useful for favorable acquisition geometries. Future perspectives on algorithm development and data analysis for prostate imaging are presented.
CUDA accelerated uniform re-sampling for non-Cartesian MR reconstruction.
Feng, Chaolu; Zhao, Dazhe
2015-01-01
A grid-driven gridding (GDG) method is proposed to uniformly re-sample non-Cartesian raw data acquired in PROPELLER, in which a trajectory window for each Cartesian grid point is first computed. The intensity of the reconstructed image at this grid point is the weighted average of raw data in this window. Taking advantage of the single instruction, multiple data (SIMD) property of the proposed GDG, a CUDA-accelerated method is then proposed to improve its performance. Two groups of raw data sampled by PROPELLER in two resolutions are reconstructed by the proposed method. To balance computation resources of the GPU and obtain the best performance improvement, four thread-block strategies are adopted. Experimental results demonstrate that although the proposed GDG is more time-consuming than traditional data-driven gridding (DDG), the CUDA-accelerated GDG is almost 10 times faster than traditional DDG.
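A bare-bones serial version of the grid-driven idea conveys the access pattern that makes it SIMD-friendly: every Cartesian grid point independently averages the raw samples falling inside its trajectory window, so each grid point maps naturally to one GPU thread. This sketch uses a simple box window with uniform weights; the paper's window construction and weighting are not reproduced:

```python
import numpy as np

def grid_driven_gridding(coords, data, grid_size, half_width=0.5):
    """Average the non-Cartesian samples inside each grid point's window.

    coords: (n, 2) trajectory coordinates in grid units; data: (n,) raw samples.
    Each output grid point is computed independently of all others, which is
    the SIMD property that the CUDA version exploits.
    """
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    for i in range(grid_size):
        for j in range(grid_size):
            in_win = (np.abs(coords[:, 0] - i) <= half_width) & \
                     (np.abs(coords[:, 1] - j) <= half_width)
            if in_win.any():
                grid[i, j] = data[in_win].mean()   # uniform weights for simplicity
    return grid

# a sample lying exactly on grid point (2, 3) is reproduced there
coords = np.array([[2.0, 3.0]])
grid = grid_driven_gridding(coords, np.array([5.0 + 0j]), 8)
```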
Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T; Griswold, Mark A; Collins, Christopher M
2015-04-01
Introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency shift temperature imaging for MRI-induced radiofrequency heating evaluation. A compressed sensing approach that exploits sparsity of the complex difference between postheating and baseline images is proposed to accelerate proton resonance frequency temperature mapping. The method exploits the intra-image and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and retrospectively undersampled data acquired in ex vivo and in vivo studies by comparing performance with previously published techniques. The proposed complex difference constrained compressed sensing reconstruction method improved the reconstruction of smooth and local proton resonance frequency temperature change images compared to various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Complex difference based compressed sensing with utilization of a fully sampled baseline image improves the reconstruction accuracy for accelerated proton resonance frequency thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of radiofrequency heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. © 2014 Wiley Periodicals, Inc.
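As background for the reconstruction target, the PRF temperature change follows from the phase of the heated image relative to the baseline. A sketch of that mapping step on fully sampled synthetic data (the field strength, echo time, and the -0.01 ppm/°C coefficient are typical assumed values, not parameters taken from the paper):

```python
import numpy as np

GAMMA_HZ_PER_T = 42.577e6   # proton gyromagnetic ratio / 2*pi, in Hz/T
ALPHA = -0.01e-6            # PRF thermal coefficient, ppm per degC (typical value)
B0, TE = 3.0, 0.01          # field strength (T) and echo time (s); assumed

def prf_delta_T(heated, baseline):
    # temperature change from the phase of the complex product heated * conj(baseline)
    dphi = np.angle(heated * np.conj(baseline))
    return dphi / (2 * np.pi * GAMMA_HZ_PER_T * ALPHA * B0 * TE)

# synthetic check: a 5 degC hot spot produces the expected phase shift
dT_true = np.zeros((8, 8))
dT_true[3:5, 3:5] = 5.0
phase = 2 * np.pi * GAMMA_HZ_PER_T * ALPHA * B0 * TE * dT_true
heated = np.exp(1j * phase)              # unit-magnitude baseline with zero phase
dT = prf_delta_T(heated, np.ones((8, 8)))
```

The complex difference between the heated and baseline images, whose sparsity the proposed method exploits, is nonzero only where this phase shift (or a magnitude change) occurs.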
Fast parallel algorithm for CT image reconstruction.
Flores, Liubov A; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo
2012-01-01
In X-ray computed tomography (CT), X-rays are used to obtain the projection data needed to generate an image of the inside of an object. The image can be generated with different techniques. Iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions and from a small number of projections. Their use may be important in portable scanners for their functionality in emergency situations. However, in practice, these methods are not widely used due to the high computational cost of their implementation. In this work we analyze iterative parallel image reconstruction with the Portable, Extensible Toolkit for Scientific Computation (PETSc).
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Astrophysics Data System (ADS)
Pina, R. K.; Puetter, R. C.
1993-06-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction by use of a fast-convergence iterative algorithm accelerated on multiple GPUs. An iterative image reconstruction that sought to minimize a weighted least squares cost function that employed total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for 360-view and 60-view cases, respectively. The RMSE was reduced to 10(-4) and 10(-2), respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment planning.
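The cost function described above, weighted least squares plus TV regularization, can be minimized in its simplest form by gradient descent on a smoothed TV term. A 1D toy version (the paper's specific algorithm, convergence accelerations, and GPU mapping are not reproduced; all sizes, weights, and the step size are illustrative):

```python
import numpy as np

def smoothed_tv_grad(x, eps=1e-3):
    # gradient of sum_i sqrt((x[i+1]-x[i])^2 + eps), a differentiable TV surrogate
    d = np.diff(x)
    g = d / np.sqrt(d * d + eps)
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 20))        # projection operator (toy stand-in)
x_true = np.concatenate([np.zeros(10), np.ones(10)])   # piecewise-constant object
b = A @ x_true + 0.01 * rng.standard_normal(30)        # slightly noisy projections

beta = 0.05                              # TV regularization weight
step = 0.5 / np.linalg.norm(A, 2) ** 2   # conservative gradient step size
x = np.zeros(20)
for _ in range(3000):
    grad = A.T @ (A @ x - b) + beta * smoothed_tv_grad(x)
    x -= step * grad
```

TV pulls the estimate toward piecewise-constant solutions, which is why it tolerates the few-view, noisy regimes described in the abstract.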
Heuristic optimization in penumbral image for high resolution reconstructed image
Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.
2010-10-15
Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector is mathematically determined by the aperture size, object size, and magnification. Conventional reconstruction methods are very sensitive to noise. On the other hand, the heuristic reconstruction method is very tolerant of noise. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose optimization of the aperture size for neutron penumbral imaging.
Edge-Preserving PET Image Reconstruction Using Trust Optimization Transfer
Wang, Guobao; Qi, Jinyi
2014-01-01
Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization. The most commonly used quadratic penalty often over-smoothes sharp edges and fine features in reconstructed images, while non-quadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or non-smooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees the algorithm to descend monotonically; optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than the current state-of-the-art algorithms for the non-smooth ℓ1 regularization. PMID:25438302
Edge-preserving PET image reconstruction using trust optimization transfer.
Wang, Guobao; Qi, Jinyi
2015-04-01
Iterative image reconstruction for positron emission tomography can improve image quality by using spatial regularization. The most commonly used quadratic penalty often oversmoothes sharp edges and fine features in reconstructed images, while nonquadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or nonsmooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees the algorithm to descend monotonically; optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3-D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than the current state-of-the-art algorithms for the nonsmooth l1 regularization.
Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.
2014-01-01
Purpose: Introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for MRI-induced radiofrequency (RF) heating evaluation. Methods: A compressed sensing approach that exploits sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and retrospectively undersampled data acquired in ex vivo and in vivo studies by comparing performance with previously proposed techniques. Results: The proposed complex difference constrained compressed sensing reconstruction method improved the reconstruction of smooth and local PRF temperature change images compared to various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion: Complex difference based compressed sensing with utilization of a fully-sampled baseline image improves the reconstruction accuracy for accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099
Improving tritium exposure reconstructions using accelerator mass spectrometry
Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.
2010-01-01
Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274
Improving tritium exposure reconstructions using accelerator mass spectrometry.
Love, A H; Hunt, J R; Vogel, J S; Knezovich, J P
2004-05-01
Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure.
CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization
Chen, Zijia; Zhou, Linghong
2015-01-01
Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections used to reconstruct CT images, also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examinations. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structural information better, than other existing reconstruction methods. PMID:26089962
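FISTA, the acceleration technique named in this abstract, is a generic momentum scheme for proximal gradient methods. The sketch below is illustrative only: it uses a plain l1 proximal step as a simplified stand-in for the adaptive TpV regularizer, and all names are ours, not the authors'.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum extrapolation is what distinguishes FISTA from plain proximal gradient descent and yields the accelerated O(1/k²) convergence rate.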
Improved fat-water reconstruction algorithm with graphics hardware acceleration.
Johnson, David H; Narayan, Sreenath; Flask, Chris A; Wilson, David L
2010-02-01
To develop a fast and robust Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares (IDEAL) reconstruction algorithm using graphics processor unit (GPU) computation. The fat-water reconstruction was expedited by vectorizing the fat-water parameter estimation, which was implemented on a graphics card to evaluate potential speed increases due to data-parallelization. In addition, we vectorized and compared Brent's method with golden section search for the optimization of the unknown field inhomogeneity parameter (psi) in the IDEAL equations. The algorithm was made more robust to fat-water ambiguities using a modified planar extrapolation (MPE) of psi algorithm. As compared to simple planar extrapolation (PE), the use of an averaging filter in MPE made the reconstruction more robust to neighborhoods poorly fit by a two-dimensional plane. Fat-water reconstruction time was reduced by up to a factor of 11.6 on a GPU as compared to CPU-only reconstruction. The MPE algorithms incorrectly assigned fewer pixels than PE using careful manual correction as a gold standard (0.7% versus 4.5%; P < 10^-4). Brent's method used fewer iterations than golden section search in the vast majority of pixels (6.8 ± 1.5 versus 9.6 ± 1.6 iterations). Data sets acquired on a high field scanner can be quickly and robustly reconstructed using our algorithm. A GPU implementation results in significant time savings, which will become increasingly important with the trend toward high resolution mouse and human imaging.
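Golden-section search, one of the two 1-D optimizers compared above, can be sketched as follows; the quadratic objective in the usage test is an arbitrary stand-in for the field-inhomogeneity (psi) cost, not the IDEAL equations themselves.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b].
    (For clarity, interior function values are re-evaluated, not cached.)"""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0    # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, old d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [old c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```

Each iteration shrinks the bracket by a constant factor of about 0.618 using only one new function evaluation, which is why Brent's method (which adds parabolic interpolation) typically needs fewer iterations, as the abstract reports.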
Focusing criterion in DHM image reconstruction
NASA Astrophysics Data System (ADS)
Mihailescu, M.; Mihale, N.; Popescu, R. C.; Acasandrei, A.; Paun, I. A.; Dinescu, M.; Scarlat, E.
2015-02-01
This study presents the theoretical approach and practical results of a procedure for hologram reconstruction aimed at finding the optimally focused image of MG63 osteoblast-like cells cultivated on flat polymeric substrates. The morphology and dynamics of the cells are investigated by the digital holographic microscopy (DHM) technique. The reconstruction is performed digitally using an algorithm based on the scalar theory of diffraction in the Fresnel approximation. The quality of the 3D images of the cells depends crucially on the ability of the reconstruction chain to fit the parameters of the optical recording, particularly the focusing value. Our proposal for finding the focused image is based on decomposing the images into gray levels and analyzing their histograms; more precisely, the focusing criterion evaluates the form of this distribution.
Accelerator Test of an Imaging Calorimeter
NASA Technical Reports Server (NTRS)
Christl, Mark J.; Adams, James H., Jr.; Binns, R. W.; Derrickson, J. H.; Fountain, W. F.; Howell, L. W.; Gregory, J. C.; Hink, P. L.; Israel, M. H.; Kippen, R. M.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The Imaging Calorimeter for ACCESS (ICA) utilizes a thin sampling calorimeter concept for direct measurements of high-energy cosmic rays. The ICA design uses arrays of small scintillating fibers to measure the energy and trajectory of the produced cascades. A test instrument has been developed to study the performance of this concept at accelerator energies and for comparison with simulations. Two test exposures have been completed using a CERN test beam. Some results from the accelerator tests are presented.
Direct parallel image reconstructions for spiral trajectories using GRAPPA.
Heidemann, Robin M; Griswold, Mark A; Seiberlich, Nicole; Krüger, Gunnar; Kannengiesser, Stephan A R; Kiefer, Berthold; Wiggins, Graham; Wald, Lawrence L; Jakob, Peter M
2006-08-01
The use of spiral trajectories is an efficient way to cover a desired k-space partition in magnetic resonance imaging (MRI). Compared to conventional Cartesian k-space sampling, it allows faster acquisitions and results in a slight reduction of the high gradient demand in fast dynamic scans, such as in functional MRI (fMRI). However, spiral images are more susceptible to off-resonance effects that cause blurring artifacts and distortions of the point-spread function (PSF), and thereby degrade the image quality. Since off-resonance effects scale with the readout duration, the respective artifacts can be reduced by shortening the readout trajectory. Multishot experiments represent one approach to reduce these artifacts in spiral imaging, but result in longer scan times and potentially increased flow and motion artifacts. Parallel imaging methods are another promising approach to improve image quality through an increase in the acquisition speed. However, non-Cartesian parallel image reconstructions are known to be computationally time-consuming, which is prohibitive for clinical applications. In this study a new and fast approach for parallel image reconstructions for spiral imaging based on the generalized autocalibrating partially parallel acquisitions (GRAPPA) methodology is presented. With this approach the computational burden is reduced such that it becomes comparable to that needed in accelerated Cartesian procedures. The respective spiral images with two- to eightfold acceleration clearly benefit from the advantages of parallel imaging, such as enabling parallel MRI single-shot spiral imaging with the off-resonance behavior of multishot acquisitions. Copyright 2006 Wiley-Liss, Inc.
Geometric reconstruction using tracked ultrasound strain imaging
NASA Astrophysics Data System (ADS)
Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.
2013-03-01
The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real-time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and also of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and then tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were between approximately 1.5 and 2.4 cm³, similar to the CT volumes of 1.0 to 2.3 cm³. Future work will be done to robustly characterize the reconstruction accuracy of the system.
Reconstruction algorithm for improved ultrasound image quality.
Madore, Bruno; Meral, F Can
2012-02-01
A new algorithm is proposed for reconstructing raw RF data into ultrasound images. Previous delay-and-sum beamforming reconstruction algorithms are essentially one-dimensional, because a sum is performed across all receiving elements. In contrast, the present approach is two-dimensional, potentially allowing any time point from any receiving element to contribute to any pixel location. Computer-intensive matrix inversions are performed once, in advance, to create a reconstruction matrix that can be reused indefinitely for a given probe and imaging geometry. Individual images are generated through a single matrix multiplication with the raw RF data, without any need for separate envelope detection or gridding steps. Raw RF data sets were acquired using a commercially available digital ultrasound engine for three imaging geometries: a 64-element array with a rectangular field-of-view (FOV), the same probe with a sector-shaped FOV, and a 128-element array with rectangular FOV. The acquired data were reconstructed using our proposed method and a delay-and-sum beamforming algorithm for comparison purposes. Point spread function (PSF) measurements from metal wires in a water bath showed that the proposed method was able to reduce the size of the PSF and its spatial integral by about 20 to 38%. Images from a commercially available quality-assurance phantom had greater spatial resolution and contrast when reconstructed with the proposed approach.
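The core idea of this abstract, performing the expensive inversion once and reconstructing every frame by a single matrix multiplication, can be sketched with a generic linear forward model. The Tikhonov-regularized inverse below is our illustrative choice, not necessarily the authors' regularization:

```python
import numpy as np

def build_recon_matrix(H, eps=1e-6):
    """One-time, computation-heavy step: a Tikhonov-regularized
    least-squares inverse (H^T H + eps*I)^-1 H^T of a forward model H
    that maps pixel values to raw RF samples."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + eps * np.eye(n), H.T)

def reconstruct(R, rf_frame):
    """Per-frame step: a single matrix-vector product, reusable
    indefinitely for a fixed probe and imaging geometry."""
    return R @ rf_frame
```

Once `R` is cached, per-frame cost drops to one matrix product, which is the property that makes the approach practical despite the up-front inversion.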
Image reconstruction algorithms with wavelet filtering for optoacoustic imaging
NASA Astrophysics Data System (ADS)
Gawali, S.; Leggio, L.; Broadway, C.; González, P.; Sánchez, M.; Rodríguez, S.; Lamela, H.
2016-03-01
Optoacoustic imaging (OAI) is a hybrid biomedical imaging modality based on the generation and detection of ultrasound produced by illuminating the target tissue with laser light. Typically, laser light in the visible or near-infrared spectrum is used as the excitation source. OAI relies on image reconstruction algorithms that recover the spatial distribution of optical absorption in tissue. In this work, we apply a time-domain back-projection (BP) reconstruction algorithm together with wavelet filtering for point and line detection, respectively. A comparative study between point detection and integrated line detection has been carried out by evaluating their effects on the reconstructed image. Our results demonstrate that the proposed back-projection algorithm, when combined with wavelet filtering, efficiently reconstructs high-resolution images of absorbing spheres embedded in a non-absorbing medium.
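A naive time-domain back-projection of the kind referenced above can be sketched as follows; the geometry, sampling interval, and sound speed are illustrative assumptions, and the loop-based form favors clarity over speed:

```python
import numpy as np

def backproject(signals, det_pos, grid_x, grid_y, c=1500.0, dt=1e-8):
    """Naive time-domain back-projection: every pixel accumulates, from
    each detector, the signal sample at the acoustic time of flight.
    Sound speed c (m/s) and sampling interval dt (s) are toy values."""
    img = np.zeros((len(grid_y), len(grid_x)))
    for sig, (px, py) in zip(signals, det_pos):
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                k = int(round(np.hypot(x - px, y - py) / (c * dt)))
                if k < len(sig):
                    img[iy, ix] += sig[k]
    return img
```

Signals from all detectors add coherently only at pixels consistent with every time of flight, so an absorber location appears as a peak in the summed image.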
CT substitutes derived from MR images reconstructed with parallel imaging.
Johansson, Adam; Garpebring, Anders; Asklund, Thomas; Nyholm, Tufve
2014-08-01
Computed tomography (CT) substitute images can be generated from ultrashort echo time (UTE) MRI sequences with radial k-space sampling. These CT substitutes can be used as ordinary CT images for PET attenuation correction and radiotherapy dose calculations. Parallel imaging allows faster acquisition of magnetic resonance (MR) images by exploiting differences in receiver coil element sensitivities. This study investigates whether non-Cartesian parallel imaging reconstruction can be used to improve CT substitutes generated from shorter examination times. The authors used gridding as well as two non-Cartesian parallel imaging reconstruction methods, SPIRiT and CG-SENSE, to reconstruct radial UTE and gradient echo (GE) data into images of the head for 23 patients. For each patient, images were reconstructed from the full dataset and from a number of subsampled datasets. The subsampled datasets simulated shorter acquisition times by containing fewer radial k-space spokes (1000, 2000, 3000, 5000, and 10,000 spokes) than the full dataset (30,000 spokes). For each combination of patient, reconstruction method, and number of spokes, the reconstructed UTE and GE images were used to generate a CT substitute. Each CT substitute image was compared to a real CT image of the same patient. The mean absolute deviation between the CT numbers of the CT substitute and the real CT decreased when using SPIRiT as compared to gridding reconstruction. However, the reduction was small and the CT substitute algorithm was insensitive to moderate subsampling (≥ 5000 spokes) regardless of reconstruction method. For more severe subsampling (≤ 3000 spokes), corresponding to acquisition times less than a minute long, the CT substitute quality was deteriorated for all reconstruction methods but SPIRiT gave a reduction in the mean absolute deviation of down to 25 Hounsfield units compared to gridding. SPIRiT marginally improved the CT substitute quality for a given number of radial spokes as compared to gridding.
Experience With A Programmable Imaging Accelerator
NASA Astrophysics Data System (ADS)
England, Nick
1989-07-01
Workstations have a large number of advantages for use as a personal computing resource. Unfortunately, currently these machines do not have enough performance to provide interactive 2-D and 3-D imaging capability, and aren't likely to in the foreseeable future. Consequently, they must be accelerated in some fashion. Accelerators need to be physically, visually, and computationally integrated with the workstation to be of maximum effectiveness. Furthermore, the rapidly changing requirements and increasing functionality of today's applications demand a high level of flexibility, impossible to meet with a traditional hardwired image processor architecture. This paper will describe the development of one form of the new breed of imaging accelerator and experiences (and lessons learned) from its application to a variety of problems.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
Matenine, Dmitri; Goussard, Yves
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT.
Matenine, Dmitri; Goussard, Yves; Després, Philippe
2015-04-01
The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1-2 min and are compatible with the typical clinical workflow for nonreal-time applications. Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
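The TV-minimization component of an OSC-TV-style method can be illustrated by a few steepest-descent steps on a smoothed total-variation norm. This generic sketch is not the authors' GPU implementation or their exact subset-reduction schedule:

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of the smoothed isotropic TV norm sum(sqrt(|grad x|^2 + eps))."""
    gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences,
    gy = np.diff(x, axis=0, append=x[-1:, :])   # zero at the far edges
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / mag, gy / mag
    g = np.zeros_like(x)
    g[:, 0] += -px[:, 0]
    g[:, 1:] += px[:, :-1] - px[:, 1:]          # adjoint of the x-difference
    g[0, :] += -py[0, :]
    g[1:, :] += py[:-1, :] - py[1:, :]          # adjoint of the y-difference
    return g

def tv_descent(img, n_steps=20, step=0.05):
    """A few steepest-descent steps on the smoothed TV norm, of the kind
    interleaved with data-fidelity updates in regularized iterative CT."""
    x = img.astype(float).copy()
    for _ in range(n_steps):
        x -= step * tv_gradient(x)
    return x
```

In a full OSC-TV iteration, such a smoothing step would alternate with ordered-subsets data-fidelity updates; here only the regularization half is shown.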
Superresolution images reconstructed from aliased images
NASA Astrophysics Data System (ADS)
Vandewalle, Patrick; Susstrunk, Sabine E.; Vetterli, Martin
2003-06-01
In this paper, we present a simple method to almost quadruple the spatial resolution of aliased images. From a set of four low resolution, undersampled and shifted images, a new image is constructed with almost twice the resolution in each dimension. The resulting image is aliasing-free. A small aliasing-free part of the frequency domain of the images is used to compute the exact subpixel shifts. When the relative image positions are known, a higher resolution image can be constructed using the Papoulis-Gerchberg algorithm. The proposed method is tested in a simulation where all simulation parameters are well controlled, and where the resulting image can be compared with its original. The algorithm is also applied to real, noisy images from a digital camera. Both experiments show very good results.
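The Papoulis-Gerchberg algorithm used above alternates between enforcing the band-limit in Fourier space and the known samples in signal space. A 1-D sketch follows (the 2-D image case is analogous; the band and sampling pattern in the test are illustrative):

```python
import numpy as np

def papoulis_gerchberg(samples, known_mask, bandwidth, n_iter=3000):
    """Papoulis-Gerchberg extrapolation (1-D): alternately zero the
    out-of-band Fourier coefficients and re-impose the known samples."""
    x = np.where(known_mask, samples, 0.0).astype(complex)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[bandwidth:-bandwidth] = 0.0            # keep only |k| < bandwidth
        x = np.fft.ifft(X)
        x[known_mask] = samples[known_mask]      # re-impose known samples
    return x.real
```

Because both constraint sets are convex and the true signal lies in their intersection, the alternating projections converge to it when the missing samples are sufficiently few relative to the bandwidth.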
Accelerated diffusion spectrum imaging with compressed sensing using adaptive dictionaries.
Bilgic, Berkin; Setsompop, Kawin; Cohen-Adad, Julien; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2012-12-01
Diffusion spectrum imaging offers detailed information on complex distributions of intravoxel fiber orientations at the expense of extremely long imaging times (∼1 h). Recent work by Menzel et al. demonstrated successful recovery of diffusion probability density functions from sub-Nyquist sampled q-space by imposing sparsity constraints on the probability density functions under wavelet and total variation transforms. As the performance of compressed sensing reconstruction depends strongly on the level of sparsity in the selected transform space, a dictionary specifically tailored for diffusion probability density functions can yield higher fidelity results. To our knowledge, this work is the first application of adaptive dictionaries in diffusion spectrum imaging, whereby we reduce the scan time of whole brain diffusion spectrum imaging acquisition from 50 to 17 min while retaining high image quality. In vivo experiments were conducted with the 3T Connectome MRI scanner. The root-mean-square errors of the reconstructed "missing" diffusion images were calculated by comparing them to a gold standard dataset (obtained from acquiring 10 averages of diffusion images in these missing directions). The root-mean-square error from the proposed reconstruction method is up to two times lower than that of Menzel et al.'s method and is comparable to that of the fully-sampled 50-minute scan. Comparison of tractography solutions in 18 major white-matter pathways also indicated good agreement between the fully-sampled and 3-fold accelerated reconstructions. Further, we demonstrate that a dictionary trained using probability density functions from a single slice of a particular subject generalizes well to other slices from the same subject, as well as to slices from other subjects. Copyright © 2012 Wiley Periodicals, Inc.
CUDA accelerated method for motion correction in MR PROPELLER imaging.
Feng, Chaolu; Yang, Jingzhu; Zhao, Dazhe; Liu, Jiren
2013-10-01
In PROPELLER, raw data are collected in N strips, each located at the center of k-space and consisting of Mx sampling points in the frequency-encoding direction and L lines in the phase-encoding direction. Phase correction, rotation correction, and translation correction are used to remove artifacts caused by physiological motion and physical movement, but their time complexities reach O(Mx×Mx×L×N), O(N×RA×Mx×L×(Mx×L+RN×RN)), and O(N×(RN×RN+Mx×L)), where RN×RN is the coordinate space onto which each strip is gridded and RA denotes the rotation range. A CUDA-accelerated method is proposed in this paper to improve their performance. Although our method is implemented on a general PC with a GeForce 8800GT and an Intel Core(TM)2 E6550 2.33GHz, it can run directly on more modern GPUs and achieve a greater speedup ratio without being changed. Experiments demonstrate that (1) our CUDA-accelerated phase correction achieves exactly the same result as the non-accelerated implementation, (2) the results of our CUDA-accelerated rotation correction and translation correction differ only slightly from those of their non-accelerated implementations, (3) images reconstructed from the motion-correction results of the CUDA-accelerated methods proposed in this paper satisfy clinical requirements, and (4) the speedup ratio is close to 6.5. Copyright © 2013 Elsevier Inc. All rights reserved.
Image reconstruction by parametric cubic convolution
NASA Technical Reports Server (NTRS)
Park, S. K.; Schowengerdt, R. A.
1983-01-01
Cubic convolution, which has been discussed by Rifman and McKinnon (1974), was originally developed for the reconstruction of Landsat digital images. In the present investigation, the reconstruction properties of the one-parameter family of cubic convolution interpolation functions are considered and the image degradation associated with reasonable choices of this parameter is analyzed. With the aid of an analysis in the frequency domain it is demonstrated that in an image-independent sense there is an optimal value for this parameter. The optimal value is not the standard value commonly referenced in the literature. It is also demonstrated that in an image-dependent sense, cubic convolution can be adapted to any class of images characterized by a common energy spectrum.
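The one-parameter family analyzed in this paper has the standard piecewise-cubic kernel form. The sketch below defaults to a = -0.5, the value the frequency-domain analysis favors over the commonly cited a = -1:

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """One-parameter cubic convolution kernel: support |s| < 2,
    u(0) = 1 and u(1) = u(2) = 0, so it interpolates the samples."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s <= 1.0
    out[near] = (a + 2.0) * s[near] ** 3 - (a + 3.0) * s[near] ** 2 + 1.0
    far = (s > 1.0) & (s < 2.0)
    out[far] = a * (s[far] ** 3 - 5.0 * s[far] ** 2 + 8.0 * s[far] - 4.0)
    return out

def interpolate(samples, x, a=-0.5):
    """Reconstruct a uniformly sampled 1-D signal at fractional position x."""
    idx = np.arange(len(samples), dtype=float)
    return float(np.sum(samples * cubic_kernel(x - idx, a)))
```

For any a the kernel forms a partition of unity, so linear signals are reproduced exactly at interior positions; the choice of a controls how higher-frequency content is degraded.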
Proton computed tomography images with algebraic reconstruction
NASA Astrophysics Data System (ADS)
Bruzzi, M.; Civinini, C.; Scaringella, M.; Bonanno, D.; Brianzi, M.; Carpinelli, M.; Cirrone, G. A. P.; Cuttone, G.; Presti, D. Lo; Maccioni, G.; Pallotta, S.; Randazzo, N.; Romano, F.; Sipala, V.; Talamonti, C.; Vanzi, E.
2017-02-01
A prototype proton Computed Tomography (pCT) system for hadron therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with r.m.s. density resolutions down to 1% and spatial resolutions <1 mm, achieved within processing times of 15 min for a 512×512 pixel image, prove that this technique will be beneficial if used instead of X-CT in hadron therapy.
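The algebraic reconstruction family (ART/SART) to which BI-SART belongs is built on Kaczmarz-style row projections. Below is a minimal sketch of the basic cyclic iteration, without the most-likely-path modeling or GPU parallelism of the paper:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """Cyclic Kaczmarz (ART) iteration: successively project the current
    estimate onto the hyperplane of each measurement row a_i . x = b_i.
    SART-type variants average these row updates instead of applying
    them one at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

In CT, each row of `A` encodes one ray's intersection lengths with the image pixels; the row updates map naturally onto parallel hardware when grouped into subsets.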
Niu, Tianye; Zhu, Lei
2012-07-01
Recent advances in compressed sensing (CS) enable accurate CT image reconstruction from highly undersampled and noisy projection measurements, due to the sparsifiable feature of most CT images using total variation (TV). These novel reconstruction methods have demonstrated advantages in clinical applications where radiation dose reduction is critical, such as onboard cone-beam CT (CBCT) imaging in radiation therapy. The image reconstruction using CS is formulated as either a constrained problem to minimize the TV objective within a small and fixed data fidelity error, or an unconstrained problem to minimize the data fidelity error with TV regularization. However, the conventional solutions to the above two formulations are either computationally inefficient or involved with inconsistent regularization parameter tuning, which significantly limit the clinical use of CS-based iterative reconstruction. In this paper, we propose an optimization algorithm for CS reconstruction which overcomes the above two drawbacks. The data fidelity tolerance of CS reconstruction can be well estimated based on the measured data, as most of the projection errors are from Poisson noise after effective data correction for scatter and beam-hardening effects. We therefore adopt the TV optimization framework with a data fidelity constraint. To accelerate the convergence, we first convert such a constrained optimization using a logarithmic barrier method into a form similar to that of the conventional TV regularization based reconstruction but with an automatically adjusted penalty weight. The problem is then solved efficiently by gradient projection with an adaptive Barzilai-Borwein step-size selection scheme. The proposed algorithm is referred to as accelerated barrier optimization for CS (ABOCS), and evaluated using both digital and physical phantom studies. ABOCS directly estimates the data fidelity tolerance from the raw projection data. Therefore, as demonstrated in both digital Shepp
Niu, Tianye; Zhu, Lei
2012-01-01
Purpose: Recent advances in compressed sensing (CS) enable accurate CT image reconstruction from highly undersampled and noisy projection measurements, due to the sparsifiable feature of most CT images using total variation (TV). These novel reconstruction methods have demonstrated advantages in clinical applications where radiation dose reduction is critical, such as onboard cone-beam CT (CBCT) imaging in radiation therapy. The image reconstruction using CS is formulated as either a constrained problem to minimize the TV objective within a small and fixed data fidelity error, or an unconstrained problem to minimize the data fidelity error with TV regularization. However, the conventional solutions to the above two formulations are either computationally inefficient or involved with inconsistent regularization parameter tuning, which significantly limit the clinical use of CS-based iterative reconstruction. In this paper, we propose an optimization algorithm for CS reconstruction which overcomes the above two drawbacks. Methods: The data fidelity tolerance of CS reconstruction can be well estimated based on the measured data, as most of the projection errors are from Poisson noise after effective data correction for scatter and beam-hardening effects. We therefore adopt the TV optimization framework with a data fidelity constraint. To accelerate the convergence, we first convert such a constrained optimization using a logarithmic barrier method into a form similar to that of the conventional TV regularization based reconstruction but with an automatically adjusted penalty weight. The problem is then solved efficiently by gradient projection with an adaptive Barzilai–Borwein step-size selection scheme. The proposed algorithm is referred to as accelerated barrier optimization for CS (ABOCS), and evaluated using both digital and physical phantom studies. Results: ABOCS directly estimates the data fidelity tolerance from the raw projection data. Therefore, as
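The adaptive Barzilai-Borwein step-size selection used in ABOCS can be sketched as projected gradient descent with BB1 steps. The box constraint and the quadratic test problem below are illustrative assumptions, not the authors' full barrier formulation:

```python
import numpy as np

def bb_projected_gradient(grad, x0, n_iter=200, step0=1e-3, lower=0.0):
    """Projected gradient descent with Barzilai-Borwein (BB1) step sizes,
    projecting onto the box x >= lower (e.g. nonnegative attenuation)."""
    x = np.maximum(x0, lower)
    g = grad(x)
    x_new = np.maximum(x - step0 * g, lower)
    for _ in range(n_iter):
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        step = (s @ s) / sy if sy > 0 else step0   # BB1 step size
        x, g = x_new, g_new
        x_new = np.maximum(x - step * g, lower)
    return x_new
```

The BB step mimics a scalar approximation of the inverse Hessian using only successive iterates and gradients, which is why it accelerates plain gradient projection at negligible extra cost.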
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represents the smallest number of such units required to fit the data and representing the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represents the smallest number of such units required to fit the data and representing the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.
Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong
2016-01-01
Compressed sensing magnetic resonance imaging has shown great capacity for accelerating magnetic resonance imaging when an image can be sparsely represented. How the image is sparsified strongly affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction. In this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. The reconstruction is formulated as an l1-norm regularized problem and solved by an alternating-direction minimization with continuation algorithm. Experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
Medical Imaging Inspired Vertex Reconstruction at LHC
NASA Astrophysics Data System (ADS)
Hageböck, S.; von Toerne, E.
2012-12-01
Three-dimensional image reconstruction in medical applications (PET or x-ray CT) applies sophisticated filter algorithms to the linear trajectories of coincident photon pairs or x-rays, with the goal of reconstructing an image of an emitter density distribution. In a similar manner, tracks in particle physics originate from vertices that must be distinguished from background track combinations. This study investigates whether vertex reconstruction in high-energy proton collisions may benefit from medical imaging methods. A new vertex-finding method, the Medical Imaging Vertexer (MIV), is presented, based on a three-dimensional filtered backprojection algorithm, and compared to the open-source RAVE vertexing package. The performance of the vertex-finding algorithms is evaluated as a function of instantaneous luminosity using simulated LHC collisions. Tracks in these collisions are described by a simplified detector model inspired by the tracking performance of the LHC experiments. At high luminosities (25 pileup vertices and more), the medical imaging approach finds vertices with higher efficiency and purity than the RAVE “Adaptive Vertex Reconstructor” algorithm. It is also much faster when more than 25 vertices are to be reconstructed, because its CPU time rises linearly with the number of tracks, whereas it rises quadratically for the adaptive vertex fitter (AVR).
Imaging appearances of lateral ankle ligament reconstruction.
Chien, Alexander J; Jacobson, Jon A; Jamadar, David A; Brigido, Monica Kalume; Femino, John E; Hayes, Curtis W
2004-01-01
Six patients were retrospectively identified as having undergone lateral ligament reconstruction surgery. The surgical procedures were categorized into four groups: direct lateral ligament repair, peroneus brevis tendon rerouting, peroneus brevis tendon loop, and peroneus brevis tendon split and rerouting. At radiography and magnetic resonance (MR) imaging, the presence of one or more suture anchors in the region of the anterior talofibular ligament indicates direct ligament repair, whereas a fibular tunnel indicates peroneus brevis tendon rerouting or loop. Both ultrasonography (US) and MR imaging demonstrate rerouted tendons as part of lateral ankle reconstruction; however, MR imaging can also depict the rerouted tendon within an osseous tunnel if present, especially if T1-weighted sequences are used. Artifact from suture material may obscure the tendon at MR imaging but not at US. With both modalities, the integrity of the rerouted peroneus brevis tendon is best evaluated by following the tendon proximally from its distal attachment site, which typically remains unchanged. The rerouted tendon or portion of the tendon can then be traced proximally to its reattachment site. Familiarity with the surgical procedures most commonly used for lateral ankle ligament reconstruction, and with the imaging features of these procedures, is essential for avoiding diagnostic pitfalls and ensuring accurate assessment of the ligament reconstruction.
Iterative image reconstruction in spectral CT
NASA Astrophysics Data System (ADS)
Hernandez, Daniel; Michel, Eric; Kim, Hye S.; Kim, Jae G.; Han, Byung H.; Cho, Min H.; Lee, Soo Y.
2012-03-01
The scan time of spectral CTs is much longer than that of conventional CTs due to the limited number of x-ray photons detectable by photon-counting detectors. However, the spectral pixel information in spectral CT carries much richer information on the physiological and pathological status of the tissues than the CT number in conventional CT, which makes spectral CT one of the promising future imaging modalities. One simple way to reduce the scan time in spectral-CT imaging is to reduce the number of views in the acquisition of projection data. However, this may result in poorer SNR and strong streak artifacts, which can severely compromise image quality. In this work, spectral-CT projection data were obtained from a lab-built spectral CT consisting of a single CdTe photon-counting detector, a micro-focus x-ray tube, and scan mechanics. For the image reconstruction, we used two iterative image reconstruction methods, the simultaneous iterative reconstruction technique (SIRT) and total-variation minimization based on the conjugate gradient method (CG-TV), along with filtered back-projection (FBP), to compare image quality. From imaging of iodine-containing phantoms, we observed that SIRT and CG-TV are superior to FBP in terms of SNR and streak artifacts.
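The SIRT update mentioned in this record has a compact algebraic form. Below is a minimal dense-matrix sketch of the common row/column-sum-normalized SIRT variant on a toy system; the names are hypothetical and this is not the authors' spectral-CT code.

```python
import numpy as np

def sirt(A, b, iters=200):
    """Simultaneous Iterative Reconstruction Technique:
    x <- x + C A^T R (b - A x), with R and C the inverse row/column sums of A."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # row normalisation weights
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # column normalisation weights
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += C * (A.T @ (R * (b - A @ x)))       # simultaneous update of all voxels
    return x

# Tiny consistent system standing in for projection data
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x_rec = sirt(A, b)
```

Because every voxel is updated from all rays at once, the iteration parallelizes trivially, which is one reason SIRT-type methods suit few-view acquisitions.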
Image superresolution reconstruction via granular computing clustering.
Liu, Hongbing; Zhang, Fan; Wu, Chang-an; Huang, Jun
2014-01-01
The problem of generating a superresolution (SR) image from a single low-resolution (LR) input image is addressed via granular computing clustering in this paper. First, training images are regarded as SR images and partitioned into SR patches, which are downsized into LR patches; the training set is composed of the SR patches and the corresponding LR patches. Second, granular computing (GrC) clustering is proposed, using a hypersphere representation of granules and a fuzzy inclusion measure compounded from the operations between two granules. Third, the granule set (GS), comprising hypersphere granules of different granularities, is induced by GrC and used to form the relation between the LR image and the SR image by lasso. Experimental results showed that GrC achieved the lowest root mean square errors between the reconstructed SR image and the original image, compared with bicubic interpolation, sparse representation, and NNLasso.
NASA Astrophysics Data System (ADS)
Ting, Samuel T.
cine images. First, algorithmic and implementational approaches are proposed for reducing the computational time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows for practical and efficient deployment directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. It is shown that these techniques yield a holistic framework for efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques is provided from both theoretical and practical viewpoints, both of which may be of interest for future work.
Pang, Wai-Man; Qin, Jing; Lu, Yuqiang; Xie, Yongming; Chui, Chee-Kong; Heng, Pheng-Ann
2011-03-01
To accelerate the simultaneous algebraic reconstruction technique (SART) with motion compensation for fast, high-quality computed tomography reconstruction by exploiting CUDA-enabled GPUs. Two core techniques are proposed to fit SART into the CUDA architecture: (1) a ray-driven projection with hardware trilinear interpolation, and (2) a voxel-driven back-projection that avoids redundant computation by using CUDA shared memory. In both techniques we exploit the independence of each ray and each voxel, designing CUDA kernels that represent a ray in the projection and a voxel in the back-projection, respectively. Thus, significant parallelization and a large performance boost can be achieved. For motion compensation, we rectify each ray's direction during the projection and back-projection stages based on a known motion vector field. Extensive experiments demonstrate that the proposed techniques provide faster reconstruction without compromising image quality. The processing rate is nearly 100 projections per second, about 150 times faster than a CPU-based SART. The reconstructed image is compared against ground truth visually and quantitatively by peak signal-to-noise ratio (PSNR) and line profiles. We further evaluate the reconstruction quality using quantitative metrics such as signal-to-noise ratio (SNR) and mean-square error (MSE), all of which show that satisfactory results are achieved. The effects of major parameters such as the ray sampling interval and the relaxation parameter are also investigated in a series of experiments. A simulated dataset is used to test the effectiveness of our motion compensation technique; the results demonstrate that our reconstructed volume is free of undesirable artifacts such as blurring. Our proposed method has the potential to present a 3D CT volume to physicians immediately after the projection data are acquired.
Stochastic image reconstruction for a dual-particle imaging system
NASA Astrophysics Data System (ADS)
Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.
2016-02-01
Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.
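The stochastic origin ensembles (SOE) idea can be illustrated on a one-dimensional toy problem. The sketch below makes strong simplifying assumptions: each event's back-projected locus is reduced to a short list of candidate bins, the acceptance rule is a simple occupancy ratio, and no detector response is modeled; all names are hypothetical and this is not the DPI reconstruction code.

```python
import numpy as np

def soe_reconstruct(candidates, n_bins, sweeps=200, rng=None):
    """Toy stochastic origin ensembles: each event has candidate origin bins
    (its back-projected locus). Origins are moved one at a time and accepted
    with probability ~ (occupancy of destination)/(occupancy of source),
    so events migrate toward dense regions of the current ensemble."""
    rng = np.random.default_rng(rng)
    origins = np.array([rng.choice(c) for c in candidates])
    hist = np.bincount(origins, minlength=n_bins)
    for _ in range(sweeps):
        for i, cands in enumerate(candidates):
            old = origins[i]
            new = rng.choice(cands)
            if new == old:
                continue
            n_old = hist[old] - 1          # occupancy excluding the moved event
            n_new = hist[new]
            if n_new >= n_old or rng.random() < (n_new + 1e-9) / (n_old + 1e-9):
                hist[old] -= 1
                hist[new] += 1
                origins[i] = new
    return hist

# Toy data: a point source in bin 5; each event's locus covers {true bin, decoy bin}
rng = np.random.default_rng(1)
true_bin = 5
candidates = [[true_bin, rng.integers(0, 10)] for _ in range(200)]
img = soe_reconstruct(candidates, n_bins=10, sweeps=200, rng=2)
```

The appeal noted in the record carries over: only the event loci (a minimal system response) are needed, yet the ensemble concentrates on the common origin.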
Fast nonlinear image reconstruction for scanning impedance imaging.
Liu, Hongze; Hawkins, Aaron R; Schultz, Stephen M; Oliphant, Travis E
2008-03-01
Scanning (electrical) impedance imaging (SII) is a novel high-resolution imaging modality that has the potential of imaging the electrical properties of thin biological tissues. In this paper, we apply the reciprocity principle to the modeling of the SII system and develop a fast nonlinear inverse method for image reconstruction. The method is fast because it uses convolution to eliminate the requirement of a numerical solver for the 3-D electrostatic field in the SII system. Numerical results show that our approach can accurately reveal the exact conductivity distribution from the measured current map for different 2-D simulation phantoms. Experiments were also performed using our SII system for a piece of butterfly wing and breast cancer cells. Two-dimensional current images were measured and corresponding quantitative conductivity images were restored using our approach. The reconstructed images are quantitative and reveal details not present in the measured images.
Fast Image Reconstruction with L2-Regularization
Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar
2014-01-01
Purpose We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization on numerical phantom and in vivo data in three applications: (1) fast quantitative susceptibility mapping (QSM), (2) lipid artifact suppression in magnetic resonance spectroscopic imaging (MRSI), and (3) diffusion spectrum imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two-to-three-order-of-magnitude speed-up is demonstrated with similar reconstruction quality. Results The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, and the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab on a standard workstation. Conclusion For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or to L1-based iterative algorithms, without compromising image quality. PMID:24395184
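The closed-form L2 solution underlying such methods is one line of linear algebra. A generic sketch follows, with toy operators standing in for the paper's QSM/MRSI/DSI formulations; all names are hypothetical.

```python
import numpy as np

def l2_reconstruct(A, b, G, lam):
    """Closed-form solution of min_x ||A x - b||^2 + lam * ||G x||^2,
    i.e. x = (A^T A + lam G^T G)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + lam * (G.T @ G), A.T @ b)

# Undersampled toy system (fewer measurements than unknowns) with a
# first-difference regularizer favouring smooth solutions
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12))
G = np.eye(12) - np.eye(12, k=1)      # finite-difference operator
x = l2_reconstruct(A, A @ np.ones(12), G, lam=1e-3)
```

Because the solution is a single linear solve (or, for structured operators, a few FFTs), no iteration count or stopping rule is needed, which is the source of the speed-up the record reports.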
Integrated Image Reconstruction and Gradient Nonlinearity Correction
Tao, Shengzhen; Trzasko, Joshua D.; Shu, Yunhong; Huston, John; Bernstein, Matt A.
2014-01-01
Purpose To describe a model-based reconstruction strategy for routine magnetic resonance imaging (MRI) that accounts for gradient nonlinearity (GNL) during rather than after transformation to the image domain, and demonstrate that this approach reduces the spatial resolution loss that occurs during strictly image-domain GNL-correction. Methods After reviewing conventional GNL-correction methods, we propose a generic signal model for GNL-affected MRI acquisitions, discuss how it incorporates into contemporary image reconstruction platforms, and describe efficient non-uniform fast Fourier transform (NUFFT)-based computational routines for these. The impact of GNL-correction on spatial resolution by the conventional and proposed approaches is investigated on phantom data acquired at varying offsets from gradient isocenter, as well as on fully-sampled and (retrospectively) undersampled in vivo acquisitions. Results Phantom results demonstrate that resolution loss that occurs during GNL-correction is significantly less for the proposed strategy than for the standard approach at distances >10 cm from isocenter with a 35 cm FOV gradient coil. The in vivo results suggest that the proposed strategy better preserves fine anatomical detail than retrospective GNL-correction while offering comparable geometric correction. Conclusion Accounting for GNL during image reconstruction allows geometric distortion to be corrected with less spatial resolution loss than is typically observed with the conventional image domain correction strategy. PMID:25298258
Image reconstruction from photon sparse data
Mertens, Lena; Sonnleitner, Matthias; Leach, Jonathan; Agnew, Megan; Padgett, Miles J.
2017-01-01
We report an algorithm for reconstructing images when the average number of photons recorded per pixel is of order unity, i.e. photon-sparse data. The image optimisation algorithm minimises a cost function incorporating both a Poissonian log-likelihood term, based on the deviation of the reconstructed image from the measured data, and a regularization term, based upon the sum of the moduli of the second spatial derivatives of the reconstructed image pixel intensities. The balance between these two terms is set by a bootstrapping technique in which the target value of the log-likelihood term is deduced from a smoothed version of the original data. When compared to the original data, the processed images exhibit lower residuals with respect to the true object. We use photon-sparse data from two different experimental systems, one based on a single-photon avalanche photo-diode array and the other on a time-gated, intensified camera. However, this same processing technique could most likely be applied to any low-photon-number image, irrespective of how the data are collected. PMID:28169363
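The cost function this record describes can be written down directly. The sketch below evaluates the objective only (no optimiser or bootstrapping), drops the data-only constant from the Poisson log-likelihood, and clips the image to keep rates positive; all names are hypothetical.

```python
import numpy as np

def cost(img, data, alpha):
    """Negative Poisson log-likelihood of the reconstructed image given the
    measured counts, plus alpha times the sum of moduli of second spatial
    derivatives (the regularizer described in the abstract)."""
    img = np.clip(img, 1e-9, None)                  # Poisson rate must be positive
    neg_loglik = np.sum(img - data * np.log(img))   # up to a data-only constant
    d2x = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    d2y = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    reg = np.abs(d2x).sum() + np.abs(d2y).sum()
    return neg_loglik + alpha * reg

# A flat image has zero curvature penalty, so only the likelihood term remains
data = np.random.default_rng(0).poisson(1.0, size=(16, 16)).astype(float)
flat = np.full((16, 16), data.mean())
```

The second-derivative penalty leaves constant and linear intensity ramps unpenalised, which is why it suppresses pixel-to-pixel noise without flattening smooth gradients.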
Non-Cartesian Parallel Imaging Reconstruction
Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole
2014-01-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
Fast acceleration-encoded magnetic resonance imaging.
Forster, J; Sieverding, L; Breuer, J; Lutz, O; Schick, F
2001-01-01
Direct acceleration imaging with high spatial resolution was implemented and tested. The well-known principle of phase encoding motion components was applied. Suitable gradient switching provides a signal phase shift proportional to the acceleration perpendicular to the slice in the first scan of the sequences. An additional scan serving as a reference was recorded to compensate for phase effects due to magnetic field inhomogeneities. The first scan compensated for phase shifts from undesired first- and second-order motions; the second scan was completely insensitive to velocity and acceleration in all directions. Advantages of the proposed two-step technique over former approaches with Fourier acceleration encoding (with several phase encoding steps) are relatively short echo times and short total measuring times. On the other hand, the new approach does not allow the velocity or acceleration spectrum to be assessed simultaneously. The capabilities of the sequences were tested on a modern 1.5 T whole-body MR unit providing relatively high gradient amplitudes (25 mT/m) and short rise times (600 µs to maximum amplitude). Results from a mechanical acceleration phantom showed a standard deviation of 0.3 m/s² for sequences with an acceleration range between -12 and 12 m/s². This range covers the expected maximum acceleration in the human aorta of 10 m/s². Further tests were performed on a stenosis phantom with a variable volume flow rate to assess the flow characteristics and possible displacement artifacts of the sequences. Preliminary examinations of volunteers demonstrate the potential applicability of the technique in vivo.
Spectral Reconstruction for Obtaining Virtual Hyperspectral Images
NASA Astrophysics Data System (ADS)
Perez, G. J. P.; Castro, E. C.
2016-12-01
Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most readily available data come from multispectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in an integrated circuit, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectance bands of Landsat 8 are used, three visible (499 nm, 585 nm, and 670 nm) and one near-infrared (872 nm), together with a spectral library of ground elements acquired from the United States Geological Survey (USGS). The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one-nanometer resolution. Singular value decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. Spectral reconstruction is applied to test cases within the library consisting of vegetation communities. This technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
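The reconstruction step this record describes (an SVD basis from a spectral library, with coefficients fitted from a few measured bands) can be sketched as follows. The toy Gaussian "library" and band indices are illustrative stand-ins for the USGS library and the Landsat 8 bands; all names are hypothetical.

```python
import numpy as np

def reconstruct_spectrum(library, band_idx, band_vals, k=3):
    """Take the top-k left singular vectors of a spectral library as basis
    spectra, fit their coefficients to a few measured bands by least squares,
    and synthesise the full-resolution spectrum."""
    U, _, _ = np.linalg.svd(library, full_matrices=False)  # columns: basis spectra
    B = U[:, :k]
    coeffs, *_ = np.linalg.lstsq(B[band_idx, :], band_vals, rcond=None)
    return B @ coeffs

# Library of smooth toy "reflectance" spectra at 601 wavelengths (420-1020 nm, 1 nm step)
wl = np.linspace(420, 1020, 601)
library = np.stack([np.exp(-((wl - c) / 120.0) ** 2)
                    for c in (500, 650, 800, 950)], axis=1)
target = library @ np.array([0.5, 0.2, 0.3, 0.0])   # spectrum in the library's span
bands = np.array([79, 165, 250, 452])               # ~499, 585, 670, 872 nm samples
rec = reconstruct_spectrum(library, bands, target[bands], k=4)
```

When the true spectrum lies in the span of the library, a handful of well-placed bands suffices to pin down the basis coefficients, which is the premise behind the "virtual hyperspectral sensor".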
Sparse image reconstruction for molecular imaging.
Ting, Michael; Raich, Raviv; Hero, Alfred O
2009-06-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case where the system matrix H has low coherence; in our application, however, H is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
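The hard and soft thresholding rules that the paper's hybrid rule generalizes are easy to state; the hybrid rule itself is not reproduced here, since its exact form comes from the LAZE derivation in the paper. A minimal sketch:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft thresholding: shrink every entry toward zero by t
    (the lasso / iterative-shrinkage rule)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    """Hard thresholding: zero out entries with magnitude below t,
    keep the rest unchanged."""
    return np.where(np.abs(x) >= t, x, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])
```

Soft thresholding biases large coefficients (it shrinks them by t), while hard thresholding keeps them intact but is discontinuous; a hybrid rule that interpolates between the two aims to trade off these defects.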
Optimal Discretization Resolution in Algebraic Image Reconstruction
NASA Astrophysics Data System (ADS)
Sharif, Behzad; Kamalabadi, Farzad
2005-11-01
In this paper, we focus on data-limited tomographic imaging problems where the underlying linear inverse problem is ill-posed. A typical regularized reconstruction algorithm uses an algebraic formulation with a predetermined discretization resolution. If the selected resolution is too low, we may lose useful details of the underlying image; if it is too high, the reconstruction will be unstable and the representation will fit irrelevant features. In this work, two approaches are introduced to address this issue. The first approach uses Mallows' C_L method or generalized cross-validation; for each of the two methods, a joint estimator of the regularization parameter and discretization resolution is proposed and its asymptotic optimality is investigated. The second approach is a Bayesian estimator of the model order using a complexity-penalizing prior. Numerical experiments focus on a space imaging application with a set of limited-angle tomographic observations.
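Generalized cross-validation, one of the two criteria this record uses, can be sketched for a plain Tikhonov-regularized problem. This is a dense toy version, not the paper's joint resolution/regularization estimator; all names are hypothetical.

```python
import numpy as np

def gcv_score(A, b, lam):
    """Generalized cross-validation score for Tikhonov regularization:
    GCV(lam) = n * ||(I - H) b||^2 / tr(I - H)^2,
    with hat matrix H = A (A^T A + lam I)^{-1} A^T."""
    n = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    r = b - H @ b
    return n * (r @ r) / np.trace(np.eye(n) - H) ** 2

# Pick the regularization weight minimizing GCV on a noisy toy problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = A @ np.ones(20) + 0.1 * rng.standard_normal(50)
lams = [10.0 ** k for k in range(-6, 3)]
best = min(lams, key=lambda l: gcv_score(A, b, l))
```

GCV needs no knowledge of the noise level, which is why it pairs naturally with the joint model-order selection the record describes.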
Improved Reconstruction for MR Spectroscopic Imaging
Maudsley, Andrew A.
2009-01-01
Sensitivity limitations of in vivo magnetic resonance spectroscopic imaging (MRSI) require that the extent of spatial-frequency (k-space) sampling be limited, thereby reducing spatial resolution and increasing the effects of Gibbs ringing that is associated with the use of Fourier transform reconstruction. Additional problems occur in the spectral dimension, where quantitation of individual spectral components is made more difficult by the typically low signal-to-noise ratios, variable lineshapes, and baseline distortions, particularly in areas of significant magnetic field inhomogeneity. Given the potential of in vivo MRSI measurements for a number of clinical and biomedical research applications, there is considerable interest in improving the quality of the metabolite image reconstructions. In this report, a reconstruction method is described that makes use of parametric modeling and MRI-derived tissue distribution functions to enhance the MRSI spatial reconstruction. Additional preprocessing steps are also proposed to avoid difficulties associated with image regions containing spectra of inadequate quality, which are commonly present in the in vivo MRSI data. PMID:17518063
Speckle image reconstruction of the adaptive optics solar images.
Zhong, Libo; Tian, Yu; Rao, Changhui
2014-11-17
Speckle image reconstruction, in which the speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of adaptive optics (AO) compensation and the individual STF in each annulus is obtained from the corresponding Fried parameter calculated with the traditional spectral ratio method, is used in this paper to restore solar images corrected by an AO system. Reconstructions of solar images acquired by a 37-element AO system validate this method, and the image quality is clearly improved. Moreover, we found that the photometric accuracy of the reconstruction is field dependent owing to the influence of the AO correction. As the angular separation of the object from the AO lock point increases, the relative improvement becomes progressively greater and tends toward a constant level in regions far from the central field of view. Simulation results show that this phenomenon is mainly due to the angle-dependent disparity between the calculated STF and the real AO STF.
A computationally efficient superresolution image reconstruction algorithm.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution have not adequately addressed the computational and numerical issues of this ill-conditioned and typically underdetermined large-scale problem. We propose efficient block-circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend the derivation of the generalized cross-validation method for automatic calculation of regularization parameters to underdetermined systems. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward-looking infrared (FLIR) camera image sequence.
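The conjugate gradient iteration at the core of this method is short. The sketch below runs unpreconditioned CG on Tikhonov normal equations for a small dense toy problem, rather than the paper's block-circulant preconditioned superresolution system; all names are hypothetical.

```python
import numpy as np

def conjugate_gradient(apply_M, rhs, iters=50, tol=1e-10):
    """Plain conjugate gradients for M x = rhs, M symmetric positive definite.
    In the paper's setting a circulant preconditioner would be folded into the
    operator to cluster the eigenvalues and speed convergence."""
    x = np.zeros_like(rhs)
    r = rhs - apply_M(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = apply_M(p)
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Tikhonov-regularized normal equations: (A^T A + lam I) x = A^T b
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
lam = 0.5
x_cg = conjugate_gradient(lambda v: A.T @ (A @ v) + lam * v, A.T @ b)
```

Only matrix-vector products with the operator are required, so for convolution-structured A each iteration reduces to FFTs, which is what makes the circulant-preconditioned approach scale.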
Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai
2016-12-01
Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal number of iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal number of iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. Accordingly, 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded volumetric results comparable to those of the current standard SSFP sequence. Owing to its intrinsically low image acquisition time, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded volumetric results comparable to the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolator/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori spatial character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than the Nyquist limit would suggest. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
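The training step described above, per-class interpolation coefficients fitted by least squares over an ensemble of high-resolution training patches, might be sketched as follows; the gradient-based three-way classifier and the 2x2 patch size are hypothetical simplifications of the paper's pattern classification:

```python
import numpy as np

def context_class(patch, thresh=0.1):
    """Crude context classifier: flat vs. horizontal-edge vs. vertical-edge,
    based on mean absolute gradients (a stand-in for the paper's
    pattern classification step)."""
    gy = np.abs(np.diff(patch, axis=0)).mean()
    gx = np.abs(np.diff(patch, axis=1)).mean()
    if max(gx, gy) < thresh:
        return "flat"
    return "h_edge" if gy > gx else "v_edge"

def train_interpolators(lowres_patches, highres_centers):
    """Per-class least-squares interpolation weights: for each context
    class, solve w = argmin ||A w - b|| over the training ensemble, where
    rows of A are low-res patches and b holds the high-res target values."""
    groups = {}
    for p, t in zip(lowres_patches, highres_centers):
        groups.setdefault(context_class(p), []).append((p.ravel(), t))
    weights = {}
    for cls, pairs in groups.items():
        A = np.array([a for a, _ in pairs])
        b = np.array([t for _, t in pairs])
        weights[cls], *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights
```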
Propagation phasor approach for holographic image reconstruction
NASA Astrophysics Data System (ADS)
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-03-01
To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears.
NASA Astrophysics Data System (ADS)
Niu, Tianye; Ye, Xiaojing; Fruhauf, Quentin; Petrongolo, Michael; Zhu, Lei
2014-04-01
Recently, we proposed a new algorithm of accelerated barrier optimization compressed sensing (ABOCS) for iterative CT reconstruction. The previous implementation of ABOCS uses gradient projection (GP) with a Barzilai-Borwein (BB) step-size selection scheme (GP-BB) to search for the optimal solution. The algorithm does not converge stably due to its non-monotonic behavior. In this paper, we further improve the convergence of ABOCS using the unknown-parameter Nesterov (UPN) method and investigate the ABOCS reconstruction performance on clinical patient data. Comparison studies are carried out on reconstructions of computer simulation, a physical phantom and a head-and-neck patient. In all of these studies, the ABOCS results using UPN show more stable and faster convergence than those of the GP-BB method and a state-of-the-art Bregman-type method. As shown in the simulation study of the Shepp-Logan phantom, UPN achieves the same image quality as those of GP-BB and the Bregman-type methods, but reduces the iteration numbers by up to 50% and 90%, respectively. In the Catphan©600 phantom study, a high-quality image with relative reconstruction error (RRE) less than 3% compared to the full-view result is obtained using UPN with 17% projections (60 views). In the conventional filtered-backprojection reconstruction, the corresponding RRE is more than 15% on the same projection data. The superior performance of ABOCS with the UPN implementation is further demonstrated on the head-and-neck patient. Using 25% projections (91 views), the proposed method reduces the RRE from 21% as in the filtered backprojection (FBP) results to 7.3%. In conclusion, we propose UPN for ABOCS implementation. As compared to GP-BB and the Bregman-type methods, the new method significantly improves the convergence with higher stability and fewer iterations.
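The acceleration idea behind UPN can be illustrated with a plain Nesterov accelerated gradient loop on a least-squares objective; the actual ABOCS cost adds a barrier term and constraints not modeled in this sketch:

```python
import numpy as np

def nesterov_ls(A, b, n_iter=500):
    """Nesterov-accelerated gradient descent for min ||Ax - b||^2 / 2.
    The momentum sequence t_k gives the O(1/k^2) rate that plain gradient
    projection (GP-BB) lacks; step size is 1/L with L the Lipschitz
    constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = y - (A.T @ (A @ y - b)) / L      # gradient step at y
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2  # momentum schedule
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```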
NASA Astrophysics Data System (ADS)
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneity media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
Accelerated 3D catheter visualization from triplanar MR projection images.
Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias
2010-07-01
One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.
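The projection-fusion idea, recovering a 3D curve from 2D projection images, can be sketched as below; the assumption that the catheter is monotone in the shared coordinate, and the use of only two orthogonal planes, are toy simplifications of the paper's constrained 2D/3D curve fit:

```python
import numpy as np

def curve_from_biplane(xz_points, yz_points, z_samples):
    """Fuse two orthogonal 2D projections of a curve into 3D by
    interpolating each projection's in-plane coordinate at common
    z positions."""
    xz = xz_points[np.argsort(xz_points[:, 1])]   # sort by z
    yz = yz_points[np.argsort(yz_points[:, 1])]
    x = np.interp(z_samples, xz[:, 1], xz[:, 0])
    y = np.interp(z_samples, yz[:, 1], yz[:, 0])
    return np.column_stack([x, y, z_samples])
```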
Performance-based assessment of reconstructed images
Hanson, Kenneth
2009-01-01
During the early 1990s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk-detection task and a new, challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
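The detectability index derived from the area under the ROC curve follows the standard relation d_a = sqrt(2) * Phi^{-1}(AUC), valid under an equal-variance Gaussian decision-variable model:

```python
from statistics import NormalDist

def detectability_index(auc):
    """Detectability index d_a from the area under the ROC curve:
    d_a = sqrt(2) * Phi^{-1}(AUC), where Phi^{-1} is the inverse
    standard normal CDF. AUC = 0.5 (pure guessing) gives d_a = 0."""
    return 2 ** 0.5 * NormalDist().inv_cdf(auc)
```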
HYPR: constrained reconstruction for enhanced SNR in dynamic medical imaging
NASA Astrophysics Data System (ADS)
Mistretta, C.; Wieben, O.; Velikina, J.; Wu, Y.; Johnson, K.; Korosec, F.; Unal, O.; Chen, G.; Fain, S.; Christian, B.; Nalcioglu, O.; Kruger, R. A.; Block, W.; Samsonov, A.; Speidel, M.; Van Lysel, M.; Rowley, H.; Supanich, M.; Turski, P.; Wu, Yan; Holmes, J.; Kecskemeti, S.; Moran, C.; O'Halloran, R.; Keith, L.; Alexander, A.; Brodsky, E.; Lee, J. E.; Hall, T.; Zagzebski, J.
2008-03-01
During the last eight years our group has developed radial acquisitions with angular undersampling factors of several hundred that accelerate MRI in selected applications. As with all previous acceleration techniques, SNR typically falls at least as fast as the inverse square root of the undersampling factor. This limits the SNR available to support the small voxels that these methods can image over short time intervals in applications like time-resolved contrast-enhanced MR angiography (CE-MRA). Instead of processing each time interval independently, we have developed constrained reconstruction methods that exploit the significant correlation between temporal sampling points. A broad class of methods, termed HighlY Constrained Back PRojection (HYPR), generalizes this concept to other modalities and sampling dimensions.
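The core HYPR operation, constraining each undersampled time frame by a high-SNR composite built from many frames, reduces to a multiplicative ratio update; in this toy sketch, low-pass images stand in for the unfiltered backprojections of the actual radial acquisition:

```python
import numpy as np

def hypr_frame(composite, frame_lowres, comp_lowres, eps=1e-8):
    """HYPR-style constrained update: multiply a high-SNR composite by the
    ratio of a heavily undersampled (blurry) frame estimate to the
    equivalently blurred composite. Toy version; real HYPR forms the ratio
    from unfiltered backprojections of the frame and composite sinograms."""
    return composite * (frame_lowres / (comp_lowres + eps))
```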
Hyperspectral image reconstruction for diffuse optical tomography
Larusson, Fridrik; Fantini, Sergio; Miller, Eric L.
2011-01-01
We explore the development and performance of algorithms for hyperspectral diffuse optical tomography (DOT) for which data from hundreds of wavelengths are collected and used to determine the concentration distribution of chromophores in the medium under investigation. An efficient method is detailed for forming the images using iterative algorithms applied to a linearized Born approximation model assuming the scattering coefficient is spatially constant and known. The L-surface framework is employed to select optimal regularization parameters for the inverse problem. We report image reconstructions using 126 wavelengths with estimation error in simulations as low as 0.05 and mean square error of experimental data of 0.18 and 0.29 for ink and dye concentrations, respectively, an improvement over reconstructions using fewer specifically chosen wavelengths. PMID:21483616
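A linearized-Born reconstruction of this kind reduces to regularized least squares; the sketch below uses plain Tikhonov regularization and a crude minimum-product corner criterion as a stand-in for the paper's L-surface framework (which operates jointly over wavelengths):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: x = argmin ||Ax - b||^2 + lam ||x||^2,
    solved via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def corner_lambda(A, b, lams):
    """Pick lambda near the L-curve 'corner'; here simply the candidate
    minimizing the product of residual norm and solution norm
    (a crude hypothetical stand-in for the L-surface criterion)."""
    def score(lam):
        x = tikhonov(A, b, lam)
        return np.linalg.norm(A @ x - b) * np.linalg.norm(x)
    return min(lams, key=score)
```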
Akçakaya, Mehmet; Basha, Tamer A; Chan, Raymond H; Manning, Warren J; Nezafat, Reza
2014-02-01
To enable accelerated isotropic sub-millimeter whole-heart coronary MRI within a 6-min acquisition and to compare this with a current state-of-the-art accelerated imaging technique at acceleration rates beyond what is used clinically. Coronary MRI still faces major challenges, including lengthy acquisition time, low signal-to-noise ratio (SNR), and suboptimal spatial resolution. Higher spatial resolution in the sub-millimeter range is desirable, but this results in increased acquisition time and lower SNR, hindering its clinical implementation. In this study, we sought to use an advanced B1-weighted compressed sensing technique for highly accelerated sub-millimeter whole-heart coronary MRI, and to compare the results to parallel imaging, the current state of the art, where both techniques were used at acceleration rates beyond what is used clinically. Two whole-heart coronary MRI datasets were acquired in seven healthy adult subjects (30.3 ± 12.1 years; 3 men), using prospective 6-fold acceleration, with random undersampling for the proposed compressed sensing technique and with uniform undersampling for sensitivity encoding reconstruction. Reconstructed images were qualitatively compared in terms of image scores and perceived SNR on a four-point scale (1 = poor, 4 = excellent) by an experienced blinded reader. The proposed technique resulted in images with clear visualization of all coronary branches. Overall image quality and perceived SNR of the compressed sensing images were significantly higher than those of parallel imaging (P = 0.03 for both), which suffered from noise amplification artifacts due to the reduced SNR. The proposed compressed sensing-based reconstruction and acquisition technique for sub-millimeter whole-heart coronary MRI provides 6-fold acceleration, where it outperforms parallel imaging with uniform undersampling. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Li, Zengguang; Xi, Xiaoqi; Han, Yu; Yan, Bin; Li, Lei
2016-10-01
The circle-plus-line trajectory satisfies the exact-reconstruction data sufficiency condition, and can be applied in C-arm X-ray computed tomography (CT) systems to increase reconstruction image quality at large cone angles. The m-line reconstruction algorithm is adopted for this trajectory. The selection of the direction of m-lines is quite flexible, and the m-line algorithm needs less data for accurate reconstruction than FDK-type algorithms. However, the computational complexity of the algorithm is too high for efficient serial processing. The reconstruction speed has become an important issue limiting its practical application, so accelerating the algorithm is of great value. Compared with other hardware accelerations, the graphics processing unit (GPU) has become the mainstream in CT image reconstruction. GPU acceleration has already proved effective for FDK-type algorithms, but the implementation of the m-line algorithm's acceleration for the circle-plus-line trajectory differs from the FDK algorithm, so the parallelism of the circle-plus-line algorithm must be analyzed to design an appropriate acceleration strategy. The implementation can be divided into the following steps. First, selecting m-lines to cover the entire object to be reconstructed; second, calculating the differentiated backprojection at each point on the m-lines; third, performing Hilbert filtering along the m-line direction; finally, resampling the m-line reconstruction results in three dimensions to obtain the reconstruction in Cartesian coordinates. In this paper, we design reasonable GPU acceleration strategies for each step to improve the reconstruction speed as much as possible. The main contribution is to design an appropriate acceleration strategy for the circle-plus-line trajectory m-line reconstruction algorithm. A Shepp-Logan phantom is used to simulate the experiment on a single K20 GPU. The
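The Hilbert filtering step along each m-line can be sketched as a frequency-domain multiplication by -i*sign(k); this one-dimensional version omits the differentiated backprojection and resampling steps that precede and follow it:

```python
import numpy as np

def hilbert_filter(f):
    """Hilbert-type filtering of a 1D signal via the FFT: multiply the
    spectrum by -i*sign(frequency). Maps cos to sin; the DC component
    is zeroed (sign(0) = 0)."""
    k = np.fft.fftfreq(len(f))
    H = -1j * np.sign(k)
    return np.fft.ifft(np.fft.fft(f) * H).real
```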
Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging
NASA Astrophysics Data System (ADS)
Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke
2011-12-01
In this paper we present the results of experimental investigations using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary approach to the investigation of artworks which is noninvasive and nondestructive and can be applied in situ. Four major projects will be discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creating a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high energy-based techniques.
Deep Reconstruction Models for Image Set Classification.
Hayat, Munawar; Bennamoun, Mohammed; An, Senjian
2015-04-01
Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.
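The minimum-reconstruction-error decision rule at the heart of the voting strategies can be sketched generically; the callables standing in for trained class-specific DRMs are hypothetical placeholders:

```python
import numpy as np

def classify_by_reconstruction_error(x, models):
    """Assign x to the class whose model reconstructs it with the smallest
    error. `models` maps class label -> reconstruction callable (here a
    stand-in for a trained class-specific deep reconstruction model)."""
    errs = {c: np.linalg.norm(x - m(x)) for c, m in models.items()}
    return min(errs, key=errs.get)
```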
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
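The underlying compression operation is an SVD projection of the coil dimension; the sketch below shows the global (non-spatially-varying) version, without the per-location decomposition and alignment step that the paper adds on top:

```python
import numpy as np

def compress_coils(data, n_virtual):
    """SVD-based coil compression: stack k-space samples (rows) by physical
    coil (columns) and project onto the top right singular vectors to get
    virtual coils that capture most of the signal energy."""
    samples = data.reshape(-1, data.shape[-1])      # (n_samples, n_coils)
    _, _, vh = np.linalg.svd(samples, full_matrices=False)
    return data @ vh[:n_virtual].conj().T           # (..., n_virtual)
```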
A Sparse Reconstruction Algorithm for Ultrasonic Images in Nondestructive Testing
Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Junior, Flávio Neves; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst
2015-01-01
Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with that of four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
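Regularized least squares with an l1 norm is typically solved by iterative soft-thresholding; the sketch below is a generic ISTA loop, not the authors' specific solver:

```python
import numpy as np

def ista(A, b, lam, n_iter=300):
    """Iterative soft-thresholding (ISTA) for
    min ||Ax - b||^2 / 2 + lam * ||x||_1: a gradient step with step size
    1/L followed by the soft-threshold proximal operator of the l1 norm."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x
```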
Intraoperative imaging in orbital and midface reconstruction.
Wilde, Frank; Schramm, Alexander
2014-10-01
The orbit is very often affected by injuries which can leave patients not only with esthetic deficits, but also with functional impairments if reconstruction is inadequate. Computer-assisted surgery helps to achieve predictable outcomes in reconstruction. Today, intraoperative three-dimensional (3D) imaging is an important element in the workflow of computer-assisted orbital surgery. Clinical and radiological diagnosis by means of computed tomography is followed by preoperative computer-assisted planning to define and simulate the desired outcome of reconstruction. In difficult cases, intraoperative navigation helps in the implementation of procedure plans at the site of surgery. Intraoperative 3D imaging then allows an intraoperative final control to be made and the outcome of the surgery to be validated. Today, this is preferably done using 3D C-arm devices based on cone beam computed tomography. They help to avoid malpositioning of bone fragments and/or inserted implants assuring the quality of complex operations and reducing the number of secondary interventions necessary.
Spatially adaptive regularized iterative high-resolution image reconstruction algorithm
NASA Astrophysics Data System (ADS)
Lim, Won Bae; Park, Min K.; Kang, Moon Gi
2000-12-01
High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable; thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least squares minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The
Motion correction based reconstruction method for compressively sampled cardiac MR imaging.
Ahmed, Abdul Haseeb; Qureshi, Ijaz M; Shah, Jawad Ali; Zaheer, Muhammad
2017-02-01
Respiratory motion during magnetic resonance (MR) acquisition causes strong blurring artifacts in the reconstructed images. These artifacts become more pronounced when fast imaging reconstruction techniques like compressed sensing (CS) are used. Recently, MR reconstruction techniques based on CS have been developed to provide good-quality sparse images from highly under-sampled k-space data. To maximize the benefits of CS, it is natural to use CS with motion-corrected samples. In this paper, we propose a novel CS-based motion-corrected image reconstruction technique. First, k-space data are assigned to different respiratory states with the help of a frequency-domain phase correlation method. Then, multiple sparsity constraints are used to provide good-quality reconstructed cardiac cine images from the highly under-sampled k-space data. The proposed method exploits the multiple sparsity constraints, in combination with a demons-based registration technique and a novel reconstruction technique, to provide the final motion-free images. The proposed method is very simple to implement in clinical settings as compared to existing motion-corrected methods. The performance of the proposed method is examined using simulated data and clinical data. Results show that this method performs better than CS-based reconstruction of cardiac cine images without motion correction. Different acceleration rates have been used to show the performance of the proposed method. Copyright © 2016 Elsevier Inc. All rights reserved.
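The frequency-domain phase correlation used to bin data by respiratory state estimates translations from the normalized cross-power spectrum; a 1D integer-shift version, as a sketch of the principle rather than the paper's implementation:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two 1D signals via phase
    correlation: the inverse FFT of the normalized cross-power spectrum
    peaks at the shift. Returns s such that np.roll(b, s) aligns with a."""
    Fa, Fb = np.fft.fft(a), np.fft.fft(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft(cross).real
    shift = int(np.argmax(corr))
    if shift > len(a) // 2:                 # wrap to a signed shift
        shift -= len(a)
    return shift
```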
Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu
2015-07-21
Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
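The orthogonality constraint is what makes each subproblem in the method above exact: sparse coding reduces to hard-thresholding the analysis coefficients, and the dictionary update becomes an orthogonal Procrustes problem solved by one SVD. A minimal sketch under those assumptions (function names and sizes are illustrative, and the k-space data-fill step of the full algorithm is omitted):

```python
import numpy as np

def hard_threshold(c, s):
    """Keep the s largest-magnitude entries per column, zero the rest."""
    out = np.zeros_like(c)
    idx = np.argsort(-np.abs(c), axis=0)[:s]
    cols = np.arange(c.shape[1])
    out[idx, cols] = c[idx, cols]
    return out

def orthogonal_dictionary_learning(X, s=4, iters=20):
    """Alternate exact sparse coding (trivial for an orthogonal D) with a
    Procrustes dictionary update via SVD. X holds one image patch per column."""
    n = X.shape[0]
    D = np.eye(n)
    for _ in range(iters):
        A = hard_threshold(D.T @ X, s)     # exact s-sparse code, since D^T D = I
        U, _, Vt = np.linalg.svd(X @ A.T)  # orthogonal Procrustes update
        D = U @ Vt
    return D, A
```

Each half-step never increases ||X - DA||_F, which is why this alternation avoids the NP-hard synthesis sparse coding that K-SVD must approximate.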
NASA Astrophysics Data System (ADS)
Plotnikov, Illya; Vourlidas, Angelos; Tylka, Allan J.; Pinto, Rui; Rouillard, Alexis; Tirole, Margot
2016-07-01
Identifying the physical mechanisms that produce the most energetic particles is a long-standing observational and theoretical challenge in astrophysics. Strong pressure waves have been proposed as efficient accelerators in both the solar and astrophysical contexts via various mechanisms such as diffusive-shock/shock-drift acceleration and betatron effects. In diffusive-shock acceleration, the efficacy of the process relies on shock waves being super-critical, i.e. moving several times faster than the characteristic speed of the medium they propagate through (a high Alfven Mach number), and on the orientation of the magnetic field upstream of the shock front. High-cadence, multipoint imaging using the NASA STEREO, SOHO and SDO spacecraft now permits the 3-D reconstruction of pressure waves formed during the eruption of coronal mass ejections. Using these unprecedented capabilities, recent studies have provided new insights into the timing and longitudinal extent of solar energetic particle events, including the first derivations of the time-dependent 3-D distribution of the expansion speed and Mach numbers of coronal shock waves. We will review these recent developments, focusing on particle events that occurred between 2011 and 2015. These new techniques also provide the opportunity to investigate the enigmatic long-duration gamma-ray events.
Fast contrast enhanced imaging with projection reconstruction
NASA Astrophysics Data System (ADS)
Peters, Dana Ceceilia
The use of contrast agents has led to great advances in magnetic resonance angiography (MRA). Here we present the first application of projection reconstruction to contrast enhanced MRA. In this research, the limited angle projection reconstruction (PR) trajectory is implemented to acquire higher resolution images per unit time than with conventional Fourier transform (FT) imaging. It is well known that as the FOV is reduced in conventional spin-warp imaging, higher resolution per unit time can be obtained, but aliasing may appear as a replication of outside material within the FOV. The limited angle PR acquisition also produces aliasing artifacts; these artifacts were unacceptable in X-ray CT but appear to be tolerable in MR angiography. Resolution throughout the FOV is determined by the projection readout resolution and not by the number of projections. As the number of projections is reduced, the resolution is unchanged, but low intensity artifacts appear. We present results of using limited angle PR in phantoms and in contrast-enhanced angiograms of humans.
Chen, Xiao; Yang, Yang; Cai, Xiaoying; Auger, Daniel A; Meyer, Craig H; Salerno, Michael; Epstein, Frederick H
2016-06-14
Cine Displacement Encoding with Stimulated Echoes (DENSE) provides accurate quantitative imaging of cardiac mechanics with rapid displacement and strain analysis; however, image acquisition times are relatively long. Compressed sensing (CS) with parallel imaging (PI) can generally provide high-quality images recovered from data sampled below the Nyquist rate. The purposes of the present study were to develop CS-PI-accelerated acquisition and reconstruction methods for cine DENSE, to assess their accuracy for cardiac imaging using retrospective undersampling, and to demonstrate their feasibility for prospectively-accelerated 2D cine DENSE imaging in a single breathhold. An accelerated cine DENSE sequence with variable-density spiral k-space sampling and golden angle rotations through time was implemented. A CS method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was combined with sensitivity encoding (SENSE) for the reconstruction of under-sampled multi-coil spiral data. Seven healthy volunteers and 7 patients underwent 2D cine DENSE imaging with fully-sampled acquisitions (14-26 heartbeats in duration) and with prospectively rate-2 and rate-4 accelerated acquisitions (14 and 8 heartbeats in duration). Retrospectively- and prospectively-accelerated data were reconstructed using BLOSM-SENSE and SENSE. Image quality of retrospectively-undersampled data was quantified using the relative root mean square error (rRMSE). Myocardial displacement and circumferential strain were computed for functional assessment, and linear correlation and Bland-Altman analyses were used to compare accelerated acquisitions to fully-sampled reference datasets. For retrospectively-undersampled data, BLOSM-SENSE provided similar or lower rRMSE at rate-2 and lower rRMSE at rate-4 acceleration compared to SENSE (p < 0.05, ANOVA). Similarly, for retrospective undersampling, BLOSM-SENSE provided similar or better correlation with reference displacement and strain data at rate-2 and
Prior image constrained image reconstruction in emerging computed tomography applications
NASA Astrophysics Data System (ADS)
Brunner, Stephen T.
Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
Otazo, Ricardo; Kim, Daniel; Axel, Leon; Sodickson, Daniel K.
2010-01-01
First-pass cardiac perfusion MRI is a natural candidate for compressed sensing acceleration since its representation in the combined temporal Fourier and spatial domain is sparse and the required incoherence can be effectively accomplished by k-t random undersampling. However, the required number of samples in practice (three to five times the number of sparse coefficients) limits the acceleration for compressed sensing alone. Parallel imaging may also be used to accelerate cardiac perfusion MRI, with acceleration factors ultimately limited by noise amplification. In this work, compressed sensing and parallel imaging are combined by merging the k-t SPARSE technique with SENSE reconstruction to substantially increase the acceleration rate for perfusion imaging. We also present a new theoretical framework for understanding the combination of k-t SPARSE with SENSE based on distributed compressed sensing theory. This framework, which identifies parallel imaging as a distributed multisensor implementation of compressed sensing, enables an estimate of feasible acceleration for the combined approach. We demonstrate feasibility of 8-fold acceleration in vivo with whole-heart coverage and high spatial and temporal resolution using standard coil arrays. The method is relatively insensitive to respiratory motion artifacts and presents similar temporal fidelity and image quality when compared to GRAPPA with 2-fold acceleration. PMID:20535813
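The underlying mechanism described above, sparsity in the combined spatial/temporal-frequency (x-f) domain plus consistency with the acquired k-t samples, can be sketched as a simple POCS-style loop. This is a single-coil simplification of our own devising; the paper's k-t SPARSE-SENSE additionally folds in coil sensitivities:

```python
import numpy as np

def soft(c, lam):
    """Complex soft-thresholding."""
    return np.exp(1j * np.angle(c)) * np.maximum(np.abs(c) - lam, 0)

def kt_sparse_recon(kt_data, mask, lam=0.02, iters=60):
    """Alternate soft-thresholding in x-f space (position vs. temporal
    frequency, where dynamic images are sparse) with exact data consistency
    at the acquired k-t samples.
    kt_data: (k, t) array, zero where unsampled; mask: boolean, same shape."""
    x = np.fft.ifft(kt_data, axis=0, norm="ortho")   # zero-filled x-t start
    for _ in range(iters):
        xf = np.fft.fft(x, axis=1, norm="ortho")     # to x-f space
        x = np.fft.ifft(soft(xf, lam), axis=1, norm="ortho")
        k = np.fft.fft(x, axis=0, norm="ortho")
        k[mask] = kt_data[mask]                      # enforce acquired samples
        x = np.fft.ifft(k, axis=0, norm="ortho")
    return x
```

Random k-t undersampling makes the aliasing incoherent in x-f space, which is exactly what the thresholding step suppresses.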
Spectral reconstruction for a 6 MV linear accelerator
NASA Astrophysics Data System (ADS)
Hernández-Bojórquez, M.; Martínez-Dávalos, A.; Lárraga, J. M.
2004-09-01
In this work we present the first results of an x-ray spectral reconstruction for a 6 MV Varian LINAC. The shape of the spectrum will be used in Monte Carlo treatment planning in order to improve the quality and accuracy of the calculated dose distributions. We based our simulation method on the formalism proposed by Francois et al. (1). In this method the spectrum is reconstructed from transmission measurements under narrow beam geometry for multiple attenuator thicknesses. These data allowed us to reconstruct the x-ray spectrum through direct solution of matrix systems using spectral algebra formalism.
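A generic version of such a transmission-based inversion can be sketched as a small linear system: with candidate energy bins of known attenuation coefficients, the measured transmissions are linear in the unknown bin fluences. The three-bin example and the `mu_bins` values below are hypothetical, and plain least squares stands in for the paper's spectral algebra formalism:

```python
import numpy as np

def reconstruct_spectrum(thicknesses, transmissions, mu_bins):
    """Invert T_i = sum_j phi_j * exp(-mu_j * t_i) for the relative fluence
    phi_j of each energy bin, given narrow-beam transmission measurements
    at multiple attenuator thicknesses t_i."""
    A = np.exp(-np.outer(thicknesses, mu_bins))       # transmission matrix
    phi, *_ = np.linalg.lstsq(A, transmissions, rcond=None)
    phi = np.clip(phi, 0, None)                       # fluence is nonnegative
    return phi / phi.sum()                            # normalised spectrum
```

In practice the exponential columns are nearly collinear, so the system is ill-conditioned and real reconstructions require regularization or constrained solvers rather than this bare least-squares step.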
HTGRAPPA: real-time B1-weighted image domain TGRAPPA reconstruction.
Saybasili, Haris; Kellman, Peter; Griswold, Mark A; Derbyshire, J Andrew; Guttman, Michael A
2009-06-01
The temporal generalized autocalibrating partially parallel acquisitions (TGRAPPA) algorithm for parallel MRI was modified for real-time low latency imaging in interventional procedures using image domain, B1-weighted reconstruction. GRAPPA coefficients were calculated in k-space, but applied in the image domain after appropriate transformation. Convolution-like operations in k-space were thus avoided, resulting in improved reconstruction speed. Image domain GRAPPA weights were combined into composite unmixing coefficients using adaptive B1-map estimates and optimal noise weighting. Images were reconstructed by pixel-by-pixel multiplication in the image domain, rather than time-consuming convolution operations in k-space. Reconstruction and weight-set calculation computations were parallelized and implemented on a general-purpose multicore architecture. The weight calculation was performed asynchronously to the real-time image reconstruction using a dedicated parallel processing thread. The weight-set coefficients were computed in an adaptive manner with updates linked to changes in the imaging scan plane. In this implementation, reconstruction speed is not dependent on acceleration rate or GRAPPA kernel size.
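The speed-up rests on the convolution theorem: applying a small k-space kernel by circular convolution is identical to one pixel-wise multiplication in the image domain by the kernel's scaled inverse DFT. A toy single-channel demonstration (the actual method combines per-coil weight sets, B1 maps, and noise weighting):

```python
import numpy as np

def grappa_kspace(kspace, kernel):
    """Apply a small kernel by explicit circular convolution in k-space."""
    H, W = kspace.shape
    kh, kw = kernel.shape
    out = np.zeros_like(kspace)
    for i in range(H):
        for j in range(W):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += kernel[a, b] * kspace[(i - a) % H, (j - b) % W]
    return out

def grappa_image_domain(kspace, kernel):
    """Same operation as one pixel-wise multiplication in the image domain."""
    H, W = kspace.shape
    pad = np.zeros((H, W), dtype=complex)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    weights = np.fft.ifft2(pad) * (H * W)   # image-domain unmixing map
    img = np.fft.ifft2(kspace)
    return np.fft.fft2(img * weights)
```

The `weights` map is computed once per scan-plane change, after which each frame costs a single multiplication per pixel instead of a kernel-sized convolution per k-space sample.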
Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-06-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstruction, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed.
Wang, Daifa; Qiao, Huiting; Song, Xiaolei; Fan, Yubo; Li, Deyu
2012-12-20
In fluorescence molecular tomography, the accurate and stable reconstruction of fluorescence-labeled targets remains a challenge for wide application of this imaging modality. Here we propose a two-step three-dimensional shape-based reconstruction method using graphics processing unit (GPU) acceleration. In this method, the fluorophore distribution is assumed as the sum of ellipsoids with piecewise-constant fluorescence intensities. The inverse problem is formulated as a constrained nonlinear least-squares problem with respect to shape parameters, leading to much less ill-posedness as the number of unknowns is greatly reduced. Considering that various shape parameters contribute differently to the boundary measurements, we use a two-step optimization algorithm to handle them in a distinctive way and also stabilize the reconstruction. Additionally, the GPU acceleration is employed for finite-element-method-based calculation of the objective function value and the Jacobian matrix, which reduces the total optimization time from around 10 min to less than 1 min. The numerical simulations show that our method can accurately reconstruct multiple targets of various shapes while the conventional voxel-based reconstruction cannot separate the nearby targets. Moreover, the two-step optimization can tolerate different initial values in the existence of noises, even when the number of targets is not known a priori. A physical phantom experiment further demonstrates the method's potential in practical applications.
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method while achieving image quality superior to interpolation-based in-painting techniques. The proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high-density objects. For comparison, prior images generated by a total-variation minimization (TVM) algorithm, as a realization of the fully iterative approach, were also utilized as intermediate images. The simulation and real experimental results show that PDART drastically accelerates the reconstruction of prior images of acceptable quality. Incorporating PDART-reconstructed prior images into the proposed MAR scheme achieved higher quality images than a conventional in-painting method, and the results were comparable to fully iterative MAR using high-quality TVM prior images.
Multiple-wavelength Color Digital Holography for Monochromatic Image Reconstruction
NASA Astrophysics Data System (ADS)
Cheremkhin, P. A.; Shevkunov, I. A.; Petrov, N. V.
In this paper, we consider the opposite problem, namely the use of color digital holograms simultaneously recorded at several wavelengths for the reconstruction of monochromatic images. A special feature of the procedure of monochromatic image reconstruction from a color hologram is the necessity of extracting information from separate spectral channels, with a corresponding overlaying of the obtained images to avoid mismatch of their spatial positions caused by the dependence of the numerical reconstruction methods on the laser wavelength.
FPGA Coprocessor for Accelerated Classification of Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.
2008-01-01
An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGA. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, yielding greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as those on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program and two newly formulated programs, for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations exhibited significant speedups over conventional software implementations running on general-purpose hardware.
Iterative feature refinement for accurate undersampled MR image reconstruction.
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-07
Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort toward this goal is the use of compressed sensing (CS) theory. Nevertheless, existing CSMRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method that restores meaningful structures and details otherwise discarded, without introducing much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets show that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
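A schematic of the three-step loop, with loudly hypothetical stand-ins: Fourier-domain soft-thresholding in place of the fixed sparsifying transform, a box-filtered residual as a crude proxy for the feature-refinement module, and a scalar-weighted k-space average for the Tikhonov-regularized data-consistency step:

```python
import numpy as np

def soft(c, lam):
    return np.exp(1j * np.angle(c)) * np.maximum(np.abs(c) - lam, 0)

def box3(r):
    """3x3 circular box filter."""
    return sum(np.roll(np.roll(r, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def ifr_cs(kdata, mask, lam=0.01, beta=100.0, iters=30):
    """Toy IFR-CS loop: denoise, refine features, enforce data consistency."""
    x = np.fft.ifft2(kdata, norm="ortho")            # zero-filled start
    for _ in range(iters):
        # (1) sparsity-promoting denoising
        xd = np.fft.ifft2(soft(np.fft.fft2(x, norm="ortho"), lam), norm="ortho")
        # (2) feature refinement: restore a smoothed part of what was removed
        x = xd + box3(x - xd)
        # (3) Tikhonov-weighted consistency with the sampled k-space points
        k = np.fft.fft2(x, norm="ortho")
        k[mask] = (k[mask] + beta * kdata[mask]) / (1 + beta)
        x = np.fft.ifft2(k, norm="ortho")
    return x
```

The point of step (2) is exactly the paper's motivation: pull meaningful detail back out of the denoising residual instead of discarding it.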
Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors
Chan, Stanley H.; Elgendy, Omar A.; Wang, Xiran
2016-01-01
A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of the photon arrivals, data acquired by QIS is a massive stream of random binary bits. The goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders of magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras. PMID:27879687
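The non-iterative pipeline can be illustrated with the single-photon-threshold case: each bit is 1 with probability 1 - exp(-lambda), so summing T frames and inverting that response gives the maximum-likelihood exposure estimate, which is then passed to a denoiser. The box filter below is a stand-in for the off-the-shelf denoiser used in the paper, and the transform shown is the truncated-Poisson MLE rather than the paper's exact pair of nonlinear transformations:

```python
import numpy as np

def qis_reconstruct(bits):
    """Non-iterative QIS sketch.
    bits: (T, H, W) array of one-bit frames. Sum the bits per pixel, invert
    the binary response P(bit=1) = 1 - exp(-lambda) via the MLE
    lambda = -log(1 - K/T), then denoise with a 3x3 box filter."""
    T = bits.shape[0]
    K = bits.sum(axis=0)
    K = np.clip(K, 0, T - 1)          # avoid log(0) at full saturation
    lam = -np.log1p(-K / T)           # MLE of the underlying exposure
    return sum(np.roll(np.roll(lam, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

Nothing here iterates, which is the source of the claimed speed advantage: one sum, one pointwise transform, one denoising pass.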
Structure assisted compressed sensing reconstruction of undersampled AFM images.
Oxvig, Christian Schou; Arildsen, Thomas; Larsen, Torben
2017-01-01
The use of compressed sensing in atomic force microscopy (AFM) can potentially speed-up image acquisition, lower probe-specimen interaction, or enable super resolution imaging. The idea in compressed sensing for AFM is to spatially undersample the specimen, i.e. only acquire a small fraction of the full image of it, and then use advanced computational techniques to reconstruct the remaining part of the image whenever this is possible. Our initial experiments have shown that it is possible to leverage inherent structure in acquired AFM images to improve image reconstruction. Thus, we have studied structure in the discrete cosine transform coefficients of typical AFM images. Based on this study, we propose a generic support structure model that may be used to improve the quality of the reconstructed AFM images. Furthermore, we propose a modification to the established iterative thresholding reconstruction algorithms that enables the use of our proposed structure model in the reconstruction process. Through a large set of reconstructions, the general reconstruction capability improvement achievable using our structured model is shown both quantitatively and qualitatively. Specifically, our experiments show that our proposed algorithm improves over established iterative thresholding algorithms by being able to reconstruct AFM images to a comparable quality using fewer measurements or equivalently obtaining a more detailed reconstruction for a fixed number of measurements. Copyright © 2016 Elsevier B.V. All rights reserved.
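The modification described above, biasing the coefficient selection of iterative thresholding toward a structured support, can be sketched as follows. The FFT stands in for the paper's DCT, and the Gaussian low-frequency prior is our illustrative support model, not the authors' proposed one:

```python
import numpy as np

def structured_iht(y, mask, s=40, prior_sigma=8.0, iters=100):
    """Iterative hard thresholding for inpainting spatially undersampled
    pixels, weighting coefficient selection toward low spatial frequencies.
    y: image with unmeasured pixels set to 0; mask: True where measured."""
    H, W = y.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(H) * H, np.fft.fftfreq(W) * W,
                         indexing="ij")
    prior = np.exp(-(fy**2 + fx**2) / (2 * prior_sigma**2))
    x = y.copy().astype(float)
    for _ in range(iters):
        x[mask] = y[mask]                        # enforce measured pixels
        c = np.fft.fft2(x)
        score = np.abs(c) * prior                # structure-weighted magnitude
        thresh = np.partition(score.ravel(), -s)[-s]
        c[score < thresh] = 0                    # keep s structured coefficients
        x = np.fft.ifft2(c).real
    x[mask] = y[mask]
    return x
```

The only change relative to plain IHT is the `prior` factor in the selection score, which is the spirit of the proposed support-structure modification.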
Sharma, Samir D; Fong, Caroline L; Tzung, Brian S; Law, Meng; Nayak, Krishna S
2013-09-01
The aim of this study was to determine to what degree current compressed sensing methods are capable of accelerating clinical magnetic resonance neuroimaging sequences. Two 2-dimensional clinical sequences were chosen for this study because of their long scan times. A pilot study was used to establish the sampling scheme and regularization parameter needed in compressed sensing reconstruction. These findings were used in a subsequent blinded study in which images reconstructed using compressed sensing were evaluated by 2 board-certified neuroradiologists. Image quality was evaluated at up to 10 anatomical features. The findings indicate that compressed sensing may provide 2-fold acceleration of certain clinical magnetic resonance neuroimaging sequences. A global ringing artifact and image blurring were identified as the 2 primary artifacts that would hinder the ability to confidently discern abnormality. Compressed sensing is able to moderately accelerate certain neuroimaging sequences without severe loss of clinically relevant information. For those sequences with coarser spatial resolution and/or at a higher acceleration factor, artifacts degrade the quality of the reconstructed image to a point where they are of little to no clinical value.
McClymont, Darryl; Teh, Irvin; Whittington, Hannah J.; Grau, Vicente
2015-01-01
Purpose: Diffusion MRI requires acquisition of multiple diffusion-weighted images, resulting in long scan times. Here, we investigate combining compressed sensing and a fast imaging sequence to dramatically reduce acquisition times in cardiac diffusion MRI. Methods: Fully sampled and prospectively undersampled diffusion tensor imaging data were acquired in five rat hearts at acceleration factors of between two and six using a fast spin echo (FSE) sequence. Images were reconstructed using a compressed sensing framework, enforcing sparsity by means of decomposition by adaptive dictionaries. A tensor was fit to the reconstructed images and fiber tractography was performed. Results: Acceleration factors of up to six were achieved, with a modest increase in root mean square error of mean apparent diffusion coefficient (ADC), fractional anisotropy (FA), and helix angle. At an acceleration factor of six, mean values of ADC and FA were within 2.5% and 5% of the ground truth, respectively. Marginal differences were observed in the fiber tracts. Conclusion: We developed a new k-space sampling strategy for acquiring prospectively undersampled diffusion-weighted data, and validated a novel compressed sensing reconstruction algorithm based on adaptive dictionaries. The k-space undersampling and FSE acquisition each reduced acquisition times by up to 6× and 8×, respectively, as compared to fully sampled spin echo imaging. Magn Reson Med 76:248-258, 2016. © 2015 Wiley Periodicals, Inc. PMID:26302363
Hollingsworth, Kieren G; Higgins, David M; McCallum, Michelle; Ward, Louise; Coombs, Anna; Straub, Volker
2014-12-01
Fat fraction measurement in muscular dystrophy has an important role to play in future therapy trials. Undersampled data acquisition reconstructed by combined compressed sensing and parallel imaging (CS-PI) can potentially reduce trial cost and improve compliance. These benefits are only gained from prospectively undersampled acquisitions. Eight patients with Becker muscular dystrophy were recruited and prospectively undersampled data at ratios of 3.65×, 4.94×, and 6.42× were acquired in addition to fully sampled data: equivalent coherent undersamplings were acquired for reconstruction with parallel imaging alone (PI). Fat fraction maps and maps of total signal were created using a combined compressed sensing/parallel imaging (CS-PI) reconstruction. The CS-PI reconstructions are of sufficient quality to allow muscle delineation at 3.65× and 4.94× undersampling but some muscles were obscured at 6.42×. When plotted against the fat fractions derived from fully sampled data, non-significant bias and 95% limits of agreement of 1.58%, 2.17% and 2.41% were found for the three CS-PI reconstructions, while a 3.36× PI reconstruction yields 2.78%, 1.8 times worse than the equivalent CS-PI reconstruction. Prospective undersampling and CS-PI reconstruction of muscle fat fraction mapping can be used to accelerate muscle fat fraction measurement in muscular dystrophy. © 2013 Wiley Periodicals, Inc.
Neural net classification and LMS reconstruction to halftone images
NASA Astrophysics Data System (ADS)
Chang, Pao-Chi; Yu, Che-Sheng
1998-01-01
The objective of this work is to reconstruct high quality gray-level images from halftone images, i.e., the inverse halftoning process. We develop high performance halftone reconstruction methods for several commonly used halftone techniques. For better reconstruction quality, image classification based on the halftone technique is placed before the reconstruction process, so that halftone reconstruction can be fine-tuned for each technique. The classification is based on enhanced 1D correlation of halftone images and processed with a three-layer back-propagation neural network. This classification method reached 100 percent accuracy in our experiments on a limited set of images processed by dispersed-dot ordered dithering, clustered-dot ordered dithering, constrained average, and error diffusion methods. For image reconstruction, we apply the least-mean-square (LMS) adaptive filtering algorithm, which discovers the optimal filter weights and mask shapes, and it yields very good reconstructed image quality. Error diffusion yields the best reconstruction quality among the halftone methods. In addition, the LMS method generates optimal image masks that differ significantly for each halftone method; these optimal masks can also be applied to more sophisticated reconstruction methods as default filter masks.
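The LMS stage adapts a filter mask so that a halftone neighbourhood predicts the original gray value; the Bayer dither, mask size, and step size below are illustrative choices, not the paper's settings:

```python
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def halftone(gray):
    """Ordered dithering with a 4x4 Bayer threshold matrix
    (image sides assumed divisible by 4)."""
    H, W = gray.shape
    thresh = np.tile(BAYER4, (H // 4, W // 4))
    return (gray > thresh).astype(float)

def lms_train(ht, gray, mask_size=5, mu=0.01, epochs=3):
    """LMS adaptation of a reconstruction mask: predict each gray pixel from
    its halftone neighbourhood and nudge the weights by mu * error * patch."""
    m = mask_size // 2
    w = np.zeros((mask_size, mask_size))
    H, W = ht.shape
    for _ in range(epochs):
        for i in range(m, H - m):
            for j in range(m, W - m):
                patch = ht[i - m:i + m + 1, j - m:j + m + 1]
                e = gray[i, j] - np.sum(w * patch)
                w += mu * e * patch
    return w
```

The learned mask is then applied as a sliding-window filter to new halftones of the same type; training per halftone technique is what the classification front-end enables.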
Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram
2010-01-01
MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794
Zhou, Yihang; Pandit, Prachi; Pedoia, Valentina; Rivoire, Julien; Wang, Yanhua; Liang, Dong; Li, Xiaojuan; Ying, Leslie
2015-01-01
Purpose: To accelerate T1ρ quantification in cartilage imaging using compressed sensing with iterative locally adaptive support detection combined with JSENSE. Methods: To reconstruct T1ρ images from accelerated acquisitions at different times of spin-lock (TSLs), we propose an approach that combines an advanced compressed sensing (CS) reconstruction technique, LAISD (Locally-Adaptive Iterative Support Detection), with an advanced parallel imaging technique, JSENSE. Specifically, the reconstruction process alternates iteratively among local support detection in the domain of principal component analysis, compressed sensing reconstruction of the image sequence, and sensitivity estimation with JSENSE. T1ρ quantification results from accelerated scans using the proposed method are evaluated using in vivo knee cartilage data from bilateral scans of three healthy volunteers. Results: T1ρ maps obtained from accelerated scans (acceleration factors of 3 and 3.5) using the proposed method were comparable to those from conventional full scans. The T1ρ errors in all compartments are below 1%, which is well below the in vivo reproducibility of cartilage T1ρ reported in previous studies. Conclusion: The proposed method can significantly accelerate the acquisition process of T1ρ quantification in human cartilage imaging without sacrificing accuracy, which will greatly facilitate the clinical translation of quantitative cartilage MRI. PMID:26010735
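The compressed-sensing reconstruction step in the alternation above relies on sparsity-promoting soft thresholding. As a hedged illustration, here is plain ISTA (iterative soft-thresholding) on a generic linear system, not the LAISD/JSENSE scheme itself; all names are assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.1, iters=200):
    """Minimal ISTA solver for  min 0.5*||Ax - b||^2 + lam*||x||_1,
    the sparsity-promoting inner step typical of CS reconstructions."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # gradient of data fidelity
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Support-detection variants such as LAISD additionally exempt the detected support from thresholding between outer iterations.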
Imaging, Reconstruction, And Display Of Corneal Topography
NASA Astrophysics Data System (ADS)
Klyce, Stephen D.; Wilson, Steven E.
1989-12-01
The cornea is the major refractive element in the eye; even minor surface distortions can produce a significant reduction in visual acuity. Standard clinical methods used to evaluate corneal shape include keratometry, which assumes the cornea is ellipsoidal in shape, and photokeratoscopy, which images a series of concentric light rings on the corneal surface. These methods fail to document many of the corneal distortions that can degrade visual acuity. Algorithms have been developed to reconstruct the three dimensional shape of the cornea from keratoscope images, and to present these data in the clinically useful display of color-coded contour maps of corneal surface power. This approach has been implemented on a new generation video keratoscope system (Computed Anatomy, Inc.) with rapid automatic digitization of the image rings by a rule-based approach. The system has found clinical use in the early diagnosis of corneal shape anomalies such as keratoconus and contact lens-induced corneal warpage, in the evaluation of cataract and corneal transplant procedures, and in the assessment of corneal refractive surgical procedures. Currently, ray tracing techniques are being used to correlate corneal surface topography with potential visual acuity in an effort to more fully understand the tolerances of corneal shape consistent with good vision and to help determine the site of dysfunction in the visually impaired.
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, where AndroidSfM estimates the pose of all photos by Structure-from-Motion and then uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
Gong, Enhao; Huang, Feng; Ying, Kui; Wu, Wenchuan; Wang, Shi; Yuan, Chun
2015-02-01
A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. This multicontrast scheme requires a long scan time. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans. However, several problems in existing methods remain unsolved. The goal of this work is to improve existing CS-PPI methods for multicontrast imaging, especially for two-dimensional imaging. If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. This study proposes to use the sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, extracted from previously reconstructed images, were applied to enhance the subsequent reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting the sharable information. Moreover, the advantage of PROMISE persists in the presence of interscan motion. Using the sharable information among multicontrast images can thus enhance CS-PPI with tolerance to motion. © 2014 Wiley Periodicals, Inc.
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast-developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of the iterative image reconstruction schemes commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performance are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
Reconstruction-based 3D/2D image registration.
Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo
2005-01-01
In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
Precision Pointing Reconstruction and Geometric Metadata Generation for Cassini Images
NASA Astrophysics Data System (ADS)
French, R. S.; Showalter, M. R.; Gordon, M. K.
2017-06-01
We are reconstructing accurate pointing for 400,000 images taken by Cassini at Saturn. The results will be provided to the public along with per-pixel metadata describing precise image contents such as geographical location and viewing geometry.
Zhang, Tao; Chowdhury, Shilpy; Lustig, Michael; Barth, Richard A; Alley, Marcus T; Grafendorfer, Thomas; Calderon, Paul D; Robb, Fraser J L; Pauly, John M; Vasanawala, Shreyas S
2014-07-01
To clinically deploy a combined parallel imaging and compressed sensing method with coil compression that achieves rapid image reconstruction, and to assess its performance in contrast-enhanced abdominal pediatric MRI. With Institutional Review Board approval and informed patient consent/assent, 29 consecutive pediatric patients were recruited. Dynamic contrast-enhanced MRI was acquired on a 3 Tesla scanner using a dedicated 32-channel pediatric coil and a three-dimensional SPGR sequence, with pseudo-random undersampling at a high acceleration (R = 7.2). Undersampled data were reconstructed with three methods: a traditional parallel imaging method and a combined parallel imaging and compressed sensing method with and without coil compression. The three sets of images were evaluated independently and blindly by two radiologists in one sitting, for overall image quality and delineation of anatomical structures. Wilcoxon tests were performed to test the hypothesis that there was no significant difference among the evaluations, and interobserver agreement was analyzed. Fast reconstruction with coil compression did not degrade image quality. The mean score for structural delineation with the fast reconstruction was 4.1 on a 5-point scale, significantly better (P < 0.05) than traditional parallel imaging (mean score 3.1). Fair to substantial interobserver agreement was reached in the structural delineation assessment. A fast combined parallel imaging and compressed sensing method is feasible in a pediatric clinical setting. Preliminary results suggest it may improve structural delineation over parallel imaging. © 2013 Wiley Periodicals, Inc.
Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan
2016-02-01
Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
Concurrent image and dose reconstruction for image guided radiation therapy
NASA Astrophysics Data System (ADS)
Sheng, Ke
Knowing the patient's actual position is essential for intensity modulated radiation therapy (IMRT), a procedure that uses tightened margins and escalated tumor doses. To eliminate geometric uncertainty in IMRT, daily imaging is preferred. The imaging dose, limited field of view, and imaging concurrency of MVCT (mega-voltage computerized tomography) are investigated in this work. By applying partial volume imaging (PVI), the imaging dose can be reduced for region-of-interest (ROI) imaging. The imaging dose and the image quality are quantitatively balanced with inverse imaging dose planning. With PVI, a 72% average imaging dose reduction was observed in a typical prostate patient case. The algebraic reconstruction technique (ART) based projection onto convex sets (POCS) shows higher robustness than filtered back projection when the available imaging data are incomplete and discontinuous. However, when the projection is continuous as in the actual delivery, a non-iterative wavelet-based multiresolution local tomography (WMLT) is able to achieve 1% accuracy within the ROI. The reduction of imaging dose depends on the size of the ROI. The improvement of concurrency is also discussed based on the combination of PVI and WMLT. Useful target images were acquired with treatment beams, and the temporal resolution can be increased to 20 seconds in tomotherapy. The data truncation problem with the portal imager was also studied. Results show that the image quality is not adversely affected by truncation when WMLT is employed. When online imaging is available, a perturbation dose calculation (PDC) that estimates the actual delivered dose is proposed. Derived as a correction to Fano's theorem, the PDC retains the first-order term of the density variation to account for internal and external anatomy changes. Although the change in dose distribution caused by internal organ motion is less than 1% for 6 MV beams, the external anatomy change has
Comparison of two image reconstruction algorithms for microwave tomography
NASA Astrophysics Data System (ADS)
Fhager, A.; Persson, M.
2005-06-01
Two image reconstruction algorithms for microwave tomography are compared and contrasted. One is a general, gradient-based minimization algorithm. The other is the chirp pulse microwave computed tomography (CP-MCT) method, which is highly computationally efficient but best suited to low contrasts. The simulations show that, when imaging high-contrast objects such as a breast cancer tumor, CP-MCT reconstructions are comparable to those of the minimization algorithm only below a contrast of about 10%. The simulations also show that the CP-MCT reconstructions are very robust to noise, whereas the conductivity reconstruction from the minimization algorithm is very sensitive to the noise level. In spite of the strong degradation of the conductivity reconstructions, the corresponding permittivity reconstructions do not show the same sensitivity to the noise level.
Image reconstruction for a stationary digital breast tomosynthesis system
NASA Astrophysics Data System (ADS)
Rajaram, Ramya; Yang, Guang; Quan, Enzhuo; Frederick, Brandon; Lalush, David S.; Zhou, Otto Z.
2009-02-01
We have designed and built a stationary digital breast tomosynthesis (DBT) system containing a carbon nanotube based field emission x-ray source array to examine the possibility of obtaining a reduced scan time and improved image quality compared to conventional DBT systems. There are 25 individually addressable x-ray sources in our linear source array that are evenly angularly spaced to cover an angle of 48°. The sources are turned on sequentially during imaging and there is no motion of either the source or the detector. We present here an iterative reconstruction method based on a modified Ordered-Subset Convex (MOSC) algorithm that was employed for the reconstruction of images from the new DBT system. Using this algorithm based on a maximum-likelihood model, we reconstruct on non-cubic voxels for increased computational efficiency resulting in high in-plane resolution in the images. We have applied the reconstruction technique on simulated and phantom data from the system. Even without the use of the subsets, the reconstruction of an experimental 9-beam system with 960×768 pixels took less than 6 minutes (10 iterations). The projection images of a simulated mammography accreditation phantom were reconstructed using MOSC and a Simultaneous Algebraic Reconstruction technique (SART) and the results from the comparison between the two algorithms allow us to conclude that the MOSC is capable of delivering excellent image quality when used in tomosynthesis image reconstruction.
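For context, the SART baseline used in the comparison above admits a compact sketch. This is a generic textbook SART iteration on a small dense system, not the authors' tomosynthesis implementation; all names are illustrative:

```python
import numpy as np

def sart(A, b, iters=50, relax=1.0):
    """Minimal SART iteration  x <- x + relax * V^-1 A^T W^-1 (b - A x),
    where W and V hold the row and column sums of the system matrix A."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    row[row == 0] = 1.0        # avoid division by zero for empty rays
    col[col == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        resid = (b - A @ x) / row          # ray-normalised residual
        x = x + relax * (A.T @ resid) / col
    return x
```

In the paper's setting, the rows of A correspond to detector measurements of the 25-source array and the columns to (non-cubic) voxels; MOSC replaces this additive update with a maximum-likelihood convex update.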
Synergistic image reconstruction for hybrid ultrasound and photoacoustic computed tomography
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Wang, Kun; Wang, Lihong V.; Anastasio, Mark A.
2015-03-01
Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Subsequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems to one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) the reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.
On multigrid methods for image reconstruction from projections
Henson, V.E.; Robinson, B.T.; Limber, M.
1994-12-31
The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → Rᴺ. The image reconstruction problem is: given a vector b ∈ Rᴺ, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L¹, and model R : Ω → Rᴺ. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
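The Gauss-Seidel relaxation whose stalling motivates the multilevel scheme can be sketched as follows. This is a generic implementation for a small dense system; in the paper's setting B comes from the strip discretization, and the multilevel scheme takes over once this smoother stalls on the near-null-space error components:

```python
import numpy as np

def gauss_seidel(B, b, iters=100):
    """Plain Gauss-Seidel sweeps for B w = b: each unknown is solved
    in turn using the most recently updated values of the others."""
    n = B.shape[0]
    w = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = B[i] @ w - B[i, i] * w[i]   # off-diagonal contribution
            w[i] = (b[i] - s) / B[i, i]
    return w
```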
Functional imaging of murine hearts using accelerated self-gated UTE cine MRI.
Motaal, Abdallah G; Noorman, Nils; de Graaf, Wolter L; Hoerr, Verena; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J
2015-01-01
We introduce a fast protocol for ultra-short echo time (UTE) Cine magnetic resonance imaging (MRI) of the beating murine heart. The sequence involves a self-gated UTE with golden-angle radial acquisition and compressed sensing reconstruction. The self-gated acquisition is performed asynchronously with the heartbeat, resulting in a randomly undersampled kt-space that facilitates compressed sensing reconstruction. The sequence was tested in 4 healthy rats and 4 rats with chronic myocardial infarction, approximately 2 months after surgery. As a control, a non-accelerated self-gated multi-slice FLASH sequence with an echo time (TE) of 2.76 ms, 4.5 signal averages, a matrix of 192 × 192, and an acquisition time of 2 min 34 s per slice was used to obtain Cine MRI with 15 frames per heartbeat. Non-accelerated UTE MRI was performed with TE = 0.29 ms, a reconstruction matrix of 192 × 192, and an acquisition time of 3 min 47 s per slice for 3.5 averages. Accelerated imaging with 2×, 4× and 5× undersampled kt-space data was performed with 1 min, 30 and 15 s acquisitions, respectively. UTE Cine images up to 5× undersampled kt-space data could be successfully reconstructed using a compressed sensing algorithm. In contrast to the FLASH Cine images, flow artifacts in the UTE images were nearly absent due to the short echo time, simplifying segmentation of the left ventricular (LV) lumen. LV functional parameters derived from the control and the accelerated Cine movies were statistically identical.
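The golden-angle radial acquisition mentioned above spaces successive readouts by about 111.246° (180°/φ, with φ the golden ratio), so any contiguous window of spokes covers k-space nearly uniformly; this is what makes the asynchronous, randomly undersampled k-t acquisition amenable to compressed sensing reconstruction. A minimal sketch (function names are illustrative):

```python
import numpy as np

# Golden angle for radial MRI: 180 degrees divided by the golden ratio.
GOLDEN_ANGLE = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

def spoke_angles(n):
    """Readout angles (degrees, folded to [0, 180)) of n golden-angle spokes."""
    return (np.arange(n) * GOLDEN_ANGLE) % 180.0
```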
Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)
NASA Technical Reports Server (NTRS)
Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy
2012-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide-swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low-profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approximately 70 km swath width with approximately 3 km spatial resolution. From this, ocean surface wind speed and column-averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.
Moghari, Mehdi H; Uecker, Martin; Roujol, Sébastien; Sabbagh, Majid; Geva, Tal; Powell, Andrew J
2017-05-11
To accelerate whole-heart three-dimensional MR angiography (MRA) by using a variable-density Poisson-disc undersampling pattern and a compressed sensing (CS) reconstruction algorithm, and to compare the results with sensitivity encoding (SENSE). For whole-heart MRA, a prospective variable-density Poisson-disc k-space undersampling pattern was developed in which 1-2% of the central part of k-space was fully sampled, and sampling in the remainder decreased exponentially toward the periphery. The undersampled data were then estimated using CS reconstruction. In patients, images using this sequence with an undersampling rate of ≈6 were compared with those using a SENSE rate of 2 (n = 15) and a SENSE rate of 6 (n = 13). Compared with SENSE rate 2, CS rate 6 images had similar objective border sharpness, significantly lower subjective image quality scores at all four locations (all P < 0.01), and shorter scan times (P < 0.05). Compared with SENSE rate 6, CS rate 6 had similar objective border sharpness at all four locations, significantly better subjective image quality scores at three of four locations (all P < 0.01), and similar scan times (P = 0.24). Compared with SENSE at a comparable acceleration rate, the variable-density Poisson-disc undersampling pattern and CS reconstruction achieved better subjective image quality and similar border sharpness. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
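A simplified stand-in for the sampling pattern described above can be sketched as follows. This is plain variable-density random sampling with an exponentially decaying density and a fully sampled center, not a true Poisson-disc pattern (which additionally enforces a minimum distance between samples); the function name and parameter choices are illustrative:

```python
import numpy as np

def vd_mask(ny, nz, accel=6, center_frac=0.02, decay=3.0, seed=0):
    """Variable-density k-space undersampling mask: fully sampled center,
    exponentially decaying sampling density toward the periphery."""
    rng = np.random.default_rng(seed)
    y, z = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nz),
                       indexing="ij")
    r = np.sqrt(y**2 + z**2) / np.sqrt(2)        # radius normalised to [0, 1]
    p = np.exp(-decay * r)                        # density profile
    p *= (ny * nz / accel) / p.sum()              # scale to the target rate
    mask = rng.random((ny, nz)) < np.clip(p, 0.0, 1.0)
    mask[r < np.sqrt(center_frac)] = True         # fully sampled center
    return mask
```

The mask would be applied to the two phase-encoding dimensions of the 3D acquisition; the missing samples are then estimated by the CS reconstruction.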
Numerical modelling and image reconstruction in diffuse optical tomography
Dehghani, Hamid; Srinivasan, Subhadra; Pogue, Brian W.; Gibson, Adam
2009-01-01
The development of diffuse optical tomography as a functional imaging modality has relied largely on the use of model-based image reconstruction. The recovery of optical parameters from boundary measurements of light propagation within tissue is an inherently difficult problem, because it is nonlinear, ill-posed and ill-conditioned. Additionally, although the measured near-infrared signals of light transmission through tissue provide high imaging contrast, the reconstructed images suffer from poor spatial resolution due to the diffuse propagation of light in biological tissue. The application of model-based image reconstruction is reviewed in this paper, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data. A comprehensive review and details of the basis for using spatial and structural prior information are also discussed, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution. PMID:19581256
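Model-based reconstruction of the kind reviewed here typically linearizes the forward model around the current parameter estimate and stabilizes each update with Tikhonov regularization to cope with the ill-posedness. A generic, hedged sketch of one such update, not tied to any specific DOT package (all names are assumptions):

```python
import numpy as np

def gn_update(mu, J, resid, lam=1e-3):
    """One Tikhonov-regularised Gauss-Newton step,
        mu_new = mu + (J^T J + lam*I)^-1 J^T resid,
    where J is the Jacobian of the forward model at mu and resid is the
    data mismatch; lam trades resolution against noise amplification."""
    H = J.T @ J + lam * np.eye(J.shape[1])
    return mu + np.linalg.solve(H, J.T @ resid)
```

In practice J would come from a finite-element light-propagation model, and spatial or spectral priors would replace the plain identity regularizer.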
Ultra-Fast Image Reconstruction of Tomosynthesis Mammography Using GPU.
Arefan, D; Talebpour, A; Ahmadinejhad, N; Kamali Asl, A
2015-06-01
Digital breast tomosynthesis (DBT) is a technology that creates three-dimensional (3D) images of breast tissue. Tomosynthesis mammography detects lesions that are not detectable with other imaging systems. If the image reconstruction time is on the order of seconds, tomosynthesis systems can be used to perform tomosynthesis-guided interventional procedures. This research was designed to study an ultra-fast image reconstruction technique for tomosynthesis mammography systems using a graphics processing unit (GPU). First, tomosynthesis mammography projections were simulated: a 3D breast phantom was designed from empirical data, based on MRI data in its natural form, and projections were created from this phantom. The FBP-based image reconstruction algorithm was then programmed in C++ in two versions, one running on the central processing unit (CPU) and one on the GPU, and the image reconstruction time was measured for both implementations.
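The FBP algorithm timed in this study can be illustrated with a minimal parallel-beam CPU sketch; the actual tomosynthesis geometry and the GPU kernel layout differ, and all names below are illustrative. On a GPU, the per-pixel backprojection sum is what gets mapped onto threads, which is where the reported speedups come from:

```python
import numpy as np

def ramp_filter(sino):
    """Ramp-filter each projection along the detector axis via the FFT."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))           # |frequency| in cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

def fbp(sino, angles_deg):
    """Parallel-beam filtered backprojection: filter, then smear each
    filtered projection back across the image along its view angle."""
    n_det = sino.shape[1]
    filt = ramp_filter(sino)
    img = np.zeros((n_det, n_det))
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    for proj, ang in zip(filt, np.deg2rad(angles_deg)):
        # nearest detector bin intersected by each pixel for this view
        t = np.rint(X * np.cos(ang) + Y * np.sin(ang)).astype(int) + mid
        ok = (t >= 0) & (t < n_det)
        img[ok] += proj[t[ok]]
    return img * np.pi / len(angles_deg)
```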
Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc
2017-09-01
Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm alternates between optimization of raw-data fidelity using an OS-SART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides
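The α-blending at the heart of AIR is linear in the basis images, which is what makes metrics like the PSF well defined. A toy sketch of the blend (the weighting image here is hand-picked and all values are illustrative; AIR estimates α from raw-data fidelity):

```python
import numpy as np

# Two basis images with complementary properties (values illustrative)
sharp = np.array([[1.0, 0.2], [0.2, 1.0]])   # high resolution, high noise
smooth = np.array([[0.8, 0.4], [0.4, 0.8]])  # low noise, low resolution
alpha = np.array([[1.0, 0.0], [0.0, 1.0]])   # voxel-wise weighting image

# AIR-style image: a voxel-wise convex combination of the basis images
blend = alpha * sharp + (1.0 - alpha) * smooth
```

Because `blend` is linear in `sharp` and `smooth`, its local PSF is the same α-weighted combination of the basis PSFs — the linearity property the abstract relies on to define regular image quality metrics.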
Undersampled MR Image Reconstruction with Data-Driven Tight Frame
Liu, Jianbo; Wang, Shanshan; Peng, Xi; Liang, Dong
2015-01-01
Undersampled magnetic resonance image reconstruction employing sparsity regularization has attracted many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack the adaptability to capture structural information or suffer from a high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of data-driven tight frames, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared to two state-of-the-art methods on four datasets and encouraging performance has been achieved by DDTF-MRI. PMID:26199641
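The sparsifying step behind such tight-frame methods is analysis, shrinkage, synthesis. A toy sketch with a fixed two-point Haar-like transform standing in for the learned frame (DDTF-MRI learns the frame from the data itself; the transform, signal and threshold below are all illustrative assumptions):

```python
import numpy as np

def soft_threshold(c, t):
    """Proximal operator of the l1 norm: shrink coefficients toward zero."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Analysis -> shrinkage -> synthesis with a fixed 2-point Haar-like
# transform (a learned tight frame would replace this fixed transform).
x = np.array([4.0, 3.9, 0.1, -0.1])
pairs = x.reshape(-1, 2)
coeffs = np.stack([pairs[:, 0] + pairs[:, 1],
                   pairs[:, 0] - pairs[:, 1]], axis=1) / np.sqrt(2)
coeffs = soft_threshold(coeffs, 0.2)         # small coefficients vanish
rec = np.stack([coeffs[:, 0] + coeffs[:, 1],
                coeffs[:, 0] - coeffs[:, 1]], axis=1) / np.sqrt(2)
```

Shrinking in the transform domain suppresses the small (noise-like) pair while keeping the large-amplitude structure, which is the mechanism any sparsity-regularized reconstruction iterates on.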
[Image reconstruction in electrical impedance tomography based on genetic algorithm].
Hou, Weidong; Mo, Yulong
2003-03-01
Image reconstruction in electrical impedance tomography (EIT) is a highly ill-posed, non-linear inverse problem. The modified Newton-Raphson (MNR) iteration algorithm is deduced from strict theoretical analysis; it is an optimization algorithm based on minimizing an objective function. The MNR algorithm with regularization is usually not stable, due to serious image-reconstruction model error and measurement noise, so its reconstruction precision is not high when used in static EIT. A new static image reconstruction method for EIT based on a genetic algorithm (GA-EIT) is proposed in this paper. The experimental results indicate that the performance (including stability, precision and spatial resolution in reconstructing the static EIT image) of the GA-EIT algorithm is better than that of the MNR algorithm.
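As a sketch of how a genetic algorithm searches a non-convex objective without the gradient-based instability described above, here is a minimal real-coded GA minimising a toy 2-parameter function (tournament selection, blend crossover, Gaussian mutation; none of this is the paper's specific GA-EIT configuration, and the objective is a stand-in for the EIT data-mismatch functional):

```python
import random

def ga_minimize(f, bounds, pop=20, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation, elitist survivor selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(P, 2)            # binary tournament
            return a if f(a) < f(b) else b
        Q = []
        for _ in range(pop):
            p, q = pick(), pick()
            w = rng.random()                   # blend crossover
            child = [w * pi + (1.0 - w) * qi for pi, qi in zip(p, q)]
            child = [min(hi, max(lo, c + rng.gauss(0.0, 0.1))) for c in child]
            Q.append(child)
        P = sorted(P + Q, key=f)[:pop]         # keep the best (elitism)
    return P[0]

# Toy objective standing in for the EIT data-mismatch functional
best = ga_minimize(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, (-5.0, 5.0))
```

The population-based search needs no derivatives of the forward model, which is the robustness argument the abstract makes against MNR iterations.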
Reconstruction of biofilm images: combining local and global structural parameters
Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk
2014-10-20
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.
Four dimensional reconstruction and analysis of plume images
NASA Astrophysics Data System (ADS)
Dhawan, Atam P.; Disimile, Peter J.; Peck, Charles, III
Results of a time-history based three-dimensional reconstruction of cross-sectional images corresponding to a specific planar location of the jet structure are reported. The experimental set-up is described, and three-dimensional displays of time-history based reconstruction of the jet structure are presented. Future developments in image analysis, quantification and interpretation, and flow visualization of rocket engine plume images are expected to provide a tool for correlating engine diagnostic features with visible flow structures.
Accelerating k-t sparse using k-space aliasing for dynamic MRI imaging.
Pawar, Kamlesh; Egan, Gary F; Zhang, Jingxin
2013-01-01
Dynamic imaging is challenging in MRI, and acceleration techniques are usually needed to acquire the dynamic scene. k-t sparse is an acceleration technique based on compressed sensing: it acquires less data in k-t space by pseudo-random ordering of the phase encodes and reconstructs the dynamic scene by exploiting the sparsity of k-t space in a transform domain. Another recently introduced technique accelerates dynamic MRI scans by acquiring k-space data in aliased form. The k-space aliasing technique uses multiple RF excitation pulses to deliberately acquire aliased k-space data; during reconstruction, a simple Fourier transformation along the time frames can unalias the acquired data. This paper presents a novel method to combine k-t sparse and k-space aliasing to achieve higher acceleration than either technique alone. In this combination, a critical factor of compressed sensing, the ratio of the number of acquired phase encodes to the total number of phase encodes (n/N), increases, and therefore the compressed sensing component of the reconstruction performs exceptionally well. A comparison of k-t sparse and the proposed technique for acceleration factors of 4, 6 and 8 is demonstrated in simulation on cardiac data.
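The n/N argument can be made concrete with some idealised bookkeeping (the numbers below are illustrative, not from the paper): at a fixed overall acceleration R, folding M lines into each acquired phase encode via k-space aliasing raises the fraction of k-space the compressed-sensing stage effectively sees.

```python
# Idealised sampling bookkeeping for the combination (illustration only):
# at overall acceleration R, N/R phase encodes are acquired; with M-fold
# k-space aliasing each acquired encode carries M lines, so the CS stage
# effectively sees a fraction n/N = M/R of k-space.
def cs_sampled_fraction(R, M):
    """Effective fraction n/N of phase-encode lines seen by the CS stage."""
    return min(1.0, M / R)

print(cs_sampled_fraction(8, 1))   # k-t sparse alone at R=8
print(cs_sampled_fraction(8, 2))   # combined with 2-fold aliasing
```

At R=8 the CS problem goes from a 12.5% sampled fraction to 25% with 2-fold aliasing, which is why the CS component "performs exceptionally well" in the combined scheme.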
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of the Radon transform and of CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain in computational intensity with comparable hardware and program coding/testing expenses. In this paper, using sample 2D and 3D CT problems, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. We analyze the accuracy and performance results obtained on three computational platforms: a single CPU, a single GPU, and a solution based on FPGA technology. We show that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
Quantitative image quality evaluation for cardiac CT reconstructions
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.
2016-03-01
Maintaining image quality in the presence of motion is always desirable and challenging in clinical cardiac CT imaging. Different image-reconstruction algorithms that attempt to achieve this goal are available on current commercial CT systems. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance on cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at the default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Three different plaque percentages embedded in a 3 mm vessel phantom were imaged multiple times under motion-free, 60 bpm, and 80 bpm heart-rate conditions. Static (motion-free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. The results suggest that the quality of SSF images is superior to that of FBP images in higher heart-rate scans.
Basis Functions in Image Reconstruction From Projections: A Tutorial Introduction
NASA Astrophysics Data System (ADS)
Herman, Gabor T.
2015-11-01
The series expansion approaches to image reconstruction from projections assume that the object to be reconstructed can be represented as a linear combination of fixed basis functions, and the task of the reconstruction algorithm is to estimate the coefficients of that linear combination from the measured projection data. It is demonstrated that using spherically symmetric basis functions (blobs), instead of ones based on the more traditional pixels, yields superior reconstructions of medically relevant objects. The demonstration uses simulated computerized tomography projection data of head cross-sections and the series expansion method ART for the reconstruction. In addition to showing the results of one anecdotal example, the relative efficacy of using pixel and blob basis functions in image reconstruction from projections is also evaluated using a statistical hypothesis testing based task-oriented comparison methodology. The superiority of the efficacy of blob basis functions over that of pixel basis functions is found to be statistically significant.
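The reconstruction method named above, ART, is the Kaczmarz method applied to the series-expansion system: whatever the basis functions (pixels or blobs), one estimates the coefficient vector x from projections b = A x by cycling through the rows of A and projecting the estimate onto each row's hyperplane. A minimal sketch on a toy system (the matrix here is arbitrary; in tomography its entries come from intersecting rays with the chosen basis functions):

```python
import numpy as np

def art(A, b, iters=50, relax=1.0):
    """ART (Kaczmarz): sweep the rows of A x = b, projecting the current
    estimate onto each row's hyperplane with relaxation `relax`."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += relax * (b[i] - ai @ x) / denom * ai
    return x

# Consistent toy system; with orthogonal rows ART converges in one sweep
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
x = art(A, b)
```

Swapping pixels for blobs changes only how the entries of A are computed, not the update itself, which is why the comparison in the abstract is a fair one.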
Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging
Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan
2016-01-01
Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l1-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging. PMID:27783040
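The l1-regularized deblurring described can be sketched with ISTA on a 1-D toy signal (the paper derives its blur kernel, the PSF, from a finite element model; the kernel, signal sizes and parameters below are all assumed for illustration):

```python
import numpy as np

def ista_deconv(y, kernel, lam=0.05, iters=300):
    """Sparse deconvolution by ISTA:
    minimize ||K x - y||^2 / 2 + lam * ||x||_1, where K applies `kernel`."""
    n = len(y)
    h = len(kernel) // 2
    K = np.zeros((n, n))                     # explicit convolution matrix
    for i in range(n):
        for j, kv in enumerate(kernel):
            if 0 <= i + j - h < n:
                K[i, i + j - h] = kv
    L = np.linalg.norm(K, 2) ** 2            # gradient Lipschitz constant
    x = np.zeros(n)
    for _ in range(iters):
        x = x - K.T @ (K @ x - y) / L        # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

truth = np.zeros(32)
truth[10], truth[20] = 1.0, 0.6              # two point "defects"
kernel = np.array([0.25, 0.5, 0.25])         # assumed blur kernel (illustrative)
y = np.convolve(truth, kernel, mode="same")
x = ista_deconv(y, kernel)
```

The l1 penalty concentrates the deblurred energy back onto the point-like defects, which is the signal-to-noise and localisation gain the abstract reports for C-scan images.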
Accelerated gradient methods for the x-ray imaging of solar flares
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2014-05-01
In this paper we present new optimization strategies for the reconstruction of x-ray images of solar flares from the data collected by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade greater attention has been devoted to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced through either an early stopping of the iterative procedure, or a Tikhonov term added to the discrepancy function via a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
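The class of methods the paper investigates, accelerated first-order schemes for a constrained inversion, can be illustrated with a FISTA-style projected gradient on a toy nonnegative least-squares problem (the paper's discrepancy accounts for Poisson noise; the least-squares objective and matrix below are simplified stand-ins):

```python
import numpy as np

def accelerated_pg(A, b, iters=500):
    """FISTA-style accelerated projected gradient for
    min ||A x - b||^2 / 2 subject to x >= 0."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(iters):
        x_new = np.maximum(z - A.T @ (A @ z - b) / L, 0.0)  # step + projection
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # momentum schedule
        z = x_new + (t - 1.0) / t_new * (x_new - x)         # extrapolation
        x, t = x_new, t_new
    return x

A = np.array([[2.0, 0.0], [1.0, 1.0]])
x = accelerated_pg(A, A @ np.array([1.0, 2.0]))   # noiseless data
```

The extrapolation step is what turns the O(1/k) plain projected gradient into an O(1/k^2) accelerated scheme, the kind of speedup the paper evaluates for count-profile inversion.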
Robust image reconstruction enhancement based on Gaussian mixture model estimation
NASA Astrophysics Data System (ADS)
Zhao, Fan; Zhao, Jian; Han, Xizhen; Wang, He; Liu, Bochao
2016-03-01
The low quality of an image is often characterized by low contrast and blurred edge details. Gradients have a direct relationship with image edge details: the larger the gradients, the clearer the image details become. Robust image reconstruction enhancement based on Gaussian mixture model estimation is proposed here. First, the image is transformed to its gradient domain to obtain the gradient histogram. Second, the gradient histogram is estimated and extended using a Gaussian mixture model, and the predetermined function is constructed. Then, using histogram specification, the gradient field is enhanced under the constraint of the predetermined function. Finally, a matrix sine transform based method is applied to reconstruct the enhanced image from the enhanced gradient field. Experimental results show that the proposed algorithm can effectively enhance different types of images, such as medical, aerial, and visible images, providing high-quality image information for high-level processing.
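The histogram-specification step can be sketched by rank matching: the i-th smallest gradient value is replaced by the i-th smallest value of the target distribution. In the paper the target comes from the Gaussian-mixture-extended histogram; here `target` is an arbitrary illustrative sample:

```python
import numpy as np

def match_histogram(values, target):
    """Histogram specification by rank matching: the i-th smallest input
    value is replaced by the i-th smallest target value."""
    order = np.argsort(values)
    out = np.empty_like(values)
    out[order] = np.sort(target)
    return out

grads = np.array([0.0, 0.1, 0.2, 0.9])       # illustrative gradient magnitudes
target = np.array([0.0, 0.5, 1.0, 2.0])      # "stretched" target distribution
enhanced = match_histogram(grads, target)
```

Rank matching preserves the ordering of the gradients while stretching their distribution, so edges keep their relative strength but become more pronounced.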
Simbol-X Formation Flight and Image Reconstruction
NASA Astrophysics Data System (ADS)
Civitani, M.; Djalal, S.; Le Duigou, J. M.; La Marle, O.; Chipaux, R.
2009-05-01
Simbol-X is the first operational mission relying on two satellites flying in formation. The dynamics of the telescope, due to the formation flight concept, raises a variety of problems, such as image reconstruction, that can be better evaluated via simulation tools. We present here the first results obtained with Simulos, a simulation tool aimed at studying the relative navigation of the spacecraft and the weight of the different parameters in image reconstruction and telescope performance evaluation. The simulation relies on attitude and formation flight sensor models, formation flight dynamics and control, a mirror model and a focal plane model, while the image reconstruction is based on the Line of Sight (LOS) concept.
Iterative Image Reconstruction for Limited-Angle CT Using Optimized Initial Image
Guo, Jingyu; Qi, Hongliang; Xu, Yuan; Chen, Zijia; Li, Shulong; Zhou, Linghong
2016-01-01
Limited-angle computed tomography (CT) is of great importance in some clinical applications. Existing iterative reconstruction algorithms cannot reconstruct high-quality images from limited-angle data, producing severe artifacts near edges. The choice of initial image influences iterative reconstruction performance but has not yet been studied in depth. In this work, we propose to generate an optimized initial image, followed by total variation (TV) based iterative reconstruction that exploits image symmetry. Reconstruction results on simulated and real data indicate that the proposed method effectively removes the artifacts near edges. PMID:27066107
Application of particle filtering algorithm in image reconstruction of EMT
NASA Astrophysics Data System (ADS)
Wang, Jingwen; Wang, Xu
2015-07-01
To improve the image quality of electromagnetic tomography (EMT), a new image reconstruction method for EMT based on a particle filtering algorithm is presented. First, the principle of image reconstruction in EMT is analyzed, the search for the optimal reconstruction is described as a system state estimation process, and the state space model is established. Second, to obtain the minimum-variance estimate of the reconstructed image, the optimal weights of random samples drawn from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results show that the average image error of the reconstructions obtained by this method is 42.61% and the average correlation coefficient with the original image is 0.8706, both much better than the corresponding indicators obtained with the LBP, Landweber and Kalman filter algorithms. This EMT image reconstruction method is therefore efficient and accurate, and provides a new means for EMT research.
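A bootstrap particle filter on a scalar toy state shows the propagate / weight / estimate / resample cycle that the abstract casts image reconstruction into (in EMT the state would be the image itself; the random-walk model and all noise levels here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# State estimation loop: propagate particles, weight by the measurement
# likelihood, form the minimum-variance (weighted-mean) estimate, resample.
n_particles = 500
true_x = 0.0                                   # latent scalar "image"
particles = rng.normal(0.0, 1.0, n_particles)
for _ in range(30):
    true_x += rng.normal(0.0, 0.1)             # state evolution
    z = true_x + rng.normal(0.0, 0.2)          # noisy measurement
    particles = particles + rng.normal(0.0, 0.1, n_particles)
    w = np.exp(-0.5 * ((z - particles) / 0.2) ** 2)   # Gaussian likelihood
    w /= w.sum()
    estimate = np.sum(w * particles)           # minimum-variance estimate
    particles = rng.choice(particles, n_particles, p=w)  # resample
```

The weighted mean is the minimum-variance estimate the abstract refers to; resampling keeps the particle cloud concentrated where the measurements are informative.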
New model for optical image reconstruction for Nevoscope images
NASA Astrophysics Data System (ADS)
Maganti, Srinath S.; Kumar, Ananda; Dhawan, Atam P.
1999-05-01
Three-dimensional shape, volume and depth of penetration of a skin lesion are significant factors for early diagnosis and prognosis of melanoma. An optical imaging instrument, the Nevoscope, is used in this work to image and reconstruct pigmented lesions in three dimensions. The Nevoscope provides a set of planar projections of the pigmented inhomogeneity using transillumination. This paper presents a novel and simple algorithm to reconstruct the volume of the skin lesion from the optical projections acquired with the Nevoscope. The annular ring source of the Nevoscope injects light in the visible spectrum into the skin area surrounding the lesion, where it undergoes absorption and multiple scattering. Photons that are not absorbed are backscattered and re-emerge carrying information about the structure of the lesion. The transilluminated photons are detected by a set of mirrors functioning as detectors to form 2D projections of the lesion. The multiple-scattering phenomenon renders the inverse problem of solving for the volume of the lesion nonlinear. A diffusion-theory based approach, together with the physics of light propagation in the superficial layers of the skin, leads to a hybrid model for solving the forward problem. An iterative nonlinear inversion method is used to solve the inverse problem. Reconstruction of the lesion volume based on an iterative algebraic reconstruction technique involves computation of 'weights' (the contribution of a given voxel to a given photon path between a source and a detector) to calculate the forward and inverse solutions at every iteration. A previously proposed model computes these weights as the product of two fluences: the first is the fluence at a given voxel due to the annular ring source (forward fluence), and the second is the fluence at the same voxel due to an imaginary point source at the detector (adjoint fluence). A diffusion theory
An image reconstruction method (IRBis) for optical/infrared interferometry
NASA Astrophysics Data System (ADS)
Hofmann, K.-H.; Weigelt, G.; Schertl, D.
2014-05-01
Aims: We present an image reconstruction method for optical/infrared long-baseline interferometry called IRBis (image reconstruction software using the bispectrum). We describe the theory and present applications to computer-simulated interferograms. Methods: The IRBis method can reconstruct an image from measured visibilities and closure phases. The applied optimization routine ASA_CG is based on conjugate gradients. The method allows the user to implement different regularizers, apply residual ratios as an additional metric for goodness-of-fit, and use previous iteration results as a prior to force convergence. Results: We present the theory of the IRBis method and several applications of the method to computer-simulated interferograms. The image reconstruction results show the dependence of the reconstructed image on the noise in the interferograms (e.g., for ten electron read-out noise and 139 to 1219 detected photons per interferogram), the regularization method, the angular resolution, and the reconstruction parameters applied. Furthermore, we present the IRBis reconstructions submitted to the interferometric imaging beauty contest 2012 initiated by the IAU Working Group on Optical/IR Interferometry and describe the performed data processing steps.
Sparsity-constrained PET image reconstruction with learned dictionaries.
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-07
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
Simultaneous maximum a posteriori longitudinal PET image reconstruction
NASA Astrophysics Data System (ADS)
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
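The one-step-late MAP coupling of longitudinal scans can be sketched as follows: each scan gets an EM update whose denominator includes the gradient of the joint penalty evaluated at the current estimates. Everything below (a 2-bin toy system, a quadratic coupling penalty, the chosen penalty strength) is a simplified illustration, not the paper's system model:

```python
import numpy as np

def osl_map_em(A, y1, y2, beta=0.1, iters=200):
    """One-step-late MAP-EM for two coupled scans (illustrative sketch):
    a joint penalty beta * ||x1 - x2||^2 pulls the longitudinal images
    toward each other, its gradient entering the EM denominator."""
    x1 = np.ones(A.shape[1])
    x2 = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image (column sums)
    for _ in range(iters):
        g1 = 2.0 * beta * (x1 - x2)          # penalty gradient, one step late
        g2 = -g1
        x1 = x1 * (A.T @ (y1 / (A @ x1))) / (sens + g1)
        x2 = x2 * (A.T @ (y2 / (A @ x2))) / (sens + g2)
    return x1, x2

A = np.array([[1.0, 0.2], [0.2, 1.0]])
t1, t2 = np.array([4.0, 1.0]), np.array([4.0, 1.5])   # "true" images
x1, x2 = osl_map_em(A, A @ t1, A @ t2, beta=0.05)
```

With beta=0 each update reduces to plain ML-EM; with beta>0 the coupled estimates end up closer to each other than the underlying images are, which is the noise-sharing effect the abstract reports.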
An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams
NASA Astrophysics Data System (ADS)
Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-er; Yan, Xueqing
2017-07-01
With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
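The reconstruction idea, weighting pristine peaks of different ranges so their sum is flat across the target, can be sketched with least-squares weights. The peak shape and all numbers below are crude illustrations, not the paper's modified weighting formula, and a real scheme would constrain the weights (shot counts) to be nonnegative:

```python
import numpy as np

depths = np.linspace(0.0, 1.0, 50)            # normalised depth axis

def bragg(peak):
    """Crude pristine Bragg-peak shape (illustration only): a low plateau
    before the peak plus a Gaussian bump at the peak depth."""
    return 0.3 * (depths < peak) + np.exp(-((depths - peak) / 0.08) ** 2)

peaks = [0.5, 0.6, 0.7, 0.8]                  # assumed beam ranges
B = np.stack([bragg(p) for p in peaks], axis=1)
target = ((depths > 0.45) & (depths < 0.85)).astype(float)  # flat SOBP goal
w, *_ = np.linalg.lstsq(B, target, rcond=None)  # unconstrained shot weights
sobp = B @ w                                  # reconstructed depth-dose curve
```

For laser-accelerated beams the extra step the paper addresses is mapping each weight onto a number of laser shots per energy interval of the broad spectrum; the least-squares fit above only shows the flattening principle.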
Infrared Astronomical Satellite (IRAS) image reconstruction and restoration
NASA Technical Reports Server (NTRS)
Gonsalves, R. A.; Lyons, T. D.; Price, S. D.; Levan, P. D.; Aumann, H. H.
1987-01-01
IRAS sky mapping data is being reconstructed as images, and an entropy-based restoration algorithm is being applied in an attempt to improve spatial resolution in extended sources. Reconstruction requires interpolation of non-uniformly sampled data. Restoration is accomplished with an iterative algorithm which begins with an inverse filter solution and iterates on it with a weighted entropy-based spectral subtraction.
Regularized Reconstruction of Dynamic Contrast-Enhanced MR Images for Evaluation of Breast Lesions
2009-09-01
in determining the image estimate is computing the gradient of the cost function. We were able to accelerate our computation by exploiting Toeplitz ... but, to our knowledge, we are the first to apply it to dynamic MRI. For this study, the Toeplitz-modified algorithm was 1.7 times faster than the ... Decreased computation time by exploiting Toeplitz matrices in our reconstruction. • Investigated choice of algorithms' regularization parameters based on
Reconstructing cosmic acceleration from modified and nonminimal gravity: The Yang-Mills case
Elizalde, E.; Lopez-Revelles, A. J.
2010-09-15
A variant of the accelerating cosmology reconstruction program is developed for f(R) gravity and for a modified Yang-Mills/Maxwell theory. Reconstruction schemes in terms of e-foldings and by using an auxiliary scalar field are developed and carefully compared, for the case of f(R) gravity. An example of a model with a transient phantom behavior without real matter is explicitly discussed in both schemes. Further, the two reconstruction schemes are applied to the more physically interesting case of a Yang-Mills/Maxwell theory, again with explicit examples. Detailed comparison of the two schemes of reconstruction is presented also for this theory. It seems to support, as well, physical nonequivalence of the two frames.
Kwak, Yongjun; Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer A.; Goddu, Beth; Manning, Warren J.; Tarokh, Vahid; Nezafat, Reza
2012-01-01
Phase contrast (PC) cardiac MR (CMR) is widely used for the clinical assessment of blood flow in cardiovascular disease. One of the challenges of PC CMR is the long scan time which limits both spatial and temporal resolution. Compressed sensing (CS) reconstruction with accelerated PC acquisitions is a promising technique to increase the scan efficiency. In this study, we sought to utilize the sparsity of the complex difference (CD) of the two flow-encoded images as an additional constraint term to improve the CS reconstruction of the corresponding accelerated PC data acquisition. Using retrospectively under-sampled data, the proposed reconstruction technique was optimized and validated in-vivo on 15 healthy subjects. Then, prospectively under-sampled data was acquired on 11 healthy subjects and reconstructed with the proposed technique. The results show that there is good agreement between the cardiac output measurements from the fully-sampled data and the proposed CS reconstruction method using CD sparsity up to acceleration rate 5. In conclusion, we have developed and evaluated an improved reconstruction technique for accelerated PC CMR that utilizes the sparsity of the CD of the two flow-encoded images. PMID:23065722
Kwak, Yongjun; Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer A; Goddu, Beth; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza
2013-09-01
Phase contrast (PC) cardiac MR is widely used for the clinical assessment of blood flow in cardiovascular disease. One of the challenges of PC cardiac MR is the long scan time which limits both spatial and temporal resolution. Compressed sensing reconstruction with accelerated PC acquisitions is a promising technique to increase the scan efficiency. In this study, we sought to use the sparsity of the complex difference of the two flow-encoded images as an additional constraint term to improve the compressed sensing reconstruction of the corresponding accelerated PC data acquisition. Using retrospectively under-sampled data, the proposed reconstruction technique was optimized and validated in vivo on 15 healthy subjects. Then, prospectively under-sampled data was acquired on 11 healthy subjects and reconstructed with the proposed technique. The results show that there is good agreement between the cardiac output measurements from the fully sampled data and the proposed compressed sensing reconstruction method using complex difference sparsity up to acceleration rate 5. In conclusion, we have developed and evaluated an improved reconstruction technique for accelerated PC cardiac MR that uses the sparsity of the complex difference of the two flow-encoded images. Copyright © 2012 Wiley Periodicals, Inc.
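The complex-difference (CD) sparsity constraint used above can be caricatured in one dimension. The sketch below is a deliberate simplification, not the paper's full reconstruction: it assumes the CD of the two flow-encoded images is itself sparse and recovers it alone from undersampled k-space by iterative soft-thresholding (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the CD-sparsity idea (our simplification, not the paper's
# method): the complex difference of two flow-encoded images is sparse,
# so it can be recovered from undersampled Fourier samples by ISTA.

n = 128
d_true = np.zeros(n, dtype=complex)               # sparse complex difference
support = rng.choice(n, size=5, replace=False)
d_true[support] = rng.standard_normal(5) + 1j * rng.standard_normal(5)

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=n // 3, replace=False)] = True  # ~3x undersampling
y = np.fft.fft(d_true)[mask]                      # measured k-space samples

def soft(z, t):
    """Complex soft-thresholding (the proximal map of the l1 norm)."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

d = np.zeros(n, dtype=complex)
for _ in range(300):                              # ISTA iterations
    resid = np.zeros(n, dtype=complex)
    resid[mask] = np.fft.fft(d)[mask] - y         # k-space data residual
    d = soft(d - np.fft.ifft(resid), 0.02)        # gradient step + shrinkage

rel_err = np.linalg.norm(d - d_true) / np.linalg.norm(d_true)
```

The `ifft` of the masked residual is the (scaled) adjoint of the sampling operator, so the step size here is exactly the inverse Lipschitz constant of the data term.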
Image reconstruction in an electro-holographic display
NASA Astrophysics Data System (ADS)
Son, Jung-Young; Son, Wook-Ho; Kim, Jae-Han; Choo, Hyongon
2017-05-01
The optical phenomena arising in the process of forming reconstructed images in a hologram are explained and shown visually with the use of light field images. The light fields at different distances from the hologram on a DMD reveal that the reconstructed image of each object point is formed by the corresponding Fresnel zone pattern, which is reconstructed from the hologram when it is illuminated by a reconstruction laser beam. The reconstructed image is a circle of least confusion laden with noise and distortion. It has a finite size and does not appear at the object distance from the hologram due to the presence of aberrations, especially a strong astigmatism. The astigmatism appears along the direction of the rotation axis of each pixel and perpendicular to it. The aberrations and noise are responsible for the distortion and deterioration of resolution in the reconstructed image, the difference of the image position from that of the object, and a reduction in depth resolution. The light field images also reveal intensity fluctuations due to the in-phase and out-of-phase addition of the rays from the hologram.
Acoustic imaging for temperature distribution reconstruction
NASA Astrophysics Data System (ADS)
Jia, Ruixi; Xiong, Qingyu; Liang, Shan
2016-12-01
For several industrial processes, such as burning and drying, the temperature distribution is important because it reflects the internal running state of industrial equipment and helps to develop control strategies and ensure safe operation. Acoustic temperature reconstruction is based mainly on the relationship between acoustic velocity and temperature. In this paper, an algorithm for temperature distribution reconstruction is considered. Compared with the least-squares algorithm in simulation experiments, the proposed algorithm reflects the temperature distribution better and achieves relatively higher reconstruction accuracy.
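The velocity-temperature relationship at the heart of this technique has a simple ideal-gas form, c = sqrt(gamma * R * T / M), which inverts directly: a measured time of flight along a known path gives the path-averaged temperature. A minimal sketch (constants for air are approximate):

```python
import math

# Ideal-gas velocity-temperature relation underlying acoustic pyrometry:
#   c = sqrt(gamma * R * T / M)  =>  T = c^2 * M / (gamma * R)

GAMMA = 1.4          # heat-capacity ratio of air
R = 8.314            # universal gas constant, J/(mol K)
M = 0.029            # molar mass of air, kg/mol (approximate)

def temperature_from_tof(path_length_m, tof_s):
    """Path-averaged gas temperature (K) from an acoustic time of flight."""
    c = path_length_m / tof_s        # mean acoustic velocity along the path
    return c * c * M / (GAMMA * R)

# Example: a 10 m path traversed at c = 343 m/s (room-temperature air).
t = temperature_from_tof(10.0, 10.0 / 343.0)
```

Tomographic reconstruction then combines many such path averages, measured between transducer pairs, into a spatial temperature map; the least-squares algorithm mentioned above is one way to solve that inversion.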
Digital infrared thermal imaging following anterior cruciate ligament reconstruction.
Barker, Lauren E; Markowski, Alycia M; Henneman, Kimberly
2012-03-01
This case describes the selective use of digital infrared thermal imaging for a 48-year-old woman who was being treated by a physical therapist following left anterior cruciate ligament (ACL) reconstruction with a semitendinosus autograft.
Initial analysis of the middle problem in CT image reconstruction.
Yang, Jiansheng; Yu, Hengyong; Wang, Ge
2017-04-05
The interior and exterior problems have been extensively studied in the field of computed tomography (CT) image reconstruction, leading to important theoretical and practical results. In this study, we formulate a middle problem of CT image reconstruction, which is more challenging than either the interior or exterior problem. In the middle problem, projection data are measured through, and only through, a middle doughnut-like region, so that each projection profile misses data not only internally but also on both sides. For an object with a radially symmetric exterior, we proved that the middle problem can be uniquely solved if the middle ring-shaped zone is piecewise constant or there is a known sub-region inside it. Then, we designed and evaluated a POCS-based algorithm for middle tomography, which reconstructs a middle image from only the available data. Finally, the remaining issues are discussed for further research.
Online reconstruction of 3D magnetic particle imaging data
NASA Astrophysics Data System (ADS)
Knopp, T.; Hofmann, M.
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date, image reconstruction is performed in an offline step, so no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block averaging in order to optimize the signal quality for a given amount of reconstruction time.
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful, hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.
K-space reconstruction of magnetic resonance inverse imaging (K-InI) of human visuomotor systems.
Lin, Fa-Hsuan; Witzel, Thomas; Chang, Wei-Tang; Wen-Kai Tsai, Kevin; Wang, Yen-Hsiang; Kuo, Wen-Jui; Belliveau, John W
2010-02-15
Using simultaneous measurements from multiple channels of a radio-frequency coil array, magnetic resonance inverse imaging (InI) can achieve ultra-fast dynamic functional imaging of the human brain with whole-brain coverage and good spatial resolution. Mathematically, InI reconstruction is a generalization of parallel MRI (pMRI), which includes image-space and k-space reconstructions. Because of the auto-calibration technique, pMRI k-space reconstruction offers more robust and adaptive reconstructions than the image-space algorithm. Here we present k-space InI (K-InI) reconstructions of highly accelerated BOLD-contrast fMRI data of the human brain to achieve 100 ms temporal resolution. Simulations show that K-InI reconstructions can offer 3D image reconstructions at each time frame with reasonable spatial resolution, which cannot be obtained using the previously proposed image-space minimum-norm estimates (MNE) or linear constraint minimum variance (LCMV) spatial filtering reconstructions. The InI reconstructions of in vivo BOLD-contrast fMRI data during a visuomotor task show that K-InI offers 3- to 5-fold more sensitive detection of brain activation than MNE and a detection sensitivity comparable to the LCMV reconstructions. The group average of the high temporal resolution K-InI reconstructions of the hemodynamic response also shows a relative onset timing difference between the visual (first) and somatomotor (second) cortices of 400 ms (600 ms time-to-peak difference). This robust and sensitive K-InI reconstruction can be applied to dynamic MRI acquisitions using a large-n coil array to improve the spatiotemporal resolution.
Reconstruction algorithms for optoacoustic imaging based on fiber optic detectors
NASA Astrophysics Data System (ADS)
Lamela, Horacio; Díaz-Tendero, Gonzalo; Gutiérrez, Rebeca; Gallego, Daniel
2011-06-01
Optoacoustic Imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity and excellent resolution to overcome limitations of the current clinical modalities for detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Some approximate time-domain filtered back-projection reconstruction algorithms have also been reported to solve this problem. A wavelet-transform-based filter can be used to sharpen object boundaries while simultaneously preserving high contrast of the reconstructed objects. In this paper, several algorithms based on Back Projection (BP) techniques are suggested for processing OA images in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast with a wavelet-based filter.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Three-dimensional surface reconstruction from multistatic SAR images.
Rigling, Brian D; Moses, Randolph L
2005-08-01
This paper discusses reconstruction of three-dimensional surfaces from multiple bistatic synthetic aperture radar (SAR) images. Techniques for surface reconstruction from multiple monostatic SAR images already exist, including interferometric processing and stereo SAR. We generalize these methods to obtain algorithms for bistatic interferometric SAR and bistatic stereo SAR. We also propose a framework for predicting the performance of our multistatic stereo SAR algorithm, and, from this framework, we suggest a metric for use in planning strategic deployment of multistatic assets.
Image denoising by adaptive Compressed Sensing reconstructions and fusions
NASA Astrophysics Data System (ADS)
Meiniel, William; Angelini, Elsa; Olivo-Marin, Jean-Christophe
2015-09-01
In this work, Compressed Sensing (CS) is investigated as a denoising tool in bioimaging. The denoising algorithm exploits multiple CS reconstructions, taking advantage of the robustness of CS in the presence of noise via regularized reconstructions and the properties of the Fourier transform of bioimages. Multiple reconstructions at low sampling rates are combined to generate high quality denoised images using several sparsity constraints. We present different combination methods for the CS reconstructions and quantitatively compare the performance of our denoising methods to state-of-the-art ones.
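The fusion idea above can be shown schematically in one dimension. This is our simplification of the approach, not the paper's method: several random Fourier subsamplings of a noisy signal are each reconstructed with a sparsity-promoting threshold, and the reconstructions are averaged to suppress noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic fusion of multiple CS-style reconstructions (illustrative
# simplification): each draw subsamples k-space at random, enforces
# sparsity by thresholding, and the draws are averaged.

n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 4 * t / n) + 0.5 * np.sin(2 * np.pi * 9 * t / n)
noisy = clean + 0.4 * rng.standard_normal(n)

def cs_recon(sig, keep=0.5, thresh=30.0):
    """One regularized reconstruction from a random Fourier subset."""
    mask = rng.random(n) < keep               # random k-space subsampling
    spec = np.fft.fft(sig) * mask / keep      # density-compensated samples
    spec[np.abs(spec) < thresh] = 0           # sparsity constraint
    return np.fft.ifft(spec).real

fused = np.mean([cs_recon(noisy) for _ in range(16)], axis=0)

err_noisy = np.linalg.norm(noisy - clean)
err_fused = np.linalg.norm(fused - clean)
```

Each individual reconstruction is biased by its particular subsampling; averaging many of them trades that bias for a reduced noise floor, which is the essence of the fusion step.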
[Three-dimension reconstruction of ocular fundus image].
Chen, Ji; Peng, Chenglin
2008-02-01
The mathematical model for 3D reconstruction of ocular fundus images is constructed according to both the reduced eye model and a simplified model of the fundus camera optical system. The relationship between the images of emmetropic and ametropic eyes and the true shape of the ocular fundus retina is analyzed, and the mapping from the 2D ocular fundus plane image to the 3D surface image is obtained. A real example of 3D reconstruction of ocular fundus images is then given. The maximum visual field of the ocular fundus image available for three-dimensional reconstruction is determined by the maximum field angle of the fundus camera, which limits the size of the visual field of the 3D reconstructed image and the range of the z axis. According to the 3D mapping formulas, the 2D ocular fundus image data are mapped to 3D data and vein mapping is then carried out; thereafter, the 3D surface image of the ocular fundus can be drawn immediately. This method makes use of existing 2D imaging equipment to provide a 3D surface image of the patient's ocular fundus, and can provide ophthalmologists with a beneficial reference for clinical diagnosis and treatment.
In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.
Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart
2015-03-01
Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI.
Saybasili, Haris; Herzka, Daniel A.; Seiberlich, Nicole; Griswold, Mark A.
2014-01-01
Combining non-Cartesian trajectories with parallel MRI permits unmatched acceleration rates compared to traditional Cartesian MRI during real-time imaging. However, the computationally demanding reconstructions of such imaging techniques, such as k-space domain radial generalized auto-calibrating partially parallel acquisitions (radial GRAPPA) and image domain conjugate gradient sensitivity encoding (CG-SENSE), lead to longer reconstruction times and unacceptable latency for online real-time MRI on conventional computational hardware. Though CG-SENSE has been shown to work with low latency using a general-purpose graphics processing unit (GPU), to the best of our knowledge, no such effort has been made for radial GRAPPA. Radial GRAPPA reconstruction, which is robust even with highly undersampled acquisitions, is not iterative, requiring significant computation only during initial calibration while achieving good image quality for low-latency imaging applications. In this work, we present a very fast, low-latency reconstruction framework based on a heterogeneous system using multi-core CPUs and GPUs. We demonstrate an implementation of radial GRAPPA that permits reconstruction times on par with or faster than acquisition of highly accelerated datasets in both cardiac and dynamic musculoskeletal imaging scenarios. Acquisition and reconstruction times are reported. PMID:24690453
Proposal of fault-tolerant tomographic image reconstruction
NASA Astrophysics Data System (ADS)
Kudo, Hiroyuki; Takaki, Keita; Yamazaki, Fukashi; Nemoto, Takuya
2016-10-01
This paper deals with tomographic image reconstruction under the situation where some projection data bins are contaminated with abnormal data. Such situations occur in various instances of tomography. We propose a new reconstruction algorithm, called Fault-Tolerant reconstruction, outlined as follows. The least-squares (L2-norm) error function ||Ax - b||_2^2 used in ordinary iterative reconstructions is sensitive to the existence of abnormal data. The proposed algorithm instead uses the L1-norm error function ||Ax - b||_1, and we develop a row-action-type iterative algorithm using the proximal splitting framework from convex optimization. We also propose an improved version of the L1-norm reconstruction called the L1-TV reconstruction, in which a weak Total Variation (TV) penalty is added to the cost function. Simulation results demonstrate that images reconstructed with the L2-norm were severely damaged by the effect of abnormal bins, whereas the L1-norm and L1-TV reconstructions were robust to the existence of abnormal bins.
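The robustness of the L1 error norm to abnormal bins can be demonstrated on a small linear system. The sketch below uses iteratively reweighted least squares (IRLS) as a simple stand-in for the paper's proximal-splitting row-action algorithm; the setup (a random A, a few grossly corrupted entries of b) is ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# Why minimizing ||Ax - b||_1 tolerates abnormal data: a few corrupted
# entries of b wreck the L2 solution but barely move the L1 one.
# IRLS here approximates the L1 minimizer (a stand-in, not the paper's
# proximal-splitting algorithm).

m, n = 120, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true
b[rng.choice(m, size=6, replace=False)] += 50.0   # abnormal projection bins

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary L2 solution

x_l1 = x_l2.copy()
for _ in range(50):                               # IRLS for the L1 norm
    r = A @ x_l1 - b
    w = 1.0 / np.maximum(np.abs(r), 1e-6)         # downweight large residuals
    Aw = A * w[:, None]                           # row-weighted system
    x_l1 = np.linalg.solve(A.T @ Aw, Aw.T @ b)    # weighted normal equations

err_l2 = np.linalg.norm(x_l2 - x_true)
err_l1 = np.linalg.norm(x_l1 - x_true)
```

The reweighting step is the key: residuals that stay large (the abnormal bins) get weights near zero, so they effectively drop out of the fit.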
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging
Sheen, David M.; Hall, Thomas E.
2014-06-09
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
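The back-projection route can be illustrated with a generic delay-and-sum sketch in normalized units. This is not the paper's Fourier-transform-based or specific multistatic formulation, and the array geometry, pulse shape, and units below are all invented for illustration: echo samples are coherently summed over the round-trip delay from each transmit/receive pair to each pixel, and the sums align only at the true scatterer.

```python
import numpy as np

# Generic bistatic delay-and-sum back-projection (illustrative sketch in
# normalized units): simulate echoes for a point scatterer, then sum
# delayed samples over an image grid.

c = 1.0                                           # wave speed (normalized)
fs = 100.0                                        # samples per unit time
t = np.arange(512) / fs

tx = [(-0.2, 0.0), (0.0, 0.0), (0.2, 0.0)]        # transmit positions
rx = [(-0.3, 0.0), (-0.1, 0.0), (0.1, 0.0), (0.3, 0.0)]
target = (0.05, 1.0)                              # true scatterer position

def dist(a, b):
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

# Simulate a band-limited echo for every bistatic pair.
echoes = {}
for i, p in enumerate(tx):
    for j, q in enumerate(rx):
        tau = (dist(p, target) + dist(q, target)) / c
        echoes[(i, j)] = np.sinc((t - tau) * 10.0)   # pulse centered at tau

# Back-project: sum delayed samples over a small image grid.
xs = np.linspace(-0.5, 0.5, 41)
ys = np.linspace(0.5, 1.5, 41)
image = np.zeros((len(ys), len(xs)))
for iy, yv in enumerate(ys):
    for ix, xv in enumerate(xs):
        for (i, j), sig in echoes.items():
            tau = (dist(tx[i], (xv, yv)) + dist(rx[j], (xv, yv))) / c
            image[iy, ix] += np.interp(tau, t, sig)

iy, ix = np.unravel_index(np.argmax(image), image.shape)
est = (xs[ix], ys[iy])                            # peak of the image
```

The Fourier-transform-based algorithms mentioned above compute essentially the same coherent sum far more efficiently, which is their chief merit over direct back-projection.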
Fuzzy-rule-based image reconstruction for positron emission tomography
NASA Astrophysics Data System (ADS)
Mondal, Partha P.; Rajan, K.
2005-09-01
Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
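The two elementary steps above can be caricatured with crisp (non-fuzzy) rules; this is our simplification, not the paper's fuzzy-rule machinery or potential function. Edges are flagged from nearest-neighbour differences, and only un-flagged pixels are smoothed, so the sharp boundary survives while noise is averaged away.

```python
import numpy as np

rng = np.random.default_rng(3)

# Crisp caricature of the two-step fuzzy scheme: (1) detect edges from
# nearest-neighbour differences, (2) smooth only where no edge was
# detected, iterating until the image settles.

img = np.zeros((32, 32))
img[:, 16:] = 1.0                                 # one sharp density edge
noisy = img + 0.1 * rng.standard_normal(img.shape)

def step(u, edge_thresh=0.3):
    p = np.pad(u, 1, mode="edge")
    nbrs = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
    edge = np.abs(nbrs - u).max(axis=0) > edge_thresh       # step 1: edge map
    return np.where(edge, u, 0.5 * u + 0.5 * nbrs.mean(axis=0))  # step 2

u = noisy.copy()
for _ in range(20):
    u = step(u)

err_before = np.linalg.norm(noisy - img)
err_after = np.linalg.norm(u - img)
```

Replacing the hard threshold with a fuzzy membership function, as the paper does, lets pixels near an edge be partially smoothed instead of being treated all-or-nothing.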
A fast linear reconstruction method for scanning impedance imaging.
Liu, Hongze; Hawkins, Aaron R; Schultz, Stephen M; Oliphant, Travis E
2006-01-01
Scanning electrical impedance imaging (SII) has been developed and implemented as a novel high resolution imaging modality with the potential of imaging the electrical properties of biological tissues. In this paper, a fast linear model is derived and applied to the impedance image reconstruction of scanning impedance imaging. With the help of both the deblurring concept and the reciprocity principle, this new approach leads to a calibrated approximation of the exact impedance distribution rather than a relative one from the original simplified linear method. Additionally, the method shows much less computational cost than the more straightforward nonlinear inverse method based on the forward model. The kernel function of this new approach is described and compared to the kernel of the simplified linear method. Two-dimensional impedance images of a flower petal and cancer cells are reconstructed using this method. The images reveal details not present in the measured images.
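The deblurring view of such a linear reconstruction can be sketched in one dimension. This is an illustrative stand-in, not the paper's calibrated SII model: the measured profile is treated as the true impedance distribution convolved with a known kernel, and it is inverted with a regularized (Wiener-style) filter.

```python
import numpy as np

# One-dimensional deblurring sketch (illustrative only): blur a known
# "impedance" profile with a Gaussian kernel, then invert with a
# regularized Fourier-domain filter.

n = 256
x = np.zeros(n)
x[60:70] = 1.0
x[150:180] = 0.5                                  # synthetic impedance features

k = np.exp(-0.5 * ((np.arange(n) - n // 2) / 6.0) ** 2)
k /= k.sum()
K = np.fft.fft(np.fft.ifftshift(k))               # zero-phase blur kernel

measured = np.fft.ifft(np.fft.fft(x) * K).real    # blurred "measurement"

eps = 1e-3                                        # regularization strength
X_hat = np.fft.fft(measured) * np.conj(K) / (np.abs(K) ** 2 + eps)
recon = np.fft.ifft(X_hat).real

err_meas = np.linalg.norm(measured - x)
err_recon = np.linalg.norm(recon - x)
```

The regularization constant plays the same role as calibration against a known model: it keeps the inversion stable where the kernel transfers almost no signal.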
Image reconstruction in photoacoustic tomography involving layered acoustic media
Schoonover, Robert W.; Anastasio, Mark A.
2012-01-01
Photoacoustic tomography (PAT), also known as thermoacoustic or optoacoustic tomography, is a rapidly emerging biomedical imaging technique that combines optical image contrast with ultrasound detection principles. Most existing reconstruction algorithms for PAT assume the object of interest possesses homogeneous acoustic properties. The images produced by such algorithms can contain significant distortions and artifacts when the object’s acoustic properties are spatially variant. In this work, we establish an image reconstruction formula for PAT applications in which a planar detection surface is employed and the to-be-imaged optical absorber is embedded in a known planar layered acoustic medium. The reconstruction formula is exact in a mathematical sense and accounts for multiple acoustic reflections between the layers of the medium. Computer-simulation studies are conducted to demonstrate and investigate the proposed method. PMID:21643397
Bayesian 2D Current Reconstruction from Magnetic Images
NASA Astrophysics Data System (ADS)
Clement, Colin B.; Bierbaum, Matthew K.; Nowack, Katja; Sethna, James P.
We employ a Bayesian image reconstruction scheme to recover 2D currents from magnetic flux imaged with scanning SQUIDs (superconducting quantum interference devices). Magnetic flux imaging is a versatile tool to locally probe currents and magnetic moments; however, present reconstruction methods sacrifice resolution due to numerical instability. Using state-of-the-art blind deconvolution techniques, we recover the currents, the point-spread function, and the height of the SQUID loop by optimizing the probability of measuring an image. We obtain uncertainties on these quantities by sampling reconstructions. This generative modeling technique could be used to develop calibration protocols for scanning SQUIDs, to diagnose systematic noise in the imaging process, and can be applied to many tools beyond scanning SQUIDs.
Compensation for air voids in photoacoustic computed tomography image reconstruction
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Li, Lei; Wang, Lihong V.; Anastasio, Mark A.
2016-03-01
Most image reconstruction methods in photoacoustic computed tomography (PACT) assume that the acoustic properties of the object and the surrounding medium are homogeneous. This can lead to strong artifacts in the reconstructed images when there are significant variations in sound speed or density. Air voids represent a particular challenge due to the severity of the differences between the acoustic properties of air and water. In whole-body small animal imaging, the presence of air voids in the lungs, stomach, and gastrointestinal system can limit image quality over large regions of the object. Iterative reconstruction methods based on the photoacoustic wave equation can account for these acoustic variations, leading to improved resolution, improved contrast, and a reduction in the number of imaging artifacts. However, the strong acoustic heterogeneities can lead to instability or errors in the numerical wave solver. Here, the impact of air voids on PACT image reconstruction is investigated, and procedures for their compensation are proposed. The contributions of sound speed and density variations to the numerical stability of the wave solver are considered, and a novel approach for mitigating the impact of air voids while reducing the computational burden of image reconstruction is identified. These results are verified by application to an experimental phantom.
Padhi, Shantanu K.; Howard, John
2013-01-01
Nonlinear microwave imaging relies heavily on an accurate numerical electromagnetic model of the antenna system. The model is used to simulate scattering data that are compared with their measured counterpart in order to reconstruct the image. In this paper, an antenna system immersed in water is used to image different canonical objects in order to investigate the implications of modeling errors on the final reconstruction, using a time-domain iterative inverse reconstruction algorithm and three-dimensional FDTD modeling. With the test objects immersed in a background of air and tap water, respectively, we have studied the impact of antenna modeling errors and errors in the modeling of the background media, and have made a comparison with a two-dimensional version of the algorithm. In conclusion, even small modeling errors in the antennas can significantly alter the reconstructed image. Since the image reconstruction procedure is highly nonlinear, general conclusions are difficult to draw. In our case, with the antenna system immersed in water and our present FDTD-based electromagnetic model, the imaging results improve if we refrain from modeling the water-wall-air interface and instead use a homogeneous background of water in the model. PMID:23606825
Geoaccurate three-dimensional reconstruction via image-based geometry
NASA Astrophysics Data System (ADS)
Walvoord, Derek J.; Rossi, Adam J.; Paul, Bradley D.; Brower, Bernie; Pellechia, Matthew F.
2013-05-01
Recent technological advances in computing capabilities and persistent surveillance systems have led to increased focus on new methods of exploiting geospatial data, bridging traditional photogrammetric techniques and state-of-the-art multiple view geometry methodology. The structure from motion (SfM) problem in Computer Vision addresses scene reconstruction from uncalibrated cameras, and several methods exist to remove the inherent projective ambiguity. However, the reconstruction remains in an arbitrary world coordinate frame without knowledge of its relationship to a fixed earth-based coordinate system. This work presents a novel approach for obtaining geoaccurate image-based 3-dimensional reconstructions in the absence of ground control points by using a SfM framework and the full physical sensor model of the collection system. Absolute position and orientation information provided by the imaging platform can be used to reconstruct the scene in a fixed world coordinate system. Rather than triangulating pixels from multiple image-to-ground functions, each with its own random error, the relative reconstruction is computed via image-based geometry, i.e., geometry derived from image feature correspondences. In other words, the geolocation accuracy is improved using the relative distances provided by the SfM reconstruction. Results from the Exelis Wide-Area Motion Imagery (WAMI) system are provided to discuss conclusions and areas for future work.
Iterative image reconstruction and its role in cardiothoracic computed tomography.
Singh, Sarabjeet; Khawaja, Ranish Deedar Ali; Pourjabbar, Sarvenaz; Padole, Atul; Lira, Diego; Kalra, Mannudeep K
2013-11-01
Revolutionary developments in multidetector-row computed tomography (CT) scanner technology offer several advantages for imaging of cardiothoracic disorders. As a result, expanding applications of CT now account for >85 million CT examinations annually in the United States alone. Given the large number of CT examinations performed, concerns over increase in population-based risk for radiation-induced carcinogenesis have made CT radiation dose a top safety concern in health care. In response to this concern, several technologies have been developed to reduce the dose with more efficient use of scan parameters and the use of "newer" image reconstruction techniques. Although iterative image reconstruction algorithms were first introduced in the 1970s, filtered back projection was chosen as the conventional image reconstruction technique because of its simplicity and faster reconstruction times. With subsequent advances in computational speed and power, iterative reconstruction techniques have reemerged and have shown the potential of radiation dose optimization without adversely influencing diagnostic image quality. In this article, we review the basic principles of different iterative reconstruction algorithms and their implementation for various clinical applications in cardiothoracic CT examinations for reducing radiation dose.
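The contrast between filtered back projection and iterative reconstruction can be illustrated on a toy linear system Ax = b, with a nonnegative A playing the role of the CT system matrix. The sketch below uses SART-style normalized updates; commercial iterative algorithms are proprietary and considerably more sophisticated, so this shows only the principle:

```python
import numpy as np

def sirt(A, b, n_iter=200, relax=1.0):
    """Toy SART/SIRT-style iteration x += relax * C A^T R (b - Ax), where
    R and C hold inverse row and column sums of the (nonnegative) system
    matrix A as preconditioners; converges for relax in (0, 2)."""
    row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)
    R = 1.0 / np.where(row_sums > 0, row_sums, 1.0)
    C = 1.0 / np.where(col_sums > 0, col_sums, 1.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * C * (A.T @ (R * (b - A @ x)))
    return x
```

Each pass compares predicted projections A x with the measured b and back-projects the normalized residual, which is the basic loop that dose-reduction algorithms refine with statistical weighting and regularization.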
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral-imaging-based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
High-performance parallel image reconstruction for the New Vacuum Solar Telescope
NASA Astrophysics Data System (ADS)
Li, Xue-Bao; Liu, Zhong; Wang, Feng; Jin, Zhen-Yu; Xiang, Yong-Yuan; Zheng, Yan-Fang
2015-06-01
Many technologies have been developed to help improve spatial resolution of observational images for ground-based solar telescopes, such as adaptive optics (AO) systems and post-processing reconstruction. As any AO system correction is only partial, it is indispensable to use post-processing reconstruction techniques. In the New Vacuum Solar Telescope (NVST), a speckle-masking method is used to achieve the diffraction-limited resolution of the telescope. Although the method is very promising, the computation is quite intensive, and the amount of data is tremendous, requiring several months to reconstruct observational data of one day on a high-end computer. To accelerate image reconstruction, we parallelize the program package on a high-performance cluster. We describe parallel implementation details for several reconstruction procedures. The code is written in the C language using the Message Passing Interface (MPI) and is optimized for parallel processing in a multiprocessor environment. We show the excellent performance of parallel implementation, and the whole data processing speed is about 71 times faster than before. Finally, we analyze the scalability of the code to find possible bottlenecks, and propose several ways to further improve the parallel performance. We conclude that the presented program is capable of executing reconstruction applications in real-time at NVST.
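Because each burst of short-exposure frames is reconstructed independently, the workload is naturally data-parallel. The sketch below mimics that distribution with a Python thread pool and a stand-in per-burst step; the actual NVST pipeline is written in C with MPI, so this illustrates only the scatter-gather scheme, not the code:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_burst(frames):
    """Stand-in for one speckle reconstruction: average the Fourier
    amplitudes of a burst of short-exposure frames."""
    spectra = np.fft.fft2(frames, axes=(-2, -1))
    return np.abs(spectra).mean(axis=0)

def parallel_reconstruct(bursts, workers=4):
    # Bursts are independent, so they scatter cleanly across workers,
    # mirroring the per-subfield distribution of the MPI implementation.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reconstruct_burst, bursts))
```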
Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.
Mastanduno, Michael A; Gambhir, Sanjiv S
2016-10-01
Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth- and spectrally-derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy with less than 5% error in large 3 cm diameter 2D geometries with multiple targets and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method improved considerably on filtered backprojection in terms of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.
Image of OCT denoising and 3D reconstructing method
NASA Astrophysics Data System (ADS)
Yan, Xue-tao; Yang, Jun; Liu, Zhi-hai; Yuan, Li-bo
2007-11-01
Optical coherence tomography (OCT) is a novel tomographic method that provides non-contact, noninvasive in vivo tomograms with high resolution and high speed; it has therefore become an important direction in biomedical imaging. However, when an OCT system is applied to a specimen, noise and distortion appear because the speed of the system is limited, so the images need reconstruction. This article studies 3D reconstruction methods for OCT. The necessary image preprocessing steps include denoising, recovery, and segmentation. For highly scattering media such as skin specimens, we use the transport properties of photons, developing denoising and recovery algorithms based on an optical model of photon propagation in biological tissue to remove speckle from skin images and perform 3D reconstruction. We propose a dynamic average background estimation algorithm based on time-domain estimation. This method combines estimation in the time domain with filtering in the frequency domain to remove image noise effectively. In addition, a noise model is constructed for image recovery to avoid longitudinal distortion, depth-dependent amplitude distortion, and image blurring. Through comparison and discussion, the algorithms are refined to improve image quality. The article also optimizes an iterative reconstruction algorithm to improve convergence speed and realizes 3D reconstruction of OCT specimen data, opening the door to further analysis and diagnosis of diseases.
He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C
2011-06-01
In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values in both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete-data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross-tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having no defects, reversible defects, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the
Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.
Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H
2015-11-01
Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.
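A sparsity-constrained model-based reconstruction of this kind can be illustrated with the classic iterative shrinkage-thresholding algorithm (ISTA) for l1-regularized least squares. The paper's actual forward model couples coil sensitivities, spiral sampling, and a perfusion time-course model, so treat this as a generic sketch:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """ISTA for 0.5 * ||Ax - b||^2 + lam * ||x||_1: a gradient step on the
    data term followed by shrinkage, standing in for the sparse model-based
    reconstruction (A would be the encoding operator, x the image series)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```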
Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.
Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H
2013-05-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction.
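The core of the approach is the standard linear Kalman predict/update cycle, sketched below. In the paper the state is the dynamic image and the measurement operator encodes Cartesian k-space sampling and coil sensitivities; here both are generic matrices:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.  In the dynamic
    MRI setting, z would hold the newly acquired k-space samples and x the
    image state being tracked in real time."""
    x_pred = F @ x                       # predict state
    P_pred = F @ P @ F.T + Q             # predict covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Because each update touches only the newly measured samples, the cost per frame is small, which is what makes real-time reconstruction feasible.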
Blind image quality metrics for optimal speckle image reconstruction in horizontal imaging scenarios
NASA Astrophysics Data System (ADS)
Bos, Jeremy P.; Roggemann, Michael C.
2012-10-01
We propose using certain blind image quality metrics to tune the inverse filter used for amplitude recovery in speckle imaging systems. The inverse filter in these systems requires knowledge of the blurring function. When imaging through turbulence over long horizontal paths near the ground, an estimate of the blurring function can be obtained from theoretical models incorporating an estimate of the integrated turbulence strength along the imaging path. Estimates provided by the user in these scenarios are likely to be inaccurate, resulting in suboptimal reconstructions. In this work, we use two blind image quality metrics, one based on image sharpness and the other on anisotropy in image entropy, to tune the value of integrated turbulence in the long-exposure atmospheric blurring model, with the goal of providing images equivalent to the minimum mean squared error (MMSE) image available. We find that both blind metrics are capable of choosing images that differ from the MMSE image by less than 4% using simulated data. Using the image sharpness metric, it was possible to produce images within 1% of the MMSE on average. Both metrics were also able to produce high-quality reconstructions from field data.
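A blind sharpness metric of the kind used for tuning can be sketched as normalized gradient energy, with the best reconstruction chosen by maximizing the metric over the candidates produced by different turbulence-strength guesses (the paper's exact metric definitions may differ):

```python
import numpy as np

def sharpness(img):
    """Blind sharpness metric: gradient energy normalized by total flux
    (one of several sharpness definitions in the literature)."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx**2 + gy**2).sum() / (img.sum() ** 2 + 1e-12))

def select_best(candidates):
    """Pick the candidate maximizing the blind metric, standing in for
    tuning the integrated-turbulence parameter of the inverse filter."""
    return max(candidates, key=sharpness)
```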
Bayesian-Optimal Image Reconstruction for Translational-Symmetric Filters
NASA Astrophysics Data System (ADS)
Tajima, Satohiro; Inoue, Masato; Okada, Masato
2008-05-01
Translational-symmetric filters provide a foundation for various kinds of image processing. When a filtered image containing noise is observed, the original one can be reconstructed by Bayesian inference. Furthermore, hyperparameters such as the smoothness of the image and the noise level in the communication channel through which the image is observed can be estimated from the observed image by maximizing the marginalized likelihood. In this article we apply a diagonalization technique based on the Fourier transform to this image reconstruction problem. This diagonalization not only reduces computational costs but also facilitates theoretical analyses of the estimation and reconstruction performances. We take as an example the Mexican-hat-shaped neural cell receptive field seen in the early visual systems of animals, and we compare the reconstruction performances obtained under various hyperparameter and filter parameter conditions with each other and with the corresponding performances obtained under no-filter conditions. The results show that using a Mexican-hat filter can reduce the reconstruction error.
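The computational point, that a translational-symmetric filter is diagonal in the Fourier basis, can be demonstrated directly: applying the filter is an elementwise product of spectra. The Mexican-hat kernel below is a difference-of-Gaussians stand-in built on a periodic grid (the 1.6 width ratio is a conventional choice, not taken from the paper):

```python
import numpy as np

def mexican_hat_kernel(shape, sigma=2.0):
    """Difference-of-Gaussians stand-in for a Mexican-hat receptive field,
    defined on the periodic grid with its center at the origin."""
    y, x = np.indices(shape)
    y = np.minimum(y, shape[0] - y)   # wrapped (periodic) distances
    x = np.minimum(x, shape[1] - x)
    r2 = y**2 + x**2
    g = lambda s: np.exp(-r2 / (2 * s * s)) / (2 * np.pi * s * s)
    return g(sigma) - g(1.6 * sigma)

def apply_filter(img, kernel):
    """Translational symmetry diagonalizes the filter in the Fourier
    basis: filtering is an elementwise product of spectra."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
```

Working per frequency in this diagonal basis is what reduces both the reconstruction cost and the bookkeeping in the marginal-likelihood analysis.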
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and quadratic-prior OSMAPOSL algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The proposed simulated PET evaluation method, based on the TLC plane source, can be useful for the resolution assessment of PET scanners.
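Estimating the MTF from a plane-source image reduces, along the transverse profile, to Fourier-transforming the line spread function and normalizing at zero frequency. A minimal sketch under that assumption:

```python
import numpy as np

def mtf_from_lsf(lsf, dx=1.0):
    """MTF as the normalized magnitude of the Fourier transform of the
    line spread function extracted from the plane-source image."""
    otf = np.fft.rfft(lsf)
    mtf = np.abs(otf) / np.abs(otf[0])
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf
```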
Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity
Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of contrast changes nor repeated motion compensation. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease the sampling ratio or, alternatively, improve the reconstruction quality. PMID:25371704
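TGV regularization and FCSA are more elaborate than can be sketched briefly, but the TV building block can be illustrated with a smoothed-TV denoiser solved by gradient descent; this stands in for the TV proximal step inside a splitting scheme and is not the paper's algorithm:

```python
import numpy as np

def tv_denoise(y, lam=0.1, n_iter=300, eps=1e-2, step=0.1):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(|Dx|^2 + eps):
    a smoothed total-variation denoiser (forward differences, replicated
    boundary), standing in for the TV proximal step of a splitting scheme."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=1, append=x[:, -1:])  # horizontal differences
        gy = np.diff(x, axis=0, append=x[-1:, :])  # vertical differences
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag
        adj = -np.diff(px, axis=1, prepend=0.0) - np.diff(py, axis=0, prepend=0.0)
        x -= step * ((x - y) + lam * adj)          # adjoint of D applied to p
    return x
```

TGV replaces the single gradient penalty with a coupled first/second-order penalty, which is what removes the staircasing visible with plain TV.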
Digital Three-dimensional Reconstruction Based On Integral Imaging
Li, Chao; Chen, Qian; Hua, Hong; Mao, Chen; Shao, Ajun
2015-01-01
This paper presents a digital three-dimensional reconstruction method based on a set of small-baseline elemental images captured with a micro-lens array and a CCD sensor. We adopt the ASIFT (Affine Scale-Invariant Feature Transform) operator as the image registration method. Among the set of captured elemental images, the elemental image located in the middle of the overall image field is used as the reference, and corresponding matching points in each surrounding elemental image are calculated, which makes it possible to accurately compute the depth of object points relative to the reference image frame. Finally, 3D reconstruction is achieved by applying an optimization algorithm to the redundant matching points. Our experimental results demonstrate the excellent accuracy and speed of the proposed algorithm. PMID:26236151
2010-01-01
Background The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to mitigate the ill-posedness of the inverse problem. Methods The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function, to tackle the ill-posed nature of the inverse problem. Results Simulation results demonstrate that the tree-structured Schur complement decomposition strategy clearly outperforms the previous methods, such as the conventional Conjugate-Gradient (CG) and the Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly mitigate the ill-posedness of the inverse problem. Conclusions The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process. PMID:20482886
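The Schur complement idea, eliminating one block of unknowns so that only a smaller condensed system needs a direct solve, can be shown at a single level; the paper applies it recursively along a tree and with iterative inner solvers:

```python
import numpy as np

def schur_solve(A, B, C, D, f, g):
    """Solve the block system [[A, B], [C, D]] [x; y] = [f; g] by
    eliminating x through the Schur complement S = D - C A^{-1} B,
    the one-level analogue of the tree-structured decomposition."""
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B                       # condensed (smaller) system
    y = np.linalg.solve(S, g - C @ Ainv_f)   # solve for the interface block
    x = Ainv_f - Ainv_B @ y                  # back-substitute
    return x, y
```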
Acceleration of the universe: a reconstruction of the effective equation of state
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-07-01
This work is based upon a parametric reconstruction of the effective or total equation of state in a model of the Universe with accelerated expansion. The constraints on the model parameters are obtained by maximum-likelihood analysis using the supernova distance modulus data, observational Hubble data, baryon acoustic oscillation data, and cosmic microwave background shift parameter data. For statistical comparison, the same analysis has also been carried out for the w cold dark matter (wCDM) dark energy model. Different model selection criteria (the Akaike information criterion and the Bayesian information criterion) clearly indicate that the reconstructed model is well consistent with the wCDM model. Both models [the weff(z) model and the wCDM model] have also been presented in the (q0, j0) parameter space. Tighter constraints on the present values of the dark energy equation of state parameter (wDE(z = 0)) and the cosmological jerk (j0) have been achieved for the reconstructed model.
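The two model selection criteria mentioned are simple functions of the best-fit chi-square; a sketch, assuming the Gaussian-likelihood case where chi-square equals -2 ln L up to a model-independent constant:

```python
import numpy as np

def aic_bic(chi2_min, n_params, n_data):
    """Akaike and Bayesian information criteria from the best-fit
    chi-square; lower values indicate the preferred model, with BIC
    penalizing extra parameters more strongly for large datasets."""
    aic = chi2_min + 2 * n_params
    bic = chi2_min + n_params * np.log(n_data)
    return aic, bic
```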
Highly accelerated projection imaging with coil sensitivity encoding for rapid MRI
Ersoz, Ali; Arpinar, Volkan Emre; Muftuler, L. Tugan
2013-01-01
Purpose: Rapid magnetic resonance imaging (MRI) acquisition is typically achieved by acquiring all or most lines of k-space after one radio frequency (RF) excitation. Parallel imaging techniques can further accelerate data acquisition by acquiring fewer phase-encoded lines and utilizing the spatial sensitivity information of the RF coil arrays. The goal of this study was to develop a new MRI data acquisition and reconstruction technique that is capable of reconstructing a two-dimensional (2D) image using highly undersampled k-space data without any special hardware. Such a technique would be very efficient, as it would significantly reduce the time wasted during multiple RF excitations or phase encoding and gradient switching periods. Methods: The essence of this new technique is to densely sample a small number of projections, which should be acquired at an angle other than 0° or multiples of 45°. This results in multiple rays passing through a voxel and provides new and independent measurements for each voxel. Then the images are reconstructed using the unique information coming from these projections combined with RF coil sensitivity profiles. The feasibility of this new technique was investigated with realistic simulations and experimental studies using a phantom and compared with conventional nonuniform fast Fourier transform technique. Eigenvalue analysis and error calculations were conducted to find optimal projection angles and minimum requirements for dense sampling. Results: Reconstruction of 64 × 64 images was done using a single projection from simulated data under different noise levels. Simulated reconstruction was also tested with two projections to assess the improvement. Experimental phantom images were reconstructed at higher resolution using 4, 8, and 16 projections. Cross-sectional profiles illustrate that the new technique resolved compartment boundaries clearly. Conclusions: Simulations demonstrated that only a single k-space line might be
Probe and object function reconstruction in incoherent stem imaging
Nellist, P.D.; Pennycook, S.J.
1996-09-01
Using the phase-object approximation, it is shown how an annular dark-field (ADF) detector in a scanning transmission electron microscope (STEM) leads to an image which can be described by an incoherent model. The point spread function is found to be simply the illuminating probe intensity. An important consequence of this is that there is no phase problem in the imaging process, which allows various image processing methods to be applied directly to the image intensity data. Using an image of GaAs<110>, the probe intensity profile is reconstructed, confirming the existence of a 1.3 Å probe in a 300 kV STEM. It is shown that simply deconvolving this reconstructed probe from the image data does not improve its interpretability, because the dominant effects of the imaging process arise simply from the restricted resolution of the microscope. However, use of the reconstructed probe in a maximum entropy reconstruction is demonstrated, which allows information beyond the resolution limit to be restored and does allow improved image interpretation.
3D Image Reconstruction: Determination of Pattern Orientation
Blankenbecler, Richard
2003-03-13
The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.
Reconstruction of indoor scene from a single image
NASA Astrophysics Data System (ADS)
Wu, Di; Li, Hongyu; Zhang, Lin
2015-03-01
Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover the 3D model of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and the plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With Phase Congruency and the Harris Detector, the line segments can be detected accurately, which is a prerequisite. Perspective transform is defined as a reliable method whereby the points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image, although the available depth information is limited.
Few-view image reconstruction with dual dictionaries
Lu, Yang; Zhao, Jun; Wang, Ge
2011-01-01
In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. PMID:22155989
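As a point of reference for the SART component of the algorithm above (the dual-dictionary and TV steps are the paper's contribution and are omitted here), a minimal SART iteration on a toy system might look like the following sketch; the matrix and data are hypothetical stand-ins for a CT projection operator.

```python
import numpy as np

def sart(A, b, iters=500, lam=1.0):
    """Plain SART: x <- x + lam * V^-1 A^T W (b - A x), where W and V hold the
    inverse row and column sums of A (a simultaneous Kaczmarz-type update)."""
    row = A.sum(axis=1)
    row[row == 0] = 1.0
    col = A.sum(axis=0)
    col[col == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + lam * (A.T @ ((b - A @ x) / row)) / col
    return x

# tiny consistent system standing in for the CT projection matrix
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_hat = sart(A, A @ x_true)
```

In the paper this update alternates with dictionary-based sparse coding of image patches and TV minimization; the SART step alone converges to the least-squares solution for consistent data.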
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
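The shrink-the-ML-update idea can be seen with the two simplest prior choices. The paper derives inverse quadratic and inverse cubic shrinkage functions for generalized Laplacian and generalized Gaussian priors; this sketch uses only the plain Gaussian and Laplacian members, for which the shrinkage has a familiar closed form.

```python
import numpy as np

def shrink_gaussian(u, beta):
    """MAP shrinkage of an unregularized ML update u under a quadratic (Gaussian)
    prior: argmin_x 0.5*(x - u)**2 + beta*x**2  =>  x = u / (1 + 2*beta)."""
    return u / (1.0 + 2.0 * beta)

def shrink_laplacian(u, beta):
    """Soft-threshold shrinkage under a Laplacian prior:
    argmin_x 0.5*(x - u)**2 + beta*|x|  =>  x = sign(u) * max(|u| - beta, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - beta, 0.0)

u = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])   # hypothetical unregularized ML updates
```

Each iteration of the paper's algorithm computes the unregularized ML update for the scattering density and then applies a shrinkage of this kind voxel-wise, which is what stabilizes the otherwise noisy maximum likelihood images.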
Application of accelerated acquisition and highly constrained reconstruction methods to MR
NASA Astrophysics Data System (ADS)
Wang, Kang
2011-12-01
There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high resolution coronary imaging, MR T1 or T2 mapping, etc. The requirement for fast acquisition and novel reconstruction methods is driven by clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI). The need for acquiring many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel image reconstruction methods to overcome the artifacts related to the undersampling. (2) Dynamic hyperpolarized 13C spectroscopic imaging. HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, and therefore it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamics of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography. The diagnosis of vascular diseases often requires large coverage of human body anatomies with high spatial resolution and sufficient temporal resolution for the separation of arterial phases from venous phases. The goal of simultaneously achieving high spatial and temporal resolution has
Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza
2013-01-01
A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the GPU-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction, yielding a 34-54 times speed-up compared with the C++ implementation. Copyright © 2012 Wiley Periodicals, Inc.
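The kind of iterative CS reconstruction being accelerated here can be sketched on a CPU with iterative soft thresholding. This toy version uses 2D Cartesian random sampling and assumes the image itself is sparse (the paper works with 3D radial data, a sparsifying transform, and a GPU); all data below are synthetic.

```python
import numpy as np

def ista_cs(mask, y, lam=0.02, iters=300):
    """Iterative soft-thresholding for min_x 0.5*||M F x - y||^2 + lam*||x||_1,
    with F the orthonormal 2D DFT and M a binary k-space sampling mask."""
    x = np.zeros(mask.shape)
    for _ in range(iters):
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.real(np.fft.ifft2(resid, norm="ortho"))   # gradient step (step size 1)
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)    # soft threshold
    return x

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[5, 7], truth[20, 12] = 1.0, 0.8        # a sparse "image"
mask = rng.random((32, 32)) < 0.4            # keep ~40% of k-space at random
y = mask * np.fft.fft2(truth, norm="ortho")
rec = ista_cs(mask, y)
```

With incoherent (random) sampling, the undersampling artifacts behave like noise that the thresholding step suppresses, which is the mechanism the abstract refers to when it says CS removes the incoherent artifacts.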
Klix, Sabrina; Hezel, Fabian; Fuchs, Katharina; Ruff, Jan; Dieringer, Matthias A.; Niendorf, Thoralf
2014-01-01
Purpose Design, validation and application of an accelerated fast spin-echo (FSE) variant that uses a split-echo approach for self-calibrated parallel imaging. Methods For self-calibrated, split-echo FSE (SCSE-FSE), extra displacement gradients were incorporated into FSE to decompose odd and even echo groups, which were independently phase encoded to derive coil sensitivity maps and to generate undersampled data (reduction factor up to R = 3). Reference and undersampled data were acquired simultaneously. SENSE reconstruction was employed. Results The feasibility of SCSE-FSE was demonstrated in phantom studies. Point spread function performance of SCSE-FSE was found to be competitive with traditional FSE variants. The immunity of SCSE-FSE to motion-induced mis-registration between reference and undersampled data was shown using a dynamic left ventricular model and cardiac imaging. The applicability of black blood prepared SCSE-FSE for cardiac imaging was demonstrated in healthy volunteers, including accelerated multi-slice per breath-hold imaging and accelerated high spatial resolution imaging. Conclusion SCSE-FSE obviates the need for external reference scans for SENSE reconstructed parallel imaging with FSE. SCSE-FSE reduces the risk of mis-registration between reference scans and accelerated acquisitions. SCSE-FSE is feasible for imaging of the heart and of large cardiac vessels but also meets the needs of brain, abdominal and liver imaging. PMID:24728341
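The SENSE unfolding step itself (independent of how the coil sensitivity maps are calibrated, which is the paper's split-echo contribution) amounts to a small per-pixel least-squares solve. A minimal 1D Cartesian sketch with synthetic coil maps and an R = 2 reduction factor:

```python
import numpy as np

rng = np.random.default_rng(2)
n, ncoil, R = 8, 4, 2                       # image size, number of coils, reduction factor
# synthetic complex coil sensitivity maps and a real 1D "image"
S = rng.standard_normal((ncoil, n)) + 1j * rng.standard_normal((ncoil, n))
x = rng.standard_normal(n)
# R=2 undersampling folds pixel p onto pixel p + n/R in each coil image
folded = np.stack([(S[c] * x)[: n // R] + (S[c] * x)[n // R :] for c in range(ncoil)])
x_rec = np.empty(n)
for p in range(n // R):
    E = S[:, [p, p + n // R]]               # ncoil x R encoding matrix for this pixel pair
    sol, *_ = np.linalg.lstsq(E, folded[:, p], rcond=None)
    x_rec[[p, p + n // R]] = sol.real       # unfold the aliased pair
```

With more coils than the reduction factor, each pixel pair is overdetermined and the unfold is exact for noiseless data; the quality of the sensitivity maps, which SCSE-FSE derives from the split echoes without a separate reference scan, determines the quality of this solve in practice.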
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; Newville, Matthew G.; De Carlo, Francesco
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times markedly. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without introducing reconstruction artifacts that impact the interpretation of results.
Image reconstruction by phase retrieval with transverse translation diversity
NASA Astrophysics Data System (ADS)
Guizar-Sicairos, Manuel; Fienup, James R.
2008-08-01
Measuring a series of far-field intensity patterns from an object, taken after a transverse translation of the object with respect to a known illumination pattern, has been shown to make the problem of image reconstruction by phase retrieval much more robust. However, previously reported reconstruction algorithms [Phys. Rev. Lett. 93, 023903 (2004)] rely on an accurate knowledge of the translations and illumination pattern for a successful reconstruction. We developed a nonlinear optimization algorithm that allows optimization over the translations and illumination pattern, dramatically improving the reconstructions if the system parameters are inaccurately known [Opt. Express 16, 7264 (2008)]. In this paper we compare reconstructions obtained with these algorithms under realistic experimental scenarios.
Tomographic mesh generation for OSEM reconstruction of SPECT images
NASA Astrophysics Data System (ADS)
Lu, Yao; Yu, Bo; Vogelsang, Levon; Krol, Andrzej; Xu, Yuesheng; Hu, Xiaofei; Feiglin, David
2009-02-01
To improve the quality of OSEM SPECT reconstruction in the mesh domain, we implemented an adaptive mesh generation method that produces a tomographic mesh consisting of triangular elements with size and density commensurate with the geometric detail of the objects. Node density and element size change smoothly as a function of distance from the edges and of edge curvature, without the creation of 'bad' elements. Tomographic performance of mesh-based OSEM reconstruction is controlled by the tomographic mesh structure, i.e. the node density distribution, which in turn is governed by the number of key points on the boundaries. A greedy algorithm is used to influence the distribution of nodes on the boundaries. The relationship between tomographic mesh properties and OSEM reconstruction quality has been investigated. We conclude that by selecting an adequate number of key points, one can produce a tomographic mesh with the lowest number of nodes sufficient to provide the desired quality of reconstructed images, appropriate for the imaging system properties.
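The underlying OSEM update is the same whether the system matrix is voxel-based or mesh-based; only the basis functions behind the matrix entries change. A minimal dense-matrix sketch on hypothetical toy data (one subset reduces to plain MLEM):

```python
import numpy as np

def osem(A, y, n_subsets=1, iters=1000):
    """Ordered-subsets EM: for each subset S of projection rows,
    x <- x * (A_S^T (y_S / (A_S x))) / (A_S^T 1)."""
    m, n = A.shape
    x = np.ones(n)                                   # positive start, stays positive
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(iters):
        for S in subsets:
            As, yS = A[S], y[S]
            x = x * (As.T @ (yS / (As @ x))) / As.sum(axis=0)
    return x

# tiny positive system standing in for the (mesh-based) system matrix
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_hat = osem(A, A @ x_true)
```

In the mesh setting of the paper above, each column of A corresponds to a node basis function of the triangular mesh rather than a square pixel, so the mesh structure directly determines the system matrix and hence the reconstruction quality.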
Prospective regularization design in prior-image-based reconstruction
Dang, Hao; Siewerdsen, Jeffrey H.; Stayman, J. Webster
2015-01-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g., through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in
PET image reconstruction: a robust state space approach.
Liu, Huafeng; Tian, Yi; Shi, Pengcheng
2005-01-01
Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet basic assumptions of these algorithms. In addition, the difficulty in determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the usage of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models, and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems into a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of the measurement data. Since the H∞ filter seeks minimax-error estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of the measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.
A measurement system and image reconstruction in magnetic induction tomography.
Vauhkonen, M; Hamsch, M; Igney, C H
2008-06-01
Magnetic induction tomography (MIT) is a technique for imaging the internal conductivity distribution of an object. In MIT, current-carrying coils are used to induce eddy currents in the object and the induced voltages are sensed with other coils. From these measurements, the internal conductivity distribution of the object can be reconstructed. In this paper, we introduce a 16-channel MIT measurement system that is capable of parallel readout of 16 receiver channels. The parallel measurements are carried out using high-quality audio sampling devices. Furthermore, approaches for reconstructing MIT images developed for the 16-channel MIT system are introduced. We consider low conductivity applications, conductivity less than 5 S m⁻¹, and we use a frequency of 10 MHz. In the image reconstruction, we use the time-harmonic Maxwell's equation for the electric field. This equation is solved with the finite element method using edge elements, and the images are reconstructed using a generalized Tikhonov regularization approach. Both difference and static image reconstruction approaches are considered. Results from simulations and real measurements collected with the Philips 16-channel MIT system are shown.
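The generalized Tikhonov step used in such reconstructions can be sketched as a regularized normal-equation solve. The Jacobian below is a hypothetical stand-in; in the paper it would come from the edge-element FEM forward model.

```python
import numpy as np

def tikhonov_step(J, dv, alpha, L=None):
    """Generalized Tikhonov: argmin_ds ||J ds - dv||^2 + alpha^2 ||L ds||^2,
    solved via the regularized normal equations (L = I gives standard Tikhonov)."""
    n = J.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(J.T @ J + alpha**2 * (L.T @ L), J.T @ dv)

J = np.diag([1.0, 2.0, 3.0])              # stand-in for the FEM-based Jacobian
ds_true = np.array([1.0, 1.0, 1.0])       # conductivity perturbation to recover
ds = tikhonov_step(J, J @ ds_true, alpha=1e-6)
```

For difference imaging, dv is the change in measured voltages between two states; for static imaging the same step is applied iteratively with the Jacobian relinearized around the current conductivity estimate.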
DCT and DST Based Image Compression for 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-03-01
This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts where the low-frequency components are compressed by arithmetic coding and the high frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high frequency components. The technique is demonstrated by compressing 2D images up to a 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and with equivalent perceptual quality to JPEG2000.
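The two-transform core of the method (row-wise DCT, then column-wise DST) is directly expressible with scipy.fft; the quantization, arithmetic coding, and binary-search recovery stages are omitted from this sketch.

```python
import numpy as np
from scipy.fft import dct, dst, idct, idst

def forward(img):
    """Step 1: 1D DCT along each row; step 2: 1D DST along each column."""
    return dst(dct(img, axis=1, norm="ortho"), axis=0, norm="ortho")

def inverse(coef):
    """Invert in the reverse order: undo the column DST, then the row DCT."""
    return idct(idst(coef, axis=0, norm="ortho"), axis=1, norm="ortho")

img = np.arange(64.0).reshape(8, 8)       # toy 8x8 "image"
rec = inverse(forward(img))               # lossless round trip before quantization
```

With orthonormal normalization both transforms are unitary, so all loss in the full pipeline comes from the quantization of the high-frequency coefficients, not from the transforms themselves.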
Xie, Jingsi; Lai, Peng; Huang, Feng; Li, Yu; Li, Debiao
2010-05-01
Radial sampling has been demonstrated to be potentially useful in cardiac magnetic resonance imaging because it is less susceptible to motion than Cartesian sampling. Nevertheless, its capability of imaging acceleration remains limited by undersampling-induced streaking artifacts. In this study, a self-calibrated reconstruction method was developed to suppress streaking artifacts for highly accelerated parallel radial acquisitions in cardiac magnetic resonance imaging. Two- (2D) and three-dimensional (3D) radial k-space data were collected from a phantom and healthy volunteers. Images reconstructed using the proposed method and the conventional regridding method were compared based on statistical analysis on a four-point scale imaging scoring. It was demonstrated that the proposed method can effectively remove undersampling streaking artifacts and significantly improve image quality (P<.05). With the use of the proposed method, image score (1-4, 1=poor, 2=good, 3=very good, 4=excellent) was improved from 2.14 to 3.34 with the use of an undersampling factor of 4 and from 1.09 to 2.5 with the use of an undersampling factor of 8. Our study demonstrates that the proposed reconstruction method is effective for highly accelerated cardiac imaging applications using parallel radial acquisitions without calibration data.
NASA Astrophysics Data System (ADS)
Archer, Glen E.; Bos, Jeremy P.; Roggemann, Michael C.
2012-05-01
Terrestrial imaging over very long horizontal paths is increasingly common in surveillance and defense systems. All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. This paper explores the Mean-Square-Error (MSE) performance of a multi-frame-blind-deconvolution-based reconstruction technique using a non-linear optimization strategy to recover a reconstructed object. Three sets of 70 images, representing low, moderate, and severe turbulence degradation, were simulated from a diffraction-limited image taken with a professional digital camera. Reconstructed objects showed significant improvement in mean squared error: 54, 22, and 14 percent for the low, moderate, and severe turbulence cases, respectively.
Parallel hyperspectral image reconstruction using random projections
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.
2016-10-01
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated as an effective and very lightweight way to reduce the number of measurements in hyperspectral data, thereby reducing the volume of data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA). Experimental results conducted using synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
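The subspace idea behind this kind of reconstruction can be illustrated in a few lines. Unlike SpeCA, which estimates the subspace blindly from the projections themselves, this sketch assumes the subspace basis is known at the ground station; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, p, npix = 50, 5, 200                  # spectral bands, subspace dim, pixels
E = rng.standard_normal((bands, p))          # subspace basis (e.g. endmember spectra)
X = E @ rng.random((p, npix))                # hyperspectral pixels lying in a p-dim subspace

m = 12                                       # measurements per pixel, m << bands
Phi = rng.standard_normal((m, bands)) / np.sqrt(m)
Y = Phi @ X                                  # random projections sent to the ground station

# with the subspace known, each pixel is recovered by least squares on Phi @ E
coef, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_rec = E @ coef
```

Because m only needs to exceed the subspace dimension p rather than the number of bands, the downlink volume drops by a factor of bands/m while recovery stays exact for noiseless data; this per-pixel least-squares structure is also what makes the reconstruction embarrassingly parallel on a GPU.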
Shape-based image reconstruction using linearized deformations
NASA Astrophysics Data System (ADS)
Öktem, Ozan; Chen, Chong; Onur Domaniç, Nevzat; Ravikumar, Pradeep; Bajaj, Chandrajit
2017-03-01
We introduce a reconstruction framework that can account for shape related prior information in imaging-related inverse problems. It is a variational scheme that uses a shape functional, whose definition is based on deformable template machinery from computational anatomy. We prove existence and, as a proof of concept, we apply the proposed shape-based reconstruction to 2D tomography with very sparse and/or highly noisy measurements.
An adaptive filtered back-projection for photoacoustic image reconstruction
Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong
2015-05-15
Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing
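For reference, the generic ramp-filtering step at the heart of any FBP method can be written in a few lines. The paper's contribution, a weighting function derived from the photoacoustic wave equation plus an adaptive low-pass cutoff, is not shown here; this is only the textbook |k| filter on a toy projection.

```python
import numpy as np

def ramp_filter(proj):
    """Multiply one projection by |k| in the Fourier domain -- the filtering
    step of filtered back-projection."""
    k = np.fft.fftfreq(proj.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(k)))

proj = np.zeros(64)
proj[30:34] = 1.0                         # a box "projection"
filt = ramp_filter(proj)
```

Since the ramp zeroes the DC component and amplifies high frequencies, practical implementations (including the paper's) taper it with a low-pass window; the paper's adaptive criterion chooses that cutoff from the data.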
Motion compensation for PET image reconstruction using deformable tetrahedral meshes
NASA Astrophysics Data System (ADS)
Manescu, P.; Ladjal, H.; Azencot, J.; Beuve, M.; Shariat, B.
2015-12-01
Respiratory-induced organ motion is a technical challenge to PET imaging. This motion induces displacements and deformation of the organ tissues, which need to be taken into account when reconstructing the spatial radiation activity. Classical image-based methods that describe motion using deformable image registration (DIR) algorithms cannot fully take into account the non-reproducibility of the respiratory internal organ motion nor the tissue volume variations that occur during breathing. In order to overcome these limitations, various biomechanical models of the respiratory system have been developed in the past decade as an alternative to DIR approaches. In this paper, we describe a new method of correcting motion artefacts in PET image reconstruction adapted to motion estimation models such as those based on the finite element method. In contrast with the DIR-based approaches, the radiation activity was reconstructed on deforming tetrahedral meshes. For this, we have re-formulated the tomographic reconstruction problem by introducing a time-dependent system matrix calculated using tetrahedral meshes instead of voxelized images. The MLEM algorithm was chosen as the reconstruction method. The simulations performed in this study show that the motion compensated reconstruction based on tetrahedral deformable meshes has the capability to correct motion artefacts. Results demonstrate that, in the case of complex deformations, when large volume variations occur, the developed tetrahedral-based method is more appropriate than the classical DIR-based one. This method can be used, together with biomechanical models controlled by external surrogates, to correct motion artefacts in PET images, thereby reducing the need for additional internal imaging during the acquisition.
Relaxed Linearized Algorithms for Faster X-Ray CT Image Reconstruction.
Nien, Hung; Fessler, Jeffrey
2015-12-17
Statistical image reconstruction (SIR) methods are studied extensively for X-ray computed tomography (CT) due to the potential of acquiring CT scans with reduced X-ray dose while maintaining image quality. However, the longer reconstruction time of SIR methods hinders their use in X-ray CT in practice. To accelerate statistical methods, many optimization techniques have been investigated. Over-relaxation is a common technique to speed up the convergence of iterative algorithms. For instance, using a relaxation parameter close to two in the alternating direction method of multipliers (ADMM) has been shown to speed up convergence significantly. This paper proposes a relaxed linearized augmented Lagrangian (AL) method that exhibits a theoretically faster convergence rate with over-relaxation, and applies the proposed method to X-ray CT image reconstruction problems. Experimental results with both simulated and real CT scan data show that the proposed relaxed algorithm (with ordered-subsets [OS] acceleration) is about twice as fast as the existing unrelaxed fast algorithms, with negligible computation and memory overhead.
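The over-relaxation idea the abstract builds on can be illustrated with a generic ADMM lasso solver. This is a minimal sketch with a relaxation parameter alpha near two, not the authors' relaxed linearized AL method or their CT cost function; the toy problem is hypothetical.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam, rho=1.0, alpha=1.8, n_iter=100):
    """ADMM for min 0.5||Ax - y||^2 + lam*||x||_1 with over-relaxation:
    alpha in (0, 2); values near 2 typically speed up convergence."""
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Aty + rho * (z - u))
        x_hat = alpha * x + (1.0 - alpha) * z   # over-relaxation step
        z = soft(x_hat + u, lam / rho)
        u = u + x_hat - z
    return z

# Orthonormal toy case: the lasso solution is soft(y, lam)
A = np.eye(3)
y = np.array([3.0, -0.5, 2.0])
z = admm_lasso(A, y, lam=1.0)
```

Setting alpha = 1 recovers plain ADMM; the single extra interpolation line is all the over-relaxation costs, which is why its speedup comes with negligible overhead.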
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Exposing digital image forgeries by 3D reconstruction technology
NASA Astrophysics Data System (ADS)
Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei
2009-11-01
Digital images are easy to tamper with and edit owing to the availability of powerful image processing and editing software. In particular, for images forged by photographing a staged scene, where no manipulation is made after capture, the usual methods, such as digital watermarking and statistical correlation techniques, can hardly detect any traces of tampering. Based on the characteristics of such image forgeries, this paper presents a method, based on 3D reconstruction technology, that detects forgeries by examining the dimensional relationships of the objects appearing in the image. The detection method comprises three steps. In the first step, the camera parameters of the images were calibrated, and each crucial object in the image was chosen and matched. In the second step, the 3D coordinates of each object were calculated by bundle adjustment. In the final step, the dimensional relationship of each object was analyzed. Experiments were designed to test this detection method; the 3D reconstruction of the authentic scene and that of the forged image were computed independently. Test results show that the fabrication in digital forgeries can be identified intuitively by this method.
Nien, Hung; Fessler, Jeffrey A.
2014-01-01
Augmented Lagrangian (AL) methods for solving convex optimization problems with linear constraints are attractive for imaging applications with composite cost functions due to their empirically fast convergence under weak conditions. However, for problems such as X-ray computed tomography (CT) image reconstruction, where the inner least-squares problem is challenging and requires iterations, AL methods can be slow. This paper focuses on solving regularized (weighted) least-squares problems using a linearized variant of AL methods that replaces the quadratic AL penalty term in the scaled augmented Lagrangian with its separable quadratic surrogate (SQS) function, leading to a simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM. To further accelerate the proposed algorithm, we use a second-order recursive system analysis to design a deterministic downward continuation approach that avoids tedious parameter tuning and provides fast convergence. Experimental results show that the proposed algorithm significantly accelerates the convergence of X-ray CT image reconstruction with negligible overhead and can reduce OS artifacts when using many subsets. PMID:25248178
How to co-add images? I. A new iterative method for image reconstruction of dithered observations
NASA Astrophysics Data System (ADS)
Wang, Lei; Li, Guo-Liang
2017-09-01
By employing the previous Voronoi approach and replacing its nearest-neighbor approximation with Drizzle in the iterative signal extraction, we develop a fast iterative Drizzle algorithm, named fiDrizzle, to reconstruct the underlying band-limited image from undersampled dithered frames. Compared with the existing iDrizzle, the new algorithm improves the rate of convergence and accelerates the computation. Moreover, under the same conditions (e.g. the same number of dithers and iterations), fiDrizzle produces a better-quality reconstruction than iDrizzle, due to the newly discovered High Sampling caused Decelerating Convergence (HSDC) effect in the iterative signal extraction process. fiDrizzle demonstrates its powerful ability to perform image deconvolution from undersampled dithers.
Respiratory motion correction in emission tomography image reconstruction.
Reyes, Mauricio; Malandain, Grégoire; Koulibaly, Pierre Malick; González Ballester, Miguel A; Darcourt, Jacques
2005-01-01
In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretation and imprecise diagnosis. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode-data-based techniques and others have been tested, with improvements in the spatial activity distribution of lung lesions, but with the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion correction directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension of the Maximum Likelihood Expectation Maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
A 32-Channel Head Coil Array with Circularly Symmetric Geometry for Accelerated Human Brain Imaging
Chu, Ying-Hua; Hsu, Yi-Cheng; Keil, Boris; Kuo, Wen-Jui; Lin, Fa-Hsuan
2016-01-01
The goal of this study is to optimize a 32-channel head coil array for accelerated 3T human brain proton MRI using either a Cartesian or a radial k-space trajectory. Coils had curved trapezoidal shapes and were arranged in a circular symmetry (CS) geometry. Coils were optimally overlapped to reduce mutual inductance. Low-noise pre-amplifiers were used to further decouple between coils. The SNR and noise amplification in accelerated imaging were compared to results from a head coil array with a soccer-ball (SB) geometry. The maximal SNR in the CS array was about 120% (1070 vs. 892) and 62% (303 vs. 488) of the SB array at the periphery and the center of the FOV on a transverse plane, respectively. In one-dimensional 4-fold acceleration, the CS array has higher averaged SNR than the SB array across the whole FOV. Compared to the SB array, the CS array has a smaller g-factor at head periphery in all accelerated acquisitions. Reconstructed images using a radial k-space trajectory show that the CS array has a smaller error than the SB array in 2- to 5-fold accelerations. PMID:26909652
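The g-factor compared across the two array geometries above has a standard closed form for SENSE-type unfolding. A minimal sketch, assuming identity noise covariance and a synthetic sensitivity matrix (the study used measured coil sensitivities and full noise covariance):

```python
import numpy as np

def sense_g_factor(S):
    """SENSE g-factor at a set of mutually aliased pixels, assuming identity
    noise covariance. S is the (n_coils, n_aliased) complex sensitivity
    matrix; returned values are >= 1, larger meaning more noise amplification."""
    ShS = S.conj().T @ S
    return np.sqrt(np.real(np.diag(np.linalg.inv(ShS)) * np.diag(ShS)))

# Synthetic sensitivities: 8 coils, 2 aliased pixel locations
rng = np.random.default_rng(0)
S = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))
g = sense_g_factor(S)
```

The Cauchy-Schwarz inequality guarantees g >= 1 everywhere; a well-conditioned coil geometry keeps it close to 1 at the acceleration factors of interest, which is what the CS-versus-SB comparison quantifies.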
NASA Astrophysics Data System (ADS)
Yu, Zong-Han; Wu, Chun-Ming; Lin, Yo-Wei; Chuang, Ming-Lung; Tsai, Jui-che; Sun, Chia-Wei
2008-02-01
Diffuse optical tomography (DOT) is an emerging technique for biomedical imaging. The imaging quality of DOT strongly depends on the reconstruction algorithm. In this paper, four inhomogeneities with various shapes of absorption distributions are simulated with a continuous-wave DOT system. The DOT images are obtained using the simultaneous iterative reconstruction technique (SIRT). To resolve the trade-off between the time consumed by the reconstruction process and the accuracy of the reconstructed image, the iteration process needs an optimization criterion in the algorithm. In this paper, the root-mean-square error (RMSE) and the convergence rate (CR) in the SIRT algorithm are compared. From the simulation results, the CR reveals information about the global minimum in the iteration process. Based on the CR calculation, SIRT can offer more efficient image reconstruction in a DOT system.
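The SIRT iteration with a convergence-based stopping rule can be sketched as follows. The relative-update test below is only a stand-in for the paper's CR criterion, and the tiny system matrix is hypothetical:

```python
import numpy as np

def sirt(A, y, n_iter=200, tol=1e-6):
    """SIRT: x <- x + C A^T R (y - A x), with R and C the inverse row and
    column sums. Stops when the relative update drops below tol, a simple
    proxy for a convergence-rate criterion. A needs positive row/col sums."""
    R = 1.0 / A.sum(axis=1)                      # inverse row sums
    C = 1.0 / A.sum(axis=0)                      # inverse column sums
    x = np.zeros(A.shape[1])
    k = 0
    for k in range(n_iter):
        dx = C * (A.T @ (R * (y - A @ x)))
        x = x + dx
        if np.linalg.norm(dx) <= tol * max(np.linalg.norm(x), 1e-12):
            break
    return x, k + 1

A = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])   # hypothetical system
y = A @ np.array([1.0, 2.0])                          # noiseless data
x, iters = sirt(A, y)
```

The stopping rule is what controls the time/accuracy trade-off the abstract discusses: a looser tolerance halts earlier with a rougher image, a tighter one spends more iterations approaching the least-squares solution.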
Image Reconstruction for Hybrid True-Color Micro-CT
Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge
2013-01-01
X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain served as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size for a given ROI, further reducing system cost and radiation dose. PMID:22481806
Iterative image reconstruction for CBCT using edge-preserving prior
Wang, Jing; Li, Tianfang; Xing, Lei
2009-01-01
On-board cone-beam computed tomography (CBCT) is a new imaging technique for radiation therapy guidance, which provides volumetric information of a patient at the treatment position. CBCT improves the setup accuracy and may be used for dose reconstruction. However, there is great concern that the repeated use of CBCT during a treatment course delivers too much of an extra dose to the patient. To reduce the CBCT dose, one needs to lower the total mAs of the x-ray tube current, which usually leads to reduced image quality. The goal of this work is to develop an effective method that enables one to achieve a clinically acceptable CBCT image with as low an mAs as possible without compromising quality. An iterative image reconstruction algorithm based on a penalized weighted least-squares (PWLS) principle was developed for this purpose. To preserve edges in the reconstructed images, we designed an anisotropic penalty term of a quadratic form. The algorithm was evaluated with a CT quality assurance phantom and an anthropomorphic head phantom. Compared with the conventional isotropic penalty, the PWLS image reconstruction algorithm with anisotropic penalty shows better resolution preservation. PMID:19235393
Sparse representation for the ISAR image reconstruction
NASA Astrophysics Data System (ADS)
Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.
2016-05-01
In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of a convex optimization that recovers the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.
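The abstract does not name a specific solver, so as an illustration of convex recovery from sub-Nyquist samples, here is a generic ℓ1-regularized least-squares reconstruction via iterative soft thresholding (ISTA), with a hypothetical random sensing matrix standing in for the ISAR measurement operator:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min 0.5||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Hypothetical setup: recover a 3-sparse scene from 32 of 64 "samples"
rng = np.random.default_rng(1)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64)
x_true[[5, 20, 40]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_hat = ista(A, y)
```

With a sparse scene and an incoherent measurement operator, the ℓ1 penalty selects the few active reflectors even though the system is underdetermined, which is the point the abstract makes about sampling below the Nyquist rate.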
Reconstruction by calibration over tensors for multi-coil multi-acquisition balanced SSFP imaging.
Biyik, Erdem; Ilicak, Efe; Çukur, Tolga
2017-09-01
To develop a rapid imaging framework for balanced steady-state free precession (bSSFP) that jointly reconstructs undersampled data (by a factor of R) across multiple coils (D) and multiple acquisitions (N). To devise a multi-acquisition coil compression technique for improved computational efficiency. The bSSFP image for a given coil and acquisition is modeled to be modulated by a coil sensitivity and a bSSFP profile. The proposed reconstruction by calibration over tensors (ReCat) recovers missing data by tensor interpolation over the coil and acquisition dimensions. Coil compression is achieved using a new method based on multilinear singular value decomposition (MLCC). ReCat is compared with iterative self-consistent parallel imaging (SPIRiT) and profile encoding (PE-SSFP) reconstructions. Compared to parallel imaging or profile-encoding methods, ReCat attains sensitive depiction of high-spatial-frequency information even at higher R. In the brain, ReCat improves peak SNR (PSNR) by 1.1 ± 1.0 dB over SPIRiT and by 0.9 ± 0.3 dB over PE-SSFP (mean ± SD across subjects; average for N = 2-8, R = 8-16). Furthermore, reconstructions based on MLCC achieve 0.8 ± 0.6 dB higher PSNR compared to those based on geometric coil compression (GCC) (average for N = 2-8, R = 4-16). ReCat is a promising acceleration framework for banding-artifact-free bSSFP imaging with high image quality; and MLCC offers improved computational efficiency for tensor-based reconstructions. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The elements affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized with respect to the prototype imaging platform.
NASA Astrophysics Data System (ADS)
Gerber, Thomas; Liu, Yuzhu; Knopp, Gregor; Hemberger, Patrick; Bodi, Andras; Radi, Peter; Sych, Yaroslav
2013-03-01
Velocity map imaging (VMI) is used in mass spectrometry and in angle resolved photo-electron spectroscopy to determine the lateral momentum distributions of charged particles accelerated towards a detector. VM-images are composed of projected Newton spheres with a common centre. The 2D images are usually evaluated by a decomposition into base vectors each representing the 2D projection of a set of particles starting from a centre with a specific velocity distribution. We propose to evaluate 1D projections of VM-images in terms of 1D projections of spherical functions, instead. The proposed evaluation algorithm shows that all distribution information can be retrieved from an adequately chosen set of 1D projections, alleviating the numerical effort for the interpretation of VM-images considerably. The obtained results produce directly the coefficients of the involved spherical functions, making the reconstruction of sliced Newton spheres obsolete.
Hofmann, Christian; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc
2014-06-15
Purpose: Iterative image reconstruction is gaining more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how best to achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off: the stronger the regularization, and thus the noise reduction, the greater the loss of spatial resolution and thus of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low-contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least-squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Dabbech, Arwa; Wiaux, Yves
2017-07-01
Next-generation radio interferometers, like the Square Kilometre Array, will acquire large amounts of data with the goal of improving the size and sensitivity of the reconstructed images by orders of magnitude. The efficient processing of large-scale data sets is of great importance. We propose an acceleration strategy for a recently proposed primal-dual distributed algorithm. A preconditioning approach can incorporate into the algorithmic structure both the sampling density of the measured visibilities and the noise statistics. Using the sampling density information greatly accelerates the convergence speed, especially for highly non-uniform sampling patterns, while relying on the correct noise statistics optimizes the sensitivity of the reconstruction. In connection to CLEAN, our approach can be seen as including in the same algorithmic structure both natural and uniform weighting, thereby simultaneously optimizing both the resolution and the sensitivity. The method relies on a new non-Euclidean proximity operator for the data fidelity term, which generalizes the projection onto the ℓ2 ball where the noise lives for naturally weighted data, to the projection onto a generalized ellipsoid incorporating sampling density information through uniform weighting. Importantly, this non-Euclidean modification is only an acceleration strategy to solve the convex imaging problem with data fidelity dictated only by noise statistics. We show through simulations with realistic sampling patterns the acceleration obtained using the preconditioning. We also investigate the algorithm's performance for the reconstruction of the 3C129 radio galaxy from real visibilities and compare with multiscale CLEAN, showing better sensitivity and resolution. Our MATLAB code is available online on GitHub.
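The data-fidelity projection for naturally weighted data mentioned above, onto the ℓ2 ball where the noise lives, has a simple closed form; the sampling-density-aware ellipsoid generalization has no closed form and is handled inside the primal-dual iterations. A sketch of the ball projection only:

```python
import numpy as np

def proj_l2_ball(z, y, eps):
    """Project z onto the l2 ball {v : ||v - y|| <= eps} centred on data y.
    This is the data-fidelity step for naturally weighted visibilities;
    the generalized-ellipsoid version in the paper replaces the Euclidean
    norm with a density-weighted one and needs an iterative solve."""
    d = z - y
    nrm = np.linalg.norm(d)
    if nrm <= eps:
        return z.copy()                 # already feasible
    return y + d * (eps / nrm)          # radial shrink onto the boundary

p = proj_l2_ball(np.array([3.0, 4.0, 0.0]), np.zeros(3), 2.5)  # -> [1.5, 2.0, 0.0]
```

Choosing eps from the noise statistics is what the abstract calls fidelity "dictated only by noise statistics": points consistent with the noise level are left untouched, everything else is pulled radially to the ball's boundary.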
Tomographic image reconstruction via estimation of sparse unidirectional gradients.
Polak, Adam G; Mroczka, Janusz; Wysoczański, Dariusz
2017-02-01
Since computed tomography (CT) was developed over 35 years ago, new mathematical ideas and computational algorithms have been continually elaborated to improve the quality of reconstructed images. In recent years, considerable effort has been devoted to applying the theory of sparse solutions of underdetermined systems to the reconstruction of CT images from undersampled data. Its significance stems from the possibility of obtaining good-quality CT images from low-dose projections. Among diverse approaches, total variation (TV), minimizing the 2D gradients of an image, seems to be the most popular method. In this paper, a new method for CT image reconstruction via sparse gradient estimation (SGE) is proposed. It consists in estimating 1D gradients specified in four directions using an iterative reweighting algorithm. To investigate its properties and to compare it with TV and other related methods, numerical simulations were performed according to the Monte Carlo scheme, using the Shepp-Logan and more realistic brain phantoms scanned at 9-60 directions in the range from 0 to 179°, with measurement data disturbed by additive Gaussian noise at relative levels of 0.1%, 0.2%, 0.5%, 1%, 2% and 5%. The accuracy of image reconstruction was assessed in terms of the relative root-mean-square (RMS) error. The results show that the proposed SGE algorithm returned more accurate images than TV for cases fulfilling the sparsity conditions. In particular, it preserves sharp edges of regions representing different tissues or organs and yields images of much better quality reconstructed from a small number of projections disturbed by relatively low measurement noise.
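The four unidirectional gradients whose sparsity SGE exploits are plain 1D finite differences. A sketch on a toy piecewise-constant phantom (the iterative reweighting and the tomographic system itself are omitted):

```python
import numpy as np

def directional_gradients(img):
    """1D finite differences in four directions (horizontal, vertical and
    the two diagonals) -- the gradients whose sparsity SGE promotes.
    Boundaries are simply truncated in this sketch."""
    gh = img[:, 1:] - img[:, :-1]        # horizontal
    gv = img[1:, :] - img[:-1, :]        # vertical
    gd1 = img[1:, 1:] - img[:-1, :-1]    # main diagonal
    gd2 = img[1:, :-1] - img[:-1, 1:]    # anti-diagonal
    return gh, gv, gd1, gd2

# A piecewise-constant phantom has sparse gradients in every direction
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
grads = directional_gradients(img)
```

For such phantoms the nonzeros concentrate on tissue boundaries, so penalizing these four 1D gradient fields (rather than a single isotropic 2D gradient magnitude, as TV does) is what lets the method keep edges sharp with few projections.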
Filter and slice thickness selection in SPECT image reconstruction
Ivanovic, M.; Weber, D.A.; Wilson, G.A.; O'Mara, R.E.
1985-05-01
The choice of filter and slice thickness in SPECT image reconstruction as a function of activity and of linear and angular sampling was investigated in phantom and patient imaging studies. The reconstructed transverse and longitudinal spatial resolutions of the system were measured using a line source in a water-filled phantom. Phantom studies included measurements of the Data Spectrum phantom; clinical studies included tomographic procedures in 40 patients undergoing imaging of the temporomandibular joint. Slices of the phantom and patient images were evaluated for spatial resolution, noise, and image quality. Major findings include: spatial resolution and image quality improve with increasing linear sampling frequency over the range of 4-8 mm/p in the phantom images; the best spatial resolution and image quality in clinical images were observed at a linear sampling frequency of 6 mm/p; the Shepp-Logan filter gives the best spatial resolution for phantom studies at the lowest linear sampling frequency; the smoothed Shepp-Logan filter provides the best-quality images without loss of resolution at higher frequencies; and spatial resolution and image quality improve with increased angular sampling frequency in the phantom at 40 c/p but appear to be independent of angular sampling frequency at 400 c/p.
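The resolution/noise behavior of the filters compared above comes from the window applied to the reconstruction ramp. A sketch of the textbook frequency responses (generic definitions, not the specific vendor filters used in the study):

```python
import numpy as np

def ramp_filter(n, kind="ram-lak"):
    """Frequency response of common FBP filters on n FFT bins.
    "ram-lak": pure ramp |f|; "shepp-logan": ramp * sinc window, which
    rolls off high frequencies to suppress noise at some cost in resolution."""
    f = np.fft.fftfreq(n)                 # cycles/sample, |f| <= 0.5
    H = np.abs(f)                         # Ram-Lak ramp
    if kind == "shepp-logan":
        H = H * np.sinc(f)                # np.sinc(x) = sin(pi x)/(pi x)
    return H

H_rl = ramp_filter(64)
H_sl = ramp_filter(64, "shepp-logan")
```

Because the sinc window stays below one over the pass band, the Shepp-Logan response sits at or under the Ram-Lak ramp everywhere and falls well below it near the Nyquist frequency, which is why it smooths noise while largely preserving mid-frequency resolution.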
NASA Astrophysics Data System (ADS)
Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.
2017-03-01
Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal, and then a numerical reconstruction algorithm is used to locate the sources and estimate their sizes. However, due to the strong light scattering of biological tissues, the resolution is very limited (around a few millimetres), so obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve the accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives: first, to obtain a pure acoustic image, which provides structural information about the sample; and second, to alter the light emission of the bioluminescent sources embedded inside the sample, which is monitored using a high-speed optical detector (e.g. a photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction compared to the image generated using only BLI data.
Three dimensional reconstruction of conventional stereo optic disc image.
Kong, H J; Kim, S K; Seo, J M; Park, K H; Chung, H; Park, K S; Kim, H C
2004-01-01
A stereo disc photograph was analyzed and reconstructed as a 3-dimensional contour image to evaluate the status of the optic nerve head, for the early detection of glaucoma and the evaluation of the efficacy of treatment. Stepwise preprocessing was introduced to detect the edges of the optic nerve head and retinal vessels and to reduce noise. Paired images were registered by the power cepstrum method and zero-mean normalized cross-correlation. After Gaussian blurring, median filtering, and disparity-pair searching, depth information in the 3-dimensionally reconstructed image was calculated by the simple triangulation formula. Calculated depth maps were smoothed through cubic B-spline interpolation, and retinal vessels were visualized more clearly by adding the reference image. The resulting 3-dimensional contour image showed the optic cup, retinal vessels, and notching of the neural rim of the optic disc clearly and intuitively, helping physicians understand and interpret the stereo disc photograph.
Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan
2014-09-20
The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) for a 3D holographic display. With the 3D GS method, however, reconstructions of binary input images suffer from serious distortion. We have eliminated this distortion and improved the image quality of the reconstructions by a maximum of 486%, using a symmetrical 3D GS algorithm developed from the traditional 3D GS algorithm. In addition, the hologram computation has been accelerated by a factor of 9.28, which is significant for real-time holographic displays.
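The error-reduction structure of a GS iteration is easy to sketch in 2D. The following is a generic textbook Gerchberg-Saxton loop on a binary test pattern, not the symmetrical 3D variant the paper develops; all sizes and values are chosen for illustration only.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=30):
    """Textbook Gerchberg-Saxton: alternate between the hologram plane
    and the image (Fourier) plane, enforcing the known amplitude in each
    plane while keeping the computed phase."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    errors = []
    for _ in range(iterations):
        field = source_amp * np.exp(1j * phase)       # hologram plane
        F = np.fft.fft2(field)                        # propagate forward
        errors.append(np.linalg.norm(np.abs(F) - target_amp))
        F = target_amp * np.exp(1j * np.angle(F))     # image-plane constraint
        phase = np.angle(np.fft.ifft2(F))             # keep phase only
    return phase, errors

# Binary input image used as the target amplitude pattern.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
source = np.ones_like(target)                         # uniform illumination
# Scale the target so its energy matches the source's (Parseval).
target *= np.linalg.norm(np.fft.fft2(source)) / np.linalg.norm(target)
phase, errors = gerchberg_saxton(source, target)
```

The Fourier-plane amplitude error is non-increasing under this iteration, which is why work building on GS focuses on improving the quality of the fixed point (as the symmetrical variant does) rather than on convergence itself.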
Computationally attractive reconstruction of bandlimited images from irregular samples.
Strohmer, T
1997-01-01
An efficient method for the reconstruction of bandlimited images and the approximation of arbitrary images from nonuniform sampling values is developed. The novel method is based on the observation that the reconstruction problem can be formulated as a linear system of equations using two-dimensional (2-D) trigonometric polynomials, where the matrix is of block-Toeplitz type with Toeplitz blocks. This system is solved iteratively by the conjugate gradient (CG) method. We show that the use of so-called adaptive weights in the construction of the block-Toeplitz matrix acts as an efficient preconditioner. The superiority of the new method over conventional approaches is demonstrated by numerical experiments.
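A minimal 1-D analogue of this approach, with a trigonometric-polynomial signal model, adaptive Voronoi-style weights, and a hand-rolled conjugate gradient solve of the weighted normal equations (which are Toeplitz in 1-D), might look like the following. The paper's actual setting is 2-D, where the matrix becomes block Toeplitz with Toeplitz blocks; all sizes here are illustrative.

```python
import numpy as np

def conj_grad(A, b, iters=200, tol=1e-18):
    """Conjugate gradients for a Hermitian positive-definite system."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = 5                                       # bandwidth: frequencies -M..M
t = np.sort(rng.uniform(0.0, 1.0, 80))      # nonuniform sample positions
k = np.arange(-M, M + 1)
V = np.exp(2j * np.pi * np.outer(t, k))     # nonuniform Fourier matrix
c_true = rng.normal(size=2 * M + 1) + 1j * rng.normal(size=2 * M + 1)
y = V @ c_true                              # irregular samples of the signal

# Adaptive (Voronoi-style) weights compensate for the sampling density.
gaps = np.diff(np.concatenate([t, [t[0] + 1.0]]))
w = 0.5 * (gaps + np.roll(gaps, 1))

# Weighted normal equations: T is Toeplitz (block Toeplitz in 2-D).
T = V.conj().T @ (w[:, None] * V)
b = V.conj().T @ (w * y)
c_hat = conj_grad(T, b)
```

In 2-D, the products T @ x can be carried out with FFTs by embedding the block-Toeplitz matrix in a block-circulant one, which is what makes the CG iteration computationally attractive.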
Force reconstruction using the sum of weighted accelerations technique -- Max-Flat procedure
Carne, T.G.; Mayes, R.L.; Bateman, V.I.
1993-12-31
Force reconstruction is a procedure in which the externally applied force is inferred from measured structural response rather than directly measured. In a recently developed technique, the response acceleration time-histories are multiplied by scalar weights and summed to produce the reconstructed force. This reconstruction is called the Sum of Weighted Accelerations Technique (SWAT). One step in the application of this technique is the calculation of the appropriate scalar weights. In this paper a new method of estimating the weights, using measured frequency response function data, is developed and contrasted with the traditional SWAT method of inverting the mode-shape matrix. The technique uses frequency response function data, but is not based on deconvolution. An application that will be discussed as part of this paper is the impact into a rigid barrier of a weapon system with an energy-absorbing nose. The nose had been designed to absorb the energy of impact and to mitigate the shock to the interior components.
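The core weighted-sum idea can be illustrated with a small synthetic example. The mode shapes, mass, and response signals below are invented for illustration, and this sketch shows the traditional mode-shape-based weight selection rather than the paper's frequency-response-function-based estimate.

```python
import numpy as np

# Mode-shape matrix Phi: rows = accelerometer locations, cols = modes.
# Column 0 is the rigid-body mode; the rest are elastic modes.
# (Illustrative numbers, not from the paper.)
Phi = np.array([
    [1.0,  0.9,  0.5],
    [1.0,  0.1, -0.8],
    [1.0, -0.7,  0.3],
    [1.0, -0.9, -0.6],
])
total_mass = 10.0  # kg, assumed

# Traditional SWAT: pick weights w with Phi^T w = [total_mass, 0, 0]^T,
# so elastic modes cancel and the rigid-body response is scaled by the
# mass; then sum_i w_i a_i(t) = F_ext(t) by Newton's second law.
target = np.array([total_mass, 0.0, 0.0])
w = np.linalg.pinv(Phi.T) @ target

# Synthetic response: rigid-body acceleration from a known force pulse
# plus elastic-mode ringing; the weighted sum should recover the force.
t = np.linspace(0.0, 1.0, 500)
F_true = np.exp(-((t - 0.2) / 0.05) ** 2)           # pulse, N
a_rigid = F_true / total_mass                        # rigid-body acceleration
ringing1 = 0.3 * np.sin(2 * np.pi * 40 * t)
ringing2 = 0.2 * np.sin(2 * np.pi * 90 * t)
A = (np.outer(Phi[:, 0], a_rigid)
     + np.outer(Phi[:, 1], ringing1)
     + np.outer(Phi[:, 2], ringing2))                # sensors x time
F_rec = w @ A                                        # reconstructed force
```

Because the weights annihilate the elastic columns of Phi exactly, the ringing terms cancel in the weighted sum and only the externally applied force remains.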
Image Reconstruction Using Large Optical Telescopes.
1982-02-15
The work imaged the Pluto/Charon system, resolved a multiple QSO (quasar), and mapped and imaged asymmetries in the envelope around a supergiant star. [The remainder of this record is figure-list residue from the scanned report: interference fringes for a point source and for a binary star; the power spectrum of C Tau; PG 1115+080; tracking; and an example of short-exposure star photographs at very large image scale, illustrating Dawes' limit.]
Very fast approximate reconstruction of MR images.
Angelidis, P A
1998-11-01
The ultra fast Fourier transform (UFFT) provides the means for very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform on some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed as a replacement for the classical FFT whenever a fast general overview of an image is required.
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
Penalized maximum-likelihood image reconstruction for lesion detection
NASA Astrophysics Data System (ADS)
Qi, Jinyi; Huesman, Ronald H.
2006-08-01
Detecting cancerous lesions is one major application of emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve the image quality by accurately modelling the photon detection process and measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, the conventional penalty function, and a penalty function designed for an isotropic point spread function. Lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement depends on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.
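Penalized maximum-likelihood reconstructions of Poisson emission data are commonly computed with EM-type updates. A generic one-step-late MAP-EM sketch on a toy 1-D problem is shown below; the system matrix, the quadratic neighbor penalty, and all sizes are illustrative, and this is not the detectability-optimized penalty the paper designs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-D emission phantom and a random nonnegative system matrix
# (stand-ins for a real detector model; illustrative only).
n_pix, n_det = 32, 64
x_true = np.ones(n_pix)
x_true[12:18] = 5.0                       # a "lesion" on a flat background
A = rng.uniform(0.0, 1.0, (n_det, n_pix))
y = rng.poisson(A @ x_true).astype(float)

def osl_map_em(A, y, beta=0.05, iters=200):
    """One-step-late MAP-EM (Green-style) with a quadratic neighbor
    penalty R(x) = 0.5 * sum_j (x_j - x_{j-1})^2, circular for simplicity."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        grad_R = 2 * x - np.roll(x, 1) - np.roll(x, -1)   # penalty gradient
        x = x * (A.T @ ratio) / np.maximum(sens + beta * grad_R, 1e-12)
    return x

x_hat = osl_map_em(A, y)
```

The multiplicative update preserves nonnegativity, and the penalty gradient in the denominator pulls neighboring pixels together; the paper's contribution is choosing that penalty so lesion detectability, not just visual smoothness, improves.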
Parallel Image Reconstruction for New Vacuum Solar Telescope
NASA Astrophysics Data System (ADS)
Li, Xue-Bao; Wang, Feng; Xiang, Yong Yuan; Zheng, Yan Fang; Liu, Ying Bo; Deng, Hui; Ji, Kai Fan
2014-04-01
Many advanced ground-based solar telescopes improve the spatial resolution of observation images using an adaptive optics (AO) system. As any AO correction remains only partial, it is necessary to use post-processing image reconstruction techniques such as speckle masking or shift-and-add (SAA) to reconstruct a high-spatial-resolution image from atmospherically degraded solar images. In the New Vacuum Solar Telescope (NVST), the spatial resolution in solar images is improved by frame selection and SAA. In order to overcome the burden of massive speckle data processing, we investigate the possibility of using the speckle reconstruction program in a real-time application at the telescope site. The code has been written in the C programming language and optimized for parallel processing in a multi-processor environment. We analyze the scalability of the code to identify possible bottlenecks, and we conclude that the presented code is capable of being run in real-time reconstruction applications at NVST and future large aperture solar telescopes if care is taken that the multi-processor environment has low latencies between the computation nodes.
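The shift-and-add step mentioned above amounts to registering each short-exposure frame (here by the peak of an FFT cross-correlation) and averaging. In this toy sketch the frames are registered against the known truth for simplicity, whereas a real pipeline would register against the best frame or a running average; speckle masking's Fourier-phase recovery is not shown.

```python
import numpy as np

def shift_and_add(frames, ref):
    """Register each frame to `ref` by the integer shift that maximizes
    the FFT cross-correlation, then average (a minimal SAA sketch)."""
    F_ref = np.fft.fft2(ref)
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # rolling by the correlation peak undoes the frame's shift
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[14:18, 14:18] = 1.0                       # a bright feature
frames = [np.roll(truth, rng.integers(-5, 6, 2), axis=(0, 1))
          + 0.01 * rng.normal(size=truth.shape) for _ in range(20)]
result = shift_and_add(frames, truth)
```

Averaging the aligned frames suppresses the per-frame noise by roughly the square root of the number of frames, which is the payoff that justifies the registration cost the paper parallelizes.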
Body Image Screening for Cancer Patients Undergoing Reconstructive Surgery
Fingeret, Michelle Cororve; Nipomnick, Summer; Guindani, Michele; Baumann, Donald; Hanasono, Matthew; Crosby, Melissa
2014-01-01
Objectives Body image is a critical issue for cancer patients undergoing reconstructive surgery, as they can experience disfigurement and functional impairment. Distress related to appearance changes can lead to various psychosocial difficulties, and patients are often reluctant to discuss these issues with their healthcare team. Our goals were to design and evaluate a screening tool to aid providers in identifying patients who may benefit from referral for specialized psychosocial care to treat body image concerns. Methods We designed a brief 4-item instrument and administered it at a single time point to cancer patients who were undergoing reconstructive treatment. We used simple and multinomial regression models to evaluate whether survey responses, demographic, or clinical variables predicted interest and enrollment in counseling. Results Over 95% of the sample (n = 248) endorsed some concerns, preoccupation, or avoidance due to appearance changes. Approximately one-third of patients were interested in obtaining counseling or additional information to assist with body image distress. Each survey item significantly predicted interest and enrollment in counseling. Concern about future appearance changes was the single best predictor of counseling enrollment. Sex, age, and cancer type were not predictive of counseling interest or enrollment. Conclusions We present initial data supporting use of the Body Image Screener for Cancer Reconstruction. Our findings suggest benefits of administering this tool to patients presenting for reconstructive surgery. It is argued that screening and treatment for body image distress should be provided to this patient population at the earliest possible time point. PMID:25066586
Gadgetron: an open source framework for medical image reconstruction.
Hansen, Michael Schacht; Sørensen, Thomas Sangild
2013-06-01
This work presents a new open source framework for medical image reconstruction called the "Gadgetron." The framework implements a flexible system for creating streaming data processing pipelines in which data pass through a series of modules, or "Gadgets," from raw data to reconstructed images. The data processing pipeline is configured dynamically at run time based on an XML configuration description. The framework promotes reuse and sharing of reconstruction modules, and new Gadgets can be added to the Gadgetron framework through a plugin-like architecture without recompiling the basic framework infrastructure. Gadgets are typically implemented in C/C++, but the framework includes wrapper Gadgets that allow the user to implement new modules in the Python scripting language for rapid prototyping. In addition to the streaming framework infrastructure, the Gadgetron comes with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its application to Cartesian and non-Cartesian parallel magnetic resonance imaging.
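The streaming-pipeline pattern the Gadgetron implements can be sketched in a few lines of Python. This illustrates only the chained-module idea, not the actual Gadgetron API (which is C++ with XML-configured chains); the two example stages and all sizes are invented for illustration.

```python
from abc import ABC, abstractmethod
import numpy as np

class Gadget(ABC):
    """One stage of a streaming pipeline, patterned loosely on the
    Gadgetron's Gadget concept (not the actual C++ API)."""
    def __init__(self):
        self.next = None
    @abstractmethod
    def process(self, data): ...
    def push(self, data):
        out = self.process(data)
        return self.next.push(out) if self.next else out

class RemoveOversampling(Gadget):
    """Crop the central half of the field of view along the readout."""
    def process(self, kspace):
        n = kspace.shape[-1]
        img = np.fft.fftshift(np.fft.ifft(kspace, axis=-1), axes=-1)
        crop = np.fft.ifftshift(img[..., n // 4:3 * n // 4], axes=-1)
        return np.fft.fft(crop, axis=-1)

class FFTRecon(Gadget):
    """Simple inverse-FFT magnitude reconstruction."""
    def process(self, kspace):
        return np.abs(np.fft.fftshift(np.fft.ifft2(kspace)))

def chain(*gadgets):
    for a, b in zip(gadgets, gadgets[1:]):
        a.next = b
    return gadgets[0]

pipeline = chain(RemoveOversampling(), FFTRecon())
raw = np.fft.fft2(np.random.default_rng(0).normal(size=(64, 128)))
image = pipeline.push(raw)
```

Each stage transforms the data and hands it to the next, so new reconstruction steps can be slotted in or swapped out without touching the rest of the chain; that decoupling is the point of the plugin architecture described above.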
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties completely. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation is formulated in terms of signal energies and can accommodate imperfect, or even absent, a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential.
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve the reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction achieves a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone.
Cole, J M; Wood, J C; Lopes, N C; Poder, K; Abel, R L; Alatabi, S; Bryant, J S J; Jin, A; Kneip, S; Mecseki, K; Symes, D R; Mangles, S P D; Najmudin, Z
2015-08-18
A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications.
Coronary x-ray angiographic reconstruction and image orientation
Sprague, Kevin; Drangova, Maria; Lehmann, Glen
2006-03-15
We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views, and the traced points are used to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using (1) the near-intersection distances of the rays that connect the x-ray sources with the projected points and (2) the distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D while avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) a virtual phantom, (2) a real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms show that the vessel centerlines are reconstructed in 3D with errors below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and their hemodynamic impact on myocardial perfusion in patients with coronary artery disease.
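Once corresponding points have been traced with epipolar constraints, recovering a 3D centerline point from two views is a standard linear (DLT) triangulation. The camera matrices and point below are made up for illustration and are not the paper's calibration.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two views,
    given 3x4 projection matrices P1, P2 and pixel coordinates u1, u2."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]             # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic views: identity pose, and a camera translated along x.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([30.0, -20.0, 800.0])        # a point on the vessel
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noisy traced points the rays no longer intersect exactly, and the residual near-intersection distance becomes exactly the first accuracy measure described above.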
Image reconstruction for synchronous data acquisition in fluorescence molecular tomography.
Zhang, Xuanxuan; Liu, Fei; Zuo, Siming; Bai, Jing; Luo, Jianwen
2015-01-01
The present full-angle, free-space fluorescence molecular tomography (FMT) system uses a step-by-step strategy to acquire measurements, which consumes time for both the rotation of the object and the integration of the charge-coupled device (CCD) camera. Completing the integration during the rotation is a more time-efficient strategy called synchronous data acquisition. However, the positions of sources and detectors in this strategy are not stationary, which is not taken into account in the conventional reconstruction algorithm. In this paper we propose a reconstruction algorithm based on the finite element method (FEM) to overcome this problem. Phantom experiments were carried out to validate the performance of the algorithm. The results show that, compared with the conventional reconstruction algorithm used in the step-by-step data acquisition strategy, the proposed algorithm can reconstruct images with more accurate location data and lower relative errors when used with the synchronous data acquisition strategy.
Generalized Fourier slice theorem for cone-beam image reconstruction.
Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang
2015-01-01
The theory of cone-beam reconstruction was developed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem, proposed by Bracewell in 1956, leads to the Fourier image reconstruction method for parallel-beam geometry and was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the cone-beam image reconstruction theory with the Fourier slice theory for fan-beam geometry, a Fourier slice theorem for cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem that the reconstruction from the Fourier domain involves an indeterminate value of the form 0/0 at the origin of Fourier space has been overcome; the 0/0 type of limit is properly handled. As examples, implementation results for the single-circle and two-perpendicular-circle source orbits are shown. In the cone-beam reconstruction, if an interpolation process is considered, the number of calculations for the generalized Fourier slice theorem algorithm is
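The parallel-beam Fourier slice theorem that these generalizations build on is easy to verify numerically at projection angle zero, where the 1-D Fourier transform of the projection equals the central row of the image's 2-D Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))

# Project along the y axis (angle 0): sum the columns.
projection = image.sum(axis=0)

# The 1-D FT of the projection ...
slice_from_projection = np.fft.fft(projection)

# ... equals the ky = 0 row of the 2-D FT of the image.
central_slice = np.fft.fft2(image)[0, :]
```

For other angles the same identity holds along the rotated central line, which in a discrete implementation is where the interpolation step mentioned above enters.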
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep-inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The algorithms used for EIT image reconstruction therefore do not influence the selected indices derived from EIT image analysis. Indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.
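Of the indices above, the global inhomogeneity (GI) index has a particularly compact definition: the sum of absolute deviations from the median tidal impedance change within the lung region, normalized by the total change. The following is a sketch of that formula on synthetic data, not the authors' validated implementation.

```python
import numpy as np

def global_inhomogeneity(tidal_image, lung_mask):
    """GI index: absolute deviations from the median tidal impedance
    change inside the lung region, normalized by the total change."""
    lung = tidal_image[lung_mask]
    return np.sum(np.abs(lung - np.median(lung))) / np.sum(lung)

rng = np.random.default_rng(0)
mask = np.ones((32, 32), dtype=bool)          # toy lung region: whole image
homogeneous = np.full((32, 32), 5.0)          # perfectly even ventilation
uneven = np.abs(5.0 + rng.normal(0.0, 2.0, (32, 32)))  # patchy ventilation
gi_hom = global_inhomogeneity(homogeneous, mask)
gi_inh = global_inhomogeneity(uneven, mask)
```

A perfectly homogeneous tidal image yields GI = 0, and GI grows with the spread of regional ventilation, which is why the index is compared across reconstruction algorithms in the study above.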
van Amerom, Joshua F P; Lloyd, David F A; Price, Anthony N; Kuklisova Murgasova, Maria; Aljabar, Paul; Malik, Shaihan J; Lohezic, Maelene; Rutherford, Mary A; Pushparajah, Kuberan; Razavi, Reza; Hajnal, Joseph V
2017-04-03
Development of an MRI acquisition and reconstruction strategy to depict fetal cardiac anatomy in the presence of maternal and fetal motion. The proposed strategy involves (i) acquisition and reconstruction of highly accelerated dynamic MRI, followed by image-based (ii) cardiac synchronization, (iii) motion correction, (iv) outlier rejection, and finally (v) cardiac cine reconstruction. Postprocessing was entirely automated, aside from a user-defined region of interest delineating the fetal heart. The method was evaluated in 30 mid- to late-gestational-age singleton pregnancies scanned without maternal breath-hold. The combination of complementary acquisition/reconstruction and correction/rejection steps in the pipeline served to improve the quality of the reconstructed 2D cine images, resulting in increased visibility of small, dynamic anatomical features. Artifact-free cine images were successfully produced in 36 of 39 acquired data sets; prolonged general fetal movements precluded processing of the remaining three data sets. The proposed method shows promise as a motion-tolerant framework to enable further detail in MRI studies of the fetal heart and great vessels. Processing data in image space allowed spatial and temporal operations to be applied to the fetal heart in isolation, separate from extraneous changes elsewhere in the field of view. © 2017 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Reconstruction techniques for sparse multistatic linear array microwave imaging
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2014-06-01
Sequentially switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis, followed by mechanical scanning along an orthogonal axis, allows dense sampling of a two-dimensional aperture in near real time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications, including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging-beam swept-frequency transceiver over a two-dimensional aperture and mathematically focusing, or reconstructing, the data into three-dimensional images. Recently, a sparse multistatic array technology has been developed that reduces the number of antennas required to densely sample the linear-array axis of the spatial aperture. This allows a significant reduction in the cost and complexity of linear-array-based imaging systems. The sparse array has been specifically designed to be compatible with Fourier-transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and of each 3-D image location. In this paper, the sparse array technique is described along with the associated Fourier-transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
NASA Astrophysics Data System (ADS)
Karakatsanis, Nicolas A.; Rahmim, Arman
2014-03-01
Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed position. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, (a) enabling accelerated convergence of direct 4D multi-bed reconstruction by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and (b) enhancing the quantitative performance, particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed to simulate dynamic multi-bed acquisitions. Quantitative analysis of the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
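The linear Patlak model referred to above reduces, after a change of variables, to a straight-line fit whose slope is the influx rate Ki and whose intercept is the blood volume fraction. The following is a self-contained sketch with invented kinetic values, not the paper's 4D reconstruction.

```python
import numpy as np

# Simulated plasma input function and an irreversible-uptake tissue
# curve following the standard Patlak model (illustrative values).
t = np.linspace(0.1, 60.0, 120)                     # minutes
Cp = 10.0 * np.exp(-0.1 * t) + 1.0                  # plasma input
# Trapezoidal running integral of Cp.
int_Cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))])
Ki_true, Vb = 0.05, 0.3
Ct = Ki_true * int_Cp + Vb * Cp                     # tissue activity

# Patlak plot: y = Ct/Cp vs x = int(Cp)/Cp is linear with slope Ki.
x = int_Cp / Cp
y = Ct / Cp
n = len(t) // 2
slope, intercept = np.polyfit(x[n:], y[n:], 1)      # late-time linear fit
```

With noisy data the fit is restricted to late time points where the plot has become linear, which is why only the second half of the curve is used here; the generalized (non-linear) Patlak model in the paper additionally accounts for reversible uptake, which bends this plot downward.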
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Carey, Timothy; Oliver, David; Pniewski, Josh; Mueller, Terry; Bojescul, John
2013-01-01
The purpose of the present study is to present the results of anterior cruciate ligament (ACL) augmentation for patients having rotational instability despite an intact vertical graft in lieu of conventional revision ACL reconstruction. ACL augmentation surgery with a horizontal graft was performed to augment a healed vertical graft on five patients and an accelerated rehabilitation protocol was instituted. Functional outcomes were assessed by the Lower Extremity Functional Scale (LEFS) and the Modified Cincinnati Rating System (MCRS). All patients completed physical therapy within 5 months and were able to return to full military duty without limitation. LEFS and MCRS were significantly improved. ACL augmentation with a horizontal graft provides an excellent alternative to ACL revision reconstruction for patients with an intact vertical graft, allowing an earlier return to duty for military service members.
Whole Mouse Brain Image Reconstruction from Serial Coronal Sections Using FIJI (ImageJ).
Paletzki, Ronald; Gerfen, Charles R
2015-10-01
Whole-brain reconstruction of the mouse enables comprehensive analysis of the distribution of neurochemical markers, the distribution of anterogradely labeled axonal projections or retrogradely labeled neurons projecting to a specific brain site, or the distribution of neurons displaying activity-related markers in behavioral paradigms. This unit describes a method to produce whole-brain reconstruction image sets from coronal brain sections with up to four fluorescent markers using the freely available image-processing program FIJI (ImageJ).
Elasticity reconstructive imaging by means of stimulated echo MRI.
Chenevert, T L; Skovoroda, A R; O'Donnell, M; Emelianov, S Y
1998-03-01
A method is introduced to measure internal mechanical displacement and strain by means of MRI. Such measurements are needed to reconstruct an image of the elastic Young's modulus. A stimulated echo acquisition sequence with additional gradient pulses encodes internal displacements in response to an externally applied differential deformation. The sequence provides an accurate measure of static displacement by limiting the mechanical transitions to the mixing period of the stimulated echo. Elasticity reconstruction involves definition of a region of interest having uniform Young's modulus along its boundary and subsequent solution of the discretized elasticity equilibrium equations. Data acquisition and reconstruction were performed on a urethane rubber phantom of known elastic properties and an ex vivo canine kidney phantom using <2% differential deformation. Regional elastic properties are well represented on Young's modulus images. The long-term objective of this work is to provide a means for remote palpation and elasticity quantitation in deep tissues otherwise inaccessible to manual palpation.
Photoacoustic image reconstruction: material detection and acoustical heterogeneities
NASA Astrophysics Data System (ADS)
Schoeder, S.; Kronbichler, M.; Wall, W. A.
2017-05-01
The correct consideration of acoustical heterogeneities in the context of photoacoustic image reconstruction is an open topic. In this publication a physically motivated algorithm is proposed that reconstructs the optical absorption and diffusion coefficients using a gradient-based scheme. The simultaneous reconstruction of both material properties allows for a subsequent material identification and an accordant update of the acoustical material properties. The algorithm is general in terms of illumination scenarios, detection geometries and applications. No prior knowledge on material distributions needs to be provided, only expected materials have to be specified. Numerical experiments are performed to gain insight into the complex inverse problem and to validate the proposed method. Results show that acoustical heterogeneities are correctly detected improving the optical images.
Point spread function based image reconstruction in optical projection tomography
NASA Astrophysics Data System (ADS)
Trull, Anna K.; van der Horst, Jelle; Palenstijn, Willem Jan; van Vliet, Lucas J.; van Leeuwen, Tristan; Kalkman, Jeroen
2017-10-01
As a result of the shallow depth of focus of the optical imaging system, the use of standard filtered back projection in optical projection tomography causes space-variant tangential blurring that increases with the distance to the rotation axis. We present a novel optical tomographic image reconstruction technique that incorporates the point spread function of the imaging lens in an iterative reconstruction. The technique is demonstrated using numerical simulations, tested on experimental optical projection tomography data of single fluorescent beads, and applied to high-resolution emission optical projection tomography imaging of an entire zebrafish larva. Compared to filtered back projection, our results show greatly reduced radial and tangential blurring over the entire 5.2×5.2 mm² field of view, and a significantly improved signal-to-noise ratio.
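A hedged 1D illustration of the idea of folding a known PSF into an iterative reconstruction: Landweber iterations x <- x + step * A^T(y - A x), where the forward operator A blurs with the PSF. This is a generic sketch, not the authors' tomographic implementation; the PSF, signal, and step size are assumptions:

```python
import numpy as np

psf = np.array([0.25, 0.5, 0.25])                 # assumed known 1D lens PSF

def A(x):                                         # forward model: blur by the PSF
    return np.convolve(x, psf, mode="same")

def AT(r):                                        # adjoint: correlate with the PSF
    return np.convolve(r, psf[::-1], mode="same")

x_true = np.zeros(32)
x_true[12], x_true[20] = 1.0, 0.5                 # two point-like emitters
y = A(x_true)                                     # blurred "measurement"

x = np.zeros(32)
for _ in range(200):
    x = x + 0.5 * AT(y - A(x))                    # Landweber update
# both the data residual and the reconstruction error shrink relative to the start
```

The same structure (forward projection including the PSF, plus its adjoint in the update) carries over to the tomographic setting, where A combines projection and lens blur.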
Point spread function based image reconstruction in optical projection tomography.
Trull, Anna Katharina; van der Horst, Jelle; Palenstijn, Willem Jan; van Vliet, Lucas J; van Leeuwen, Tristan; Kalkman, Jeroen
2017-08-30
As a result of the shallow depth of focus of the optical imaging system, the use of standard filtered back projection in optical projection tomography causes space-variant tangential blurring that increases with the distance to the rotation axis. We present a novel optical tomographic image reconstruction technique that incorporates the point spread function (PSF) of the imaging lens in an iterative reconstruction. The technique is demonstrated using numerical simulations, tested on experimental optical projection tomography data of single fluorescent beads, and applied to high-resolution emission optical projection tomography imaging of an entire zebrafish larva. Compared to filtered back projection, our results show greatly reduced radial and tangential blurring over the entire 5.2×5.2 mm² field of view, and a significantly improved signal-to-noise ratio. © 2017 Institute of Physics and Engineering in Medicine.
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three-dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three-dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close-range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using the Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the collaborative reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
Image reconstruction method for non-synchronous THz signals
NASA Astrophysics Data System (ADS)
Oda, Naoki; Okubo, Syuichi; Sudou, Takayuki; Isoyama, Goro; Kato, Ryukou; Irizawa, Akinori; Kawase, Keigo
2014-05-01
An image reconstruction method for non-synchronous THz signals was developed for the combination of the THz free-electron laser (THz-FEL) developed at Osaka University with a THz imager. The method exploits a slight difference between the repetition period of the THz macro-pulses from the THz-FEL and the frame period of the THz imager, so that an image can be reconstructed from a predetermined number of time-sequential frames. This method was applied to the THz-FEL and another pulsed THz source, and was found to be very effective. The thermal time constants of the pixels in a 320×240 microbolometer array were also evaluated with this method, using a quantum cascade laser as the THz source.
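The timing idea can be made concrete as equivalent-time sampling: when the imager frame period is offset from the macro-pulse repetition period by a small delta, successive frames sample the pulse at phases that advance by delta, so Tp/delta frames sweep one full period. All numbers below are illustrative assumptions, not the instrument's actual timing:

```python
# Equivalent-time sampling arithmetic (illustrative periods, in milliseconds)
Tp = 100.0                            # macro-pulse repetition period
Tf = 101.0                            # imager frame period
delta = Tf - Tp                       # per-frame phase advance
n_frames = int(round(Tp / delta))     # frames needed to sweep one full period
phases = sorted((i * Tf) % Tp for i in range(n_frames))
max_gap = max(b - a for a, b in zip(phases, phases[1:]))  # uniform 1 ms spacing
```

With these numbers, 100 frames yield samples at every millisecond of the pulse period, which is the "predetermined number of time-sequential frames" the abstract refers to.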
Morphological reconstruction of semantic layers in map images
NASA Astrophysics Data System (ADS)
Podlasov, Alexey; Ageenko, Eugene J.; Franti, Pasi
2006-01-01
Map images are composed of semantic layers depicted in arbitrary color. Color separation is often needed to divide the image into layers for storage and processing. Separation can result in severe artifacts because of the overlapping of the layers. In this work, we introduce a technique to restore the original semantic layers after the color separation. The proposed restoration technique improves compression performance of the reconstructed layers in comparison to the corrupted ones when compressed by lossless algorithms such as International Telecommunication Union (ITU) Group 4 (TIFF G4), Portable Network Graphics (PNG), Joint Bi-level Image Experts Group (JBIG), and the context tree method. The resulting technique also provides good visual quality of the reconstructed image layers, and can therefore be applied for selective layer removal/extraction in other map processing applications, e.g., area measurement.
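Restoring a corrupted binary layer can be illustrated with classic morphological reconstruction: a marker grown by geodesic dilation inside a mask recovers exactly the connected components that are seeded. A toy sketch with 4-neighbour dilation; the arrays are illustrative and the paper's restoration technique is more elaborate:

```python
import numpy as np

def reconstruct(marker, mask):
    """Binary morphological reconstruction: grow marker inside mask until stable."""
    prev = np.zeros_like(marker)
    cur = marker & mask
    while not np.array_equal(cur, prev):
        prev = cur
        dil = (cur | np.roll(cur, 1, 0) | np.roll(cur, -1, 0)
                   | np.roll(cur, 1, 1) | np.roll(cur, -1, 1))
        cur = dil & mask                  # geodesic: never leave the mask
    return cur

mask = np.zeros((7, 7), bool)
mask[1:4, 1:4] = True                     # component A (9 pixels)
mask[5, 5] = True                         # component B (1 pixel, unseeded)
marker = np.zeros((7, 7), bool)
marker[2, 2] = True                       # seed inside component A only
out = reconstruct(marker, mask)
print(int(out.sum()))                     # -> 9: component B is not recovered
```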
Progress Update on Iterative Reconstruction of Neutron Tomographic Images
Hausladen, Paul; Gregor, Jens
2016-09-15
This report satisfies the fiscal year 2016 technical deliverable to report on progress in the development of fast iterative reconstruction algorithms for project OR16-3DTomography-PD2Jb, "3D Tomography and Image Processing Using Fast Neutrons." This project has two overall goals. The first of these goals is to extend associated-particle fast neutron transmission and, particularly, induced-reaction tomographic imaging algorithms to three dimensions. The second of these goals is to automatically segment the resultant tomographic images into constituent parts, and then extract information about the parts, such as the class of shape and potentially shape parameters. This report addresses the component of the project concerned with three-dimensional (3D) image reconstruction.
Lu, Xiangwen; Gao, Wenpei; Zuo, Jian-Min; Yuan, Jiabin
2015-02-01
Advances in diffraction and transmission electron microscopy (TEM) have greatly improved the prospect of three-dimensional (3D) structure reconstruction from two-dimensional (2D) images or diffraction patterns recorded in a tilt series at atomic resolution. Here, we report a new graphics processing unit (GPU) accelerated iterative transformation algorithm (ITA) based on the polar fast Fourier transform for reconstructing 3D structure from 2D diffraction patterns. The algorithm also applies to image tilt series by calculating diffraction patterns from the recorded images using the projection-slice theorem. A gold icosahedral nanoparticle of 309 atoms is used as the model to test the feasibility, performance and robustness of the developed algorithm using simulations. Atomic resolution in 3D is achieved for the 309-atom Au nanoparticle using 75 diffraction patterns covering 150° of rotation. The capability demonstrated here provides an opportunity to uncover the 3D structure of small objects of nanometer size by electron diffraction.
Computationally efficient algorithm for multifocus image reconstruction
NASA Astrophysics Data System (ADS)
Eltoukhy, Helmy A.; Kavusi, Sam
2003-05-01
A method for synthesizing enhanced depth-of-field digital still camera pictures using multiple differently focused images is presented. This technique exploits only spatial image gradients in the initial decision process. The image gradient as a focus measure has been shown to be experimentally valid and theoretically sound under weak assumptions with respect to unimodality and monotonicity. Subsequent majority filtering corroborates decisions with those of neighboring pixels, while the use of soft decisions enables smooth transitions across region boundaries. Furthermore, these last two steps add algorithmic robustness for coping with both sensor noise and optics-related effects, such as misregistration or optical flow, and minor intensity fluctuations. The dependence of these optical effects on several optical parameters is analyzed and potential remedies that can allay their impact with regard to the technique's limitations are discussed. Several examples of image synthesis using the algorithm are presented. Finally, leveraging the increasing functionality and emerging processing capabilities of digital still cameras, the method is shown to entail modest hardware requirements and to be implementable using a parallel or general-purpose processor.
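The gradient-based decision at the core of the method (keep each pixel from whichever input has the larger local gradient magnitude) can be sketched as follows. The toy images and the omission of the majority-filtering and soft-decision steps are simplifying assumptions:

```python
import numpy as np

def focus_measure(img):
    """Spatial gradient magnitude as a per-pixel focus measure."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse(img_a, img_b):
    """Keep each pixel from the image that is locally more in focus."""
    mask = focus_measure(img_a) >= focus_measure(img_b)
    return np.where(mask, img_a, img_b)

x = np.linspace(0, 4 * np.pi, 64)
sharp = np.sin(np.outer(x, x))                     # high-frequency test pattern
blur = sharp.copy()
for _ in range(5):                                 # crude neighbourhood blur
    blur = (blur + np.roll(blur, 1, 0) + np.roll(blur, -1, 0)
            + np.roll(blur, 1, 1) + np.roll(blur, -1, 1)) / 5.0
img_a = np.hstack([sharp[:, :32], blur[:, 32:]])   # left half in focus
img_b = np.hstack([blur[:, :32], sharp[:, 32:]])   # right half in focus
fused = fuse(img_a, img_b)
# fused ends up much closer to the all-in-focus scene than img_a
```

The majority-filtering and soft-decision stages of the paper then clean up isolated wrong decisions and smooth the region boundaries.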
Advances in imaging technologies for planning breast reconstruction
Mohan, Anita T.
2016-01-01
The role and choice of preoperative imaging for planning in breast reconstruction is still a disputed topic in the reconstructive community, with varying opinion on the necessity, the ideal imaging modality, costs, and impact on patient outcomes. Since the advent of perforator flaps, their use in microsurgical breast reconstruction has grown. Perforator-based flaps afford lower donor-site morbidity by sparing the underlying muscle, provide durable results and superior cosmesis to create a natural-looking new breast, and are preferred in the context of radiation therapy. However, these surgeries are complex, more technically challenging than implant-based reconstruction, and leave little room for error. The role of imaging in breast reconstruction is to assist the surgeon in exploring or confirming flap choices based on donor-site characteristics and the presence of suitable perforators. Vascular anatomical studies in the lab have provided the surgeon a foundation of knowledge on the location and vascular territories of individual perforators to improve our understanding of flap design and safe flap harvest. The creation of a presurgical map in patients can highlight any abnormal or individual anatomical variance to optimize flap design, intraoperative decision-making, and execution of flap harvest with greater predictability and efficiency. This article highlights the role and techniques for preoperative planning using the newer technologies that have been adopted in reconstructive clinical practice: computed tomographic angiography (CTA), magnetic resonance angiography (MRA), laser-assisted indocyanine green fluorescence angiography (LA-ICGFA), and dynamic infrared thermography (DIRT). The primary focus of this paper is on the application of the CTA and MRA imaging modalities. PMID:27047790
3D Image Reconstruction: Hamiltonian Method for Phase Recovery
Blankenbecler, Richard
2003-03-13
The problem of reconstructing a positive semi-definite 3-D image from the measurement of the magnitude of its 2-D Fourier transform at a series of orientations is explored. The phase of the Fourier transform is not measured. The algorithm developed here utilizes a Hamiltonian, or cost function, that at its minimum provides the solution to the stated problem. The energy function includes both data and physical constraints on the charge distribution or image.
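For intuition, a stand-in for this kind of cost-minimizing phase recovery is the classic error-reduction iteration, which alternates between enforcing the measured Fourier magnitudes (the data constraint) and positivity (the physical constraint); the magnitude-mismatch cost is non-increasing from step to step. This 1D toy is illustrative and is not the paper's Hamiltonian algorithm; the signal and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.zeros(32)
x_true[10:16] = [1.0, 3.0, 2.0, 4.0, 2.0, 1.0]
mag = np.abs(np.fft.fft(x_true))          # "measured" magnitudes; phase unknown

def fourier_error(x):
    """Cost: mismatch between the iterate's magnitudes and the measured ones."""
    return np.linalg.norm(np.abs(np.fft.fft(x)) - mag)

x = rng.random(32)                        # random nonnegative starting image
err0 = fourier_error(x)
for _ in range(500):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))    # enforce the measured magnitudes
    x = np.clip(np.real(np.fft.ifft(X)), 0.0, None)  # enforce positivity
```

Both projections here are exactly the "data" and "physical" constraint types the abstract describes as terms of the energy function.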
Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images
T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)
Cortical Surface Reconstruction from High-Resolution MR Brain Images
Osechinskiy, Sergey; Kruggel, Frithjof
2012-01-01
Reconstruction of the cerebral cortex from magnetic resonance (MR) images is an important step in quantitative analysis of the human brain structure, for example, in sulcal morphometry and in studies of cortical thickness. Existing cortical reconstruction approaches are typically optimized for standard resolution (~1 mm) data and are not directly applicable to higher resolution images. A new PDE-based method is presented for the automated cortical reconstruction that is computationally efficient and scales well with grid resolution, and thus is particularly suitable for high-resolution MR images with submillimeter voxel size. The method uses a mathematical model of a field in an inhomogeneous dielectric. This field mapping, similarly to a Laplacian mapping, has nice laminar properties in the cortical layer, and helps to identify the unresolved boundaries between cortical banks in narrow sulci. The pial cortical surface is reconstructed by advection along the field gradient as a geometric deformable model constrained by a topology-preserving level set approach. The method's performance is illustrated on ex vivo images with 0.25–0.35 mm isotropic voxels. The method is further evaluated by cross-comparison with results of the FreeSurfer software on standard resolution data sets from the OASIS database featuring pairs of repeated scans for 20 healthy young subjects. PMID:22481909
Lobe based image reconstruction in Electrical Impedance Tomography.
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Tawhai, Merryn; Adler, Andy; Mueller-Lisse, Ullrich; Moeller, Knut
2017-02-01
Electrical Impedance Tomography (EIT) is an imaging modality used to generate two-dimensional cross-sectional images representing impedance change in the thorax. The impedance of lung tissue changes with change in air content of the lungs; hence, EIT can be used to examine regional lung ventilation in patients with abnormal lungs. In lung EIT, electrodes are attached around the circumference of the thorax to inject small alternating currents and measure resulting voltages. In contrast to X-ray computed tomography (CT), EIT images do not depict a thorax slice of well-defined thickness, but instead visualize a lens-shaped region around the electrode plane, which results from diffuse current propagation in the thorax. Usually, this is considered a drawback, since image interpretation is impeded if 'off-plane' conductivity changes are projected onto the reconstructed two-dimensional image. In this paper we describe an approach that takes advantage of current propagation below and above the electrode plane. The approach enables estimation of the individual conductivity change in each lung lobe from boundary voltage measurements. This could be used to monitor disease progression in patients with obstructive lung diseases, such as chronic obstructive pulmonary disease (COPD) or cystic fibrosis (CF), and to obtain a more comprehensive insight into the pathophysiology of the lung. Electrode voltages resulting from different conductivities in each lung lobe were simulated utilizing a realistic 3D finite element model (FEM) of the human thorax and the lungs. Overall, 200 different patterns of conductivity change were simulated. A 'lobe reconstruction' algorithm was developed, applying patient-specific anatomical information in the reconstruction process. A standard EIT image reconstruction algorithm and the proposed 'lobe reconstruction' algorithm were used to estimate conductivity change in the lobes. The agreement between simulated and reconstructed conductivity change in
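The generic form behind linearized EIT reconstruction is regularized least squares on a sensitivity (Jacobian) matrix: a conductivity change x is estimated from a boundary-voltage change v. The sketch below uses a random stand-in Jacobian (208 rows, the measurement count of a 16-electrode adjacent drive pattern); all values are illustrative assumptions, not the paper's lobe-based algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_pix = 208, 50
J = rng.standard_normal((n_meas, n_pix))         # stand-in sensitivity matrix
x_true = np.zeros(n_pix)
x_true[10:15] = 1.0                              # change confined to one region
v = J @ x_true + 0.01 * rng.standard_normal(n_meas)  # noisy voltage change
lam = 1e-2                                       # Tikhonov regularization weight
# one-step regularized solution: x = (J^T J + lam I)^-1 J^T v
x_hat = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ v)
```

The paper's lobe reconstruction can be seen as constraining such a solve with patient-specific anatomy, reducing the unknowns from pixels to per-lobe conductivity changes.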
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
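For reference, the dB figures quoted alongside the MSE percentages follow from the usual PSNR relation: a fractional MSE reduction r corresponds to a gain of 10*log10(1/(1-r)) decibels:

```python
import math

def mse_reduction_db(r):
    """Convert a fractional MSE reduction into the equivalent PSNR gain in dB."""
    return 10.0 * math.log10(1.0 / (1.0 - r))

print(round(mse_reduction_db(0.3378), 2))  # -> 1.79 (the 33.78% satellite result)
print(round(mse_reduction_db(0.4988), 2))  # -> 3.0 (the 49.88% fingerprint result)
```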
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was, for the first time, made independent of the point spread function of the image diagnostic system. In this way, the technical obstacles in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system were overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm×5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques using nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc. used in the projection yield comparable results. Reconstructing an SR image at a factor of two with best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them on the high resolution grid based on the shifts of each low resolution image using different interpolation techniques. Experiments are conducted by simulating low resolution images by subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared by using a mean squared error measure between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
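In the idealized noise-free case the projection step is exact: four LR images at subpixel shifts (0,0), (0,1/2), (1/2,0), (1/2,1/2) interleave onto a 2x HR grid. A sketch under the simplifying assumption that the LR images are pure decimations of the HR image; real data need the interpolation techniques discussed above:

```python
import numpy as np

def project_to_hr(lr_images, shifts, factor=2):
    """Place each shifted LR image onto its interleaved positions in the HR grid."""
    h, w = lr_images[0].shape
    hr = np.zeros((h * factor, w * factor))
    for img, (dy, dx) in zip(lr_images, shifts):
        hr[int(dy * factor)::factor, int(dx * factor)::factor] = img
    return hr

hr_true = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
lr_images = [hr_true[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in shifts]
hr_rec = project_to_hr(lr_images, shifts)
print(np.allclose(hr_rec, hr_true))               # -> True in this ideal case
```

This also makes concrete why four independent half-pixel shifts are needed for a factor-of-two reconstruction: each shift fills one of the four interleaved HR sub-grids.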
Integrated imaging of neuromagnetic reconstructions and morphological magnetic resonance data.
Kullmann, W H; Fuchs, M
1991-01-01
New neuromagnetic imaging methods provide spatial information about the functional electrical properties of complex current distributions in the human brain. For practical use in medical diagnosis, a combination of the abstract neuromagnetic imaging results with magnetic resonance (MR) or computed tomography (CT) images of the morphology is required. The biomagnetic images can be overlaid onto three-dimensional morphological images with spatially arbitrary selectable slices, calculated from conventional 2D data. For the current reconstruction, the 3D images furthermore provide a priori information about the conductor geometry. A combination of current source density calculations and linear estimation methods for handling the inverse magnetic problem allows quick imaging of the impressed current source density in arbitrary volume conductors.
Reconstruction of 3D Digital Image of Weeping Forsythia Pollen
NASA Astrophysics Data System (ADS)
Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina
Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. By confocal microscopy techniques, cells and tissues can be visualized deeply, and three-dimensional images created. Compared with conventional microscopes, the confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, the confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is straightforward to analyze the three-dimensional digital image of the pollen with the confocal microscope and the probe acridine orange (AO).
Super-resolution image reconstruction for ultrasonic nondestructive evaluation.
Li, Shanglei; Chu, Tsuchin Philip
2013-12-01
Ultrasonic testing is one of the most successful nondestructive evaluation (NDE) techniques for the inspection of carbon-fiber-reinforced polymer (CFRP) materials. This paper discusses the application of the iterative backprojection (IBP) super-resolution image reconstruction technique to carbon epoxy laminates with simulated defects to obtain high-resolution images for NDE. Super-resolution image reconstruction is an approach used to overcome the inherent resolution limitations of an existing ultrasonic system. It can greatly improve the image quality and allow more detailed inspection of the region of interest (ROI) with high resolution, improving defect evaluation and accuracy. First, three artificially simulated delamination defects in a CFRP panel were considered to evaluate and validate the application of the IBP method. The results of the validation indicate that both the contrast-to-noise ratio (CNR) and the peak signal-to-noise ratio (PSNR) value of the super-resolution result are better than those of the bicubic interpolation method. Then, the IBP method was applied to the low-resolution ultrasonic C-scan image sequence with subpixel displacement of two types of defects (delamination and porosity), which were obtained by the micro-scanning imaging technique. The result demonstrated that super-resolution images achieved better visual quality with an improved image resolution compared with raw C-scan images.
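A minimal sketch of the IBP idea with a single observation and a 2x2 block-average imaging model (a simplification; the paper uses sequences of subpixel-shifted C-scan images): the HR estimate is repeatedly corrected by backprojecting the residual measured in LR space:

```python
import numpy as np

def downsample(hr):                               # simulate: 2x2 block mean
    return hr.reshape(hr.shape[0] // 2, 2, hr.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(lr):                                 # backproject: nearest neighbour
    return np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)

def ibp(lr_obs, n_iter=50, step=1.0):
    hr = upsample(lr_obs)                         # initial HR guess
    for _ in range(n_iter):
        residual = lr_obs - downsample(hr)        # error in the LR domain
        hr = hr + step * upsample(residual)       # backproject the error
    return hr

hr_true = np.add.outer(np.arange(8.0), np.arange(8.0))
lr = downsample(hr_true)
hr_est = ibp(lr)
print(np.allclose(downsample(hr_est), lr))        # -> True: data-consistent estimate
```

With several subpixel-shifted observations, the same loop sums the backprojected residuals over all LR frames, which is what resolves detail beyond the single-frame grid.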
Comparison of image reconstruction methods for structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk.; Fliegel, Karel; Klíma, Miloš
2014-05-01
Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high-frequency information is encoded through aliasing into the observed image. By acquiring multiple images with different illumination patterns, the aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise level conditions on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: signal-to-noise ratio (SNR), signal-to-background ratio (SBR), circular average of the power spectral density, and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine the illumination-patterned images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space. High noise levels in the raw data can cause inaccuracies in the shifts of the spectral components, which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.
Naraoka, Takuya; Kimura, Yuka; Tsuda, Eiichi; Yamamoto, Yuji; Ishibashi, Yasuyuki
2017-04-01
Remnant-preserved anterior cruciate ligament (ACL) reconstruction was introduced to improve clinical outcomes and biological healing. However, the effects of remnant preservation and the influence of the delay from injury until reconstruction on the outcomes of this technique are still uncertain. Purpose/Hypothesis: The purposes of this study were to evaluate whether remnant preservation improved the clinical outcomes and graft incorporation of ACL reconstruction and to examine the influence of the delay between ACL injury and reconstruction on the usefulness of remnant preservation. We hypothesized that remnant preservation improves clinical results and accelerates graft incorporation and that its effect is dependent on the delay between ACL injury and reconstruction. Cohort study; Level of evidence, 2. A total of 151 consecutive patients who underwent double-bundle ACL reconstruction using a semitendinosus graft were enrolled in this study: 74 knees underwent ACL reconstruction without a remnant (or the remnant was <25% of the intra-articular portion of the graft; NR group), while 77 knees underwent ACL reconstruction with remnant preservation (RP group). These were divided into 4 subgroups based on the time from injury to surgery: phase 1 was <3 weeks (n = 24), phase 2 was 3 to less than 8 weeks (n = 70), phase 3 was 8 to 20 weeks (n = 32), and phase 4 was >20 weeks (n = 25). Clinical measurements, including KT-1000 arthrometer side-to-side anterior tibial translation measurements, were assessed at 3, 6, 12, and 24 months after reconstruction. Magnetic resonance imaging evaluations of graft maturation and graft-tunnel integration of the anteromedial and posterolateral bundles were assessed at 3, 6, and 12 months after reconstruction. There was no difference in side-to-side anterior tibial translation between the NR and RP groups. There was also no difference in graft maturation between the 2 groups. Furthermore, the time from ACL injury until reconstruction did
An implementation of the NiftyRec medical imaging library for PIXE-tomography reconstruction
NASA Astrophysics Data System (ADS)
Michelet, C.; Barberet, P.; Desbarats, P.; Giovannelli, J.-F.; Schou, C.; Chebil, I.; Delville, M.-H.; Gordillo, N.; Beasley, D. G.; Devès, G.; Moretto, P.; Seznec, H.
2017-08-01
A new development of the TomoRebuild software package is presented, including "thick sample" correction for nonlinear X-ray production (NLXP) and X-ray absorption (XA). As in the previous versions, C++ programming with standard libraries was used for easier portability. Data reduction requires different steps which may be run either from a command line instruction or via a user-friendly interface, developed as a portable Java plugin for ImageJ. All experimental and reconstruction parameters can be easily modified, either directly in the ASCII parameter files or via the ImageJ interface. A detailed user guide in English is provided. Sinograms and final reconstructed images are generated in common binary formats that can be read by most public domain graphics software. New MLEM and OSEM methods are proposed, using optimized methods from the NiftyRec medical imaging library. An overview of the different medical imaging methods that have been used for ion beam microtomography applications is presented. In TomoRebuild, PIXET data reduction is performed for each chemical element independently and separately from STIMT, except for two steps where the fusion of STIMT and PIXET data is required: the calculation of the correction matrix and the normalization of PIXET data to obtain mass fraction distributions. Correction matrices for NLXP and XA are calculated using procedures extracted from the DISRA code, taking into account a large X-ray detection solid angle. For this, the 3D STIMT mass density distribution is used, assuming a homogeneous global composition. A first example of a PIXET experiment using two detectors is presented. Reconstruction results are compared and found to be in good agreement between different codes: FBP, NiftyRec MLEM and OSEM of the TomoRebuild software package, the original DISRA, its accelerated version provided in JPIXET and the accelerated MLEM version of JPIXET, with or without correction.
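For context, the MLEM and OSEM updates mentioned above have a standard multiplicative form. A minimal dense-matrix sketch (illustrative only, not the NiftyRec implementation, which runs these updates on the GPU):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-likelihood EM for emission tomography, y ~ Poisson(A x).
    The multiplicative update preserves non-negativity of x."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12             # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

def osem(A, y, n_subsets=4, n_iter=25):
    """Ordered-subsets EM: the same update applied to row subsets in turn,
    trading exact likelihood ascent for faster early convergence."""
    x = np.ones(A.shape[1])
    rows = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in rows:
            As = A[idx]
            proj = As @ x
            proj[proj == 0] = 1e-12
            x *= (As.T @ (y[idx] / proj)) / As.sum(axis=0)
    return x
```

OSEM performs one MLEM-like update per subset, so each full pass over the data applies several corrections instead of one; this is the main source of its speed-up.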
Scattering calculation and image reconstruction using elevation-focused beams
Duncan, David P.; Astheimer, Jeffrey P.; Waag, Robert C.
2009-01-01
Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering. PMID:19425653
Statistical reconstruction algorithms for continuous wave electron spin resonance imaging
NASA Astrophysics Data System (ADS)
Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon
2013-06-01
Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.
Wech, T; Gutberlet, M; Greiser, A; Stäb, D; Ritter, C O; Beer, M; Hahn, D; Köstler, H
2010-08-01
The aim of this study was to perform high-resolution functional MR imaging using accelerated density-weighted real-time acquisition (DE) and a combination of compressed sensing (CO) and parallel imaging for image reconstruction. Measurements were performed on a 3 T whole-body system equipped with a dedicated 32-channel body array coil. A one-dimensional density-weighted spin-warp technique was used, i.e., non-equidistant phase encoding steps were acquired. The two acceleration techniques, compressed sensing and parallel imaging, were applied in sequence. From a complete Cartesian k-space, a four-fold uniformly undersampled k-space was created. In addition, each undersampled time frame was further undersampled by an additional acceleration factor of 2.1, using an individual density-weighted undersampling pattern for each time frame. Simulations were performed using data from a conventional human in-vivo cine examination, and in-vivo measurements of the human heart were carried out employing an adapted real-time sequence. High-quality DECO parallel-imaging real-time images of human cardiac function could be acquired. An acceleration factor of 8.4 could be achieved, making it possible to maintain the high spatial and temporal resolution without significant noise enhancement. DECO parallel imaging facilitates high acceleration factors, which allows real-time MR acquisition of heart dynamics and function with an image quality comparable to that conventionally achieved with clinically established triggered cine imaging. Georg Thieme Verlag KG Stuttgart, New York.
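The compressed sensing step in such a reconstruction amounts to solving a sparsity-regularized least-squares problem from undersampled measurements. A generic sketch using iterative soft-thresholding (ISTA) on a toy undersampled system (not the authors' actual density-weighted k-space operator):

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for the l1-regularized problem
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

With far fewer measurements than unknowns, the l1 penalty still recovers a sparse signal; in MRI the sparsifying transform and the Fourier sampling operator take the place of the random matrix used here.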
2D Feature Recognition And 3d Reconstruction In Solar Euv Images
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.
2005-05-01
EUV images show the solar corona in a typical temperature range of T ≳ 1 MK, which encompasses the most common coronal structures: loops, filaments, and other magnetic structures in active regions, the quiet Sun, and coronal holes. Quantitative analysis increasingly demands automated 2D feature recognition and 3D reconstruction, in order to localize, track, and monitor the evolution of such coronal structures. We discuss numerical tools that “fingerprint” curvilinear 1D features (e.g., loops and filaments). We discuss existing fingerprinting algorithms, such as the brightness-gradient method, the oriented-connectivity method, stereoscopic methods, time-differencing, and space-time feature recognition. We discuss improved 2D feature recognition and 3D reconstruction techniques that make use of additional a priori constraints, using guidance from magnetic field extrapolations, curvature radii constraints, and acceleration and velocity constraints in time-dependent image sequences. Applications of these algorithms aid the analysis of SOHO/EIT, TRACE, and STEREO/SECCHI data, such as disentangling, 3D reconstruction, and hydrodynamic modeling of coronal loops, postflare loops, filaments, prominences, and 3D reconstruction of the coronal magnetic field in general.
Evaluation of the Bresenham algorithm for image reconstruction with ultrasound computer tomography
NASA Astrophysics Data System (ADS)
Spieß, Norbert; Zapf, Michael; Ruiter, Nicole V.
2011-03-01
At Karlsruhe Institute of Technology a 3D Ultrasound Computer Tomography (USCT) system is under development for early breast cancer detection. With 3.5 million acquired raw data samples and up to one billion voxels per image, the reconstruction of breast volumes at the highest possible resolution may take weeks. The currently applied backprojection algorithm, based on the synthetic aperture focusing technique (SAFT), offers only limited potential for further reduction of the reconstruction time. An alternative reconstruction method applies signal-detection data and rasterizes the backprojected ellipsoids directly. A well-known rasterization algorithm is the Bresenham algorithm, which was originally designed to rasterize lines. In this work an existing Bresenham concept for rasterizing circles is extended to comply with the requirements of image reconstruction in USCT: the circle rasterization was adapted to rasterize spheres and extended to floating-point parameterization. The evaluation of the algorithm showed that the quality of the rasterization is comparable to the original algorithm. The achieved performance of the circle and sphere rasterization algorithms was 12 MVoxel/s and 3.5 MVoxel/s, respectively. When taking into account the performance increase due to the reduced A-scan data, an acceleration factor of 28 in comparison to the currently applied algorithm could be reached. For future work the presented rasterization algorithm offers additional potential for further speed-up.
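The circle case of the Bresenham/midpoint idea, which the paper extends to spheres and floating-point parameters, can be sketched in its classic integer form (for illustration only):

```python
def midpoint_circle(cx, cy, r):
    """Midpoint (Bresenham-type) circle rasterization: walk one octant with
    integer-only updates and mirror each point into the other seven."""
    points = set()
    x, y = r, 0
    err = 1 - r                        # midpoint decision variable
    while x >= y:
        for sx, sy in ((x, y), (y, x)):
            points.update({(cx + sx, cy + sy), (cx - sx, cy + sy),
                           (cx + sx, cy - sy), (cx - sx, cy - sy)})
        y += 1
        if err <= 0:
            err += 2 * y + 1           # midpoint inside: keep x
        else:
            x -= 1                     # midpoint outside: step inward
            err += 2 * (y - x) + 1
    return points
```

Because only additions and comparisons appear in the inner loop, the same pattern generalizes to sphere shells, which is what makes it attractive for rasterizing backprojected ellipsoids directly.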
Polarimetric ISAR: Simulation and image reconstruction
Chambers, David H.
2016-03-21
In polarimetric ISAR the illumination platform, typically airborne, carries a pair of antennas that are directed toward a fixed point on the surface as the platform moves. During platform motion, the antennas maintain their gaze on the point, creating an effective aperture for imaging any targets near that point. The interaction between the transmitted fields and targets (e.g. ships) is complicated since the targets are typically many wavelengths in size. Calculation of the field scattered from the target typically requires solving Maxwell’s equations on a large three-dimensional numerical grid. This is prohibitive to use in any real-world imaging algorithm, so the scattering process is typically simplified by assuming the target consists of a cloud of independent, non-interacting, scattering points (centers). Imaging algorithms based on this scattering model perform well in many applications. Since polarimetric radar is not very common, the scattering model is often derived for a scalar field (single polarization) where the individual scatterers are assumed to be small spheres. However, when polarization is important, we must generalize the model to explicitly account for the vector nature of the electromagnetic fields and its interaction with objects. In this note, we present a scattering model that explicitly includes the vector nature of the fields but retains the assumption that the individual scatterers are small. The response of the scatterers is described by electric and magnetic dipole moments induced by the incident fields. We show that the received voltages in the antennas are linearly related to the transmitting currents through a scattering impedance matrix that depends on the overall geometry of the problem and the nature of the scatterers.
Monte-Carlo simulations and image reconstruction for novel imaging scenarios in emission tomography
NASA Astrophysics Data System (ADS)
Gillam, John E.; Rafecas, Magdalena
2016-02-01
Emission imaging incorporates both the development of dedicated devices for data acquisition as well as algorithms for recovering images from that data. Emission tomography is an indirect approach to imaging. The effect of device modification on the final image can be understood through both the way in which data are gathered, using simulation, and the way in which the image is formed from that data, or image reconstruction. When developing novel devices, systems and imaging tasks, accurate simulation and image reconstruction allow performance to be estimated, and in some cases optimized, using computational methods before or during the process of physical construction. However, there are a vast range of approaches, algorithms and pre-existing computational tools that can be exploited and the choices made will affect the accuracy of the in silico results and quality of the reconstructed images. On the one hand, should important physical effects be neglected in either the simulation or reconstruction steps, specific enhancements provided by novel devices may not be represented in the results. On the other hand, over-modeling of device characteristics in either step leads to large computational overheads that can confound timely results. Here, a range of simulation methodologies and toolkits are discussed, as well as reconstruction algorithms that may be employed in emission imaging. The relative advantages and disadvantages of a range of options are highlighted using specific examples from current research scenarios.
Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S
2017-01-01
Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. Existing image databases often hold large data sets whose processing requires a lot of time, so accelerating each of the processing stages in breast cancer detection is a very important issue. In this paper, an implementation of an existing algorithm for region-of-interest-based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. Several DFE configurations were tested, each giving a different acceleration of the algorithm's execution. These acceleration values are presented, and the experimental results showed good acceleration.
Geraghty, Benjamin J; Lau, Justin Y C; Chen, Albert P; Cunningham, Charles H
2017-02-01
To enable large field-of-view, time-resolved volumetric coverage in hyperpolarized ¹³C metabolic imaging by implementing a novel data acquisition and image reconstruction method based on the compressed sensing framework. A spectral-spatial pulse for single-resonance excitation followed by a symmetric echo-planar imaging (EPI) readout was implemented for encoding a 72 × 18 cm² field of view at 5 × 5 mm² resolution. Random undersampling was achieved with blipped z-gradients during the ramp portion of the EPI readout. The sequence and reconstruction were tested with phantom studies and consecutive in vivo hyperpolarized ¹³C scans in rats. Retrospectively and prospectively undersampled data were compared on the basis of structural similarity in the reconstructed images and the quantification of the lactate-to-pyruvate ratio in rat kidneys. No artifacts or loss of resolution were evident in the compressed sensing reconstructed images acquired with the proposed sequence. Structural similarity analysis indicates that compressed sensing reconstructions can accurately recover spatial features in the metabolic images evaluated. A novel z-blip acquisition sequence for compressed sensing accelerated hyperpolarized ¹³C 3D echo-planar imaging was developed and demonstrated. The close agreement in lactate-to-pyruvate ratios between retrospectively and prospectively undersampled data from rats shows that metabolic information is preserved with acceleration factors up to 3-fold with the developed method. Magn Reson Med 77:538-546, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Kim, D; Kang, S; Kim, T; Suh, T; Kim, S
2014-06-01
Purpose: In this work, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method, in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired by Monte Carlo simulation and an in-house 4D digital phantom generation program, assuming the geometry of a cone-beam computed tomography system mounted on a linear accelerator. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with a total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than that based upon the existing method, filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it may enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (MSIP).
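The two ingredients named above, a SART sweep and a total-variation minimization step, can be sketched in minimal dense-matrix form. Both functions below are illustrative stand-ins (a smoothed TV penalty with periodic boundaries replaces the full TVMM solver), not the paper's 4D implementation:

```python
import numpy as np

def sart_step(A, y, x, relax=0.9):
    """One SART sweep: residual backprojection normalized by the row and
    column sums of the system matrix A."""
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1
    return x + relax * (A.T @ ((y - A @ x) / row_sum)) / col_sum

def tv_smooth(img, weight=0.01, eps=1e-2, n=10):
    """Gradient descent on a smoothed (differentiable) total-variation
    penalty with periodic boundaries; interleave with SART sweeps."""
    for _ in range(n):
        gx = np.roll(img, -1, axis=0) - img
        gy = np.roll(img, -1, axis=1) - img
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        grad = (np.roll(px, 1, axis=0) - px) + (np.roll(py, 1, axis=1) - py)
        img = img - weight * grad      # descent on the TV surrogate
    return img
```

In a SART+TV reconstruction the two steps alternate: each SART sweep pulls the image toward data consistency, and each TV step suppresses the streak-like noise that undersampled projections introduce.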
An image reconstruction from ECT data of complex flaws
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2009-03-01
Ensuring the safety and reliability of existing metallic structures is a growing societal demand, since their failure can threaten people's lives. Non-destructive testing (NDT) supports public safety by finding damage in structures. Eddy current testing (ECT) is an NDT method for metallic or conductive materials. It already plays an important role in a wide range of fields, such as maintenance in aviation and power plants, and production in ironworks. Although ECT is considered a mature testing method, it has the unwanted property that flaws appear blurred in the ECT signal, a defect that partly stems from the basic principle of ECT. In order to obtain a finer image of a flaw, the authors previously proposed a signal processing method to reconstruct a sharper flaw image from the ECT signal. The method is based on the simple relationship that the signal is expressed as a convolution of a response function and the flaw shape. The reconstructed images of point flaws and of both short and long line flaws were finer than the corresponding ECT signal images, demonstrating the validity of the method for those flaws. Nevertheless, since the aim of that work was a fundamental survey of the method's validity, the tested flaws were limited in shape. In this paper, going beyond that limitation, the authors report the results of applying the method to flaws of complex shape, such as are likely to be found at actual inspection sites. The reconstructed images show notable results, indicating that the method remains valid even for complex flaws.
Computer system for four-dimensional transesophageal echocardiographic image reconstruction.
Duann, J R; Lin, S B; Hu, W C; Su, J L
1999-01-01
This paper presents a system for reconstructing a four-dimensional (4D) heart-beating image from transesophageal echocardiographic (TEE) data acquired with a rotational approach. The system consists of the necessary processing modules for two-dimensional (2D) echocardiogram reformation and 3D/4D image reconstruction. These include the modules for image decoding, image re-coordinating, and three-dimensional (3D) volume rendering. The system is implemented on a PC platform running the Windows 95 operating system (Intel Pentium-166 CPU, 64 MB RAM, and a 2.0 GB hard disk). It takes 6 min to reconstruct a 4D echocardiographic data set. The resultant 2D/3D/4D echocardiographic images provide tools for investigating the phenomenon of the beating heart, exploring the heart structure, and reformatting the 2D echocardiograms in an arbitrary plane. The functions provided by the system can be applied to further studies, such as 3D cardiac shape analysis, cardiac function measurement, and so forth.
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 µm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
Toward 5D image reconstruction for optical interferometry
NASA Astrophysics Data System (ADS)
Baron, Fabien; Kloppenborg, Brian; Monnier, John
2012-07-01
We report on our progress toward flexible image reconstruction software for optical interferometry capable of "5D imaging" of stellar surfaces. 5D imaging is here defined as the capability to directly image one or several stars in three dimensions, with both the time and wavelength dependencies taken into account during the reconstruction process. Our algorithm makes use of the Healpix (Gorski et al., 2005) sphere partition scheme to tessellate the stellar surface, the Open Graphics Library (OpenGL) to model the 3D spheroid geometry, and the Open Computing Language (OpenCL) framework for all other computations. We use the Markov Chain Monte Carlo software SQUEEZE to solve the image reconstruction problem on the surfaces of these stars. Finally, the Compressed Sensing and Bayesian Evidence paradigms are employed to determine the best regularization for spotted stars.
Probe reconstruction for holographic X-ray imaging
Hagemann, Johannes; Robisch, Anna-Lena; Osterhoff, Markus; Salditt, Tim
2017-01-01
In X-ray holographic near-field imaging the resolution and image quality depend sensitively on the beam. Artifacts are often encountered due to the strong focusing required to reach high resolution. Here, two schemes for reconstructing the complex-valued and extended wavefront of X-ray nano-probes, primarily in the planes relevant for imaging (i.e. focus, sample and detection plane), are presented and compared. Firstly, near-field ptychography is used, based on scanning a test pattern laterally as well as longitudinally along the optical axis. Secondly, any test pattern is dispensed with and the wavefront reconstructed only from data recorded for different longitudinal translations of the detector. For this purpose, an optimized multi-plane projection algorithm is presented, which can cope with the numerically very challenging setting of a divergent wavefront emanating from a hard X-ray nanoprobe. The results of both schemes are in very good agreement. The probe retrieval can be used as a tool for optics alignment, in particular at X-ray nanoprobe beamlines. Combining probe retrieval and object reconstruction is also shown to improve the image quality of holographic near-field imaging. PMID:28244446
Atmospheric isoplanatism and astronomical image reconstruction on Mauna Kea
Cowie, L.L.; Songaila, A.
1988-07-01
Atmospheric isoplanatism for visual-wavelength image-reconstruction applications was measured on Mauna Kea in Hawaii. For most nights the correlation of the transfer functions is substantially wider than the long-exposure transfer function at separations up to 30 arcsec. Theoretical analysis shows that this is reasonable if the mean Fried parameter is approximately 30 cm at 5500 Å. Reconstructed image quality may be described by a Gaussian with a FWHM of λ/s₀. Under average conditions, s₀ (30 arcsec) exceeds 55 cm at 7000 Å. The results show that visual image quality in the 0.1-0.2 arcsec range is obtainable over much of the sky with large ground-based telescopes on this site.
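The quoted figures can be checked directly: approximating the reconstructed point-spread function as a Gaussian of FWHM λ/s₀, a coherence scale of 55 cm at 7000 Å corresponds to roughly 0.26 arcsec, and larger s₀ gives proportionally sharper images:

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0   # radians to arcseconds

def seeing_fwhm_arcsec(wavelength_m, s0_m):
    """Estimate of the reconstructed image FWHM, lambda / s0,
    converted from radians to arcseconds."""
    return wavelength_m / s0_m * RAD_TO_ARCSEC
```

For example, `seeing_fwhm_arcsec(7000e-10, 0.55)` gives about 0.26 arcsec.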
Multimodal optical molecular image reconstruction with frequency domain measurements.
Bartels, M; Chen, W; Bardhan, R; Ke, S; Halas, N J; Wareing, T; McGhee, J; Joshi, A
2009-01-01
Multimodality molecular imaging is becoming increasingly important for understanding both the structural and the functional characteristics of tissues, organs, and tumors. So far, invasive nuclear methods utilizing ionizing radiation have been the "gold standard" of molecular imaging. We investigate non-contact, non-invasive, patient-tolerant and inexpensive near-infrared (NIR) frequency-domain optical tomography (FDOT) as a functional complement to structural X-ray computed tomography (CT) data. We demonstrate a novel multifrequency NIR FDOT approach in both transmission and reflectance mode, and employ the radiative transport equation (RTE) for 3D reconstruction of a target labeled with a novel fluorescent gold nanoshell-indocyanine green (NS-ICG) agent in an ex vivo nude mouse. The results demonstrate that gold NS-ICG with multifrequency NIR FDOT is a promising fluorophore for multimodal optical molecular image reconstruction.
Proton Computed Tomography: iterative image reconstruction and dose evaluation
NASA Astrophysics Data System (ADS)
Civinini, C.; Bonanno, D.; Brianzi, M.; Carpinelli, M.; Cirrone, G. A. P.; Cuttone, G.; Lo Presti, D.; Maccioni, G.; Pallotta, S.; Randazzo, N.; Scaringella, M.; Romano, F.; Sipala, V.; Talamonti, C.; Vanzi, E.; Bruzzi, M.
2017-01-01
Proton Computed Tomography (pCT) is a medical imaging method with the potential to increase the accuracy of treatment planning and patient positioning in hadron therapy. A pCT system based on a silicon microstrip tracker and a YAG:Ce crystal calorimeter has been developed within the INFN Prima-RDH collaboration. The prototype has been tested with a 175 MeV proton beam at The Svedberg Laboratory (Uppsala, Sweden) with the aim of reconstructing and characterizing a tomographic image. Algebraic iterative reconstruction methods (ART), together with the most-likely-path formalism, have been used to obtain tomographies of an inhomogeneous phantom and to extract density and spatial resolutions. These results are presented and discussed together with an estimate of the average dose delivered to the phantom and the dependence of image quality on dose. Owing to the heavy computational load required by the algebraic algorithms, the reconstruction programs have been implemented to fully exploit the parallelism of Graphics Processing Units. An extended field of view pCT system is at an advanced construction stage. This apparatus will be able to reconstruct objects the size of a human head, making it possible to characterize this pCT approach in a pre-clinical environment.
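The algebraic iterative reconstruction described above can be illustrated with a minimal Kaczmarz-style ART sketch. The 2x2 phantom, the four-ray geometry, and all parameters below are illustrative assumptions for a toy problem, not the collaboration's GPU implementation or its most-likely-path ray model.

```python
def art(A, y, sweeps=200, relax=1.0):
    """Kaczmarz-style ART: cycle over the rays (rows of A), projecting the
    current image estimate onto each ray's hyperplane a_i . x = y_i."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, y_i in zip(A, y):
            norm2 = sum(a * a for a in a_i)
            if norm2 == 0.0:
                continue
            # mismatch between the measured and predicted path integral
            c = relax * (y_i - sum(a * v for a, v in zip(a_i, x))) / norm2
            x = [v + c * a for v, a in zip(x, a_i)]
    return x

# Toy 2x2 "phantom" with densities [1, 2, 3, 4], probed by four straight
# rays (two rows, two columns); y holds the noiseless path integrals.
A = [[1, 1, 0, 0],   # top row
     [0, 0, 1, 1],   # bottom row
     [1, 0, 1, 0],   # left column
     [0, 1, 0, 1]]   # right column
y = [3.0, 7.0, 4.0, 6.0]
x = art(A, y)        # converges to the phantom [1, 2, 3, 4]
```

Starting from zero, the iterates stay in the row space of A, so ART converges to the minimum-norm solution, which for this toy system coincides with the phantom.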
An Inverse Problems Approach to MR-EPT Image Reconstruction.
Borsic, A; Perreard, I; Mahara, A; Halter, R J
2016-01-01
Magnetic Resonance-Electrical Properties Tomography (MR-EPT) is an imaging modality that maps the spatial distribution of the electrical conductivity and permittivity using standard MRI systems. The presence of a body within the scanner alters the RF field, and by mapping these alterations it is possible to recover the electrical properties. The field is time-harmonic, and can be described by the Helmholtz equation. Approximations to this equation have been previously used to estimate conductivity and permittivity in terms of first or second derivatives of RF field data. Using these same approximations, an inverse approach to solving the MR-EPT problem is presented here that leverages a forward model for describing the magnitude and phase of the field within the imaging domain, and a fitting approach for estimating the electrical properties distribution. The advantages of this approach are that 1) differentiation of the measured data is not required, thus reducing noise sensitivity, and 2) different regularization schemes can be adopted, depending on prior knowledge of the distribution of conductivity or permittivity, leading to improved image quality. To demonstrate the developed approach, both Quadratic (QR) and Total Variation (TV) regularization methods were implemented and evaluated through numerical simulation and experimentally acquired data. The proposed inverse approach to MR-EPT reconstruction correctly identifies contrasts and accurately reconstructs the geometry in both simulations and experiments. The TV regularized scheme reconstructs sharp spatial transitions, which are difficult to reconstruct with other, more traditional approaches.
Cone-beam image reconstruction using spherical harmonics.
Taguchi, K; Zeng, G L; Gullberg, G T
2001-06-01
Image reconstruction from cone-beam projections is required for both x-ray computed tomography (CT) and single photon emission computed tomography (SPECT). Grangeat's algorithm accurately performs cone-beam reconstruction provided that Tuy's data sufficiency condition is satisfied and projections are complete. The algorithm consists of three stages: (a) Forming weighted plane integrals by calculating the line integrals on the cone-beam detector, and obtaining the first derivative of the plane integrals (3D Radon transform) by taking the derivative of the weighted plane integrals. (b) Rebinning the data and calculating the second derivative with respect to the normal to the plane. (c) Reconstructing the image using the 3D Radon backprojection. A new method for implementing the first stage of Grangeat's algorithm was developed using spherical harmonics. The method assumes that the detector is large enough to image the whole object without truncation. Computer simulations show that if the trajectory of the cone vertex satisfies Tuy's data sufficiency condition, the proposed algorithm provides an exact reconstruction.
PET Image Reconstruction Using Information Theoretic Anatomical Priors
Somayajula, Sangeetha; Panagiotou, Christos; Rangarajan, Anand; Li, Quanzheng; Arridge, Simon R.; Leahy, Richard M.
2011-01-01
We describe a nonparametric framework for incorporating information from co-registered anatomical images into positron emission tomographic (PET) image reconstruction through priors based on information theoretic similarity measures. We compare and evaluate the use of mutual information (MI) and joint entropy (JE) between feature vectors extracted from the anatomical and PET images as priors in PET reconstruction. Scale-space theory provides a framework for the analysis of images at different levels of detail, and we use this approach to define feature vectors that emphasize prominent boundaries in the anatomical and functional images, and attach less importance to detail and noise that is less likely to be correlated in the two images. Through simulations that model the best case scenario of perfect agreement between the anatomical and functional images, and a more realistic situation with a real magnetic resonance image and a PET phantom that has partial volumes and a smooth variation of intensities, we evaluate the performance of MI and JE based priors in comparison to a Gaussian quadratic prior, which does not use any anatomical information. We also apply this method to clinical brain scan data using F18 Fallypride, a tracer that binds to dopamine receptors and therefore localizes mainly in the striatum. We present an efficient method of computing these priors and their derivatives based on fast Fourier transforms that reduce the complexity of their convolution-like expressions. Our results indicate that while sensitive to initialization and choice of hyperparameters, information theoretic priors can reconstruct images with higher contrast and superior quantitation than quadratic priors. PMID:20851790
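The information theoretic similarity measures used as priors here can be sketched with a minimal histogram-based mutual information estimate. The uniform quantization and the toy "images" below are illustrative assumptions; the paper builds its measures on scale-space feature vectors, which this sketch does not model.

```python
from collections import Counter
from math import log2

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information (in bits) between two equal-length
    intensity lists, after uniform quantization into `bins` levels."""
    def quantize(vals):
        lo, hi = min(vals), max(vals)
        if hi == lo:
            return [0] * len(vals)
        return [min(bins - 1, int((v - lo) * bins / (hi - lo))) for v in vals]
    qa, qb = quantize(a), quantize(b)
    n = len(a)
    pa, pb, pab = Counter(qa), Counter(qb), Counter(zip(qa, qb))
    # MI = sum over (x, y) of p(x, y) * log2( p(x, y) / (p(x) p(y)) )
    return sum((c / n) * log2(c * n / (pa[i] * pb[j]))
               for (i, j), c in pab.items())

# Toy "images" over 64 pixels: a repeating ramp and a slowly varying ramp.
a = [i % 8 for i in range(64)]       # perfectly self-informative pattern
b = [i // 8 for i in range(64)]      # statistically independent of a
mi_self = mutual_information(a, a)   # high: each level predicts itself
mi_indep = mutual_information(a, b)  # near zero: no shared structure
```

A high value signals strong statistical dependence between the anatomical and functional intensities; joint entropy can be obtained from the same joint histogram.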
Image reconstruction and compressive sensing in MIMO radar
NASA Astrophysics Data System (ADS)
Sun, Bing; Lopez, Juan; Qiao, Zhijun
2014-05-01
Multiple-input multiple-output (MIMO) radar utilizes flexible configurations of transmitting and receiving antennas to construct images of target scenes. Because the target scenes are sparse, the compressive sensing (CS) technique can be used to achieve feasible reconstruction of the target scenes from undersampled data. This paper presents the signal model of MIMO radar and derives the corresponding CS measurement matrix, demonstrating the applicability of the CS technique. The basis pursuit and total-variation minimization methods are adopted to recover different types of scenes. Numerical simulations illustrate the validity of the reconstruction for one-dimensional and two-dimensional scenes.
Superiorization-based multi-energy CT image reconstruction
NASA Astrophysics Data System (ADS)
Yang, Q.; Cong, W.; Wang, G.
2017-04-01
The recently developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. We then compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and Split-Bregman algorithms generate good results, with low noise and reduced artefacts.
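The superiorization pattern, interleaving a small, shrinking regularization-reducing perturbation with a feasibility-seeking SART sweep, can be sketched on a toy system. The 2x2 phantom, the total-variation target, and the step-size schedule below are illustrative assumptions; the paper's PRISM regularization is not modeled here.

```python
import math

def sart_sweep(x, A, y, relax=1.0):
    """One SART sweep: a simultaneous update weighted by row and column sums."""
    m, n = len(A), len(A[0])
    rowsum = [sum(row) for row in A]
    colsum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    resid = [(y[i] - sum(A[i][j] * x[j] for j in range(n))) / rowsum[i]
             for i in range(m)]
    return [x[j] + relax * sum(A[i][j] * resid[i] for i in range(m)) / colsum[j]
            for j in range(n)]

def tv_gradient(x, eps=1e-3):
    """Gradient of a smoothed 1-D total variation of the flattened image."""
    g = [0.0] * len(x)
    for j in range(len(x) - 1):
        s = (x[j + 1] - x[j]) / math.sqrt((x[j + 1] - x[j]) ** 2 + eps)
        g[j] -= s
        g[j + 1] += s
    return g

# Toy piecewise-constant 2x2 phantom [1, 1, 2, 2] probed by row/column sums.
A = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
y = [2.0, 4.0, 3.0, 3.0]
x, beta = [0.0] * 4, 1.0
for _ in range(200):
    g = tv_gradient(x)
    norm = math.sqrt(sum(v * v for v in g))
    if norm > 1e-12:                     # superiorization: nudge the iterate
        x = [xj - beta * gj / norm for xj, gj in zip(x, g)]
    beta *= 0.9                          # summable, shrinking perturbations
    x = sart_sweep(x, A, y)              # feasibility-seeking SART step
```

Because the perturbation steps are summable, they do not destroy the convergence of the feasibility-seeking sweeps, while steering the iterates toward lower total variation.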
NASA Astrophysics Data System (ADS)
Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali
2015-01-01
In order to improve the quality of 3D X-ray tomography reconstruction for Non Destructive Testing (NDT), we investigate hierarchical Bayesian methods. In NDT, useful prior information about the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only is the volume estimated through its prior model, but so are the hyperparameters of that prior. This additional complexity, when the methods are applied to large volumes (from 512³ to 8192³ voxels), results in an increased computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper rely on algorithmic acceleration by Variational Bayesian Approximation (VBA) [1] and on hardware acceleration through projection and back-projection operators parallelized on many-core processors such as GPUs [2]. We consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward or projection) and Ht (adjoint or back-projection) implemented on multi-GPU [2] are used in this study. The different methods are evaluated on the synthetic "Shepp and Logan" volume in terms of reconstruction quality and time. We use several simple regularizations of order 1 and order 2; other prior models also exist [5]. For a discrete image, segmentation and reconstruction can sometimes be performed jointly, in which case the reconstruction can be done with fewer projections.
Ha, S; Matej, S; Ispiryan, M; Mueller, K
2013-02-01
We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators is slightly faster than, or approximately comparable to, FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with. PMID:23531763
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technique for reconstructing noisy nanosecond pulse images via stochastic resonance, based on modulation instability. A theoretical model of this method for optical pulse signals is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of the output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of the noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector's intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present an LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3-D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the nonnegative least-squares method and the EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and the MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which
The SRT reconstruction algorithm for semiquantification in PET imaging
Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.
2015-10-15
Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT
3D reconstruction, visualization, and measurement of MRI images
NASA Astrophysics Data System (ADS)
Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap
1999-03-01
This paper primarily focuses on manipulating 2D medical image data, often acquired as Magnetic Resonance images, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can be an arduous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate, and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed, and neural-network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, more generally, to use all the 3D interactive tools that can help plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning, and in time and cost reduction.
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become popular in recent years. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. We then explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
Accuracy of quantitative reconstructions in SPECT/CT imaging
NASA Astrophysics Data System (ADS)
Shcherbinin, S.; Celler, A.; Belhocine, T.; van der Werf, R.; Driedger, A.
2008-09-01
The goal of this study was to determine the quantitative accuracy of our OSEM-APDI reconstruction method based on SPECT/CT imaging for the Tc-99m, In-111, I-123, and I-131 isotopes. Phantom studies were performed on a SPECT/low-dose multislice CT system (Infinia-Hawkeye-4 slice, GE Healthcare) using clinical acquisition protocols. Two radioactive sources were centrally and peripherally placed inside an anthropomorphic thorax phantom filled with non-radioactive water. Corrections for attenuation, scatter, collimator blurring, and collimator septal penetration were applied, and their contributions to the overall accuracy of the reconstruction were evaluated. Reconstruction with the most comprehensive set of corrections resulted in activity estimation with error levels of 3-5% for all the isotopes.
NASA Astrophysics Data System (ADS)
Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi
2017-03-01
Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address these problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, in comparison with the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle high levels of noise and that the AADMM algorithm outperforms the others in identifying the object against its background.
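The shape-preserving behavior of the ROF model can be sketched in a minimal 1-D analogue, minimizing a smoothed ROF energy by plain gradient descent. The signal, parameters, and solver below are simplifying assumptions for illustration; the paper works on 2-D ECT data with SAL and AADMM solvers.

```python
import math

def rof_denoise_1d(f, lam=0.1, step=0.05, iters=2000, eps=1e-2):
    """Gradient descent on a smoothed ROF energy for a 1-D signal:
    E(u) = 0.5*sum((u - f)**2) + lam*sum(sqrt((u[j+1] - u[j])**2 + eps))."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[j] - f[j] for j in range(n)]        # data-fidelity gradient
        for j in range(n - 1):
            d = u[j + 1] - u[j]
            s = lam * d / math.sqrt(d * d + eps)   # smoothed TV gradient
            g[j] -= s
            g[j + 1] += s
        u = [u[j] - step * g[j] for j in range(n)]
    return u

# Piecewise-constant "object" plus alternating noise: the TV term flattens
# the oscillation while largely preserving the sharp edge.
clean = [0.0] * 8 + [1.0] * 8
noisy = [c + 0.2 * (-1) ** k for k, c in enumerate(clean)]
denoised = rof_denoise_1d(noisy)
```

High-frequency oscillation is expensive under total variation, so it is suppressed, while a single large jump costs the same regardless of its sharpness, which is why edges survive.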
Spectral image reconstruction for transcranial ultrasound measurement.
Clement, Greg T
2005-12-07
An approach aimed at improved ultrasound resolution and signal strength through highly attenuating media is presented. The method delivers a series of multiple-cycle bursts in order to construct a discrete spectral (frequency domain) response in one dimension. Cross-correlation of this ultrasound A-mode response with its transmitted signal results in time-localized peaks that correspond to scattering locations. The approach is particularly relevant to the problem of transcranial ultrasound imaging, as it combines numerous smaller signals into a single signal whose net power may exceed that which could be achieved using a single burst. Tests are performed with human skull fragments and nylon-wire targets embedded in a tissue phantom. Skulls are oriented to produce both lateral and shear modes of transcranial propagation. A total of nine locations distributed over three ex vivo human skull samples are studied. Compared with pulsed and chirped signals, results indicate more localized peaks when using the multi-cycle approach, with more accurate positioning when combined with the transcranial shear mode.
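The core peak-localization step, cross-correlating a multiple-cycle burst with the received A-mode trace, can be sketched as follows. The burst shape, sampling rate, echo amplitude, and lag are invented for illustration and do not reflect the paper's transcranial experiments.

```python
import math

def xcorr_delay(tx, rx):
    """Lag (in samples) at which the cross-correlation of the transmitted
    burst `tx` with the received trace `rx` is largest."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(tx) + 1):
        v = sum(t * rx[lag + k] for k, t in enumerate(tx))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

# A five-cycle sinusoidal burst (8 samples per cycle) ...
tx = [math.sin(2 * math.pi * k / 8) for k in range(40)]
# ... and a received trace containing one attenuated echo at sample 57.
rx = [0.0] * 200
for k, t in enumerate(tx):
    rx[57 + k] += 0.3 * t
lag = xcorr_delay(tx, rx)   # the correlation peak localizes the scatterer
```

Because the correlation accumulates energy over all cycles of the burst, the peak remains detectable even when each individual cycle is weak, which is the motivation the abstract gives for multiple-cycle transmission.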
Comparison of power spectra for tomosynthesis projections and reconstructed images
Engstrom, Emma; Reiser, Ingrid; Nishikawa, Robert
2009-05-15
Burgess et al. [Med. Phys. 28, 419-437 (2001)] showed that the power spectrum of mammographic breast background follows a power law and that lesion detectability is affected by the power-law exponent β which measures the amount of structure in the background. Following the study of Burgess et al., the authors measured and compared the power-law exponent of mammographic backgrounds in tomosynthesis projections and reconstructed slices to investigate the effect of tomosynthesis imaging on background structure. Our data set consisted of 55 patient cases. For each case, regions of interest (ROIs) were extracted from both projection images and reconstructed slices. The periodogram of each ROI was computed by taking the squared modulus of the Fourier transform of the ROI. The power-law exponent was determined for each periodogram and averaged across all ROIs extracted from all projections or reconstructed slices for each patient data set. For the projections, the mean β averaged across the 55 cases was 3.06 (standard deviation of 0.21), while it was 2.87 (0.24) for the corresponding reconstructions. The difference in β for a given patient between the projection ROIs and the reconstructed ROIs averaged across the 55 cases was 0.194, which was statistically significant (p<0.001). The 95% CI for the difference between the mean value of β for the projections and reconstructions was [0.170, 0.218]. The results are consistent with the observation that the amount of breast structure in the tomosynthesis slice is reduced compared to projection mammography and that this may lead to improved lesion detectability.
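Estimating the power-law exponent as the negative slope of the periodogram in log-log coordinates can be sketched as below. The synthetic spectrum is an assumption for illustration; the paper fits periodograms of mammographic ROIs.

```python
from math import log

def powerlaw_exponent(freqs, power):
    """Least-squares slope of log(power) vs log(freq), returned as beta
    such that power ~ freq**(-beta)."""
    xs = [log(f) for f in freqs]
    ys = [log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den

# Synthetic periodogram following an exact power law with beta = 3.
freqs = [0.1 * k for k in range(1, 65)]
power = [f ** -3.0 for f in freqs]
beta = powerlaw_exponent(freqs, power)
```

For real periodograms the fit is noisy, which is why the study averages the estimated exponent over many ROIs per patient before comparing projections and reconstructions.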
Missing data reconstruction using Gaussian mixture models for fingerprint images
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary
2016-05-01
Publisher's Note: This paper, originally published on 25 May 2016, was replaced with a revised version on 16 June 2016. One of the most important areas in biometrics is the matching of partial fingerprints against fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems that cope with missing fingerprint information. However, dependable reconstruction of fingerprint images remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The proposed fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, access control, and financial security, as well as the verification of firearm purchasers, driver license applicants, etc.
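The abstract does not spell out the reconstruction algorithm, but a standard way to use a Gaussian mixture model for missing-data reconstruction is to replace the missing entries with their conditional mean under the mixture, weighting each component's conditional Gaussian by its posterior responsibility given the observed entries. A hedged sketch on a hypothetical two-component model (the mixture parameters and the mask are invented for illustration, not taken from the paper):

```python
import numpy as np

def gmm_impute(weights, means, covs, x, miss):
    """Fill the missing entries of x (boolean mask `miss`) with the GMM
    conditional mean E[x_miss | x_obs], mixing the per-component
    conditional Gaussians by their posterior responsibilities."""
    obs = ~miss
    log_resp, cond_means = [], []
    for w, mu, S in zip(weights, means, covs):
        Soo = S[np.ix_(obs, obs)]
        Smo = S[np.ix_(miss, obs)]
        d = x[obs] - mu[obs]
        # Conditional mean of the missing block given the observed block.
        cond_means.append(mu[miss] + Smo @ np.linalg.solve(Soo, d))
        # Log responsibility from the observed-dimension marginal Gaussian.
        log_resp.append(np.log(w) - 0.5 * (d @ np.linalg.solve(Soo, d)
                                           + np.linalg.slogdet(Soo)[1]))
    r = np.exp(np.array(log_resp) - max(log_resp))
    r /= r.sum()
    out = x.copy()
    out[miss] = sum(rk * m for rk, m in zip(r, cond_means))
    return out

# Hypothetical two-component mixture in 3 dimensions with correlated covariance.
C = np.array([[0.09, 0.05, 0.05],
              [0.05, 0.09, 0.05],
              [0.05, 0.05, 0.09]])
weights = [0.5, 0.5]
means = [np.zeros(3), np.full(3, 3.0)]
covs = [C, C]
x = np.array([3.0, 3.1, 0.0])                 # third coordinate is missing
miss = np.array([False, False, True])
filled = gmm_impute(weights, means, covs, x, miss)
print(filled)   # observed entries keep [3.0, 3.1]; the gap lands near 3
```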
NASA Astrophysics Data System (ADS)
Sharp, J. H.; Barnard, J. S.; Kaneko, K.; Higashida, K.; Midgley, P. A.
2008-08-01
After previous work producing a successful 3D tomographic reconstruction of dislocations in GaN from conventional weak-beam dark-field (WBDF) images, we have reconstructed a cascade of dislocations in deformed and annealed silicon to a comparable standard using the more experimentally straightforward technique of STEM annular dark-field (STEM ADF) imaging. In this mode, image contrast was much more consistent over the specimen tilt range than in conventional WBDF imaging. Automatic acquisition software could thus restore the correct dislocation array to the field of view at each tilt angle, though manual focusing was still required. Reconstruction was carried out with the simultaneous iterative reconstruction technique (SIRT) using FEI's Inspect3D software. Dislocations were distributed non-uniformly along the cascades, with sparse areas between denser clumps in which individual dislocations of in-plane image width 24 nm could be distinguished in both the images and the reconstruction. Denser areas showed more complicated stacking-fault contrast, hampering tomographic reconstruction. The general three-dimensional form of the denser areas was nevertheless reproduced well, showing the dislocation array to be planar and not parallel to the foil surfaces.
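The SIRT-style update used by such tomography packages has a compact generic form: repeatedly backproject the scaled residual. The sketch below uses one standard (SART-type) row/column normalization on a toy system matrix; this is an illustrative choice, not the Inspect3D implementation.

```python
import numpy as np

def sirt(A, b, iters=2000):
    """SIRT-type iteration x <- x + C A^T R (b - A x), where R and C are the
    inverse row and column sums of A (a standard SART-type weighting)."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += C * (A.T @ (R * (b - A @ x)))       # backproject scaled residual
    return x

# Toy consistent system: nonnegative "projection" matrix and known object.
rng = np.random.default_rng(4)
A = rng.uniform(0.0, 1.0, (60, 15))
x_true = rng.uniform(0.0, 1.0, 15)
b = A @ x_true
x_hat = sirt(A, b)
rel_res = np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b)
print(rel_res)   # small for this consistent toy system
```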
[Design of the 2D-FFT image reconstruction software based on Matlab].
Xu, Hong-yu; Wang, Hong-zhi
2008-09-01
This paper presents a Matlab implementation of the 2D-FFT image reconstruction algorithm for magnetic resonance imaging, packaged as a universal COM component that the Windows system can identify. This makes it possible to separate the 2D-FFT reconstruction algorithm from the closed commercial magnetic resonance imaging system and to process the raw data before reconstruction, which is important for improving image quality, diagnostic value, and image post-processing.
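The core of the 2D-FFT reconstruction described here is a single inverse 2D Fourier transform of the acquired k-space data. A minimal numpy sketch of that step; the phantom and shift conventions are illustrative assumptions, not the paper's Matlab/COM implementation:

```python
import numpy as np

def reconstruct_2dfft(kspace):
    """Reconstruct an MR magnitude image from fully sampled Cartesian
    k-space via the inverse 2D FFT (the standard 2D-FFT reconstruction)."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)

# Round trip on a hypothetical phantom: forward-transform an image to
# k-space, then reconstruct it; the round trip should be exact.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = reconstruct_2dfft(kspace)
print(np.allclose(recon, phantom, atol=1e-10))   # prints True
```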
A dual oxygenation and fluorescence imaging platform for reconstructive surgery
NASA Astrophysics Data System (ADS)
Ashitate, Yoshitomo; Nguyen, John N.; Venugopal, Vivek; Stockdale, Alan; Neacsu, Florin; Kettenring, Frank; Lee, Bernard T.; Frangioni, John V.; Gioux, Sylvain
2013-03-01
There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity, and increased healthcare costs. Because near-infrared (NIR) optical imaging is safe, noncontact, inexpensive, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. These capabilities are well illustrated by the clinical translation of fluorescence imaging during oncologic surgery. In this work, we introduce a novel imaging platform that combines two complementary NIR optical modalities: oxygenation imaging and fluorescence imaging. We validated this platform during facial reconstructive surgery on large animals approaching the size of humans. We demonstrate that NIR fluorescence imaging identifies perforator arteries, assesses arterial perfusion, and can detect thrombosis, while oxygenation imaging permits passive monitoring of tissue vital status as well as simultaneous detection of vascular compromise and identification of its origin. Together, the two methods provide a comprehensive approach to identifying problems and intervening in real time during surgery, before irreparable damage occurs. Overall, this novel platform provides fully integrated and clinically friendly endogenous and exogenous NIR optical imaging for improved image-guided intervention during surgery.
Hoy, Christopher L; Durr, Nicholas J; Ben-Yakar, Adela
2011-06-01
We present a fast-updating Lissajous image reconstruction methodology that uses an increased image frame rate beyond the pattern repeat rate generally used in conventional Lissajous image reconstruction methods. The fast display rate provides increased dynamic information and reduced motion blur, as compared to conventional Lissajous reconstruction, at the cost of single-frame pixel density. Importantly, this method does not discard any information from the conventional Lissajous image reconstruction, and frames from the complete Lissajous pattern can be displayed simultaneously. We present the theoretical background for this image reconstruction methodology along with images and video taken using the algorithm in a custom-built miniaturized multiphoton microscopy system.
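A minimal sketch of the idea: samples acquired along a Lissajous trajectory are binned onto a pixel grid, and one conventional frame (a full pattern repeat) can instead be displayed as several faster sub-frames, each sparser in pixel density but jointly carrying exactly the same information. The scan frequencies, sample count, and grid size below are illustrative assumptions, not the authors' microscope parameters.

```python
import numpy as np

def lissajous_pixels(fx, fy, phase, n_samples, grid=64):
    """Pixel coordinates of samples along x = sin(2*pi*fx*t),
    y = sin(2*pi*fy*t + phase), over one pattern repeat (t in [0, 1))."""
    t = np.arange(n_samples) / n_samples
    x = np.sin(2 * np.pi * fx * t)
    y = np.sin(2 * np.pi * fy * t + phase)
    ix = np.clip(((x + 1) / 2 * grid).astype(int), 0, grid - 1)
    iy = np.clip(((y + 1) / 2 * grid).astype(int), 0, grid - 1)
    return ix, iy

def coverage(ix, iy, grid=64):
    """Fraction of grid pixels visited at least once."""
    hit = np.zeros((grid, grid), bool)
    hit[iy, ix] = True
    return hit.mean()

n_samples = 20000
ix, iy = lissajous_pixels(13, 11, np.pi / 2, n_samples)
full = coverage(ix, iy)                  # conventional full-repeat frame
q = n_samples // 4
subs = [coverage(ix[k * q:(k + 1) * q], iy[k * q:(k + 1) * q])
        for k in range(4)]               # four faster sub-frames
print(full, subs)   # each sub-frame is sparser than the full frame
```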
Utility of high-definition FDG-PET image reconstruction for lung cancer staging.
Ozawa, Yoshiyuki; Hara, Masaki; Shibamoto, Yuta; Tamaki, Tsuneo; Nishio, Masami; Omi, Kumiko
2013-10-01
High-definition (HD) positron emission tomography (PET) image reconstruction is a new image reconstruction method based on the point-spread-function system, which improves the spatial resolution of the images. The aim of this study was to compare the utility of HD reconstruction of PET images for staging lung cancer with that of conventional 2D ordered-subset expectation maximization + Fourier rebinning (2D) reconstruction. Thirty-five lung cancer patients (24 men, 11 women; median age, 66 years) who underwent surgery after 18F-fluorodeoxyglucose (FDG) PET-CT were studied. Their PET data were reconstructed with the 2D and HD PET reconstruction algorithms. Two radiologists independently TNM staged both sets of images. They also evaluated the quality of the images and the diagnostic confidence that the images afforded them using 5-point scales. T, N, and M stages were correctly diagnosed on both the 2D and HD reconstructed images in 23 (66%), 25 (71%), and 30 (86%) of 35 cases, respectively. Overall TNM stage was correctly diagnosed on both types of reconstructed images in 23 cases (66%), underestimated in three (9%), and overestimated in nine (26%). No significant difference in T, N, or M stage or overall TNM stage was observed between the two reconstruction methods. However, the HD reconstructed images afforded a significantly higher level of diagnostic confidence during TNM staging than the 2D reconstructed images and were also of higher quality. Thus, although HD reconstruction of FDG-PET images did not improve the diagnostic accuracy of lung cancer staging compared with 2D reconstruction, the quality of the HD reconstructed images and the diagnostic confidence they afforded the radiologists were higher than those of the conventional 2D reconstructed images.
Bayesian image reconstruction with space-variant noise suppression
NASA Astrophysics Data System (ADS)
Nunez, J.; Llacer, J.
1998-07-01
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope. The method can also be applied to ground-based images.
Parallel MR image reconstruction using augmented Lagrangian methods.
Ramani, Sathish; Fessler, Jeffrey A
2011-03-01
Magnetic resonance (MR) image reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MR image reconstruction from undersampled sensitivity encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., the l1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as the nonlinear conjugate gradient (NCG) method and the state-of-the-art MFISTA algorithm.
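The variable-splitting plus augmented-Lagrangian strategy described above is, in its simplest form, the ADMM iteration. The sketch below applies it to a small l1-regularized least-squares problem as a stand-in for the wavelet-sparsity criterion; the problem size, penalty parameter rho, and direct matrix inverse are illustrative choices that would not scale to a real SENSE reconstruction.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam, rho=1.0, iters=200):
    """Variable splitting + augmented Lagrangian (ADMM) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1, using the split z = x."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    Q = np.linalg.inv(AtA + rho * np.eye(n))   # small-scale demo only
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))          # quadratic subproblem
        z = soft(x + u, lam / rho)             # l1 proximal step
        u += x - z                             # scaled dual update
    return z

# Toy sparse-recovery problem with a known sparse ground truth.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = admm_l1(A, b, lam=0.1)
print(np.abs(x_hat - x_true).max())   # small recovery error
```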
Stokes image reconstruction for two-color microgrid polarization imaging systems.
Lemaster, Daniel A
2011-07-18
The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
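For reference, linear Stokes images are demodulated from a microgrid frame by differencing the four analyzer orientations within each superpixel. The 2x2 layout assumed below ([[0, 45], [135, 90]] degrees) and the uniform, fully polarized test scene are illustrative assumptions; this is not the AFRL camera's actual layout, and the paper's method exploits cross-color correlations rather than this simple demodulation.

```python
import numpy as np

def stokes_from_microgrid(img):
    """Extract linear Stokes images (S0, S1, S2) from a microgrid frame
    whose 2x2 superpixel holds analyzers at 0, 45, 90, 135 degrees
    (assumed layout: [[0, 45], [135, 90]])."""
    i0, i45 = img[0::2, 0::2], img[0::2, 1::2]
    i135, i90 = img[1::2, 0::2], img[1::2, 1::2]
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

# Uniform scene, fully linearly polarized at 30 degrees: a perfect analyzer
# at angle a measures 0.5 * (1 + cos(2*(a - theta))).
theta = np.deg2rad(30.0)
angles = np.deg2rad([[0.0, 45.0], [135.0, 90.0]])
frame = np.tile(0.5 * (1 + np.cos(2 * (angles - theta))), (8, 8))
s0, s1, s2 = stokes_from_microgrid(frame)
aolp = np.rad2deg(0.5 * np.arctan2(s2, s1))
print(np.allclose(aolp, 30.0))   # prints True: recovered angle is 30 deg
```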
Photogrammetric 3D Building Reconstruction from Thermal Images
NASA Astrophysics Data System (ADS)
Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.
2017-08-01
This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial computer vision software can be used to automatically orient sequences of TIR images taken from an unmanned aerial vehicle (UAV) and to generate 3D point clouds, without requiring GNSS/INS data on image position and attitude or camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information derived from TIR images. The process can be carried out entirely by the same software in a simple and efficient way.
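The ICP step used for the RGB/TIR fusion can be sketched in a few lines: alternate nearest-neighbour matching with the closed-form (SVD/Kabsch) rigid alignment of the current matches. The brute-force matching and the toy point clouds below are illustrative simplifications, not the paper's registration pipeline.

```python
import numpy as np

def nearest(cur, dst):
    """Brute-force nearest neighbour in dst for every point in cur."""
    d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def icp(src, dst, iters=30):
    """Rigid ICP: alternate nearest-neighbour matching with the closed-form
    (Kabsch/SVD) best rotation and translation for the current matches."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        matched = dst[nearest(cur, dst)]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate src -> dst motion
    return R, t

# Toy alignment: a small known rigid motion applied to a random cloud.
rng = np.random.default_rng(2)
dst = rng.standard_normal((200, 3))
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
src = dst @ R_true.T + np.array([0.05, -0.03, 0.02])
R, t = icp(src, dst)
err = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
print(err)   # small mean alignment error
```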
Assessment of the impact of modeling axial compression on PET image reconstruction.
Belzunce, Martin A; Reader, Andrew J
2017-07-06
To comprehensively evaluate both the acceleration and the image-quality impact of axial compression, and of its degree of modeling, in fully 3D PET image reconstruction. Although axial compression has been used since the very dawn of 3D PET reconstruction, there are still no extensive studies of its impact, and of the degree to which it is modeled during reconstruction, on the end-point reconstructed image quality. In this work, the impact of axial compression on image quality is evaluated by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first models the axial compression in the system matrix, while the second uses an unmatched projector/backprojector pair, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, axial compression with span values lower than 7 showed no decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed for a hot lesion. For higher span values, the spatial resolution was degraded considerably. Impressively, however, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial-resolution degradation and achieved contrast values similar to those of the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit a significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher
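Axial compression itself is straightforward to illustrate: oblique sinogram planes whose ring differences fall in the same segment are summed, trading axial sampling for storage. The toy grouping below (segment 0 holds |rd| <= span // 2; each further segment covers `span` consecutive ring differences) is a simplified sketch of michelogram span grouping, not the mMR's exact segment table.

```python
import numpy as np

def compress_span(sino, ring_diffs, span):
    """Axial (span) compression: sum sinogram planes whose ring difference
    rd falls in the same segment, with segment index k = (rd + span//2) // span
    (floor division handles negative rd). `sino` holds one plane per rd."""
    assert span % 2 == 1, "span is conventionally odd"
    half = span // 2
    segs = {}
    for plane, rd in zip(sino, ring_diffs):
        k = (rd + half) // span
        segs.setdefault(k, np.zeros_like(plane))
        segs[k] += plane            # sum planes within the segment
    return segs

# Toy scanner: 9 rings -> ring differences -8..8, compressed with span 3.
ring_diffs = np.arange(-8, 9)
sino = np.ones((17, 4, 4))          # one dummy 4x4 sinogram per ring difference
segs = compress_span(sino, ring_diffs, span=3)
print(sorted(segs))                 # 17 planes collapse into 7 segments
```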
NASA Technical Reports Server (NTRS)
Newman, Timothy; Santhanam, Naveen; Zhang, Huijuan; Gallagher, Dennis
2003-01-01
A new m