Science.gov

Sample records for accelerate image reconstruction

  1. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT), an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated from the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 s were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  3. Computational acceleration for MR image reconstruction in partially parallel imaging.

    PubMed

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and ℓ1 (TVL1) based image reconstruction, with application to partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires far fewer iterations and/or less computational cost than the recently developed operator splitting and Bregman operator splitting methods, which can handle a general sensing matrix in the reconstruction framework, while achieving similar or better reconstructed image quality. PMID:20833599
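
    The Barzilai-Borwein rule mentioned above is easy to illustrate on a plain least-squares fidelity term. The following is a minimal NumPy sketch, not the authors' TVL1 algorithm: the operator A, the data b, and the initial step size are placeholders, and the TV/ℓ1 terms and the variable splitting are omitted.

    ```python
    import numpy as np

    def bb_least_squares(A, b, n_iter=100, alpha0=1e-3):
        """Gradient descent on 0.5*||Ax - b||^2 with Barzilai-Borwein (BB1) step sizes."""
        x = np.zeros(A.shape[1])
        g = A.T @ (A @ x - b)                  # gradient at x
        alpha = alpha0                         # initial step size (assumed)
        for _ in range(n_iter):
            x_new = x - alpha * g
            g_new = A.T @ (A @ x_new - b)
            s, y = x_new - x, g_new - g        # iterate and gradient differences
            denom = s @ y
            if abs(denom) > 1e-12:             # BB1 step: <s, s> / <s, y>
                alpha = (s @ s) / denom
            x, g = x_new, g_new
        return x

    # toy usage with a random overdetermined system
    x_hat = bb_least_squares(np.random.randn(60, 30), np.random.randn(60))
    ```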

  4. An accelerated and convergent iterative algorithm in image reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Jianhua; Yu, Jun

    2007-05-01

    Positron emission tomography (PET) is becoming increasingly important in medicine and biology. The maximum-likelihood expectation-maximization (ML-EM) algorithm, which can incorporate various physical models into the image reconstruction scheme, is becoming more important than the filtered back-projection (FBP) algorithm. However, ML-EM converges slowly. In this paper, we propose a new algorithm named AC-ML-EM (accelerated and convergent maximum-likelihood expectation-maximization), obtained by introducing a gradually decreasing correction factor into ML-EM. AC-ML-EM has a higher speed of convergence. Through experiments with computer-simulated phantom data and real phantom data, AC-ML-EM is shown to be faster and quantitatively better than the conventional ML-EM algorithm.
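
    As a point of reference for the update being accelerated, the baseline ML-EM iteration fits in a few lines of NumPy. The gradually decreasing correction factor of AC-ML-EM is not specified in the abstract, so this sketch shows only the standard multiplicative update; the dense system matrix A and the count vector y are toy placeholders.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Standard ML-EM for emission tomography (dense toy system matrix A)."""
        x = np.ones(A.shape[1])                 # flat non-negative initial image
        sens = A.sum(axis=0) + eps              # sensitivity image, A^T 1
        for _ in range(n_iter):
            proj = A @ x + eps                  # forward projection of current image
            x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
        return x
    ```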

  5. Image oscillation reduction and convergence acceleration for OSEM reconstruction

    SciTech Connect

    Huang, S.C.

    1999-06-01

    The authors have investigated two approaches to reduce the image oscillation of OSEM reconstruction that is due to inconsistencies among different partial subsets of the projection measurements (sinogram) when considered as a group. One approach pre-processes the sinogram to make it satisfy a sinogram consistency condition. The second approach takes the average of the intermediate images (i.e., smooths image values over sub-iterations). Both approaches were found to be capable of reducing the image oscillation, and the combination of both was most effective. With these approaches, the convergence of OSEM reconstruction is further improved. For computer-simulated data and real PET data, a single iteration of the new OSEM reconstruction was shown to yield images comparable to those obtained with 80 EM iterations.

  6. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  7. Accelerating frequency-domain diffuse optical tomographic image reconstruction using graphics processing units.

    PubMed

    Prakash, Jaya; Chandrasekharan, Venkittarayan; Upendra, Vishwajith; Yalavarthy, Phaneendra K

    2010-01-01

    Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to be run in real time. Graphics processing units (GPUs) offer massive desktop parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into the existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to 40-fold using GPUs compared with traditional CPUs for three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs more attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction to no more than 13,377 optical parameters.
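
    The heavy kernels named above (matrix-matrix products and a linear solve) are what the GPU library offloads. Below is a hedged single-update sketch in NumPy with a Tikhonov-regularized normal-equation solve; the Jacobian, residual and regularization value are placeholders, and a NumPy-compatible GPU library such as CuPy could be substituted to run the same lines on the device.

    ```python
    import numpy as np  # a NumPy-compatible GPU array library could be swapped in here

    def dot_gauss_newton_update(J, residual, reg=1e-3):
        """One Tikhonov-regularized update for a DOT-style linear system.
        J: Jacobian (n_measurements x n_parameters); residual: measured - modeled data."""
        J32 = J.astype(np.float32)                     # single precision, as the study found sufficient
        r32 = residual.astype(np.float32)
        H = J32.T @ J32 + np.float32(reg) * np.eye(J32.shape[1], dtype=np.float32)
        return np.linalg.solve(H, J32.T @ r32)         # update to the optical parameters
    ```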

  8. New Image Reconstruction Methods for Accelerated Quantitative Parameter Mapping and Magnetic Resonance Angiography

    NASA Astrophysics Data System (ADS)

    Velikina, J. V.; Samsonov, A. A.

    2016-02-01

    Advanced MRI techniques often require sampling in additional (non-spatial) dimensions, such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods that reduce the amount of acquired data in such applications by using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27-fold) may be achieved using the proposed iterative reconstruction techniques.

  9. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  10. GPU based acceleration of 3D USCT image reconstruction with efficient integration into MATLAB

    NASA Astrophysics Data System (ADS)

    Kretzek, Ernst; Zapf, Michael; Birk, Matthias; Gemmeke, Hartmut; Ruiter, Nicole V.

    2013-03-01

    3D ultrasound computer tomography (3D USCT) promises reproducible high-resolution images for early detection of breast tumors. The synthetic aperture focusing technique (SAFT) used for image reconstruction is highly compute-intensive but well suited for accelerated execution on GPUs. In this paper we investigate how a previous implementation of the SAFT algorithm in CUDA C can be further accelerated and integrated into the existing MATLAB signal and image processing chain for 3D USCT. The focus is on efficient preprocessing and preparation of data blocks in MATLAB as well as improved utilisation of special hardware such as the texture fetching units on GPUs. For 64 slices of 1024×1024 pixels each, the overall runtime of the reconstruction, including data loading and preprocessing, was decreased from 35 hours on a CPU to 2.4 hours on eight GPUs.
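
    SAFT itself is a delay-and-sum over transducer pairs, which is why it maps so naturally onto GPU threads. The 2D NumPy sketch below is only illustrative: the transducer positions, sound speed and sampling rate are assumed values, and the real 3D USCT MATLAB/CUDA chain is far more elaborate.

    ```python
    import numpy as np

    def saft_slice(a_scans, emitters, receivers, grid_x, grid_z, c=1500.0, fs=10e6):
        """Delay-and-sum SAFT for one 2D slice.
        a_scans[k, t]: recorded signal of pair k; emitters/receivers: (K, 2) positions in metres."""
        X, Z = np.meshgrid(grid_x, grid_z)                      # pixel coordinates
        img = np.zeros(X.shape)
        n_t = a_scans.shape[1]
        for k in range(a_scans.shape[0]):
            d = (np.hypot(X - emitters[k, 0], Z - emitters[k, 1]) +
                 np.hypot(X - receivers[k, 0], Z - receivers[k, 1]))   # emitter->pixel->receiver path
            idx = np.clip(np.rint(d / c * fs).astype(int), 0, n_t - 1) # time-of-flight sample index
            img += a_scans[k, idx]                                     # coherent summation
        return img
    ```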

  11. Accelerated three-dimensional cine phase contrast imaging using randomly undersampled echo planar imaging with compressed sensing reconstruction.

    PubMed

    Basha, Tamer A; Akçakaya, Mehmet; Goddu, Beth; Berg, Sophie; Nezafat, Reza

    2015-01-01

    The aim of this study was to implement and evaluate an accelerated three-dimensional (3D) cine phase contrast MRI sequence by combining a randomly sampled 3D k-space acquisition with an echo planar imaging (EPI) readout. An accelerated 3D cine phase contrast MRI sequence was implemented by combining an EPI readout with randomly undersampled 3D k-space data suitable for compressed sensing (CS) reconstruction. The undersampled data were then reconstructed using low-dimensional structural self-learning and thresholding (LOST). 3D phase contrast MRI was acquired in 11 healthy adults using an overall acceleration of 7 (EPI factor of 3 and CS rate of 3). For comparison, a single two-dimensional (2D) cine phase contrast scan was also performed with sensitivity encoding (SENSE) rate 2, at approximately the level of the pulmonary artery bifurcation. The stroke volume and mean velocity in both the ascending and descending aorta were measured and compared between the two sequences using Bland-Altman plots. An average scan time of 3 min 30 s, corresponding to an acceleration rate of 7, was achieved for the 3D cine phase contrast scan with one-direction flow encoding, a voxel size of 2 × 2 × 3 mm³, foot-head coverage of 6 cm and a temporal resolution of 30 ms. The mean velocity and stroke volume in both the ascending and descending aorta were statistically equivalent between the proposed 3D sequence and the standard 2D cine phase contrast sequence. The combination of EPI with randomly undersampled 3D k-space sampling and LOST reconstruction allows a seven-fold reduction in the scan time of 3D cine phase contrast MRI without compromising blood flow quantification.
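
    A random phase-encode undersampling pattern of the kind that LOST/CS reconstruction relies on can be sketched as follows; the acceleration factor, the fully sampled centre size and the uniform random density are assumptions rather than the paper's exact sampling scheme.

    ```python
    import numpy as np

    def random_phase_encode_mask(n_ky, n_kz, accel=3.0, center=8, seed=0):
        """Randomly undersampled ky-kz mask with a fully sampled k-space centre."""
        rng = np.random.default_rng(seed)
        mask = rng.random((n_ky, n_kz)) < 1.0 / accel      # keep ~1/accel of the phase encodes
        cy, cz = n_ky // 2, n_kz // 2
        mask[cy - center // 2:cy + center // 2, cz - center // 2:cz + center // 2] = True
        return mask

    # e.g. a roughly 3x-undersampled 120 x 20 phase-encode grid
    print(random_phase_encode_mask(120, 20).mean())        # effective sampling fraction
    ```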

  12. Acceleration of iterative tomographic image reconstruction by reference-based back projection

    NASA Astrophysics Data System (ADS)

    Cheng, Chang-Chieh; Li, Ping-Hui; Ching, Yu-Tai

    2016-03-01

    The purpose of this paper is to design and implement an efficient iterative reconstruction algorithm for computed tomography. We accelerate the algebraic reconstruction technique (ART), an iterative reconstruction method, by using the result of filtered backprojection (FBP), a widely used analytical reconstruction algorithm, as the initial guess for the first iteration and as the reference for each back projection stage, respectively. Both improvements reduce the error between the forward projection of each iteration and the measurements. We use three quantitative measures, root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural content (SC), to show that our method can reduce the number of iterations by more than half and that the quality of the result is better than that of the original ART.
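
    The core idea, warm-starting an algebraic reconstruction with the FBP image, can be sketched with a plain Kaczmarz-style ART sweep. The dense system matrix, relaxation factor and FBP initial guess below are placeholders, and the paper's reference-based back-projection step is not reproduced.

    ```python
    import numpy as np

    def art_from_fbp(A, b, x_fbp, n_iter=10, relax=0.5):
        """Kaczmarz-style ART initialized with an FBP reconstruction."""
        x = x_fbp.astype(float).copy()              # FBP result as the initial guess
        row_norm2 = (A * A).sum(axis=1) + 1e-12     # squared norms of the projection rows
        for _ in range(n_iter):
            for i in range(A.shape[0]):             # sweep over all measured rays
                resid = b[i] - A[i] @ x
                x += relax * (resid / row_norm2[i]) * A[i]
        return x
    ```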

  13. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVIDIA GeForce 8800 GTX and in ~2 ms using an NVIDIA GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
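
    At its core, DRR generation is a set of independent line integrals through the CT volume, which is what makes one-thread-per-ray GPU kernels so effective. The NumPy sketch below uses nearest-neighbour sampling and assumed units (millimetres, per-axis voxel spacing); it is a toy stand-in rather than the approximations studied in the paper.

    ```python
    import numpy as np

    def render_drr(volume, spacing, ray_origins, ray_dirs, n_samples=256, step=1.0):
        """Sum CT intensities along each ray (nearest-neighbour sampling).
        volume: (nx, ny, nz) HU values; spacing: (3,) voxel size in mm;
        ray_origins, ray_dirs: (R, 3) in mm, directions assumed normalized."""
        t = np.arange(n_samples) * step                                   # sample depths along each ray
        pts = ray_origins[:, None, :] + t[None, :, None] * ray_dirs[:, None, :]
        idx = np.rint(pts / np.asarray(spacing)).astype(int)              # voxel index per sample
        shape = np.array(volume.shape)
        inside = np.all((idx >= 0) & (idx < shape), axis=-1)              # samples inside the volume
        idx_c = np.clip(idx, 0, shape - 1)                                # clip so indexing stays legal
        vals = volume[idx_c[..., 0], idx_c[..., 1], idx_c[..., 2]]
        return np.where(inside, vals, 0.0).sum(axis=1)                    # one line integral per ray
    ```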

  14. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the solution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first on synthetic data and then on real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared with a single iteration of the FEM-based algorithm.

  15. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
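
    The payoff of the factorization is that one huge projector becomes three sparse matrix-vector products. A minimal SciPy sketch of the factored forward and back projectors is shown below; the three matrices are random stand-ins, and the estimation of the blurring factors from the original system matrix is not shown.

    ```python
    import numpy as np
    from scipy import sparse

    def forward_project(B_sino, G, B_img, x):
        """Factored forward projection: P x ≈ B_sino (G (B_img x))."""
        return B_sino @ (G @ (B_img @ x))

    def back_project(B_sino, G, B_img, y):
        """Matched factored back projection: P^T y ≈ B_img^T (G^T (B_sino^T y))."""
        return B_img.T @ (G.T @ (B_sino.T @ y))

    # toy sparse stand-ins: n_bins sinogram bins, n_vox image voxels
    n_bins, n_vox = 2000, 1024
    B_sino = sparse.random(n_bins, n_bins, density=0.002, format='csr')
    G      = sparse.random(n_bins, n_vox, density=0.01, format='csr')
    B_img  = sparse.random(n_vox, n_vox, density=0.002, format='csr')
    y = forward_project(B_sino, G, B_img, np.ones(n_vox))
    ```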

  16. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly

  17. Fast and efficient fully 3D PET image reconstruction using sparse system matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-10-01

    Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirements. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processing unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometric projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far fewer nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.

  18. Accelerating the reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning using CUDA.

    PubMed

    Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao

    2014-01-01

    An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain the reconstruction quality. Three-dimensional dictionary learning trains the atoms of the dictionary on blocks of data, which exploits the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first use NVIDIA's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations are in the dictionary learning and image updating parts, such as the orthogonal matching pursuit (OMP) and k-singular value decomposition (K-SVD) algorithms. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324× is achieved compared with the CPU-only code when the number of MRI slices is 24.
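
    Of the two kernels named above, OMP is the easier one to show compactly. This is a generic NumPy version of orthogonal matching pursuit for a single signal block; the dictionary is an arbitrary placeholder, and the CUDA implementation in the paper instead batches many blocks in parallel.

    ```python
    import numpy as np

    def omp(D, y, sparsity):
        """Orthogonal matching pursuit: approximate y with `sparsity` atoms of dictionary D."""
        residual = y.copy()
        support, coeffs = [], np.zeros(D.shape[1])
        for _ in range(sparsity):
            k = int(np.argmax(np.abs(D.T @ residual)))     # atom most correlated with the residual
            if k not in support:
                support.append(k)
            sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # re-fit on the current support
            residual = y - D[:, support] @ sol
        coeffs[support] = sol
        return coeffs
    ```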

  19. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    NASA Astrophysics Data System (ADS)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages for x-ray computed tomography (CT), e.g. lower radiation dose, but their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU clusters, GPUs, FPGAs, etc.) into the scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders of magnitude more cost-effective: users pay only a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage, etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at very low cost, but communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
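
    The parallelization pattern described here, splitting the projector over chunks of projection rows and recombining the partial results, can be imitated on a single machine with Python's multiprocessing. The sketch only mimics the map and reduce phases; it says nothing about the actual MapReduce/EC2 setup used in the paper.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def _project_block(args):
        """'Map' task: forward-project one block of projection rows."""
        A_block, x = args
        return A_block @ x

    def parallel_forward_projection(A_blocks, x, workers=4):
        """Split the forward projector over row blocks and 'reduce' by concatenation."""
        with Pool(workers) as pool:
            parts = pool.map(_project_block, [(A_b, x) for A_b in A_blocks])
        return np.concatenate(parts)

    if __name__ == "__main__":
        A = np.random.randn(400, 100)
        x = np.random.randn(100)
        blocks = np.array_split(A, 4)                     # four "map" chunks of rows
        assert np.allclose(parallel_forward_projection(blocks, x), A @ x)
    ```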

  1. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation.

    PubMed

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-01

    An FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.

  2. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of ℝ^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.

  3. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.

  4. Accelerated image reconstruction in fluorescence molecular tomography using a nonuniform updating scheme with momentum and ordered subsets methods

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Li, Changqing

    2016-01-01

    Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm combined with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors, and the speed gain from a larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve an optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.

  5. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, a fast and high-quality reconstruction algorithm needs to be developed. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor, and we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method accelerates conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
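
    A hedged sketch of the acceleration idea, raising the multiplicative ordered-subsets update to a power greater than one, is given below. It is not the paper's exact update and omits the TV minimization step; the system matrix, subsets and power value are placeholders.

    ```python
    import numpy as np

    def os_power_em(A, y, subsets, n_iter=5, power=2.0, eps=1e-12):
        """Ordered-subsets EM with a power factor applied to the multiplicative ratio."""
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for rows in subsets:                                   # e.g. interleaved lists of row indices
                As = A[rows]
                ratio = (As.T @ (y[rows] / (As @ x + eps))) / (As.sum(axis=0) + eps)
                x *= ratio ** power                                # power > 1 takes a larger step
        return x
    ```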

  7. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods

    NASA Astrophysics Data System (ADS)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-01

    As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast O(1/k²) convergence rate. In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
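
    The O(1/k²) machinery referred to above is the standard FISTA recursion. The sketch below applies it to an ℓ1-regularized least-squares toy problem rather than the paper's Fourier-weighted fidelity with TV regularization, so the proximal step is a simple soft-threshold and every operator is a placeholder.

    ```python
    import numpy as np

    def fista_l1(A, b, lam=0.1, n_iter=200):
        """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (toy stand-in for the CT problem)."""
        L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(n_iter):
            g = A.T @ (A @ z - b)                         # gradient step at the extrapolated point
            u = z - g / L
            x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft-threshold (prox of l1)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)               # Nesterov momentum
            x, t = x_new, t_new
        return x
    ```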

  8. Overview of Image Reconstruction

    SciTech Connect

    Marr, R. B.

    1980-04-01

    Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on ℝ^n is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references. (RWR)

  9. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data from projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporarily appearing artifacts are reduced with a bilateral filter, and new projection values are calculated, which are used later for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  10. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased-array interferometer with multiple antennas distributed across Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Accounting for the instrumental direction-dependent effects (DDE) of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results: We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) it performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) it provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved, realistic structures of extended sources (in simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  11. Exercises in PET Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Nix, Oliver

    These exercises are complementary to the theoretical lectures on positron emission tomography (PET) image reconstruction. They aim at providing some hands-on experience in PET image reconstruction and focus on demonstrating the different data preprocessing steps and reconstruction algorithms needed to obtain high-quality PET images. Normalisation and geometric, attenuation and scatter corrections are introduced. To explain the necessity of these, some basics about PET scanner hardware, data acquisition and organisation are reviewed. During the course the students use a software application based on the STIR (software for tomographic image reconstruction) library [1,2], which allows them to dynamically select or deselect corrections and reconstruction methods as well as to modify their most important parameters. Following the guided tutorial, the students get an impression of the effect the individual data precorrections have on image quality and of what happens if they are forgotten. Several data sets in sinogram format are provided, such as line source data, Jaszczak phantom data sets with high and low statistics, and NEMA whole-body phantom data. The two most frequently used reconstruction algorithms in PET image reconstruction, filtered back projection (FBP) and the iterative OSEM (ordered subsets expectation maximization) approach, are used to reconstruct images. The exercises should help the students gain an understanding of the reasons for inferior image quality and artefacts, and of how to improve quality by a clever choice of reconstruction parameters.
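
    For readers without access to the STIR-based course software, a minimal sinogram-and-FBP round trip on a toy phantom can be reproduced with scikit-image (assumed to be installed); none of the corrections discussed in the exercises (normalisation, attenuation, scatter) are modelled here.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    # toy rectangular "phantom" inside the reconstruction circle
    phantom = np.zeros((128, 128))
    phantom[40:90, 50:80] = 1.0

    theta = np.linspace(0.0, 180.0, 120, endpoint=False)   # projection angles in degrees
    sinogram = radon(phantom, theta=theta)                 # forward projection
    recon = iradon(sinogram, theta=theta)                  # filtered back projection (default ramp filter)
    print(recon.shape)
    ```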

  12. Crystallographic image reconstruction problem

    NASA Astrophysics Data System (ADS)

    ten Eyck, Lynn F.

    1993-11-01

    The crystallographic X-ray diffraction experiment gives the amplitudes of the Fourier series expansion of the electron density distribution within the crystal. The 'phase problem' in crystallography is the determination of the phase angles of the Fourier coefficients required to calculate the Fourier synthesis and reveal the molecular structure. The magnitude of this task varies enormously as the size of the structures ranges from a few atoms to thousands of atoms, and the number of Fourier coefficients ranges from hundreds to hundreds of thousands. The issue is further complicated for large structures by limited resolution. This problem is solved for 'small' molecules (up to 200 atoms and a few thousand Fourier coefficients) by methods based on probabilistic models which depend on atomic resolution. These methods generally fail for larger structures such as proteins. The phase problem for protein molecules is generally solved either by laborious experimental methods or by exploiting known similarities to solved structures. Various direct methods have been attempted for very large structures over the past 15 years, with gradually improving results -- but so far no complete success. This paper reviews the features of the crystallographic image reconstruction problem which render it recalcitrant, and describes recent encouraging progress in the application of maximum entropy methods to this problem.

  13. Combined Parallel and Partial Fourier MR Reconstruction for Accelerated 8-Channel Hyperpolarized Carbon-13 In Vivo Magnetic Resonance Spectroscopic Imaging (MRSI)

    PubMed Central

    Ohliger, Michael A.; Larson, Peder E.Z.; Bok, Robert A.; Shin, Peter; Hu, Simon; Tropp, James; Robb, Fraser; Carvajal, Lucas; Nelson, Sarah J.; Kurhanewicz, John; Vigneron, Daniel B.

    2013-01-01

    Purpose To implement and evaluate combined parallel magnetic resonance imaging (MRI) and partial Fourier acquisition and reconstruction for rapid hyperpolarized carbon-13 (13C) spectroscopic imaging. Short acquisition times mitigate hyperpolarized signal losses that occur due to T1 decay, metabolism, and radiofrequency (RF) saturation. Human applications additionally require rapid imaging to permit breath-holding and to minimize the effects of physiologic motion. Materials and Methods Numerical simulations were employed to validate and characterize the reconstruction. In vivo MR spectroscopic images were obtained from a rat following injection of hyperpolarized 13C pyruvate using an 8-channel array of carbon-tuned receive elements. Results For small spectroscopic matrix sizes, combined parallel imaging and partial Fourier undersampling resulted primarily in decreased spatial resolution, with relatively little visible spatial aliasing. Parallel reconstruction qualitatively restored lost image detail, although some pixel spectra had persistent numerical error. With this technique, a 30 × 10 × 16 matrix of 4800 3D MR spectroscopic imaging voxels covering a whole rat at isotropic 8 mm³ resolution was acquired within 11 seconds. Conclusion Parallel MRI and partial Fourier acquisitions can provide the shorter imaging times and wider spatial coverage that will be necessary as hyperpolarized 13C techniques move toward human clinical applications. PMID:23293097

  14. An accelerated ordered subsets reconstruction algorithm using an accelerating power factor for emission tomography

    NASA Astrophysics Data System (ADS)

    Hsiao, Ing-Tsung; Huang, Hsuan-Ming

    2010-02-01

    We proposed a speed-enhanced tomographic reconstruction algorithm, ACOSEM (accelerated complete-data ordered-subset expectation-maximization), to accelerate a convergent OS-type algorithm (COSEM). The ACOSEM algorithm is based on modifying the COSEM update by applying an accelerating power factor, i.e., a bigger step size. Unlike the limited enhancement offered by another speed-enhanced algorithm (E-COSEM), the proposed ACOSEM with an appropriate power factor can lead to a much faster reconstruction speed than COSEM and E-COSEM. Similar to COSEM, ACOSEM needs no free user-entered relaxation parameter to ensure convergent performance, as required in other fast convergent algorithms such as RAMLA. We derived the ACOSEM algorithm and compared its performance to those of other fast convergent algorithms, including COSEM, E-COSEM and RAMLA with optimized relaxation parameters. Though not convergent, OSEM is the current state-of-the-art reconstruction method used in the clinic and was therefore included in the noise performance comparison. The results showed that ACOSEM reached the same image quality as COSEM but two times faster when a power factor of 2.0 was used. An upper limit of 5 for the power factor with monotonically increasing log-likelihood in ACOSEM was observed under various conditions. Although a convergence proof is not yet available, the log-likelihood values and noise studies showed that ACOSEM performed consistently with COSEM but at an accelerated reconstruction speed.

  15. Algebraic Reconstruction Technique (ART) for parallel imaging reconstruction of undersampled radial data: Application to cardiac cine

    PubMed Central

    Li, Shu; Chan, Cheong; Stockmann, Jason P.; Tagare, Hemant; Adluru, Ganesh; Tam, Leo K.; Galiana, Gigi; Constable, R. Todd; Kozerke, Sebastian; Peters, Dana C.

    2014-01-01

    Purpose To investigate the algebraic reconstruction technique (ART) for parallel imaging reconstruction of radial data, applied to accelerated cardiac cine. Methods A GPU-accelerated ART reconstruction was implemented and applied to simulations, point spread functions (PSF) and twelve subjects imaged with radial cardiac cine acquisitions. Cine images were reconstructed with radial ART at multiple undersampling levels (Nr = 192, Np = 96 down to 16). Images were qualitatively and quantitatively analyzed for sharpness and artifacts, and compared to filtered back-projection (FBP) and conjugate gradient SENSE (CG SENSE). Results Radial ART provided reduced artifacts and mainly preserved spatial resolution, for both simulations and in vivo data. Artifacts were qualitatively and quantitatively less with ART than with FBP using 48, 32, and 24 Np, although FBP provided quantitatively sharper images at undersampling levels of 48-24 Np (all p<0.05). Use of undersampled radial data for generating auto-calibrated coil-sensitivity profiles resulted in slightly reduced quality. ART was comparable to CG SENSE. GPU acceleration increased ART reconstruction speed 15-fold, with little impact on the images. Conclusion GPU-accelerated ART is an alternative approach to image reconstruction for parallel radial MR imaging, providing reduced artifacts while mainly maintaining sharpness compared to FBP, as shown by its first application in cardiac studies. PMID:24753213

  16. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  17. Noise and resolution of Bayesian reconstruction for multiple image configurations

    SciTech Connect

    Chinn, G.; Huang, Sung Cheng

    1993-12-01

    Images reconstructed by Bayesian and maximum-likelihood (ML) methods using a Gibbs prior with prior weight β were compared with images produced by filtered back projection (FBP) from sinogram data simulated with different counts and image configurations. Bayesian images were generated by the OSL algorithm accelerated by an over-relaxation parameter. For relatively low β, Bayesian images can yield an overall improvement over ML reconstruction. However, for larger β, Bayesian images degrade from the standpoint of noise and quantitation. Compared to FBP, the ML images were superior in a mean-square-error sense in regions of low activity and for small structures. At a noise level comparable to FBP, Bayesian reconstruction can be used to effectively recover higher-resolution images. The overall performance depends on the image structure and the weight of the Bayesian prior.

  18. Imaging using accelerated heavy ions

    SciTech Connect

    Chu, W.T.

    1982-05-01

    Several methods for imaging using accelerated heavy-ion beams are being investigated at Lawrence Berkeley Laboratory. Using the HILAC (Heavy-Ion Linear Accelerator) as an injector, the Bevalac can accelerate fully stripped atomic nuclei from carbon (Z = 6) to krypton (Z = 36), and partly stripped ions up to uranium (Z = 92). Radiographic studies to date have been conducted with helium (from the 184-inch cyclotron), carbon, oxygen, and neon beams. Useful ranges in tissue of 40 cm or more are available. To investigate the potential of heavy-ion projection radiography and computed tomography (CT), several methods and instruments have been studied.

  19. Maximum entropy image reconstruction from projections

    NASA Astrophysics Data System (ADS)

    Bara, N.; Murata, K.

    1981-07-01

    The maximum entropy method is applied to image reconstruction from projections in which the angular view is restricted. Relaxation parameters are introduced into the maximum entropy reconstruction, and after iteration median filtering is applied. These procedures improve the quality of the image reconstructed from noisy projections.

  20. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time-consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  1. Accelerating Dynamic Cardiac MR Imaging Using Structured Sparse Representation

    PubMed Central

    Cai, Nian; Wang, Shengru; Zhu, Shasha

    2013-01-01

    Compressed sensing (CS) has produced promising results in dynamic cardiac MR imaging by exploiting the sparsity of the image series. In this paper, we propose a new method to improve the CS reconstruction for dynamic cardiac MRI based on the theory of structured sparse representation. The proposed method uses PCA sub-dictionaries for adaptive sparse representation and suppresses the sparse coding noise to obtain good reconstructions. An accelerated iterative shrinkage algorithm is used to solve the optimization problem and achieve a fast convergence rate. Experimental results demonstrate that the proposed method improves the reconstruction quality of dynamic cardiac cine MRI over the state-of-the-art CS method. PMID:24454528

  2. Method for positron emission mammography image reconstruction

    DOEpatents

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data either from a data file or in real time from a pair of detector heads, culling event data that fall outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection or by iterative image reconstruction. In the backprojection mode, rays are traced between the centers of lines of response (LORs); counts are then allocated either by nearest-pixel interpolation or by an overlap method, corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to perform Siddon ray tracing on a grid and to compute maximum likelihood expectation maximization (MLEM) by either: (a) tracing parallel rays between subpixels on opposite detector heads; or (b) tracing rays between randomized endpoint locations on opposite detector heads.
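
    For reference, the MLEM option mentioned in the patent abstract has the standard multiplicative form sketched below; this is a generic version for a precomputed system matrix (for example one built by tracing rays between detector pixel pairs), not the patented implementation itself.

      import numpy as np

      def mlem(A, y, n_iter=30):
          """Generic MLEM for emission data modelled as y ~ Poisson(A x).

          A : (n_lors, n_pixels) system matrix, assumed given
          y : (n_lors,) coincidence counts
          """
          x = np.ones(A.shape[1])
          sens = A.sum(axis=0)                       # per-pixel sensitivity
          for _ in range(n_iter):
              expected = A @ x                       # forward projection
              ratio = y / np.maximum(expected, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x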

  3. Reconstructing HST Images of Asteroids

    NASA Astrophysics Data System (ADS)

    Storrs, A. D.; Bank, S.; Gerhardt, H.; Makhoul, K.

    2003-12-01

    We present reconstructions of images of 22 large main belt asteroids that were observed by the Hubble Space Telescope with the Wide-Field/Planetary cameras. All images were restored with the MISTRAL program (Mugnier, Fusco, and Conan 2003) at enhanced spatial resolution. This is possible thanks to the well-studied and stable point spread function (PSF) of HST. We present some modeling of this process and determine that the Strehl ratio for WF/PC (aberrated) images can be improved to 130 ratio of 80. We will report sizes, shapes, and albedos for these objects, as well as any surface features. Images taken with the WFPC-2 instrument were made in a variety of filters so that it should be possible to investigate changes in mineralogy across the surface of the larger asteroids in a manner similar to that done on 4 Vesta by Binzel et al. (1997). Of particular interest are a possible water of hydration feature on 1 Ceres, and the non-observation of a constriction or gap between the components of 216 Kleopatra. Reduction of these data was aided by grant HST-GO-08583.08A from the Space Telescope Science Institute. References: Mugnier, L.M., Fusco, T., and Conan, J.-M., 2003, JOSA A (submitted); Binzel, R.P., Gaffey, M.J., Thomas, P.C., Zellner, B.H., Storrs, A.D., and Wells, E.N., 1997, Icarus 128, 95-103.

  4. Image reconstruction via truncated lambda tomography

    NASA Astrophysics Data System (ADS)

    Yu, Hengyong; Ye, Yangbo; Wang, Ge

    2006-08-01

    This paper investigates the feasibility of reconstructing a computed tomography (CT) image from truncated Lambda Tomography (LT), a gradient-like image of its original. An LT image can be regarded as a convolution of the object image and the point spread function (PSF) of the Calderon operator. The infinite support of the PSF gives the LT image infinite support, even though the original CT image has compact support. When the support of a truncated LT image fully covers the compact support of the corresponding CT image, we develop an extrapolation method to recover the CT image more precisely. When the support of the CT image fully covers the support of the truncated LT image, we design a template-based scheme to compensate for the cupping effects and reconstruct a satisfactory image. Our algorithms are evaluated in numerical simulations and the results demonstrate the feasibility of our methods. Our approaches provide a new way to reconstruct high-quality CT images.

  5. Accelerated median root prior reconstruction for pinhole single-photon emission tomography (SPET)

    NASA Astrophysics Data System (ADS)

    Sohlberg, Antti; Ruotsalainen, Ulla; Watabe, Hiroshi; Iida, Hidehiro; Kuikka, Jyrki T.

    2003-07-01

    Pinhole collimation can be used to improve spatial resolution in SPET. However, the resolution improvement is achieved at the cost of reduced sensitivity, which leads to projection images with poor statistics. Images reconstructed from these projections using the maximum likelihood expectation maximization (ML-EM) algorithm, which has been used to reduce the artefacts generated by filtered backprojection (FBP) based reconstruction, suffer from a noise/bias trade-off: noise contaminates the images at high iteration numbers, whereas early termination of the algorithm produces images that are excessively smooth and biased towards the initial estimate of the algorithm. To limit the noise accumulation we propose the use of the pinhole median root prior (PH-MRP) reconstruction algorithm. MRP is a Bayesian reconstruction method that has already been used in PET imaging and shown to possess good noise reduction and edge preservation properties. In this study the PH-MRP algorithm was accelerated with the ordered subsets (OS) procedure and compared to the FBP, OS-EM and conventional Bayesian reconstruction methods in terms of noise reduction, quantitative accuracy, edge preservation and visual quality. The results showed that the accelerated PH-MRP algorithm was very robust. It provided visually pleasing images with a lower noise level than FBP or OS-EM and with smaller bias and sharper edges than the conventional Bayesian methods.
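
    The median root prior correction has a particularly compact form. The sketch below follows the general MRP formulation from the literature rather than the authors' pinhole-specific code; the EM estimate, prior weight beta and neighbourhood size are placeholders.

      import numpy as np
      from scipy.ndimage import median_filter

      def mrp_correction(x_em, beta, size=3):
          """Apply the median root prior penalty to an EM (or OS-EM) estimate.

          x_em : current EM/OS-EM image estimate (2D or 3D array)
          beta : prior weight controlling the strength of the penalty
          size : edge length of the neighbourhood used for the local median
          """
          med = median_filter(x_em, size=size)
          med = np.maximum(med, 1e-12)               # avoid division by zero
          # voxels equal to their local median are left unchanged; locally noisy
          # voxels are pulled toward the median, while step edges survive because
          # an edge is a root of the median filter
          return x_em / (1.0 + beta * (x_em - med) / med)

    In an OS-EM loop this correction would typically be applied after each subset update or each full iteration.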

  6. Heuristic reconstructions of neutron penumbral images

    SciTech Connect

    Nozaki, Shinya; Chen Yenwei

    2004-10-01

    Penumbral imaging is a coded aperture imaging technique proposed for imaging highly penetrating radiation. To date, the penumbral imaging technique has been successfully applied to neutron imaging in laser fusion experiments. Since the reconstruction of penumbral images is based on linear deconvolution methods, such as the inverse filter and the Wiener filter, the point spread function of the aperture must be space invariant; the reconstruction is also sensitive to the noise contained in penumbral images. In this article, we propose a new heuristic reconstruction method for neutron penumbral imaging which can be used for a space-variant imaging system and is also very tolerant to noise.

  7. Structured image reconstruction for three-dimensional ghost imaging lidar.

    PubMed

    Yu, Hong; Li, Enrong; Gong, Wenlin; Han, Shensheng

    2015-06-01

    A structured image reconstruction method has been proposed to obtain high quality images in three-dimensional ghost imaging lidar. By considering the spatial structure relationship between recovered images of scene slices at different longitudinal distances, an orthogonality constraint has been incorporated to reconstruct the three-dimensional scenes in remote sensing. Numerical simulations have been performed to demonstrate that scene slices with various sparse ratios can be recovered more accurately by applying the orthogonality constraint, and the enhancement is significant especially for ghost imaging with fewer measurements. A simulated three-dimensional city scene has been successfully reconstructed by using structured image reconstruction in three-dimensional ghost imaging lidar. PMID:26072814

  8. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets (POCS) approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous wave terahertz sources.
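
    Of the four approaches listed, the iterative backprojection idea is the simplest to sketch. The toy version below assumes integer shifts between the low-resolution frames and a box-car downsampling model, which only approximates the actual THz acquisition; all function and variable names are illustrative.

      import numpy as np

      def downsample(img, f):
          """Box-car average over f x f blocks (toy LR observation model)."""
          h, w = img.shape
          return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

      def ibp_super_resolution(lr_frames, shifts, factor, n_iter=30, step=1.0):
          """Iterative backprojection from several shifted low-resolution frames.

          lr_frames : list of (h, w) low-resolution images
          shifts    : list of (dy, dx) integer shifts of each frame on the HR grid
          factor    : integer upsampling factor
          """
          hr = np.kron(np.mean(lr_frames, axis=0), np.ones((factor, factor)))  # initial guess
          for _ in range(n_iter):
              for lr, (dy, dx) in zip(lr_frames, shifts):
                  # simulate the LR observation from the current HR estimate
                  simulated = downsample(np.roll(hr, (-dy, -dx), axis=(0, 1)), factor)
                  error = lr - simulated
                  # backproject the residual onto the HR grid at the frame's shift
                  up = np.kron(error, np.ones((factor, factor))) / len(lr_frames)
                  hr += step * np.roll(up, (dy, dx), axis=(0, 1))
          return hr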

  9. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  10. Optimal reconstruction of images from localized phase.

    PubMed

    Urieli, S; Porat, M; Cohen, N

    1998-01-01

    The importance of localized phase in signal representation is investigated. The convergence rate of the POCS algorithm (projection onto convex sets) used for image reconstruction from spectral phase is defined and analyzed, and the characteristics of images optimally reconstructed from phase-only information are presented. It is concluded that images of geometric form are most efficiently reconstructed from their spectral phase, whereas images of symmetric form have the poorest convergence characteristics. The transition between the two extremes is shown to be continuous. The results provide a new approach and analysis of the previously reported advantages of the localized phase representation over the global approach, and suggest possible compression schemes.

  11. "Keyhole" method for accelerating imaging of contrast agent uptake.

    PubMed

    van Vaals, J J; Brummer, M E; Dixon, W T; Tuithof, H H; Engels, H; Nelson, R C; Gerety, B M; Chezmar, J L; den Boer, J A

    1993-01-01

    Magnetic resonance (MR) imaging methods with good spatial and contrast resolution are often too slow to follow the uptake of contrast agents with the desired temporal resolution. Imaging can be accelerated by skipping the acquisition of data normally taken with strong phase-encoding gradients, restricting acquisition to weak-gradient data only. If the usual procedure of substituting zeroes for the missing data is followed, blurring results. Substituting instead reference data taken before or well after contrast agent injection reduces this problem. Volunteer and patient images obtained by using such reference data show that imaging can be usefully accelerated severalfold. Cortical and medullary regions of interest and whole kidney regions were studied, and both gradient- and spin-echo images are shown. The method is believed to be compatible with other acceleration methods such as half-Fourier reconstruction and reading of more than one line of k space per excitation.
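
    A minimal sketch of the keyhole substitution in k-space is given below, assuming Cartesian data with phase encoding along the first array axis and a centred k-space; the array names are illustrative.

      import numpy as np

      def keyhole_reconstruction(k_reference, k_dynamic_center, n_center):
          """Combine a full reference k-space with dynamically acquired centre lines.

          k_reference      : (n_pe, n_ro) fully sampled reference k-space
          k_dynamic_center : (n_center, n_ro) weak-gradient (central) lines acquired
                             during contrast agent uptake
          n_center         : number of central phase-encoding lines re-acquired
          """
          n_pe = k_reference.shape[0]
          start = (n_pe - n_center) // 2
          k_combined = k_reference.copy()
          # replace only the central, low-spatial-frequency lines; the periphery
          # keeps the reference data instead of zeros, which reduces blurring
          k_combined[start:start + n_center] = k_dynamic_center
          # inverse FFT of the centred k-space gives the complex dynamic image
          return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k_combined)))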

  12. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm. PMID:23846466
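
    The second technique (pseudoinverse with Tikhonov regularization against a dictionary) can be written in a few lines; in the sketch below the dictionary D, the q-space sampling indices and the regularization weight are assumed inputs rather than values from the paper.

      import numpy as np

      def tikhonov_dictionary_recon(D, sample_idx, q_samples, lam=0.1):
          """Recover a full diffusion pdf/signal from undersampled q-space data.

          D          : (n_q_full, n_atoms) dictionary of training pdfs/signals
          sample_idx : indices of the acquired q-space locations
          q_samples  : (n_q_acquired,) measured signal at those locations
          lam        : Tikhonov regularization weight
          """
          D_u = D[sample_idx, :]                    # dictionary restricted to samples
          gram = D_u.T @ D_u + lam * np.eye(D.shape[1])
          coeffs = np.linalg.solve(gram, D_u.T @ q_samples)
          return D @ coeffs                         # synthesize the full pdf/signal

    Because the regularized inverse depends only on the sampling pattern, it can be precomputed once and applied voxel by voxel, which is what makes this kind of reconstruction much faster than iterative CS solvers.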

  13. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum number of measurements which are lower than those in cases without using the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  14. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum number of measurements which are lower than those in cases without using the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  15. Spectral image reconstruction through the PCA transform

    NASA Astrophysics Data System (ADS)

    Ma, Long; Qiu, Xuewei; Cong, Yangming

    2015-12-01

    Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform in spectral reflectance image reconstruction. The performance is evaluated by the mean, median and standard deviation of color difference values. The mean, median and standard deviation of the root mean square errors and of the goodness-of-fit coefficient (GFC) between the reconstructed and the actual spectral images were also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images show the performance of the suggested method.
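
    A bare-bones sketch of PCA-based reflectance reconstruction from camera responses follows; the training reflectances, the camera sensitivity matrix and the number of retained components are assumed inputs rather than values from the paper.

      import numpy as np

      def train_pca_basis(training_reflectances, n_components=6):
          """training_reflectances : (n_samples, n_wavelengths) measured spectra."""
          mean = training_reflectances.mean(axis=0)
          _, _, vt = np.linalg.svd(training_reflectances - mean, full_matrices=False)
          return mean, vt[:n_components].T          # (n_wavelengths, n_components)

      def reconstruct_reflectance(camera_signal, sensitivities, mean, basis):
          """camera_signal : (n_channels,) response of e.g. a six-channel camera.
          sensitivities   : (n_channels, n_wavelengths) spectral sensitivities.
          """
          # camera model c = S r; with r = mean + B a, solve for the PCA weights a
          M = sensitivities @ basis
          a, *_ = np.linalg.lstsq(M, camera_signal - sensitivities @ mean, rcond=None)
          return mean + basis @ a                   # estimated reflectance spectrum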

  16. Image reconstruction for robot assisted ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Aalamifar, Fereshteh; Zhang, Haichong K.; Rahmim, Arman; Boctor, Emad M.

    2016-04-01

    An investigation of several image reconstruction methods for robot-assisted ultrasound (US) tomography setup is presented. In the robot-assisted setup, an expert moves the US probe to the location of interest, and a robotic arm automatically aligns another US probe with it. The two aligned probes can then transmit and receive US signals which are subsequently used for tomographic reconstruction. This study focuses on reconstruction of the speed of sound. In various simulation evaluations as well as in an experiment with a millimeter-range inaccuracy, we demonstrate that the limited data provided by two probes can be used to reconstruct pixel-wise images differentiating between media with different speeds of sound. Combining the results of this investigation with the developed robot-assisted US tomography setup, we envision feasibility of this setup for tomographic imaging in applications beyond breast imaging, with potentially significant efficacy in cancer diagnosis.

  17. Image Reconstruction for Prostate Specific Nuclear Medicine imagers

    SciTech Connect

    Mark Smith

    2007-01-11

    There is increasing interest in the design and construction of nuclear medicine detectors for dedicated prostate imaging. These include detectors designed for imaging the biodistribution of radiopharmaceuticals labeled with single gamma as well as positron-emitting radionuclides. New detectors and acquisition geometries present challenges and opportunities for image reconstruction. In this contribution various strategies for image reconstruction for these special purpose imagers are reviewed. Iterative statistical algorithms provide a framework for reconstructing prostate images from a wide variety of detectors and acquisition geometries for PET and SPECT. The key to their success is modeling the physics of photon transport and data acquisition and the Poisson statistics of nuclear decay. Analytic image reconstruction methods can be fast and are useful for favorable acquisition geometries. Future perspectives on algorithm development and data analysis for prostate imaging are presented.

  18. Hardware-accelerated cone-beam reconstruction on a mobile C-arm

    NASA Astrophysics Data System (ADS)

    Churchill, Michael; Pope, Gordon; Penman, Jeffrey; Riabkov, Dmitry; Xue, Xinwei; Cheryauka, Arvi

    2007-03-01

    The three-dimensional image reconstruction process used in interventional CT imaging is computationally demanding. Implementation on general-purpose computational platforms requires a substantial time, which is undesirable during time-critical surgical and minimally invasive procedures. Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) have been studied as platforms to accelerate 3-D imaging. FPGA and GPU devices offer a reprogrammable hardware architecture, configurable for pipelining and high levels of parallel processing to increase computational throughput, as well as the benefits of being off-the-shelf and effective 'performance-to-watt' solutions. The main focus of this paper is on the backprojection step of the image reconstruction process, since it is the most computationally intensive part. Using the popular Feldkamp-Davis-Kress (FDK) cone-beam algorithm, our studies indicate the entire 256³ image reconstruction process can be accelerated to real or near real-time (i.e. immediately after a finished scan of 15-30 seconds duration) on a mobile X-ray C-arm system using available resources on the built-in FPGA board. High resolution 512³ image backprojection can also be accomplished within the same scanning time on a high-end GPU board comprising up to 128 streaming processors.
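
    The backprojection inner loop that such systems map onto FPGA or GPU hardware can be illustrated with a slow, CPU-only, voxel-driven sketch for a circular cone-beam (FDK-type) geometry; the geometry parameters are placeholders and nearest-neighbour interpolation is used purely for brevity.

      import numpy as np

      def fdk_backproject(projections, angles, vol_shape, voxel_size, det_spacing, dso):
          """Voxel-driven backprojection of ramp-filtered cone-beam projections.

          projections : (n_views, n_rows, n_cols) filtered projections on a
                        virtual flat detector placed through the isocentre
          angles      : (n_views,) source angles in radians (circular trajectory)
          dso         : source-to-isocentre distance
          """
          nx, ny, nz = vol_shape
          vol = np.zeros(vol_shape)
          xs = (np.arange(nx) - nx / 2 + 0.5) * voxel_size
          ys = (np.arange(ny) - ny / 2 + 0.5) * voxel_size
          zs = (np.arange(nz) - nz / 2 + 0.5) * voxel_size
          X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
          n_rows, n_cols = projections.shape[1:]
          for proj, beta in zip(projections, angles):
              s = X * np.cos(beta) + Y * np.sin(beta)       # coordinate toward source
              t = -X * np.sin(beta) + Y * np.cos(beta)
              mag = dso / (dso - s)                         # perspective magnification
              u = t * mag / det_spacing + n_cols / 2        # detector column
              v = Z * mag / det_spacing + n_rows / 2        # detector row
              iu = np.clip(np.rint(u).astype(int), 0, n_cols - 1)
              iv = np.clip(np.rint(v).astype(int), 0, n_rows - 1)
              vol += mag ** 2 * proj[iv, iu]                # FDK distance weighting
          return vol * (np.pi / len(angles))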

  19. Bayesian image reconstruction: Application to emission tomography

    SciTech Connect

    Nunez, J.; Llacer, J.

    1989-02-01

    In this paper we propose a maximum a posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with pre-Bayesian reconstructions. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the maximum likelihood estimate method. We have successfully applied the new method to the case of emission tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.

  20. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  1. Weighted iterative reconstruction for magnetic particle imaging

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Rahmer, J.; Sattel, T. F.; Biederer, S.; Weizenecker, J.; Gleich, B.; Borgert, J.; Buzug, T. M.

    2010-03-01

    Magnetic particle imaging (MPI) is a new imaging technique capable of imaging the distribution of superparamagnetic particles at high spatial and temporal resolution. For the reconstruction of the particle distribution, a system of linear equations has to be solved. The mathematical solution to this linear system can be obtained using a least-squares approach. In this paper, it is shown that the quality of the least-squares solution can be improved by incorporating a weighting matrix using the reciprocal of the matrix-row energy as weights. A further benefit of this weighting is that iterative algorithms, such as the conjugate gradient method, converge rapidly yielding the same image quality as obtained by singular value decomposition in only a few iterations. Thus, the weighting strategy in combination with the conjugate gradient method improves the image quality and substantially shortens the reconstruction time. The performance of weighting strategy and reconstruction algorithms is assessed with experimental data of a 2D MPI scanner.
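
    The row-energy weighting described above amounts to a weighted least-squares problem; the sketch below forms its normal equations and hands them to a conjugate gradient solver, with the measured system matrix S and measurement vector u assumed given.

      import numpy as np
      from scipy.sparse.linalg import cg, LinearOperator

      def weighted_cg_recon(S, u, n_iter=10):
          """Weighted least squares for MPI: minimize || W^(1/2) (S c - u) ||^2.

          S : (n_freq_components, n_positions) measured system matrix
          u : (n_freq_components,) measured frequency components
          Weights are the reciprocal row energies, as proposed in the record above.
          """
          w = 1.0 / np.maximum(np.sum(np.abs(S) ** 2, axis=1), 1e-20)   # 1 / ||row||^2
          n = S.shape[1]

          def normal_op(c):
              return S.conj().T @ (w * (S @ c))      # apply S^H W S without forming it

          A = LinearOperator((n, n), matvec=normal_op, dtype=S.dtype)
          rhs = S.conj().T @ (w * u)
          c, _ = cg(A, rhs, maxiter=n_iter)
          return c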

  2. Evolutionary approach to image reconstruction from projections

    NASA Astrophysics Data System (ADS)

    Nakao, Zensho; Ali, Fathelalem F.; Takashibu, Midori; Chen, Yen-Wei

    1997-10-01

    We present an evolutionary approach for reconstructing CT images; the algorithm reconstructs two-dimensional unknown images from four one-dimensional projections. A genetic algorithm works on a randomly generated population of strings, each of which contains an encoding of an image. Traditional, as well as new, genetic operators are applied in each generation. The mean square error between the projection data of the image encoded in a string and the original projection data is used to estimate the string fitness. A Laplacian constraint term is included in the fitness function of the genetic algorithm for handling smooth images. Two new modified versions of the original genetic algorithm are presented. Results obtained by the original algorithm and the modified versions are compared to those obtained by the well-known algebraic reconstruction technique (ART), and it was found that the evolutionary method is more effective than ART in the particular case of limiting the projection directions to four.

  3. Improving Tritium Exposure Reconstructions Using Accelerator Mass Spectrometry

    SciTech Connect

    Love, A; Hunt, J; Knezovich, J

    2003-06-01

    Exposure reconstructions for radionuclides are inherently difficult. As a result, most reconstructions are based primarily on mathematical models of environmental fate and transport. These models can have large uncertainties, as important site-specific information is unknown, missing, or crudely estimated. Alternatively, surrogate environmental measurements of exposure can be used for site-specific reconstructions. In cases where environmental transport processes are complex, well-chosen environmental surrogates can have smaller exposure uncertainty than mathematical models. Because existing methodologies have significant limitations, the development or improvement of methodologies for reconstructing exposure from environmental measurements would provide important additional tools in assessing the health effects of chronic exposure. As an example, the direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples, which permit greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Tritium AMS was previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases.

  4. Multiresolution reconstruction method to optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Patrickeyev, Igor; Oraevsky, Alexander A.

    2003-06-01

    A new method for the reconstruction of optoacoustic images is proposed. The method incorporates multiresolution wavelet filtering into a spherical back-projection algorithm. According to our method, each optoacoustic signal detected with an array of ultrawide-band transducers is decomposed into a set of self-similar wavelets with different resolution (characteristic frequency) and then back-projected along the spherical traces for each resolution scale separately. The advantage of this approach is that one can reconstruct objects of a preferred size or a range of sizes. The sum of all images reconstructed at different resolutions yields an image that visualizes small and large objects at once. The speed of the proposed algorithm is of the same order as that of algorithms based on the fast Fourier transform (FFT). The accuracy of the proposed method is illustrated by images reconstructed from simulated optoacoustic signals as well as from signals measured with the Laser Optoacoustic Imaging System (LOIS) from a loop of blood vessel embedded in a gel phantom. The method can be used for contrast-enhanced optoacoustic imaging at depth in tissue, i.e. for medical applications such as breast cancer or prostate cancer detection.

  5. Accelerated motion corrected three‐dimensional abdominal MRI using total variation regularized SENSE reconstruction

    PubMed Central

    Atkinson, David; Buerger, Christian; Schaeffter, Tobias; Prieto, Claudia

    2015-01-01

    Purpose Develop a nonrigid motion corrected reconstruction for highly accelerated free‐breathing three‐dimensional (3D) abdominal images without external sensors or additional scans. Methods The proposed method accelerates the acquisition by undersampling and performs motion correction directly in the reconstruction using a general matrix description of the acquisition. Data are acquired using a self‐gated 3D golden radial phase encoding trajectory, enabling a two stage reconstruction to estimate and then correct motion of the same data. In the first stage total variation regularized iterative SENSE is used to reconstruct highly undersampled respiratory resolved images. A nonrigid registration of these images is performed to estimate the complex motion in the abdomen. In the second stage, the estimated motion fields are incorporated in a general matrix reconstruction, which uses total variation regularization and incorporates k‐space data from multiple respiratory positions. The proposed approach was tested on nine healthy volunteers and compared against a standard gated reconstruction using measures of liver sharpness, gradient entropy, visual assessment of image sharpness and overall image quality by two experts. Results The proposed method achieves similar quality to the gated reconstruction with nonsignificant differences for liver sharpness (1.18 and 1.00, respectively), gradient entropy (1.00 and 1.00), visual score of image sharpness (2.22 and 2.44), and visual rank of image quality (3.33 and 3.39). An average reduction of the acquisition time from 102 s to 39 s could be achieved with the proposed method. Conclusion In vivo results demonstrate the feasibility of the proposed method showing similar image quality to the standard gated reconstruction while using data corresponding to a significantly reduced acquisition time.

  6. Heuristic optimization in penumbral image for high resolution reconstructed image

    SciTech Connect

    Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.

    2010-10-15

    Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector can be determined mathematically from the aperture size, the object size, and the magnification. Conventional reconstruction methods are very sensitive to noise. On the other hand, the heuristic reconstruction method is very tolerant of noise. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose an optimization of the aperture size for neutron penumbral imaging.

  7. Similarity-regulation of OS-EM for accelerated SPECT reconstruction

    NASA Astrophysics Data System (ADS)

    Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.

    2016-06-01

    Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
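
    For context, a plain OS-EM loop (without the count- or similarity-based subset regulation proposed here) can be sketched as follows, with the system matrix and projection data assumed given.

      import numpy as np

      def os_em(A, y, n_subsets=8, n_iter=5):
          """Ordered-subsets EM for y ~ Poisson(A x).

          A : (n_bins, n_voxels) system matrix; rows are grouped into subsets
          y : (n_bins,) measured projection counts
          The speed-up over ML-EM is roughly the number of subsets, which is why
          adapting the subset number per voxel (as in CR-/SR-OS-EM) matters.
          """
          x = np.ones(A.shape[1])
          subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
          for _ in range(n_iter):
              for idx in subsets:
                  A_s, y_s = A[idx], y[idx]
                  expected = A_s @ x
                  ratio = y_s / np.maximum(expected, 1e-12)
                  x *= (A_s.T @ ratio) / np.maximum(A_s.sum(axis=0), 1e-12)
          return x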

  8. BIOFILM IMAGE RECONSTRUCTION FOR ASSESSING STRUCTURAL PARAMETERS

    PubMed Central

    Renslow, Ryan; Lewandowski, Zbigniew; Beyenal, Haluk

    2011-01-01

    The structure of biofilms can be numerically quantified from microscopy images using structural parameters. These parameters are used in biofilm image analysis to compare biofilms, to monitor temporal variation in biofilm structure, to quantify the effects of antibiotics on biofilm structure and to determine the effects of environmental conditions on biofilm structure. It is often hypothesized that biofilms with similar structural parameter values will have similar structures; however, this hypothesis has never been tested. The main goal was to test the hypothesis that the commonly used structural parameters can characterize the differences or similarities between biofilm structures. To achieve this goal 1) biofilm image reconstruction was developed as a new tool for assessing structural parameters, 2) independent reconstructions using the same starting structural parameters were tested to see how they differed from each other, 3) the effect of the original image parameter values on reconstruction success was evaluated and 4) the effect of the number and type of the parameters on reconstruction success was evaluated. It was found that two biofilms characterized by identical commonly used structural parameter values may look different, that the number and size of clusters in the original biofilm image affect image reconstruction success and that, in general, a small set of arbitrarily selected parameters may not reveal relevant differences between biofilm structures. PMID:21280029

  9. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle θ0. For field angles larger than θ0, the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement in the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
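
    Of the two recovery methods, the Lucy-Richardson iteration is easy to sketch; applying it block by block with the locally predicted PSF, as described above, then amounts to looping the routine over image tiles. The psf_for_block callable is a hypothetical stand-in for the paper's field-angle-dependent PSF model.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(blurred, psf, n_iter=30):
          """Lucy-Richardson deconvolution of one image block with a local PSF."""
          psf = psf / psf.sum()
          psf_flipped = psf[::-1, ::-1]
          estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
          for _ in range(n_iter):
              reblurred = fftconvolve(estimate, psf, mode="same")
              ratio = blurred / np.maximum(reblurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_flipped, mode="same")
          return estimate

      def anisoplanatic_deconvolve(image, psf_for_block, block=64):
          """Block-processing sketch: deconvolve each tile with the PSF predicted
          for its field angle, then reassemble (overlap handling omitted)."""
          out = np.zeros_like(image, dtype=float)
          for i in range(0, image.shape[0], block):
              for j in range(0, image.shape[1], block):
                  tile = image[i:i + block, j:j + block]
                  out[i:i + block, j:j + block] = richardson_lucy(tile, psf_for_block(i, j))
          return out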

  10. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2011-10-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least squares data fitting, total variation (TV) and L1 norm regularization. This formulation has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542

  11. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2010-01-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least squares data fitting, total variation (TV) and L1 norm regularization. This formulation has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224

  12. Wave-CAIPI for Highly Accelerated 3D Imaging

    PubMed Central

    Bilgic, Berkin; Gagoski, Borjan A.; Cauley, Stephen F.; Fan, Audrey P.; Polimeni, Jonathan R.; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To introduce the Wave-CAIPI (Controlled Aliasing in Parallel Imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. Methods The Wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line, while modifying the 3D phase encoding strategy to incur inter-slice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high quality image reconstruction with Wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and Quantitative Susceptibility Mapping (QSM). Results Wave-CAIPI enables full-brain gradient echo (GRE) acquisition at 1 mm isotropic voxel size and R=3×3 acceleration with maximum g-factors of 1.08 at 3T, and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies 2D-CAIPI and Bunched Phase Encoding, Wave-CAIPI yields up to 2-fold reduction in maximum g-factor for 9-fold acceleration at both field strengths. Conclusion Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. PMID:24986223

  13. Edge-Preserving PET Image Reconstruction Using Trust Optimization Transfer

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization. The most commonly used quadratic penalty often over-smoothes sharp edges and fine features in reconstructed images, while non-quadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or non-smooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees the algorithm to descend monotonically; Optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than the current state-of-art algorithms for the non-smooth ℓ1 regularization. PMID:25438302

  14. Geometric reconstruction using tracked ultrasound strain imaging

    NASA Astrophysics Data System (ADS)

    Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.

    2013-03-01

    The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and also of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and then tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were approximately 1.5 to 2.4 cm³, similar to the CT volumes of 1.0 to 2.3 cm³. Future work will be done to robustly characterize the reconstruction accuracy of the system.

  15. PET Image Reconstruction Using Kernel Method

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
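
    The kernelized EM update keeps the familiar EM form but estimates kernel coefficients instead of voxel values. A compact sketch, assuming a precomputed system matrix P and kernel matrix K, is:

      import numpy as np

      def kernel_em(P, K, y, n_iter=50):
          """Kernelized EM: image x = K @ alpha, data y ~ Poisson(P @ K @ alpha).

          P : (n_bins, n_voxels) PET system matrix
          K : (n_voxels, n_voxels) kernel matrix built from prior features
              (e.g. composite frames of a dynamic scan or an anatomical image)
          """
          alpha = np.ones(K.shape[1])
          PK = P @ K                                  # combined forward model
          sens = PK.sum(axis=0)
          for _ in range(n_iter):
              expected = PK @ alpha
              ratio = y / np.maximum(expected, 1e-12)
              alpha *= (PK.T @ ratio) / np.maximum(sens, 1e-12)
          return K @ alpha                            # final image estimate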

  16. Image reconstruction by regularized nonlinear inversion: joint estimation of coil sensitivities and image content.

    PubMed

    Uecker, Martin; Hohage, Thorsten; Block, Kai Tobias; Frahm, Jens

    2008-09-01

    The use of parallel imaging for scan time reduction in MRI faces problems with image degradation when using GRAPPA or SENSE for high acceleration factors. Although an inherent loss of SNR in parallel MRI is inevitable due to the reduced measurement time, the sensitivity to image artifacts that result from severe undersampling can be ameliorated by alternative reconstruction methods. While the introduction of GRAPPA and SENSE extended MRI reconstructions from a simple unitary transformation (Fourier transform) to the inversion of an ill-conditioned linear system, the next logical step is the use of a nonlinear inversion. Here, a respective algorithm based on a Newton-type method with appropriate regularization terms is demonstrated to improve the performance of autocalibrating parallel MRI, mainly due to a better estimation of the coil sensitivity profiles. The approach yields images with considerably reduced artifacts for high acceleration factors and/or a low number of reference lines.

  17. Image reconstruction algorithms with wavelet filtering for optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gawali, S.; Leggio, L.; Broadway, C.; González, P.; Sánchez, M.; Rodríguez, S.; Lamela, H.

    2016-03-01

    Optoacoustic imaging (OAI) is a hybrid biomedical imaging modality based on the generation and detection of ultrasound by illuminating the target tissue with laser light. Typically, laser light in the visible or near infrared spectrum is used as the excitation source. OAI relies on image reconstruction algorithms that recover the spatial distribution of optical absorption in tissues. In this work, we apply a time-domain back-projection (BP) reconstruction algorithm and wavelet filtering for point and line detection, respectively. A comparative study between point detection and integrated line detection has been carried out by evaluating their effects on the reconstructed image. Our results demonstrate that the proposed back-projection algorithm is efficient for reconstructing high-resolution images of absorbing spheres embedded in a non-absorbing medium when it is combined with the wavelet filtering.

  18. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.

    PubMed

    Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2013-04-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical.

  19. Multi-contrast magnetic resonance image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Meng; Chen, Yunmei; Zhang, Hao; Huang, Feng

    2015-03-01

    In clinical exams, multi-contrast images from conventional MRI are scanned with the same field of view (FOV) for complementary diagnostic information, such as proton density- (PD-), T1- and T2-weighted images. Their shared information can be utilized for more robust and accurate image reconstruction. In this work, we propose a novel model and an efficient algorithm for joint image reconstruction and coil sensitivity estimation in multi-contrast partially parallel imaging (PPI) in MRI. Our algorithm restores the multi-contrast images by minimizing an energy function consisting of an L2-norm fidelity term that reduces reconstruction errors caused by motion, a regularization term on the underlying images that preserves common anatomical features through a vectorial total variation (VTV) regularizer, and a Tikhonov smoothness term that updates the sensitivity maps in accordance with their physical properties. We present numerical results including T1- and T2-weighted MR images recovered from partially scanned k-space data and provide comparisons between our results and those obtained from related existing works. Our numerical results indicate that the proposed method, using vectorial TV and penalties on the sensitivities, is promising for multi-contrast multi-channel MR image reconstruction.

  20. CUDA accelerated uniform re-sampling for non-Cartesian MR reconstruction.

    PubMed

    Feng, Chaolu; Zhao, Dazhe

    2015-01-01

    A grid-driven gridding (GDG) method is proposed to uniformly re-sample non-Cartesian raw data acquired in PROPELLER, in which a trajectory window for each Cartesian grid point is first computed. The intensity of the reconstructed image at this grid point is the weighted average of the raw data in this window. Taking advantage of the single-instruction multiple-data (SIMD) property of the proposed GDG, a CUDA-accelerated method is then proposed to improve its performance. Two groups of raw data sampled by PROPELLER at two resolutions are reconstructed by the proposed method. To balance the computation resources of the GPU and obtain the best performance improvement, four thread-block strategies are adopted. Experimental results demonstrate that although the proposed GDG is more time consuming than traditional data-driven gridding (DDG), the CUDA-accelerated GDG is almost 10 times faster than traditional DDG. PMID:26406102

  1. Improving tritium exposure reconstructions using accelerator mass spectrometry

    PubMed Central

    Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.

    2010-01-01

    Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274

  2. Image reconstructions with the rotating RF coil

    NASA Astrophysics Data System (ADS)

    Trakic, A.; Wang, H.; Weber, E.; Li, B. K.; Poole, M.; Liu, F.; Crozier, S.

    2009-12-01

    Recent studies have shown that rotating a single RF transceive coil (RRFC) provides a uniform coverage of the object and brings a number of hardware advantages (i.e. requires only one RF channel, averts coil-coil coupling interactions and facilitates large-scale multi-nuclear imaging). Motion of the RF coil sensitivity profile however violates the standard Fourier Transform definition of a time-invariant signal, and the images reconstructed in this conventional manner can be degraded by ghosting artifacts. To overcome this problem, this paper presents Time Division Multiplexed — Sensitivity Encoding (TDM-SENSE), as a new image reconstruction scheme that exploits the rotation of the RF coil sensitivity profile to facilitate ghost-free image reconstructions and reductions in image acquisition time. A transceive RRFC system for head imaging at 2 Tesla was constructed and applied in a number of in vivo experiments. In this initial study, alias-free head images were obtained in half the usual scan time. It is hoped that new sequences and methods will be developed by taking advantage of coil motion.

  3. Iterative image reconstruction in spectral CT

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Michel, Eric; Kim, Hye S.; Kim, Jae G.; Han, Byung H.; Cho, Min H.; Lee, Soo Y.

    2012-03-01

    The scan time of spectral CT is much longer than that of conventional CT due to the limited number of x-ray photons detectable by photon-counting detectors. However, the spectral pixel information in spectral CT carries much richer information on the physiological and pathological status of the tissues than the CT number in conventional CT, which makes spectral CT one of the promising future imaging modalities. One simple way to reduce the scan time in spectral CT imaging is to reduce the number of views in the acquisition of projection data. However, this may result in poorer SNR and strong streak artifacts which can severely compromise the image quality. In this work, spectral CT projection data were obtained from a lab-built spectral CT consisting of a single CdTe photon counting detector, a micro-focus x-ray tube and scan mechanics. For the image reconstruction, we used two iterative image reconstruction methods, the simultaneous iterative reconstruction technique (SIRT) and total variation minimization based on the conjugate gradient method (CG-TV), along with filtered back-projection (FBP), to compare the image quality. From the imaging of iodine-containing phantoms, we have observed that SIRT and CG-TV are superior to FBP in terms of SNR and streak artifacts.
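
    Of the iterative methods compared, SIRT has the simplest update rule; a generic sketch with an assumed system matrix and sinogram is:

      import numpy as np

      def sirt(A, b, n_iter=100):
          """Simultaneous iterative reconstruction technique (SIRT).

          A : (n_rays, n_pixels) system matrix for the sparse-view scan
          b : (n_rays,) measured projection data
          Update: x <- x + C A^T R (b - A x), where R and C hold the inverse row
          and column sums of A.
          """
          row_sums = np.maximum(A.sum(axis=1), 1e-12)     # diagonal of R^-1
          col_sums = np.maximum(A.sum(axis=0), 1e-12)     # diagonal of C^-1
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              residual = (b - A @ x) / row_sums
              x += (A.T @ residual) / col_sums
              x = np.maximum(x, 0.0)                      # non-negativity constraint
          return x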

  4. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

    PubMed

    Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong

    2016-01-01

    Compressed sensing has shown great capacity for accelerating magnetic resonance imaging when an image can be sparsely represented, and how the image is sparsified seriously affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. The l1-norm regularized formulation of the problem is solved with an alternating-direction minimization with continuation algorithm, and the experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
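
    For orientation, the sketch below shows the generic l1-regularised CS-MRI iteration that such methods build on, with an iterative soft-thresholding (ISTA) solver and an orthonormal DCT as a stand-in sparsifier; the paper's graph-based redundant wavelet transform and its alternating-direction solver are not reproduced, and the mask and threshold are illustrative.

    ```python
    # ISTA sketch: min_x 0.5||M F x - y||^2 + lam ||Psi x||_1 (Psi = orthonormal DCT).
    import numpy as np
    from scipy.fft import fft2, ifft2, dctn, idctn

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista_csmri(y, mask, lam=0.01, n_iters=100):
        x = np.real(ifft2(y, norm="ortho"))            # zero-filled initial image
        for _ in range(n_iters):
            # gradient of the data-fidelity term (orthonormal FFT => Lipschitz constant 1)
            resid = mask * fft2(x, norm="ortho") - y
            x = x - np.real(ifft2(mask * resid, norm="ortho"))
            # proximal step: soft-threshold the transform coefficients
            x = idctn(soft(dctn(x, norm="ortho"), lam), norm="ortho")
        return x

    # Toy usage: random 25% k-space mask on a piecewise-constant phantom.
    rng = np.random.default_rng(1)
    img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0; img[24:40, 24:40] = 0.5
    mask = rng.random((64, 64)) < 0.25
    y = mask * fft2(img, norm="ortho")
    rec = ista_csmri(y, mask)
    ```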

  5. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
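
    A compact sketch of the OSEM update used after Fourier rebinning, written with a generic dense system matrix as a stand-in for the factorized detector-blurring model in ASPIRE; subset count, iteration count and the toy data are illustrative assumptions.

    ```python
    # Ordered-subsets expectation-maximisation for y ~ Poisson(A x).
    import numpy as np

    def osem(A, y, n_subsets=4, n_iters=5, eps=1e-12):
        m, n = A.shape
        x = np.ones(n)
        subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iters):
            for idx in subsets:
                As, ys = A[idx], y[idx]
                sens = As.sum(axis=0)                 # subset sensitivity image
                ratio = ys / np.maximum(As @ x, eps)  # measured / estimated projections
                x = x * (As.T @ ratio) / np.maximum(sens, eps)
        return x

    # Toy usage with Poisson-distributed counts.
    rng = np.random.default_rng(2)
    A = rng.random((400, 100))
    x_true = rng.random(100) * 10
    y = rng.poisson(A @ x_true).astype(float)
    x_rec = osem(A, y)
    ```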

  6. Stochastic image reconstruction for a dual-particle imaging system

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.

    2016-02-01

    Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.

  7. Integrated Image Reconstruction and Gradient Nonlinearity Correction

    PubMed Central

    Tao, Shengzhen; Trzasko, Joshua D.; Shu, Yunhong; Huston, John; Bernstein, Matt A.

    2014-01-01

    Purpose To describe a model-based reconstruction strategy for routine magnetic resonance imaging (MRI) that accounts for gradient nonlinearity (GNL) during rather than after transformation to the image domain, and demonstrate that this approach reduces the spatial resolution loss that occurs during strictly image-domain GNL-correction. Methods After reviewing conventional GNL-correction methods, we propose a generic signal model for GNL-affected MRI acquisitions, discuss how it incorporates into contemporary image reconstruction platforms, and describe efficient non-uniform fast Fourier transform (NUFFT)-based computational routines for these. The impact of GNL-correction on spatial resolution by the conventional and proposed approaches is investigated on phantom data acquired at varying offsets from gradient isocenter, as well as on fully-sampled and (retrospectively) undersampled in vivo acquisitions. Results Phantom results demonstrate that resolution loss that occurs during GNL-correction is significantly less for the proposed strategy than for the standard approach at distances >10 cm from isocenter with a 35 cm FOV gradient coil. The in vivo results suggest that the proposed strategy better preserves fine anatomical detail than retrospective GNL-correction while offering comparable geometric correction. Conclusion Accounting for GNL during image reconstruction allows geometric distortion to be corrected with less spatial resolution loss than is typically observed with the conventional image domain correction strategy. PMID:25298258

  8. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    SciTech Connect

    Matenine, Dmitri; Goussard, Yves

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
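
    The TV-regularisation half of such hybrid schemes can be illustrated with a few explicit descent steps on a smoothed total-variation functional, as sketched below on the CPU; the step size, smoothing constant and the simplified (periodic) boundary handling are illustrative choices, not the paper's GPU implementation.

    ```python
    # Smoothed isotropic TV gradient and a few steepest-descent steps.
    import numpy as np

    def tv_gradient(x, eps=1e-8):
        """Gradient of the smoothed total variation of a 2D image x."""
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        # negative divergence of the normalised gradient field
        # (periodic boundary handling kept for brevity)
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return -div

    def tv_descent(x, n_steps=20, step=0.02):
        """A few explicit descent steps that reduce the image's TV."""
        for _ in range(n_steps):
            x = x - step * tv_gradient(x)
        return x

    # Toy usage: smooth a noisy piecewise-constant image.
    rng = np.random.default_rng(3)
    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
    denoised = tv_descent(img + 0.1 * rng.standard_normal(img.shape))
    ```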

  9. Tomographic image reconstruction and rendering with texture-mapping hardware

    SciTech Connect

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
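
    The "image warping with summing" view of FBP is easy to see in code: each ramp-filtered projection is warped across the image grid along its detector coordinate and the warped copies are summed. The CPU sketch below assumes parallel-beam geometry and an fftfreq-based ramp filter, so the overall scale factor is only approximate.

    ```python
    # Minimal parallel-beam filtered backprojection.
    import numpy as np

    def fbp(sinogram, angles_deg):
        """sinogram: (n_angles, n_det) array; returns an (n_det, n_det) image."""
        n_angles, n_det = sinogram.shape
        # Ram-Lak (ramp) filter applied to each projection in Fourier space.
        freqs = np.fft.fftfreq(n_det)
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))

        # Backprojection: warp each 1D filtered projection onto the 2D grid
        # along the detector coordinate t = x cos(theta) + y sin(theta), then sum.
        coords = np.arange(n_det) - n_det / 2
        X, Y = np.meshgrid(coords, coords, indexing="xy")
        recon = np.zeros((n_det, n_det))
        for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
            t = X * np.cos(theta) + Y * np.sin(theta)
            recon += np.interp(t, coords, proj, left=0.0, right=0.0)
        return recon * np.pi / (2 * n_angles)
    ```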

  10. Tomographic image reconstruction and rendering with texture-mapping hardware

    NASA Astrophysics Data System (ADS)

    Azevedo, Stephen G.; Cabral, Brian K.; Foran, Jim

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine, shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in our case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. Our technique can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.

  11. Optimal Discretization Resolution in Algebraic Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Sharif, Behzad; Kamalabadi, Farzad

    2005-11-01

    In this paper, we focus on data-limited tomographic imaging problems where the underlying linear inverse problem is ill-posed. A typical regularized reconstruction algorithm uses an algebraic formulation with a predetermined discretization resolution. If the selected resolution is too low, we may lose useful details of the underlying image; if it is too high, the reconstruction will be unstable and the representation will fit irrelevant features. In this work, two approaches are introduced to address this issue. The first approach uses Mallows' C_L method or generalized cross-validation. For each of the two methods, a joint estimator of the regularization parameter and discretization resolution is proposed and its asymptotic optimality is investigated. The second approach is a Bayesian estimator of the model order using a complexity-penalizing prior. Numerical experiments focus on a space imaging application from a set of limited-angle tomographic observations.
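
    As a concrete illustration of one of the two selection criteria, the sketch below evaluates the standard generalized cross-validation (GCV) score for the Tikhonov regularization parameter of an ill-posed linear problem via the SVD; the joint selection of discretization resolution described in the paper is not reproduced, and the toy problem is an illustrative assumption.

    ```python
    # GCV score GCV(lam) = m ||A x_lam - b||^2 / trace(I - H(lam))^2 for Tikhonov.
    import numpy as np

    def gcv_curve(A, b, lambdas):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        Utb = U.T @ b
        m = A.shape[0]
        scores = []
        for lam in lambdas:
            f = s**2 / (s**2 + lam)                            # Tikhonov filter factors
            resid2 = np.sum(((1 - f) * Utb) ** 2) + (b @ b - Utb @ Utb)
            dof = m - np.sum(f)                                # trace(I - H(lam))
            scores.append(m * resid2 / dof**2)
        return np.array(scores)

    # Toy usage: pick the lambda minimising the GCV score on an ill-conditioned A.
    rng = np.random.default_rng(4)
    A = rng.standard_normal((80, 40)) @ np.diag(1.0 / np.arange(1, 41))
    x_true = rng.standard_normal(40)
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    lams = np.logspace(-8, 1, 50)
    best_lam = lams[np.argmin(gcv_curve(A, b, lams))]
    ```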

  12. Mirror Surface Reconstruction from a Single Image.

    PubMed

    Liu, Miaomiao; Hartley, Richard; Salzmann, Mathieu

    2015-04-01

    This paper tackles the problem of reconstructing the shape of a smooth mirror surface from a single image. In particular, we consider the case where the camera is observing the reflection of a static reference target in the unknown mirror. We first study the reconstruction problem given dense correspondences between 3D points on the reference target and image locations. In such conditions, our differential geometry analysis provides a theoretical proof that the shape of the mirror surface can be recovered if the pose of the reference target is known. We then relax our assumptions by considering the case where only sparse correspondences are available. In this scenario, we formulate reconstruction as an optimization problem, which can be solved using a nonlinear least-squares method. We demonstrate the effectiveness of our method on both synthetic and real images. We then provide a theoretical analysis of the potential degenerate cases with and without prior knowledge of the pose of the reference target. Finally, we show that our theory can be similarly applied to the reconstruction of the surface of transparent objects.

  13. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
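
    The iterative thresholding framework referred to above is easy to sketch: a Landweber gradient step on the data term followed by a shrinkage rule. The hard and soft rules are shown explicitly below; the paper's hybrid rule generalizes these two but its exact form (and the SURE-based hyperparameter choice) is not reproduced, and the 1D toy deconvolution problem is an illustrative assumption.

    ```python
    # Iterative thresholding for the sparse deconvolution model y = H x + noise.
    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def hard_threshold(z, t):
        return z * (np.abs(z) > t)

    def iterative_thresholding(H, y, t=0.05, n_iters=200, rule=soft_threshold):
        """Landweber step followed by a thresholding (shrinkage) step."""
        x = np.zeros(H.shape[1])
        step = 1.0 / np.linalg.norm(H, 2) ** 2        # ensures convergence
        for _ in range(n_iters):
            x = rule(x + step * H.T @ (y - H @ x), t)
        return x

    # Toy usage with a Gaussian psf convolution matrix and a sparse 1D image.
    rng = np.random.default_rng(5)
    psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2); psf /= psf.sum()
    n = 128
    H = np.stack([np.convolve(np.eye(n)[i], psf, mode="same") for i in range(n)], axis=1)
    x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = 1.0
    y = H @ x_true + 0.01 * rng.standard_normal(n)
    x_hat = iterative_thresholding(H, y, rule=soft_threshold)
    ```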

  14. Speckle image reconstruction of the adaptive optics solar images.

    PubMed

    Zhong, Libo; Tian, Yu; Rao, Changhui

    2014-11-17

    In this paper, speckle image reconstruction is used to restore solar images corrected by an adaptive optics (AO) system. The speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of the AO compensation, and the individual STF in each annulus is obtained from the corresponding Fried parameter calculated with the traditional spectral ratio method. Reconstructions of solar images acquired by a 37-element AO system validate this method, and the image quality is clearly improved. Moreover, we found that the photometric accuracy of the reconstruction is field dependent due to the influence of the AO correction. As the angular separation of the object from the AO lock point increases, the relative improvement becomes progressively larger and tends toward a constant level in regions far from the central field of view. Simulation results show that this phenomenon is mainly due to the discrepancy between the calculated STF and the angularly dependent real AO STF.

  15. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  16. Accelerator Test of an Imaging Calorimeter

    NASA Technical Reports Server (NTRS)

    Christl, Mark J.; Adams, James H., Jr.; Binns, R. W.; Derrickson, J. H.; Fountain, W. F.; Howell, L. W.; Gregory, J. C.; Hink, P. L.; Israel, M. H.; Kippen, R. M.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The Imaging Calorimeter for ACCESS (ICA) utilizes a thin sampling calorimeter concept for direct measurements of high-energy cosmic rays. The ICA design uses arrays of small scintillating fibers to measure the energy and trajectory of the produced cascades. A test instrument has been developed to study the performance of this concept at accelerator energies and for comparison with simulations. Two test exposures have been completed using a CERN test beam. Some results from the accelerator tests are presented.

  17. Propagation phasor approach for holographic image reconstruction

    PubMed Central

    Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan

    2016-01-01

    To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671

  18. Propagation phasor approach for holographic image reconstruction

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan

    2016-03-01

    To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears.

  19. Context dependent anti-aliasing image reconstruction

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.; Hunt, A.; Arlia, N.

    1989-01-01

    Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.

  20. Three-dimensional volumetric object reconstruction using computational integral imaging.

    PubMed

    Hong, Seung-Hyun; Jang, Ju-Seog; Javidi, Bahram

    2004-02-01

    We propose a three-dimensional (3D) imaging technique that can sense a 3D scene and computationally reconstruct it as a 3D volumetric image. Sensing of the 3D scene is carried out by obtaining elemental images optically using a pickup microlens array and a detector array. Reconstruction of volume pixels of the scene is accomplished by computationally simulating optical reconstruction according to ray optics. All pixels of the recorded elemental images contribute to the volumetric reconstruction of the 3D scene. Image display planes at arbitrary distances from the display microlens array are computed and reconstructed by back-propagating the elemental images through a computer-synthesized pinhole array based on ray optics. We present experimental results of 3D image sensing and volume pixel reconstruction to test and verify the performance of the algorithm and the imaging system. The volume pixel values can be used for 3D image surface reconstruction.
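
    A common way to realise this kind of computational reconstruction is a shift-and-sum over elemental images: for a chosen depth plane, each elemental image is translated by an amount proportional to its lenslet index and the translated copies are averaged. The sketch below takes the per-index shift as an input rather than deriving it from the pickup geometry (lens pitch, gap, pixel size), so the depth-to-shift mapping is left as an assumption.

    ```python
    # Shift-and-sum reconstruction of one depth plane from elemental images.
    import numpy as np

    def reconstruct_plane(elemental, shift_px):
        """elemental: (K, L, H, W) array of elemental images;
        shift_px: pixel shift per lenslet index for the chosen depth."""
        K, L, H, W = elemental.shape
        plane = np.zeros((H, W))
        for i in range(K):
            for j in range(L):
                dy, dx = int(round(i * shift_px)), int(round(j * shift_px))
                plane += np.roll(elemental[i, j], (dy, dx), axis=(0, 1))
        return plane / (K * L)
    ```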

  1. Near Real-Time Solar Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, G.; Denker, C.; Wang, H.

    2001-05-01

    We use a Linux Beowulf cluster to build a system for near real-time solar image reconstruction with the goal to obtain diffraction limited solar images at a cadence of one minute. This gives us immediate access to high-level data products and enables direct visualization of dynamic processes on the Sun. Space weather warnings and flare forecasting will benefit from this project. The image processing algorithms are based on the speckle masking method combined with frame selection. The parallel programs use explicit message passing via Parallel Virtual Machine (PVM). The preliminary results are very promising. Now, we can construct a 256 by 256 pixel image out of 50 short-exposure images within one minute on a Beowulf cluster with four 500 MHz CPUs. In addition, we want to explore the possibility of applying parallel computing on Beowulf clusters to other complex data reduction and analysis problems that we encounter, e.g., in multi-dimensional spectro-polarimetry.

  2. Performance-based assessment of reconstructed images

    SciTech Connect

    Hanson, Kenneth

    2009-01-01

    During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that the performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.

  3. Hyperspectral image reconstruction for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Fantini, Sergio; Miller, Eric L.

    2011-01-01

    We explore the development and performance of algorithms for hyperspectral diffuse optical tomography (DOT) for which data from hundreds of wavelengths are collected and used to determine the concentration distribution of chromophores in the medium under investigation. An efficient method is detailed for forming the images using iterative algorithms applied to a linearized Born approximation model assuming the scattering coefficient is spatially constant and known. The L-surface framework is employed to select optimal regularization parameters for the inverse problem. We report image reconstructions using 126 wavelengths with estimation error in simulations as low as 0.05 and mean square error of experimental data of 0.18 and 0.29 for ink and dye concentrations, respectively, an improvement over reconstructions using fewer specifically chosen wavelengths. PMID:21483616

  4. Highly accelerated cardiac MRI using iterative SENSE reconstruction: initial clinical experience.

    PubMed

    Allen, Bradley D; Carr, Maria; Botelho, Marcos P F; Rahsepar, Amir Ali; Markl, Michael; Zenge, Michael O; Schmidt, Michaela; Nadar, Mariappan S; Spottiswoode, Bruce; Collins, Jeremy D; Carr, James C

    2016-06-01

    To evaluate the qualitative and quantitative performance of an accelerated cardiovascular MRI (CMR) protocol that features iterative SENSE reconstruction and spatio-temporal L1-regularization (IS SENSE). Twenty consecutively recruited patients and 9 healthy volunteers were included. 2D steady state free precession cine images including 3-chamber, 4-chamber, and short axis slices were acquired using standard parallel imaging (GRAPPA, acceleration factor = 2), spatio-temporal undersampled TSENSE (acceleration factor = 4), and IS SENSE techniques (acceleration factor = 4). Acquisition times, quantitative cardiac functional parameters, wall motion abnormalities (WMA), and qualitative performance (scale: 1-poor to 5-excellent for overall image quality, noise, and artifact) were compared. Breath-hold times for IS SENSE (3.0 ± 0.6 s) and TSENSE (3.3 ± 0.6) were both reduced relative to GRAPPA (8.4 ± 1.7 s, p < 0.001). No difference in quantitative cardiac function was present between the three techniques (p = 0.89 for ejection fraction). GRAPPA and IS SENSE had similar image quality (4.7 ± 0.4 vs. 4.5 ± 0.6, p = 0.09) while, both techniques were superior to TSENSE (quality: 4.1 ± 0.7, p < 0.001). GRAPPA WMA agreement with IS SENSE was good (κ > 0.60, p < 0.001), while agreement with TSENSE was poor (κ < 0.40, p < 0.001). IS SENSE is a viable clinical CMR acceleration approach to reduce acquisition times while maintaining satisfactory qualitative and quantitative performance. PMID:26894256

  5. HYPR: constrained reconstruction for enhanced SNR in dynamic medical imaging

    NASA Astrophysics Data System (ADS)

    Mistretta, C.; Wieben, O.; Velikina, J.; Wu, Y.; Johnson, K.; Korosec, F.; Unal, O.; Chen, G.; Fain, S.; Christian, B.; Nalcioglu, O.; Kruger, R. A.; Block, W.; Samsonov, A.; Speidel, M.; Van Lysel, M.; Rowley, H.; Supanich, M.; Turski, P.; Wu, Yan; Holmes, J.; Kecskemeti, S.; Moran, C.; O'Halloran, R.; Keith, L.; Alexander, A.; Brodsky, E.; Lee, J. E.; Hall, T.; Zagzebski, J.

    2008-03-01

    During the last eight years our group has developed radial acquisitions with angular undersampling factors of several hundred that accelerate MRI in selected applications. As with all previous acceleration techniques, SNR typically falls at least as fast as the inverse square root of the undersampling factor. This limits the SNR available to support the small voxels that these methods can image over short time intervals in applications like time-resolved contrast-enhanced MR angiography (CE-MRA). Instead of processing each time interval independently, we have developed constrained reconstruction methods that exploit the significant correlation between temporal sampling points. A broad class of methods, termed HighlY Constrained Back PRojection (HYPR), generalizes this concept to other modalities and sampling dimensions.
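
    One commonly described form of this constrained reconstruction weights a high-SNR composite image by the ratio of unfiltered backprojections of the sparse frame data and of the composite data at the same projection angles. The sketch below expresses only that weighting step; `backproject` is a placeholder for an unfiltered backprojection operator, and the formulation is a simplified reading of HYPR rather than the authors' exact implementation.

    ```python
    # HYPR-style frame weighting of a composite image.
    import numpy as np

    def hypr_frame(composite, frame_sino, composite_sino, backproject, eps=1e-9):
        """composite: 2D composite image;
        frame_sino / composite_sino: projections at this frame's angles;
        backproject: callable performing unfiltered backprojection."""
        num = backproject(frame_sino)          # backprojection of the measured frame data
        den = backproject(composite_sino)      # backprojection of composite at the same angles
        return composite * num / (den + eps)
    ```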

  6. Deep Reconstruction Models for Image Set Classification.

    PubMed

    Hayat, Munawar; Bennamoun, Mohammed; An, Senjian

    2015-04-01

    Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single-image-based classification, it offers more promise and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state-of-the-art methods. PMID:26353289

  7. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested on the reconstruction of an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with that of four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates a significant resolution improvement compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  8. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneity media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  9. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
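
    The benefit of the orthogonality constraint is that both alternating updates become closed-form: sparse coding reduces to thresholding D^T X, and the dictionary update is an orthogonal Procrustes problem solved with one SVD. The sketch below shows only these two updates on generic patch data; the k-space data-consistency step and the adaptive sparsity schedule of the full method are omitted, and all parameters are illustrative.

    ```python
    # Alternating updates for an orthogonal patch dictionary.
    import numpy as np

    def keep_k_largest(Z, k):
        """Keep the k largest-magnitude coefficients in each column of Z."""
        S = np.zeros_like(Z)
        idx = np.argsort(-np.abs(Z), axis=0)[:k]
        np.put_along_axis(S, idx, np.take_along_axis(Z, idx, axis=0), axis=0)
        return S

    def orthogonal_dictionary_learning(X, k=4, n_iters=20, seed=6):
        """X: (patch_dim, n_patches) matrix of vectorised image patches."""
        d = X.shape[0]
        D = np.linalg.qr(np.random.default_rng(seed).standard_normal((d, d)))[0]
        for _ in range(n_iters):
            S = keep_k_largest(D.T @ X, k)            # sparse coding step
            U, _, Vt = np.linalg.svd(X @ S.T)         # Procrustes dictionary update
            D = U @ Vt
        return D, S

    # Toy usage with random "patches".
    X = np.random.default_rng(7).standard_normal((16, 500))
    D, S = orthogonal_dictionary_learning(X, k=3)
    ```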

  10. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data

    NASA Astrophysics Data System (ADS)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-01

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.

  11. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
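
    For reference, the PICCS objective discussed throughout this work combines a prior-image TV term with a conventional TV term under a data constraint; a minimal gradient-descent sketch of an unconstrained (penalized) version is given below. The weights, step size, simplified TV gradient, and the callable projector placeholders A/At are illustrative assumptions, not the solvers used in the dissertation.

    ```python
    # One descent step on  alpha*TV(x - x_prior) + (1-alpha)*TV(x) + (beta/2)||A x - y||^2.
    import numpy as np

    def tv_grad(x, eps=1e-8):
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

    def piccs_step(x, x_prior, A, At, y, alpha=0.5, beta=1.0, step=1e-3):
        """A and At are placeholder forward/backward projection callables."""
        grad = (alpha * tv_grad(x - x_prior)
                + (1 - alpha) * tv_grad(x)
                + beta * At(A(x) - y))
        return x - step * grad
    ```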

  12. Correlation-Based Image Reconstruction Methods for Magnetic Particle Imaging

    NASA Astrophysics Data System (ADS)

    Ishihara, Yasutoshi; Kuwabara, Tsuyoshi; Honma, Takumi; Nakagawa, Yohei

    Magnetic particle imaging (MPI), in which the nonlinear interaction between internally administered magnetic nanoparticles (MNPs) and electromagnetic waves irradiated from outside of the body is utilized, has attracted attention for its potential to achieve early diagnosis of diseases such as cancer. In MPI, the local magnetic field distribution is scanned, and the magnetization signal from MNPs within a selected region is detected. However, the signal sensitivity and image resolution are degraded by interference from magnetization signals generated by MNPs outside of the selected region, mainly because of imperfections (limited gradients) in the local magnetic field distribution. Here, we propose new methods based on correlation information between the observed signal and the system function—defined as the interaction between the magnetic field distribution and the magnetizing properties of MNPs. We performed numerical analyses and found that, although the images were somewhat blurred, image artifacts could be significantly reduced and accurate images could be reconstructed without the inverse-matrix operation used in conventional image reconstruction methods.

  13. Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging

    NASA Astrophysics Data System (ADS)

    Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke

    2011-12-01

    In this paper we present the result of experimental investigation using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also want to introduce a complementary approach to the investigation of artworks which is noninvasive and nondestructive that can be applied in situ. Four major projects will be discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian Textile (Genghis Khan and Kublai Khan Period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creating a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between 12th and 13th century. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high energy-based techniques.

  14. Fluorescence molecular tomographic image reconstruction based on reduced measurement data

    NASA Astrophysics Data System (ADS)

    Zou, Wei; Wang, Jiajun; Feng, David Dagan; Fang, Erxi

    2015-07-01

    The analysis of fluorescence molecular tomography is important for medical diagnosis and treatment. Although the quality of reconstructed results can be improved with the increasing number of measurement data, the scale of the matrices involved in the reconstruction of fluorescence molecular tomography will also become larger, which may slow down the reconstruction process. A new method is proposed where measurement data are reduced according to the rows of the Jacobian matrix and the projection residual error. To further accelerate the reconstruction process, the global inverse problem is solved with level-by-level Schur complement decomposition. Simulation results demonstrate that the speed of the reconstruction process can be improved with the proposed algorithm.

  15. Image reconstruction of IRAS survey scans

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. Romke

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.

  16. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  17. 3D reconstruction and particle acceleration properties of Coronal Shock Waves During Powerful Solar Particle Events

    NASA Astrophysics Data System (ADS)

    Plotnikov, Illya; Vourlidas, Angelos; Tylka, Allan J.; Pinto, Rui; Rouillard, Alexis; Tirole, Margot

    2016-07-01

    Identifying the physical mechanisms that produce the most energetic particles is a long-standing observational and theoretical challenge in astrophysics. Strong pressure waves have been proposed as efficient accelerators both in the solar and astrophysical contexts via various mechanisms such as diffusive-shock/shock-drift acceleration and betatron effects. In diffusive-shock acceleration, the efficacy of the process relies on shock waves being super-critical or moving several times faster than the characteristic speed of the medium they propagate through (a high Alfven Mach number) and on the orientation of the magnetic field upstream of the shock front. High-cadence, multipoint imaging using the NASA STEREO, SOHO and SDO spacecraft now permits the 3-D reconstruction of pressure waves formed during the eruption of coronal mass ejections. Using these unprecedented capabilities, some recent studies have provided new insights on the timing and longitudinal extent of solar energetic particles, including the first derivations of the time-dependent 3-dimensional distribution of the expansion speed and Mach numbers of coronal shock waves. We will review these recent developments by focusing on particle events that occurred between 2011 and 2015. These new techniques also provide the opportunity to investigate the enigmatic long-duration gamma ray events.

  18. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on both smartphones and tablets. The photos are taken with mobile devices and can thereafter be directly calibrated on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  19. MMSE Reconstruction for 3D Freehand Ultrasound Imaging

    PubMed Central

    Huang, Wei; Zheng, Yibin

    2008-01-01

    The reconstruction of 3D ultrasound (US) images from mechanically registered, but otherwise irregularly positioned, B-scan slices is of great interest in image guided therapy procedures. Conventional 3D ultrasound algorithms have low computational complexity, but the reconstructed volume suffers from severe speckle contamination. Furthermore, the current method cannot reconstruct uniform high-resolution data from several low-resolution B-scans. In this paper, the minimum mean-squared error (MMSE) method is applied to 3D ultrasound reconstruction. Data redundancies due to overlapping samples as well as correlation of the target and speckle are naturally accounted for in the MMSE reconstruction algorithm. Thus, the reconstruction process unifies the interpolation and spatial compounding. Simulation results for synthetic US images are presented to demonstrate the excellent reconstruction. PMID:18382623
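
    The linear MMSE (Wiener-type) estimator behind such a reconstruction has a compact closed form once signal and noise covariances are specified, as sketched below for a zero-mean model; the exponential covariance, the 1D sampling operator and the noise level in the toy usage are illustrative assumptions, not the paper's models.

    ```python
    # Linear MMSE estimate of a regular grid x from irregular samples y = A x + n.
    import numpy as np

    def mmse_reconstruct(A, y, Cx, Cn):
        """x_hat = Cx A^T (A Cx A^T + Cn)^{-1} y  (zero-mean signal and noise)."""
        G = A @ Cx @ A.T + Cn
        return Cx @ A.T @ np.linalg.solve(G, y)

    # Toy usage on a 1D grid with an exponential signal-covariance model.
    n, m = 60, 40
    rng = np.random.default_rng(8)
    idx = np.sort(rng.choice(n, m, replace=False))
    A = np.eye(n)[idx]                                     # irregular sampling operator
    i = np.arange(n)
    Cx = np.exp(-np.abs(i[:, None] - i[None, :]) / 5.0)    # signal covariance
    Cn = 0.01 * np.eye(m)                                  # noise covariance
    x_true = np.linalg.cholesky(Cx + 1e-9 * np.eye(n)) @ rng.standard_normal(n)
    y = A @ x_true + 0.1 * rng.standard_normal(m)
    x_hat = mmse_reconstruct(A, y, Cx, Cn)
    ```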

  20. Accelerated 3D MERGE Carotid Imaging using Compressed Sensing with a Hidden Markov Tree Model

    PubMed Central

    Makhijani, Mahender K.; Balu, Niranjan; Yamada, Kiyofumi; Yuan, Chun; Nayak, Krishna S.

    2012-01-01

    Purpose To determine the potential for accelerated 3D carotid magnetic resonance imaging (MRI) using wavelet based compressed sensing (CS) with a hidden Markov tree (HMT) model. Materials and Methods We retrospectively applied HMT model-based CS and conventional CS to 3D carotid MRI data with 0.7 mm isotropic resolution, from six subjects with known carotid stenosis (12 carotids). We applied a wavelet-tree model learnt from a training database of carotid images to improve CS reconstruction. Quantitative endpoints such as lumen area, wall area, mean and maximum wall thickness, plaque calicification, and necrotic core area, were measured and compared using Bland-Altman analysis along with image quality. Results Rate-4.5 acceleration with HMT model-based CS provided image quality comparable to that of rate-3 acceleration with conventional CS and fully sampled reference reconstructions. Morphological measurements made on rate-4.5 HMT model-based CS reconstructions were in good agreement with measurements made on fully sampled reference images. There was no significant bias or correlation between mean and difference of measurements when comparing rate 4.5 HMT model-based CS with fully sampled reference images. Conclusion HMT model-based CS can potentially be used to accelerate clinical carotid MRI by a factor of 4.5 without impacting diagnostic quality or quantitative endpoints. PMID:22826159

  1. Total variation minimization-based multimodality medical image reconstruction

    NASA Astrophysics Data System (ADS)

    Cui, Xuelin; Yu, Hengyong; Wang, Ge; Mili, Lamine

    2014-09-01

    Since its recent inception, simultaneous image reconstruction for multimodality fusion has received a great deal of attention due to its superior imaging performance. On the other hand, compressed sensing (CS)-based image reconstruction methods have undergone rapid development because of their ability to significantly reduce the amount of raw data. In this work, we combine computed tomography (CT) and magnetic resonance imaging (MRI) into a single CS-based reconstruction framework. From a theoretical viewpoint, CS-based reconstruction methods require prior sparsity knowledge to perform reconstruction. In addition to the conventional data fidelity term, the multimodality imaging information is utilized to improve the reconstruction quality. Prior information in this context is that most medical images can be approximated by a piecewise constant model, and that the discrete gradient transform (DGT), whose norm is the total variation (TV), can serve as a sparse representation. More importantly, the multimodality images of the same object must share structural similarity, which can be captured by the DGT. The prior information on the similar distributions of the sparse DGTs is employed to improve the CT and MRI image quality synergistically for a CT-MRI scanner platform. Numerical simulation with undersampled CT and MRI datasets is conducted to demonstrate the merits of the proposed hybrid image reconstruction approach. Our preliminary results confirm that the proposed method outperforms the conventional CT and MRI reconstructions when they are applied separately.

  2. Tree STEM Reconstruction Using Vertical Fisheye Images: a Preliminary Study

    NASA Astrophysics Data System (ADS)

    Berveglieri, A.; Tommaselli, A. M. G.

    2016-06-01

    A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images and with the advantage of using fewer images and taken from one single station.

  3. Fast iterative image reconstruction of 3D PET data

    SciTech Connect

    Kinahan, P.E.; Townsend, D.W.; Michel, C.

    1996-12-31

    For count-limited PET imaging protocols, two different approaches to reducing statistical noise are volume, or 3D, imaging to increase sensitivity, and statistical reconstruction methods to reduce noise propagation. These two approaches have largely been developed independently, likely due to the perception of the large computational demands of iterative 3D reconstruction methods. We present results of combining the sensitivity of 3D PET imaging with the noise reduction and reconstruction speed of 2D iterative image reconstruction methods. This combination is made possible by using the recently-developed Fourier rebinning technique (FORE), which accurately and noiselessly rebins 3D PET data into a 2D data set. The resulting 2D sinograms are then reconstructed independently by the ordered-subset EM (OSEM) iterative reconstruction method, although any other 2D reconstruction algorithm could be used. We demonstrate significant improvements in image quality for whole-body 3D PET scans by using the FORE+OSEM approach compared with the standard 3D Reprojection (3DRP) algorithm. In addition, the FORE+OSEM approach involves only 2D reconstruction and it therefore requires considerably less reconstruction time than the 3DRP algorithm, or any fully 3D statistical reconstruction algorithm.

  4. Parallel programming of gradient-based iterative image reconstruction schemes for optical tomography.

    PubMed

    Hielscher, Andreas H; Bartel, Sebastian

    2004-02-01

    Optical tomography (OT) is a fast-developing, novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.

  5. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET.

    PubMed

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E; Nuyts, Johan

    2016-02-21

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov's momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved. PMID:26854817
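
    The acceleration mentioned above can be illustrated with a generic Nesterov-style momentum wrapper around an arbitrary iterative update. In the sketch below, update_step is a placeholder (plain gradient descent on a quadratic), not the actual MLRR reconstruction/registration update; only the momentum extrapolation is the point.

```python
# Minimal sketch of Nesterov-style momentum wrapped around a generic
# image-update step, as one way to accelerate a slowly converging iteration.
# `update_step` stands in for one MLRR iteration and is a placeholder here.
import numpy as np

def nesterov_iterate(update_step, x0, n_iter=50):
    x = x0.copy()
    x_prev = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # Extrapolated (momentum) point, then one ordinary update from it.
        z = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, x = x, update_step(z)
        t = t_next
    return x

# Placeholder update: gradient descent on 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.random((30, 20)); b = A @ rng.random(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = nesterov_iterate(lambda z: z - step * A.T @ (A @ z - b), np.zeros(20))
```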

  6. Patient-adaptive reconstruction and acquisition in dynamic imaging with sensitivity encoding (PARADISE).

    PubMed

    Sharif, Behzad; Derbyshire, J Andrew; Faranesh, Anthony Z; Bresler, Yoram

    2010-08-01

    MRI of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional nongated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly accelerated nongated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject's heart rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high-resolution nongated cardiac MRI during short breath-hold. PMID:20665794

  7. Synergistic image reconstruction for hybrid ultrasound and photoacoustic computed tomography

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Wang, Kun; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. This SOS map is subsequently employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  8. Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)

    NASA Technical Reports Server (NTRS)

    Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy

    2012-01-01

    The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approximately 70 km swath width with approximately 3 km spatial resolution. From this, ocean surface wind speed and column averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.

  9. Numerical modelling and image reconstruction in diffuse optical tomography

    PubMed Central

    Dehghani, Hamid; Srinivasan, Subhadra; Pogue, Brian W.; Gibson, Adam

    2009-01-01

    The development of diffuse optical tomography as a functional imaging modality has relied largely on the use of model-based image reconstruction. The recovery of optical parameters from boundary measurements of light propagation within tissue is an inherently difficult problem, because it is nonlinear, ill-posed and ill-conditioned. Additionally, although the measured near-infrared signals of light transmission through tissue provide high imaging contrast, the reconstructed images suffer from poor spatial resolution due to the diffuse propagation of light in biological tissue. The application of model-based image reconstruction is reviewed in this paper, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data. A comprehensive review and details of the basis for using spatial and structural prior information are also discussed, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution. PMID:19581256

  10. On multigrid methods for image reconstruction from projections

    SciTech Connect

    Henson, V.E.; Robinson, B.T.; Limber, M.

    1994-12-31

    The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → R^N. The image reconstruction problem is: given a vector b ∈ R^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L¹ and model R : Ω → R^N. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
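
    For readers unfamiliar with the smoother being accelerated, the sketch below shows a plain Gauss-Seidel sweep on a small symmetric positive definite system Bw = b; the matrix and data are random placeholders, and the multilevel (ray-thickening) correction itself is not implemented here.

```python
# Minimal Gauss-Seidel sketch for a system B w = b. B and b are small random
# placeholders; the point is only to show the smoothing iteration whose
# stalled error components motivate the multilevel correction described above.
import numpy as np

def gauss_seidel(B, b, w0=None, n_sweeps=20):
    n = B.shape[0]
    w = np.zeros(n) if w0 is None else w0.copy()
    for _ in range(n_sweeps):
        for i in range(n):
            # Update one unknown at a time using the latest available values.
            sigma = B[i, :i] @ w[:i] + B[i, i + 1:] @ w[i + 1:]
            w[i] = (b[i] - sigma) / B[i, i]
    return w

rng = np.random.default_rng(0)
M = rng.random((40, 40))
B = M @ M.T + 40 * np.eye(40)     # symmetric positive definite test matrix
b = rng.random(40)
w = gauss_seidel(B, b)
```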

  11. Reconstruction of biofilm images: combining local and global structural parameters

    SciTech Connect

    Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk

    2014-11-07

    Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.

  12. PAINTER: a spatiospectral image reconstruction algorithm for optical interferometry.

    PubMed

    Schutz, Antony; Ferrari, André; Mary, David; Soulez, Ferréol; Thiébaut, Éric; Vannier, Martin

    2014-11-01

    Astronomical optical interferometers sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid perturbations caused by atmospheric turbulence, the phases of the complex Fourier samples (visibilities) cannot be directly exploited. Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic optical interferometric instruments are now paving the way to multiwavelength imaging. This paper is devoted to the derivation of a spatiospectral (3D) image reconstruction algorithm, coined Polychromatic opticAl INTErferometric Reconstruction software (PAINTER). The algorithm relies on an iterative process, which alternates estimation of polychromatic images and complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also differential phases, which helps to better constrain the polychromatic reconstruction. Simulations on synthetic data illustrate the efficiency of the algorithm and, in particular, the relevance of injecting a differential phases model in the reconstruction.

  13. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, yielding greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as those on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, along with two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, has been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.

  14. Prospective acceleration of diffusion tensor imaging with compressed sensing using adaptive dictionaries

    PubMed Central

    McClymont, Darryl; Teh, Irvin; Whittington, Hannah J.; Grau, Vicente

    2015-01-01

    Purpose: Diffusion MRI requires acquisition of multiple diffusion‐weighted images, resulting in long scan times. Here, we investigate combining compressed sensing and a fast imaging sequence to dramatically reduce acquisition times in cardiac diffusion MRI. Methods: Fully sampled and prospectively undersampled diffusion tensor imaging data were acquired in five rat hearts at acceleration factors of between two and six using a fast spin echo (FSE) sequence. Images were reconstructed using a compressed sensing framework, enforcing sparsity by means of decomposition by adaptive dictionaries. A tensor was fit to the reconstructed images and fiber tractography was performed. Results: Acceleration factors of up to six were achieved, with a modest increase in root mean square error of mean apparent diffusion coefficient (ADC), fractional anisotropy (FA), and helix angle. At an acceleration factor of six, mean values of ADC and FA were within 2.5% and 5% of the ground truth, respectively. Marginal differences were observed in the fiber tracts. Conclusion: We developed a new k‐space sampling strategy for acquiring prospectively undersampled diffusion‐weighted data, and validated a novel compressed sensing reconstruction algorithm based on adaptive dictionaries. The k‐space undersampling and FSE acquisition each reduced acquisition times by up to 6× and 8×, respectively, as compared to fully sampled spin echo imaging. Magn Reson Med 76:248–258, 2016. © 2015 Wiley Periodicals, Inc. PMID:26302363

  15. Super-resolution image reconstruction using diffuse source models.

    PubMed

    Ellis, Michael A; Viola, Francesco; Walker, William F

    2010-06-01

    Image reconstruction is central to many scientific fields, from medical ultrasound and sonar to computed tomography and computer vision. Although lenses play a critical reconstruction role in these fields, digital sensors enable more sophisticated computational approaches. A variety of computational methods have thus been developed, with the common goal of increasing contrast and resolution to extract the greatest possible information from raw data. This paper describes a new image reconstruction method named the Diffuse Time-domain Optimized Near-field Estimator (dTONE). dTONE represents each hypothetical target in the system model as a diffuse region of targets rather than a single discrete target, which more accurately represents the experimental data that arise from signal sources in continuous space, with no additional computational requirements at the time of image reconstruction. Simulation and experimental ultrasound images of animal tissues show that dTONE achieves image resolution and contrast far superior to those of conventional image reconstruction methods. We also demonstrate the increased robustness of the diffuse target model to major sources of image degradation through the addition of electronic noise, phase aberration and magnitude aberration to ultrasound simulations. Using experimental ultrasound data from a tissue-mimicking phantom containing a 3-mm-diameter anechoic cyst, the conventionally reconstructed image has a cystic contrast of -6.3 dB, whereas the dTONE image has a cystic contrast of -14.4 dB.

  16. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merits. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion free, 60 bpm, and 80 bpm heart rates. Static (motion free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.

  17. Robust image reconstruction enhancement based on Gaussian mixture model estimation

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Han, Xizhen; Wang, He; Liu, Bochao

    2016-03-01

    The low quality of an image is often characterized by low contrast and blurred edge details. Gradients have a direct relationship with image edge details: the larger the gradients, the clearer the image details become. Robust image reconstruction enhancement based on Gaussian mixture model estimation is proposed here. First, the image is transformed to its gradient domain, yielding the gradient histogram. Second, the gradient histogram is estimated and extended using a Gaussian mixture model, and the predetermined function is constructed. Then, using histogram specification technology, the gradient field is enhanced with the constraint of the predetermined function. Finally, a matrix sine transform-based method is applied to reconstruct the enhanced image from the enhanced gradient field. Experimental results show that the proposed algorithm can effectively enhance different types of images such as medical images, aerial images, and visible images, providing high-quality image information for high-level processing.
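
    A minimal version of the gradient-histogram modelling step is sketched below: compute gradient magnitudes and fit a Gaussian mixture to them. The use of scikit-learn's GaussianMixture, the number of components, and the random test image are assumptions made for this sketch, not details taken from the paper.

```python
# Minimal sketch of gradient-histogram modelling: compute gradient magnitudes,
# fit a Gaussian mixture to them, and evaluate the fitted density. The
# component count and library choice are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gradient_gmm(img, n_components=3):
    gx, gy = np.gradient(img.astype(float))
    grad_mag = np.hypot(gx, gy).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(grad_mag)
    return gmm

img = np.random.rand(64, 64)
gmm = fit_gradient_gmm(img)
# Density of the fitted gradient model on a grid of magnitudes; this density
# would drive the subsequent histogram-specification step.
grid = np.linspace(0, 1, 100).reshape(-1, 1)
density = np.exp(gmm.score_samples(grid))
```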

  18. Basis Functions in Image Reconstruction From Projections: A Tutorial Introduction

    NASA Astrophysics Data System (ADS)

    Herman, Gabor T.

    2015-11-01

    The series expansion approaches to image reconstruction from projections assume that the object to be reconstructed can be represented as a linear combination of fixed basis functions, and the task of the reconstruction algorithm is to estimate the coefficients in such a linear combination based on the measured projection data. It is demonstrated that using spherically symmetric basis functions (blobs), instead of ones based on the more traditional pixels, yields superior reconstructions of medically relevant objects. The demonstration uses simulated computerized tomography projection data of head cross-sections and the series expansion method ART for the reconstruction. In addition to showing the results of one anecdotal example, the relative efficacy of using pixel and blob basis functions in image reconstruction from projections is also evaluated using a statistical hypothesis testing based task oriented comparison methodology. The superiority of the efficacy of blob basis functions over that of pixel basis functions is found to be statistically significant.

  19. A wavelet-based single-view reconstruction approach for cone beam x-ray luminescence tomography imaging

    PubMed Central

    Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing

    2014-01-01

    Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows fast, non-invasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within a small animal in vivo. However, single-view reconstruction suffers from a severely ill-posed problem because data from only one angle are used in the reconstruction. To alleviate the ill-posedness, in this paper, we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315

  20. Iterative Image Reconstruction for Limited-Angle CT Using Optimized Initial Image

    PubMed Central

    Guo, Jingyu; Qi, Hongliang; Xu, Yuan; Chen, Zijia; Li, Shulong; Zhou, Linghong

    2016-01-01

    Limited-angle computed tomography (CT) has great impact in some clinical applications. Existing iterative reconstruction algorithms often fail to reconstruct high-quality images, leading to severe artifacts near edges. Optimal selection of the initial image influences iterative reconstruction performance but has not yet been studied in depth. In this work, we propose to generate an optimized initial image, exploiting image symmetry, followed by total variation (TV)-based iterative reconstruction. The reconstruction results on simulated and real data indicate that the proposed method effectively removes the artifacts near edges. PMID:27066107

  1. Artificial neural network Radon inversion for image reconstruction.

    PubMed

    Rodriguez, A F; Blass, W E; Missimer, J H; Leenders, K L

    2001-04-01

    Image reconstruction techniques are essential to computer tomography. Algorithms such as filtered backprojection (FBP) or algebraic techniques are most frequently used. This paper presents an attempt to apply a feed-forward back-propagation supervised artificial neural network (BPN) to tomographic image reconstruction, specifically to positron emission tomography (PET). The main result is that the network trained with Gaussian test images proved to be successful at reconstructing images from projection sets derived from arbitrary objects. Additional results relate to the design of the network and the full width at half maximum (FWHM) of the Gaussians in the training sets. First, the optimal number of nodes in the middle layer is about an order of magnitude less than the number of input or output nodes. Second, the number of iterations required to achieve a given training-set tolerance appeared to decrease exponentially with the number of nodes in the middle layer. Finally, for training sets containing Gaussians of a single width, the optimal accuracy of reconstructing the control set is obtained with a FWHM of three pixels. Intended to explore feasibility, the BPN presented here does not provide reconstruction accuracy adequate for immediate application to PET. However, the trained network does reconstruct general images independent of the data with which it was trained. Proposed in the concluding section are several possible refinements that should permit the development of a network capable of fast reconstruction of three-dimensional images from the discrete, noisy projection data characteristic of PET.

  2. MREJ: MRE elasticity reconstruction on ImageJ.

    PubMed

    Xiang, Kui; Zhu, Xia Li; Wang, Chang Xin; Li, Bing Nan

    2013-08-01

    Magnetic resonance elastography (MRE) is a promising method for health evaluation and disease diagnosis. It makes use of elastic waves as a virtual probe to quantify soft tissue elasticity. The wave actuator, imaging modality and elasticity interpreter are all essential components for an MRE system. Efforts have been made to develop more effective actuating mechanisms, imaging protocols and reconstructing algorithms. However, translating MRE wave images into soft tissue elasticity is a nontrivial issue for health professionals. This study contributes an open-source platform - MREJ - for MRE image processing and elasticity reconstruction. It is established on the widespread image-processing program ImageJ. Two algorithms for elasticity reconstruction were implemented with spatiotemporal directional filtering. The usability of the method is shown through virtual palpation on different phantoms and patients. Based on the results, we conclude that MREJ offers the MRE community a convenient and well-functioning program for image processing and elasticity interpretation.

  3. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes oversmoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
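
    To make the dictionary-learning prior concrete, the sketch below trains a patch dictionary and computes sparse codes using scikit-learn, as a stand-in for the learning step described above; the patch size, number of atoms, and random training image are illustrative assumptions, and the MAP PET reconstruction itself is not shown.

```python
# Minimal sketch of a dictionary-learning sparsity prior on image patches.
# MiniBatchDictionaryLearning is a stand-in for the training step; in the
# paper the dictionary is learned from MR and synthetic training images and
# embedded in a MAP PET reconstruction (not shown here).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

train_img = np.random.rand(64, 64)                 # stand-in training image
patches = extract_patches_2d(train_img, (8, 8), max_patches=500)
patches = patches.reshape(patches.shape[0], -1)
patches -= patches.mean(axis=1, keepdims=True)     # remove patch means

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dico.fit(patches)

# Sparse codes of new patches under the learned dictionary; a MAP prior can
# penalize the discrepancy between each patch and its sparse approximation.
codes = dico.transform(patches[:10])
approx = codes @ dico.components_
```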

  4. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
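
    The nonlinear transform step can be sketched with scikit-learn's KernelPCA applied to an array of matched image blocks, as below; the kernel choice, component count, and random block array are assumptions for illustration, and the interleaved k-space consistency updates are omitted.

```python
# Minimal sketch of the kernel-PCA step on an array of matched image blocks:
# map blocks nonlinearly, keep the main components, and map back. Kernel,
# component count, and the random block array are illustrative assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA

blocks = np.random.rand(50, 64)            # 50 matched blocks of 8x8 pixels
kpca = KernelPCA(n_components=8, kernel="rbf",
                 fit_inverse_transform=True, random_state=0)
coeffs = kpca.fit_transform(blocks)        # projection onto main components
denoised = kpca.inverse_transform(coeffs)  # back-mapping to the block domain
```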

  5. Sparsity-constrained PET image reconstruction with learned dictionaries.

    PubMed

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging. PMID:27494441

  6. Sparse-CAPR: Highly-Accelerated 4D CE-MRA with Parallel Imaging and Nonconvex Compressive Sensing

    PubMed Central

    Trzasko, Joshua D.; Haider, Clifton R.; Borisch, Eric A.; Campeau, Norbert G.; Glockner, James F.; Riederer, Stephen J.; Manduca, Armando

    2012-01-01

    CAPR is a SENSE-type parallel 3DFT acquisition paradigm for 4D contrast-enhanced magnetic resonance angiography (CE-MRA) that has been demonstrated capable of providing high spatial and temporal resolution, diagnostic-quality images at very high acceleration rates. However, CAPR images are typically reconstructed online using Tikhonov regularization and partial Fourier methods, which are prone to exhibit noise amplification and undersampling artifacts when operating at very high acceleration rates. In this work, a sparsity-driven offline reconstruction framework for CAPR is developed and demonstrated to consistently mitigate these effects relative to the currently employed reconstruction strategy. Moreover, the proposed reconstruction strategy requires no changes to the existing CAPR acquisition protocol, and an efficient numerical optimization and hardware system are described that allow a 256×160×80 CE-MRA volume to be reconstructed from an 8-channel data set in less than two minutes. PMID:21608028

  7. Surface reconstruction from microscopic images in optical lithography.

    PubMed

    Estellers, Virginia; Thiran, Jean-Philippe; Gabrani, Maria

    2014-08-01

    This paper presents a method to reconstruct 3D surfaces of silicon wafers from 2D images of printed circuits taken with a scanning electron microscope. Our reconstruction method combines the physical model of the optical acquisition system with prior knowledge about the shapes of the patterns in the circuit; the result is a shape-from-shading technique with a shape prior. The reconstruction of the surface is formulated as an optimization problem with an objective functional that combines a data-fidelity term on the microscopic image with two prior terms on the surface. The data term models the acquisition system through the irradiance equation characteristic of the microscope; the first prior is a smoothness penalty on the reconstructed surface, and the second prior constrains the shape of the surface to agree with the expected shape of the pattern in the circuit. In order to account for the variability of the manufacturing process, this second prior includes a deformation field that allows a nonlinear elastic deformation between the expected pattern and the reconstructed surface. As a result, the minimization problem has two unknowns, and the reconstruction method provides two outputs: 1) a reconstructed surface and 2) a deformation field. The reconstructed surface is derived from the shading observed in the image and the prior knowledge about the pattern in the circuit, while the deformation field produces a mapping between the expected shape and the reconstructed surface that provides a measure of deviation between the circuit design models and the real manufacturing process.

  8. Infrared Astronomical Satellite (IRAS) image reconstruction and restoration

    NASA Technical Reports Server (NTRS)

    Gonsalves, R. A.; Lyons, T. D.; Price, S. D.; Levan, P. D.; Aumann, H. H.

    1987-01-01

    IRAS sky mapping data is being reconstructed as images, and an entropy-based restoration algorithm is being applied in an attempt to improve spatial resolution in extended sources. Reconstruction requires interpolation of non-uniformly sampled data. Restoration is accomplished with an iterative algorithm which begins with an inverse filter solution and iterates on it with a weighted entropy-based spectral subtraction.

  9. Improved Diffusion Imaging through SNR-Enhancing Joint Reconstruction

    PubMed Central

    Haldar, Justin P.; Wedeen, Van J.; Nezamzadeh, Marzieh; Dai, Guangping; Weiner, Michael W.; Schuff, Norbert; Liang, Zhi-Pei

    2012-01-01

    Quantitative diffusion imaging is a powerful technique for the characterization of complex tissue microarchitecture. However, long acquisition times and limited signal-to-noise ratio (SNR) represent significant hurdles for many in vivo applications. This paper presents a new approach to reduce noise while largely maintaining resolution in diffusion weighted images, using a statistical reconstruction method that takes advantage of the high level of structural correlation observed in typical datasets. Compared to existing denoising methods, the proposed method performs reconstruction directly from the measured complex k-space data, allowing for Gaussian noise modeling and theoretical characterizations of the resolution and SNR of the reconstructed images. In addition, the proposed method is compatible with many different models of the diffusion signal (e.g., diffusion tensor modeling, q-space modeling, etc.). The joint reconstruction method can provide significant improvements in SNR relative to conventional reconstruction techniques, with a relatively minor corresponding loss in image resolution. Results are shown in the context of diffusion spectrum imaging tractography and diffusion tensor imaging, illustrating the potential of this SNR-enhancing joint reconstruction approach for a range of different diffusion imaging experiments. PMID:22392528

  10. Image reconstruction in transcranial photoacoustic computed tomography of the brain

    NASA Astrophysics Data System (ADS)

    Mitsuhashi, Kenji; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Photoacoustic computed tomography (PACT) holds great promise for transcranial brain imaging. However, the strong reflection, scattering, attenuation, and mode-conversion of photoacoustic waves in the skull pose serious challenges to establishing the method. The lack of an appropriate model of solid media in conventional PACT imaging models, which are based on the canonical scalar wave equation, causes a significant model mismatch in the presence of the skull and thus results in deteriorated reconstructed images. The goal of this study was to develop an image reconstruction algorithm that accurately models the skull and thereby ameliorates the quality of reconstructed images. The propagation of photoacoustic waves through the skull was modeled by a viscoelastic stress tensor wave equation, which was subsequently discretized by use of a staggered grid fourth-order finite-difference time-domain (FDTD) method. The matched adjoint of the FDTD-based wave propagation operator was derived for implementing a back-projection operator. Systematic computer simulations were conducted to demonstrate the effectiveness of the back-projection operator for reconstructing images in a realistic three-dimensional PACT brain imaging system. The results suggest that the proposed algorithm can successfully reconstruct images from transcranially-measured pressure data and readily be translated to clinical PACT brain imaging applications.

  11. Exponential filtering of singular values improves photoacoustic image reconstruction.

    PubMed

    Bhatt, Manish; Gutta, Sreedevi; Yalavarthy, Phaneendra K

    2016-09-01

    Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values was proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with the widely popular Tikhonov regularization, time reversal, and state-of-the-art least-squares QR-based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less sensitive to the choice of regularization parameter and did not incur any additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that standard Tikhonov filtering becomes an approximation to the proposed exponential filtering. PMID:27607501
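
    The idea of filtering singular values can be made concrete with the sketch below, which contrasts Tikhonov filter factors with one plausible exponential filter-factor form for a linear model y = Ax; the exponential form used here is an assumption made for this sketch and may differ from the exact definition in the paper.

```python
# Minimal sketch contrasting Tikhonov and exponential filtering of singular
# values for a linear model y = A x. The exponential filter-factor form below
# is an illustrative assumption, not necessarily the paper's definition.
import numpy as np

def filtered_svd_solve(A, y, lam, kind="tikhonov"):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if kind == "tikhonov":
        f = s**2 / (s**2 + lam**2)              # standard Tikhonov factors
    else:
        f = 1.0 - np.exp(-s**2 / lam**2)        # assumed exponential factors
    coeffs = f * (U.T @ y) / s
    return Vt.T @ coeffs

rng = np.random.default_rng(0)
A = rng.random((40, 30))
x_true = rng.random(30)
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_tik = filtered_svd_solve(A, y, lam=0.1, kind="tikhonov")
x_exp = filtered_svd_solve(A, y, lam=0.1, kind="exponential")
```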

  12. Digital infrared thermal imaging following anterior cruciate ligament reconstruction.

    PubMed

    Barker, Lauren E; Markowski, Alycia M; Henneman, Kimberly

    2012-03-01

    This case describes the selective use of digital infrared thermal imaging for a 48-year-old woman who was being treated by a physical therapist following left anterior cruciate ligament (ACL) reconstruction with a semitendinosus autograft. PMID:22383168

  13. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. For potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  14. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. For potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  15. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  16. Advanced photoacoustic image reconstruction using the k-Wave toolbox

    NASA Astrophysics Data System (ADS)

    Treeby, B. E.; Jaros, J.; Cox, B. T.

    2016-03-01

    Reconstructing images from measured time domain signals is an essential step in tomography-mode photoacoustic imaging. However, in practice, there are many complicating factors that make it difficult to obtain high-resolution images. These include incomplete or undersampled data, filtering effects, acoustic and optical attenuation, and uncertainties in the material parameters. Here, the processing and image reconstruction steps routinely used by the Photoacoustic Imaging Group at University College London are discussed. These include correction for acoustic and optical attenuation, spatial resampling, material parameter selection, image reconstruction, and log compression. The effect of each of these steps is demonstrated using a representative in vivo dataset. All of the algorithms discussed form part of the open-source k-Wave toolbox (available from http://www.k-wave.org).

  17. Application of mathematical modelling methods for acoustic images reconstruction

    NASA Astrophysics Data System (ADS)

    Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.

    2016-04-01

    The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing signals received from an antenna array. We have proven that the multiplicative method gives better resolution. The study includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We have shown that the analytical estimation method reduces the image reconstruction time when a linear antenna array is used.
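
    The additive-versus-multiplicative distinction can be illustrated at a single focal point, as in the sketch below; the element samples and the particular multiplicative rule (a signed geometric-mean product) are assumptions made for this sketch rather than the exact processing described in the article.

```python
# Minimal sketch contrasting additive and multiplicative combination of
# time-shifted element signals at one focal point. The signals and the
# specific multiplicative rule are illustrative assumptions only.
import numpy as np

def combine_at_focus(delayed_samples, mode="additive"):
    """delayed_samples: per-element values already delayed to the focus."""
    s = np.asarray(delayed_samples, dtype=float)
    if mode == "additive":
        return s.sum()                            # classic delay-and-sum SAFT
    # Multiplicative combination: coherent samples reinforce each other,
    # incoherent ones suppress the result, sharpening the focal response.
    sign = np.prod(np.sign(s))
    return sign * np.abs(np.prod(s)) ** (1.0 / s.size)

samples = np.array([0.9, 1.1, 1.0, 0.95])         # coherent example
print(combine_at_focus(samples, "additive"),
      combine_at_focus(samples, "multiplicative"))
```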

  18. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  19. Super-Resolution Image Reconstruction Applied to Medical Ultrasound

    NASA Astrophysics Data System (ADS)

    Ellis, Michael

    Ultrasound is the preferred imaging modality for many diagnostic applications due to its real-time image reconstruction and low cost. Nonetheless, conventional ultrasound is not used in many applications because of limited spatial resolution and soft tissue contrast. Most commercial ultrasound systems reconstruct images using a simple delay-and-sum architecture on receive, which is fast and robust but does not utilize all information available in the raw data. Recently, more sophisticated image reconstruction methods have been developed that make use of far more information in the raw data to improve resolution and contrast. One such method is the Time-Domain Optimized Near-Field Estimator (TONE), which employs a maximum a priori estimation to solve a highly underdetermined problem, given a well-defined system model. TONE has been shown to significantly improve both the contrast and resolution of ultrasound images when compared to conventional methods. However, TONE's lack of robustness to variations from the system model and extremely high computational cost hinder it from being readily adopted in clinical scanners. This dissertation aims to reduce the impact of TONE's shortcomings, transforming it from an academic construct to a clinically viable image reconstruction algorithm. By altering the system model from a collection of individual hypothetical scatterers to a collection of weighted, diffuse regions, dTONE is able to achieve much greater robustness to modeling errors. A method for efficient parallelization of dTONE is presented that reduces reconstruction time by more than an order of magnitude with little loss in image fidelity. An alternative reconstruction algorithm, called qTONE, is also developed and is able to reduce reconstruction times by another two orders of magnitude while simultaneously improving image contrast. Each of these methods for improving TONE are presented, their limitations are explored, and all are used in concert to reconstruct in

  20. Reconstruction of images from radiofrequency electron paramagnetic resonance spectra.

    PubMed

    Smith, C M; Stevens, A D

    1994-12-01

    This paper discusses methods for obtaining image reconstructions from electron paramagnetic resonance (EPR) spectra which constitute object projections. An automatic baselining technique is described that treats each spectrum consistently, rotating the non-horizontal baselines caused by stray magnetic effects onto the horizontal axis. The convolved backprojection method is described for both two- and three-dimensional reconstruction, and the effect of cut-off frequency on the reconstruction is illustrated. A slower, indirect, iterative method, which does a non-linear fit to the projection data, is shown to give a far smoother reconstructed image when the method of maximum entropy is used to determine the value of the final residual sum of squares. Although this requires more computing time than the convolved backprojection method, it is more flexible and overcomes the problem of numerical instability encountered in deconvolution. Images from phantom samples in vitro are discussed. The spectral data for these have been accumulated quickly and have a low signal-to-noise ratio. The results show that as few as 16 spectra can still be processed to give an image. Artifacts in the image due to a small number of projections using the convolved backprojection reconstruction method can be removed by applying a threshold, i.e. only plotting contours higher than a given value. These artifacts are not present in an image which has been reconstructed by the maximum entropy technique. At present these techniques are being applied directly to in vivo studies.
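
    As a minimal illustration of the convolved (filtered) backprojection step, the sketch below reconstructs a simple phantom from 16 projections using scikit-image's radon/iradon as stand-in operators and applies a threshold to suppress the streak artifacts that a small number of projections produces; the phantom and parameters are illustrative.

```python
# Minimal sketch of convolved (filtered) backprojection from few projections.
# scikit-image's radon/iradon stand in for the projection and reconstruction
# operators; the phantom and the 16 projection angles are illustrative.
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                        # simple square phantom

angles = np.linspace(0.0, 180.0, 16, endpoint=False)
sinogram = radon(phantom, theta=angles)            # 16 projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")

# Thresholding (plotting only contours above a value) suppresses the streak
# artifacts caused by the small number of projections.
recon_thresholded = np.where(recon > 0.3, recon, 0.0)
```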

  1. Functional imaging of murine hearts using accelerated self-gated UTE cine MRI.

    PubMed

    Motaal, Abdallah G; Noorman, Nils; de Graaf, Wolter L; Hoerr, Verena; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2015-01-01

    We introduce a fast protocol for ultra-short echo time (UTE) Cine magnetic resonance imaging (MRI) of the beating murine heart. The sequence involves a self-gated UTE with golden-angle radial acquisition and compressed sensing reconstruction. The self-gated acquisition is performed asynchronously with the heartbeat, resulting in a randomly undersampled kt-space that facilitates compressed sensing reconstruction. The sequence was tested in 4 healthy rats and 4 rats with chronic myocardial infarction, approximately 2 months after surgery. As a control, a non-accelerated self-gated multi-slice FLASH sequence with an echo time (TE) of 2.76 ms, 4.5 signal averages, a matrix of 192 × 192, and an acquisition time of 2 min 34 s per slice was used to obtain Cine MRI with 15 frames per heartbeat. Non-accelerated UTE MRI was performed with TE = 0.29 ms, a reconstruction matrix of 192 × 192, and an acquisition time of 3 min 47 s per slice for 3.5 averages. Accelerated imaging with 2×, 4× and 5× undersampled kt-space data was performed with 1 min, 30 s and 15 s acquisitions, respectively. UTE Cine images from up to 5× undersampled kt-space data could be successfully reconstructed using a compressed sensing algorithm. In contrast to the FLASH Cine images, flow artifacts in the UTE images were nearly absent due to the short echo time, simplifying segmentation of the left ventricular (LV) lumen. LV functional parameters derived from the control and the accelerated Cine movies were statistically identical.

  2. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  3. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  4. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a carefully hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by a factor of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.

  5. Acceleration of fluoro-CT reconstruction for a mobile C-Arm on GPU and FPGA hardware: a simulation study

    NASA Astrophysics Data System (ADS)

    Xue, Xinwei; Cheryauka, Arvi; Tubbs, David

    2006-03-01

    CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. Accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and a solution based on FPGA technology) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with levels of noise and clarity of feature similar to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.

  6. Accelerating dual cardiac phase images using undersampled radial phase encoding trajectories.

    PubMed

    Letelier, Karis; Urbina, Jesus; Andía, Marcelo; Tejos, Cristián; Irarrazaval, Pablo; Prieto, Claudia; Uribe, Sergio

    2016-09-01

    A three-dimensional dual-cardiac-phase (3D-DCP) scan has been proposed to acquire two data sets of the whole heart and great vessels during the end-diastolic and end-systolic cardiac phases in a single free-breathing scan. This method has shown accurate assessment of cardiac anatomy and function but is limited by long acquisition times. This work proposes to accelerate the acquisition and reconstruction of 3D-DCP scans by exploiting redundant information of the outer k-space regions of both cardiac phases. This is achieved using a modified radial-phase-encoding trajectory and gridding reconstruction with uniform coil combination. The end-diastolic acquisition trajectory was angularly shifted with respect to the end-systolic phase. Initially, a fully-sampled 3D-DCP scan was acquired to determine the optimal percentage of the outer k-space data that can be combined between cardiac phases. Thereafter, prospectively undersampled data were reconstructed based on this percentage. As gold standard images, the undersampled data were also reconstructed using iterative SENSE. To validate the method, image quality assessments and a cardiac volume analysis were performed. The proposed method was tested in thirteen healthy volunteers (mean age, 30 years). Prospectively undersampled data (R=4) reconstructed with 50% combination led to high-quality images. There were no significant differences in the image quality and in the cardiac volume analysis between our method and iterative SENSE. In addition, the proposed approach reduced the reconstruction time from 40 min to 1 min. In conclusion, the proposed method obtains 3D-DCP scans with an image quality comparable to those reconstructed with iterative SENSE, and within a clinically acceptable reconstruction time. PMID:27067473

  7. Accelerated speckle imaging with the ATST visible broadband imager

    NASA Astrophysics Data System (ADS)

    Wöger, Friedrich; Ferayorni, Andrew

    2012-09-01

    The Advanced Technology Solar Telescope (ATST), a 4-meter-class telescope for observations of the solar atmosphere currently in its construction phase, will generate data at rates of the order of 10 TB/day with its state-of-the-art instrumentation. The high-priority ATST Visible Broadband Imager (VBI) instrument alone will create two data streams with a bandwidth of 960 MB/s each. Because of the related data handling issues, these data will be post-processed with speckle interferometry algorithms in near-real time at the telescope using the cost-effective Graphics Processing Unit (GPU) technology that is supported by the ATST Data Handling System. In this contribution, we lay out the VBI-specific approach to its image processing pipeline, put this into the context of the underlying ATST Data Handling System infrastructure, and finally describe the details of how the algorithms were redesigned to exploit data parallelism in the speckle image reconstruction. An algorithm redesign is often required to efficiently speed up an application using GPU technology; we have chosen NVIDIA's CUDA language as the basis for our implementation. We present preliminary results of the algorithm performance using our test facilities and, based on these results, give a conservative estimate of the requirements for a full system that could achieve near-real-time performance at the ATST.

  8. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
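
    A deliberately simplified, hypothetical stand-in for the two elementary steps described above: local derivatives flag edge pixels (in place of the fuzzy-rule-based edge detection), and only pixels without a detected edge are smoothed (in place of the fuzzy penalization). Border handling is periodic for brevity, and the threshold is an illustrative parameter.

      import numpy as np

      def edge_preserving_smooth(img, edge_thresh=0.1, n_iter=10):
          """Simplified stand-in: detect edges from local derivatives, smooth only non-edge pixels."""
          out = img.astype(float).copy()
          for _ in range(n_iter):
              gy, gx = np.gradient(out)                  # edge-detection step (crude fuzzy-rule substitute)
              edge = np.hypot(gx, gy) > edge_thresh
              mean4 = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                       np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
              out = np.where(edge, out, 0.5 * (out + mean4))   # penalize/smooth non-edge pixels only
          return out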

  9. Compensation for air voids in photoacoustic computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Li, Lei; Wang, Lihong V.; Anastasio, Mark A.

    2016-03-01

    Most image reconstruction methods in photoacoustic computed tomography (PACT) assume that the acoustic properties of the object and the surrounding medium are homogeneous. This can lead to strong artifacts in the reconstructed images when there are significant variations in sound speed or density. Air voids represent a particular challenge due to the severity of the differences between the acoustic properties of air and water. In whole-body small animal imaging, the presence of air voids in the lungs, stomach, and gastrointestinal system can limit image quality over large regions of the object. Iterative reconstruction methods based on the photoacoustic wave equation can account for these acoustic variations, leading to improved resolution, improved contrast, and a reduction in the number of imaging artifacts. However, the strong acoustic heterogeneities can lead to instability or errors in the numerical wave solver. Here, the impact of air voids on PACT image reconstruction is investigated, and procedures for their compensation are proposed. The contributions of sound speed and density variations to the numerical stability of the wave solver are considered, and a novel approach for mitigating the impact of air voids while reducing the computational burden of image reconstruction is identified. These results are verified by application to an experimental phantom.

  10. Real-time imaging with radial GRAPPA: Implementation on a Heterogeneous Architecture for Low-Latency Reconstructions

    PubMed Central

    Saybasili, Haris; Herzka, Daniel A.; Seiberlich, Nicole; Griswold, Mark A.

    2014-01-01

    The combination of non-Cartesian trajectories with parallel MRI makes it possible to attain unmatched acceleration rates compared to traditional Cartesian MRI during real-time imaging. However, computationally demanding reconstructions of such imaging techniques, such as k-space domain radial generalized auto-calibrating partially parallel acquisitions (radial GRAPPA) and image domain conjugate gradient sensitivity encoding (CG-SENSE), lead to longer reconstruction times and unacceptable latency for online real-time MRI on conventional computational hardware. Though CG-SENSE has been shown to work with low latency using a general-purpose graphics processing unit (GPU), to the best of our knowledge, no such effort has been made for radial GRAPPA. Radial GRAPPA reconstruction, which is robust even with highly undersampled acquisitions, is not iterative, requiring significant computation only during initial calibration while achieving good image quality for low-latency imaging applications. In this work, we present a very fast, low-latency reconstruction framework based on a heterogeneous system using multi-core CPUs and GPUs. We demonstrate an implementation of radial GRAPPA that permits reconstruction times on par with or faster than acquisition of highly accelerated datasets in both cardiac and dynamic musculoskeletal imaging scenarios. Acquisition and reconstruction times are reported. PMID:24690453

  11. Real-time imaging with radial GRAPPA: Implementation on a heterogeneous architecture for low-latency reconstructions.

    PubMed

    Saybasili, Haris; Herzka, Daniel A; Seiberlich, Nicole; Griswold, Mark A

    2014-07-01

    The combination of non-Cartesian trajectories with parallel MRI makes it possible to attain unmatched acceleration rates compared to traditional Cartesian MRI during real-time imaging. However, computationally demanding reconstructions of such imaging techniques, such as k-space domain radial generalized auto-calibrating partially parallel acquisitions (radial GRAPPA) and image domain conjugate gradient sensitivity encoding (CG-SENSE), lead to longer reconstruction times and unacceptable latency for online real-time MRI on conventional computational hardware. Though CG-SENSE has been shown to work with low latency using a general-purpose graphics processing unit (GPU), to the best of our knowledge, no such effort has been made for radial GRAPPA. Radial GRAPPA reconstruction, which is robust even with highly undersampled acquisitions, is not iterative, requiring significant computation only during initial calibration while achieving good image quality for low-latency imaging applications. In this work, we present a very fast, low-latency reconstruction framework based on a heterogeneous system using multi-core CPUs and GPUs. We demonstrate an implementation of radial GRAPPA that permits reconstruction times on par with or faster than acquisition of highly accelerated datasets in both cardiac and dynamic musculoskeletal imaging scenarios. Acquisition and reconstruction times are reported.

  12. Geoaccurate three-dimensional reconstruction via image-based geometry

    NASA Astrophysics Data System (ADS)

    Walvoord, Derek J.; Rossi, Adam J.; Paul, Bradley D.; Brower, Bernie; Pellechia, Matthew F.

    2013-05-01

    Recent technological advances in computing capabilities and persistent surveillance systems have led to increased focus on new methods of exploiting geospatial data, bridging traditional photogrammetric techniques and state-of-the-art multiple view geometry methodology. The structure from motion (SfM) problem in Computer Vision addresses scene reconstruction from uncalibrated cameras, and several methods exist to remove the inherent projective ambiguity. However, the reconstruction remains in an arbitrary world coordinate frame without knowledge of its relationship to a fixed earth-based coordinate system. This work presents a novel approach for obtaining geoaccurate image-based 3-dimensional reconstructions in the absence of ground control points by using a SfM framework and the full physical sensor model of the collection system. Absolute position and orientation information provided by the imaging platform can be used to reconstruct the scene in a fixed world coordinate system. Rather than triangulating pixels from multiple image-to-ground functions, each with its own random error, the relative reconstruction is computed via image-based geometry, i.e., geometry derived from image feature correspondences. In other words, the geolocation accuracy is improved using the relative distances provided by the SfM reconstruction. Results from the Exelis Wide-Area Motion Imagery (WAMI) system are provided to discuss conclusions and areas for future work.

  13. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves when lower beta values are used. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
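
    One generic way to estimate an MTF curve from a transverse profile through a reconstructed plane source is to treat the profile as a line spread function and Fourier transform it. The sketch below is such a generic estimate, not the exact processing used in the study.

      import numpy as np

      def mtf_from_lsf(profile, pixel_size_mm):
          """Estimate an MTF curve from a 1-D line spread function (LSF).

          profile       : transverse profile through the reconstructed plane source
          pixel_size_mm : pixel spacing of the reconstructed image in mm
          Returns (spatial frequencies in cycles/mm, normalized MTF values)."""
          lsf = profile - profile.min()          # remove the background offset
          lsf = lsf / lsf.sum()                  # normalize the area to 1
          mtf = np.abs(np.fft.rfft(lsf))
          mtf = mtf / mtf[0]                     # MTF(0) = 1 by definition
          freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)
          return freqs, mtf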

  14. Regularized image reconstruction algorithms for dual-isotope myocardial perfusion SPECT (MPS) imaging using a cross-tracer prior.

    PubMed

    He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C

    2011-06-01

    In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values on both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defect, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the

  15. Probe and object function reconstruction in incoherent stem imaging

    SciTech Connect

    Nellist, P.D.; Pennycook, S.J.

    1996-09-01

    Using the phase-object approximation it is shown how an annular dark-field (ADF) detector in a scanning transmission electron microscope (STEM) leads to an image which can be described by an incoherent model. The point spread function is found to be simply the illuminating probe intensity. An important consequence of this is that there is no phase problem in the imaging process, which allows various image processing methods to be applied directly to the image intensity data. Using an image of GaAs<110>, the probe intensity profile is reconstructed, confirming the existence of a 1.3 Å probe in a 300 kV STEM. It is shown that simply deconvolving this reconstructed probe from the image data does not improve its interpretability because the dominant effects of the imaging process arise simply from the restricted resolution of the microscope. However, use of the reconstructed probe in a maximum entropy reconstruction is demonstrated, which allows information beyond the resolution limit to be restored and does allow improved image interpretation.

  16. Reconstructing cosmic acceleration from modified and nonminimal gravity: The Yang-Mills case

    SciTech Connect

    Elizalde, E.; Lopez-Revelles, A. J.

    2010-09-15

    A variant of the accelerating cosmology reconstruction program is developed for f(R) gravity and for a modified Yang-Mills/Maxwell theory. Reconstruction schemes in terms of e-foldings and by using an auxiliary scalar field are developed and carefully compared, for the case of f(R) gravity. An example of a model with a transient phantom behavior without real matter is explicitly discussed in both schemes. Further, the two reconstruction schemes are applied to the more physically interesting case of a Yang-Mills/Maxwell theory, again with explicit examples. Detailed comparison of the two schemes of reconstruction is presented also for this theory. It seems to support, as well, physical nonequivalence of the two frames.

  17. Reconstruction of indoor scene from a single image

    NASA Astrophysics Data System (ADS)

    Wu, Di; Li, Hongyu; Zhang, Lin

    2015-03-01

    Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover the 3D modelling of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and the plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With phase congruency and the Harris detector, the line segments can be detected exactly, which is a prerequisite. Perspective transform is defined as a reliable method whereby the points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image although the available depth information is limited.

  18. Few-view image reconstruction with dual dictionaries.

    PubMed

    Lu, Yang; Zhao, Jun; Wang, Ge

    2012-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV.

  19. Few-view image reconstruction with dual dictionaries

    PubMed Central

    Lu, Yang; Zhao, Jun; Wang, Ge

    2011-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. PMID:22155989

  20. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changes in illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparsity constraints and become more separable. The K-SVD algorithm is employed to find the sparse representations and corresponding dictionaries. The data can be reconstructed from its sparse representations and the positive and/or negative dictionaries, and classification can be achieved by comparing the reconstruction errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.
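
    The classification rule described above (compare reconstruction errors against a positive and a negative dictionary) can be sketched as follows. A basic orthogonal matching pursuit stands in for the sparse coding, and all names and the sparsity level are illustrative.

      import numpy as np

      def omp(D, y, sparsity):
          """Basic orthogonal matching pursuit: sparse-code y over dictionary D (atoms as columns)."""
          residual, support = y.copy(), []
          for _ in range(sparsity):
              k = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with the residual
              if k not in support:
                  support.append(k)
              coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coeffs
          x = np.zeros(D.shape[1])
          x[support] = coeffs
          return x

      def classify_by_reconstruction_error(y, D_pos, D_neg, sparsity=5):
          """Label a feature vector by whichever dictionary reconstructs it with smaller error."""
          err_pos = np.linalg.norm(y - D_pos @ omp(D_pos, y, sparsity))
          err_neg = np.linalg.norm(y - D_neg @ omp(D_neg, y, sparsity))
          return "positive" if err_pos < err_neg else "negative"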

  1. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  2. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    PubMed

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.

  3. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.

  4. Tomographic mesh generation for OSEM reconstruction of SPECT images

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Yu, Bo; Vogelsang, Levon; Krol, Andrzej; Xu, Yuesheng; Hu, Xiaofei; Feiglin, David

    2009-02-01

    To improve the quality of OSEM SPECT reconstruction in the mesh domain, we implemented an adaptive mesh generation method that produces a tomographic mesh consisting of triangular elements whose size and density are commensurate with the geometric detail of the objects. Node density and element size change smoothly as a function of distance from the edges and of edge curvature, without creation of 'bad' elements. Tomographic performance of mesh-based OSEM reconstruction is controlled by the tomographic mesh structure, i.e. the node density distribution, which in turn is ruled by the number of key points on the boundaries. A greedy algorithm is used to influence the distribution of nodes on the boundaries. The relationship between tomographic mesh properties and OSEM reconstruction quality has been investigated. We conclude that by selecting an adequate number of key points, one can produce a tomographic mesh with the lowest number of nodes sufficient to provide the desired quality of reconstructed images, appropriate for the imaging system properties.

  5. Prospective regularization design in prior-image-based reconstruction.

    PubMed

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-12-21

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  6. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  7. Three-dimensional image reconstruction for electrical impedance tomography.

    PubMed

    Kleinermann, F; Avis, N J; Judah, S K; Barber, D C

    1996-11-01

    Very little work has been conducted on three-dimensional aspects of electrical impedance tomography (EIT), partly due to the increased computational complexity over the two-dimensional aspects of EIT. Nevertheless, extending EIT to three-dimensional data acquisition and image reconstruction may afford significant advantages such as an increase in the size of the independent data set and improved spatial resolution. However, considerable challenges are associated with the software aspects of three-dimensional EIT systems due to the requirement for accurate three-dimensional forward problem modelling and the derivation of three-dimensional image reconstruction algorithms. This paper outlines the work performed to date to derive a three-dimensional image reconstruction algorithm for EIT based on the inversion of the sensitivity matrix approach for a finite right circular cylinder. A comparison in terms of the singular-value spectra and the singular vectors between the sensitivity matrices for a three-dimensional cylinder and a two-dimensional disc has been performed. This comparison shows that the three-dimensional image reconstruction algorithm recruits more central information at lower condition numbers than the two-dimensional image reconstruction algorithm.
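
    A minimal sketch of one common way to realize the sensitivity-matrix inversion mentioned above: a linearized reconstruction using a truncated SVD of the sensitivity (Jacobian) matrix. The truncation rule and variable names are assumptions for illustration, not the authors' specific algorithm.

      import numpy as np

      def truncated_svd_reconstruction(S, dv, n_components):
          """Linearized EIT reconstruction: solve S @ dsigma ~ dv with a truncated SVD.

          S            : sensitivity (Jacobian) matrix, shape (n_measurements, n_voxels)
          dv           : boundary-voltage changes, shape (n_measurements,)
          n_components : number of singular values retained (regularization by truncation)"""
          U, s, Vt = np.linalg.svd(S, full_matrices=False)
          k = min(n_components, s.size)
          # Pseudo-inverse built from the k largest singular values only.
          return Vt[:k].T @ ((U[:, :k].T @ dv) / s[:k])

    Inspecting the singular-value spectrum of S, as the abstract describes, indicates how many components can be retained at a given condition number before noise dominates.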

  8. An adaptive filtered back-projection for photoacoustic image reconstruction

    SciTech Connect

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-05-15

    Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing

  9. An adaptive filtered back-projection for photoacoustic image reconstruction

    PubMed Central

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-01-01

    Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing

  10. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  11. Reconstruction of electrostatic force microscopy images

    NASA Astrophysics Data System (ADS)

    Strassburg, E.; Boag, A.; Rosenwaks, Y.

    2005-08-01

    An efficient algorithm to restore the actual surface potential image from Kelvin probe force microscopy measurements of semiconductors is presented. The three-dimensional potential of the tip-sample system is calculated using an integral equation-based boundary element method combined with modeling the semiconductor by an equivalent dipole-layer and image-charge model. The derived point spread function of the measuring tip is then used to restore the actual surface potential from the measured image, using noise filtration and deconvolution algorithms. The model is then used to restore high-resolution Kelvin probe microscopy images of semiconductor surfaces.
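
    The restoration step (deconvolving the measured map with the derived tip point spread function under noise filtration) can be sketched as a Wiener-type filter in the Fourier domain. The regularization constant and names below are illustrative, not the paper's exact filter.

      import numpy as np

      def wiener_deconvolve(measured, psf, noise_to_signal=1e-2):
          """Restore an image blurred by a known, centred PSF with a Wiener-type filter.

          noise_to_signal controls noise amplification (the noise-filtration part)."""
          H = np.fft.fft2(np.fft.ifftshift(psf))      # transfer function of the measuring tip
          G = np.fft.fft2(measured)
          W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
          return np.real(np.fft.ifft2(W * G))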

  12. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction yielding 34-54 times speed-up compared with C++ implementation. PMID:22392604

  13. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue alternate methodologies like the model observers have been proposed recently to allow a quantification of a usually task-dependent image quality metric [1]. As an alternative we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.

  14. Respiratory motion correction in emission tomography image reconstruction.

    PubMed

    Reyes, Mauricio; Malandain, Grégoire; Koulibaly, Pierre Malick; González Ballester, Miguel A; Darcourt, Jacques

    2005-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations and imprecise diagnosis. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode-data-based techniques and others have been tested, with improvements in the spatial activity distribution of lung lesions, but with the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion correction directly into the image reconstruction process, without any additional acquisition protocol consideration. To this end, we propose an extension to the Maximum Likelihood Expectation Maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by the respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
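
    A toy sketch of how a motion model can be folded directly into an MLEM reconstruction, in the spirit described above: each gate's system matrix is the product of a projector and a (here hypothetical, dense) warping matrix, so motion is compensated inside the update rather than applied afterwards. The dense matrices are for illustration only; a real implementation would use operator-based projectors and warps.

      import numpy as np

      def mlem_with_motion(projections, A, warps, n_iter=20, eps=1e-12):
          """MLEM with a motion model composed into the system matrix (toy, dense version).

          projections : list of measured projection vectors, one per respiratory gate
          A           : projector (n_bins, n_voxels) for the reference position
          warps       : list of (n_voxels, n_voxels) matrices mapping the reference image
                        to each gate's deformed position (the motion model)"""
          systems = [A @ W for W in warps]                 # gate-specific system matrices
          sens = sum(S.sum(axis=0) for S in systems)       # sensitivity image, sum of S^T 1 over gates
          x = np.ones(A.shape[1])
          for _ in range(n_iter):
              back = np.zeros_like(x)
              for y, S in zip(projections, systems):
                  back += S.T @ (y / (S @ x + eps))        # backproject measured/expected ratios
              x *= back / (sens + eps)
          return x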

  15. Image Reconstruction for a Partially Collimated Whole Body PET Scanner.

    PubMed

    Alessio, Adam M; Schmitz, Ruth E; Macdonald, Lawrence R; Wollenweber, Scott D; Stearns, Charles W; Ross, Steven G; Ganin, Alex; Lewellen, Thomas K; Kinahan, Paul E

    2008-06-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.

  16. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.

    PubMed

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both the magnitude and phase of the sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate the acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high-field MRI.

  17. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both the magnitude and phase of the sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate the acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high-field MRI.

  18. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of a convex optimization that recovers the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.
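
    For illustration, the convex recovery step (an l1-regularized least-squares problem) can be sketched with plain iterative soft thresholding (ISTA). The measurement matrix here is a generic, real-valued stand-in, not the ISAR-specific operator of the paper, and the step-size and penalty values are illustrative.

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ista(A, y, lam=0.05, n_iter=200):
          """Recover a sparse vector x from y ~ A @ x by minimizing
          0.5*||A x - y||^2 + lam*||x||_1 with iterative soft thresholding."""
          L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the data-fit gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)
              x = soft_threshold(x - grad / L, lam / L)
          return x

    Complex-valued radar data would use a magnitude-based (complex) soft threshold instead of the real-valued one shown here.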

  19. Ultra-Fast Image Reconstruction of Tomosynthesis Mammography Using GPU

    PubMed Central

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-01-01

    Digital Breast Tomosynthesis (DBT) is a technology that creates three-dimensional (3D) images of breast tissue. Tomosynthesis mammography detects lesions that are not detectable with other imaging systems. If the image reconstruction time is on the order of seconds, Tomosynthesis systems can be used to perform Tomosynthesis-guided interventional procedures. This research was designed to study an ultra-fast image reconstruction technique for Tomosynthesis mammography systems using the Graphics Processing Unit (GPU). First, projections of Tomosynthesis mammography were simulated. In order to produce the Tomosynthesis projections, a 3D breast phantom was designed from empirical data, based on MRI data in its natural form. Projections were then created from the 3D breast phantom. The FBP-based image reconstruction algorithm was programmed in C++ in two versions, one using the central processing unit (CPU) and one using the Graphics Processing Unit (GPU), and the image reconstruction time was measured for both implementations. PMID:26171373

  20. Image reconstruction for hybrid true-color micro-CT.

    PubMed

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2012-06-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid "true-color" micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a "color diffusion" phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size for a given ROI, further reducing system cost and radiation dose.
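
    A minimal sketch, assuming at least three energy channels, of how per-channel reconstructions might be mapped into a color image with principal component analysis, as the abstract describes. The per-channel rescaling to [0, 1] is an illustrative choice, not the authors' color mapping.

      import numpy as np

      def spectral_to_color(channel_images):
          """Map energy-resolved reconstructions (n_channels >= 3) to an RGB image via PCA.

          channel_images : array of shape (n_channels, H, W), one reconstruction per energy bin
          Returns an (H, W, 3) array rescaled to [0, 1]."""
          n_ch, H, W = channel_images.shape
          X = channel_images.reshape(n_ch, -1).T           # (n_pixels, n_channels)
          X = X - X.mean(axis=0)                           # centre each channel
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          rgb = (X @ Vt[:3].T).reshape(H, W, 3)            # scores on the first 3 principal components
          rgb -= rgb.min(axis=(0, 1))                      # illustrative per-channel rescaling
          rgb /= rgb.max(axis=(0, 1)) + 1e-12
          return rgb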

  1. Alpha image reconstruction (AIR): A new iterative CT image reconstruction approach using voxel-wise alpha blending

    SciTech Connect

    Hofmann, Christian; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc

    2014-06-15

    Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus of how to best achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function, which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off because the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus loss of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts with generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast
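
    The blending step at the heart of AIR can be sketched as a voxel-wise convex combination of two basis images. How the alpha image itself is reconstructed from the raw data is the substance of the paper and is not reproduced here; the function below only illustrates the combination.

      import numpy as np

      def alpha_blend(img_sharp, img_smooth, alpha):
          """Voxel-wise blend of two basis images with complementary properties.

          img_sharp  : high-resolution (noisier) basis reconstruction
          img_smooth : low-noise (smoother) basis reconstruction
          alpha      : weighting image in [0, 1], same shape as the basis images"""
          alpha = np.clip(alpha, 0.0, 1.0)
          return alpha * img_sharp + (1.0 - alpha) * img_smooth

    An alpha image that is large near strong gradients and small in flat regions would give the intended behavior: resolution preserved at edges, noise suppressed in homogeneous areas.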

  2. Local fingerprint image reconstruction based on gabor filtering

    NASA Astrophysics Data System (ADS)

    Bakhtiari, Somayeh; Agaian, Sos S.; Jamshidi, Mo

    2012-06-01

    In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers when the input image has poor quality. This results from spurious estimates of frequency and orientation by classical approaches, particularly in the scratch regions. In both techniques presented in this paper, the scratch marks are first identified using a reliability image calculated from the gradient images. The first algorithm is based on an inpainting technique and the second method employs two different kernels for the scratch and the non-scratch parts of the image to calculate the gradient images. The simulation results show that both approaches allow the actual information of the image to be preserved while correctly connecting discontinuities by approximating the orientation matrix more faithfully.

  3. A rapid reconstruction algorithm for three-dimensional scanning images

    NASA Astrophysics Data System (ADS)

    Xiang, Jiying; Wu, Zhen; Zhang, Ping; Huang, Dexiu

    1998-04-01

    A `simulated fluorescence' three-dimensional reconstruction algorithm, which is especially suitable for confocal images of partially transparent biological samples, is proposed in this paper. To make the retina projection of the object reappear and to avoid excessive memory consumption, the original image is rotated and compressed before processing. A left image and a right image are mixed in different colors to enhance the sense of stereo. The details originally hidden in deep layers are well exhibited with the aid of an `auxiliary directional source'. In addition, the time consumption is greatly reduced compared with conventional methods such as `ray tracing'. The operation of the algorithm is illustrated by a group of reconstructed images.

  4. Improved image quality and computation reduction in 4-D reconstruction of cardiac-gated SPECT images.

    PubMed

    Narayanan, M V; King, M A; Wernick, M N; Byrne, C L; Soares, E J; Pretorius, P H

    2000-05-01

    Spatiotemporal reconstruction of cardiac-gated SPECT images permits us to obtain valuable information related to cardiac function. However, the task of reconstructing this four-dimensional (4-D) data set is computation intensive. Typically, these studies are reconstructed frame-by-frame: a nonoptimal approach because temporal correlations in the signal are not accounted for. In this work, we show that the compression and signal decorrelation properties of the Karhunen-Loève (KL) transform may be used to greatly simplify the spatiotemporal reconstruction problem. The gated projections are first KL transformed in the temporal direction. This results in a sequence of KL-transformed projection images for which the signal components are uncorrelated along the time axis. As a result, the 4-D reconstruction task is simplified to a series of three-dimensional (3-D) reconstructions in the KL domain. The reconstructed KL components are subsequently inverse KL transformed to obtain the entire spatiotemporal reconstruction set. Our simulation and clinical results indicate that KL processing provides image sequences that are less noisy than are conventional frame-by-frame reconstructions. Additionally, by discarding high-order KL components that are dominated by noise, we can achieve savings in computation time because fewer reconstructions are needed in comparison to conventional frame-by-frame reconstructions.
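
    To make the processing chain described above concrete, the sketch below (an illustrative reading of the abstract, not the authors' code) KL-transforms the gated projections along the time axis, reconstructs only the leading components with any user-supplied 3-D reconstruction routine, and inverse-transforms the result back to a gated image series. The array layout, the reconstruct_3d callback and the keep truncation parameter are assumptions made for illustration.

    import numpy as np

    def kl_domain_reconstruction(gated_proj, reconstruct_3d, keep=None):
        """4-D gated reconstruction via a temporal KL transform (conceptual sketch)."""
        n_gates = gated_proj.shape[0]
        flat = gated_proj.reshape(n_gates, -1)           # gates x (angles*rows*cols)

        # Temporal KL basis from the gate-by-gate covariance of the projections.
        cov = np.cov(flat)                               # (n_gates, n_gates)
        eigval, eigvec = np.linalg.eigh(cov)             # eigenvalues in ascending order
        basis = eigvec[:, np.argsort(eigval)[::-1]]      # columns = KL basis vectors

        # Forward KL transform along time gives decorrelated projection sets.
        kl_proj = (basis.T @ flat).reshape(gated_proj.shape)

        # Reconstruct only the signal-dominated components; noisy ones are dropped.
        keep = n_gates if keep is None else keep
        kl_recon = np.stack([reconstruct_3d(kl_proj[k]) for k in range(keep)])

        # Inverse KL transform recovers the full spatiotemporal image series.
        series = basis[:, :keep] @ kl_recon.reshape(keep, -1)
        return series.reshape((n_gates,) + kl_recon.shape[1:])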

  5. A methodology to event reconstruction from trace images.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre

    2015-03-01

    The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous one. However, the methodology is not linear but rather a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally

  6. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  7. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  8. Joint model of motion and anatomy for PET image reconstruction

    SciTech Connect

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-12-15

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.

  9. Charged particle velocity map image reconstruction with one-dimensional projections of spherical functions

    SciTech Connect

    Gerber, Thomas; Liu Yuzhu; Knopp, Gregor; Hemberger, Patrick; Bodi, Andras; Radi, Peter; Sych, Yaroslav

    2013-03-15

    Velocity map imaging (VMI) is used in mass spectrometry and in angle resolved photo-electron spectroscopy to determine the lateral momentum distributions of charged particles accelerated towards a detector. VM-images are composed of projected Newton spheres with a common centre. The 2D images are usually evaluated by a decomposition into base vectors each representing the 2D projection of a set of particles starting from a centre with a specific velocity distribution. We propose to evaluate 1D projections of VM-images in terms of 1D projections of spherical functions, instead. The proposed evaluation algorithm shows that all distribution information can be retrieved from an adequately chosen set of 1D projections, alleviating the numerical effort for the interpretation of VM-images considerably. The obtained results produce directly the coefficients of the involved spherical functions, making the reconstruction of sliced Newton spheres obsolete.

  10. Charged particle velocity map image reconstruction with one-dimensional projections of spherical functions.

    PubMed

    Gerber, Thomas; Liu, Yuzhu; Knopp, Gregor; Hemberger, Patrick; Bodi, Andras; Radi, Peter; Sych, Yaroslav

    2013-03-01

    Velocity map imaging (VMI) is used in mass spectrometry and in angle resolved photo-electron spectroscopy to determine the lateral momentum distributions of charged particles accelerated towards a detector. VM-images are composed of projected Newton spheres with a common centre. The 2D images are usually evaluated by a decomposition into base vectors each representing the 2D projection of a set of particles starting from a centre with a specific velocity distribution. We propose to evaluate 1D projections of VM-images in terms of 1D projections of spherical functions, instead. The proposed evaluation algorithm shows that all distribution information can be retrieved from an adequately chosen set of 1D projections, alleviating the numerical effort for the interpretation of VM-images considerably. The obtained results produce directly the coefficients of the involved spherical functions, making the reconstruction of sliced Newton spheres obsolete.

  11. Charged particle velocity map image reconstruction with one-dimensional projections of spherical functions

    NASA Astrophysics Data System (ADS)

    Gerber, Thomas; Liu, Yuzhu; Knopp, Gregor; Hemberger, Patrick; Bodi, Andras; Radi, Peter; Sych, Yaroslav

    2013-03-01

    Velocity map imaging (VMI) is used in mass spectrometry and in angle resolved photo-electron spectroscopy to determine the lateral momentum distributions of charged particles accelerated towards a detector. VM-images are composed of projected Newton spheres with a common centre. The 2D images are usually evaluated by a decomposition into base vectors each representing the 2D projection of a set of particles starting from a centre with a specific velocity distribution. We propose to evaluate 1D projections of VM-images in terms of 1D projections of spherical functions, instead. The proposed evaluation algorithm shows that all distribution information can be retrieved from an adequately chosen set of 1D projections, alleviating the numerical effort for the interpretation of VM-images considerably. The obtained results produce directly the coefficients of the involved spherical functions, making the reconstruction of sliced Newton spheres obsolete.

  12. Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.

    PubMed

    Fromm, S A; Sachse, C

    2016-01-01

    Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement by the introduction of direct electron detectors has significantly enhanced the image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method.

  13. Penalized maximum-likelihood image reconstruction for lesion detection

    NASA Astrophysics Data System (ADS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-08-01

    Detecting cancerous lesions is one major application in emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve the image quality by accurately modelling the photon detection process and measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, conventional penalty function, and a penalty function for isotropic point spread function. The lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement is dependent on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.
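
    As a rough illustration of the penalized maximum-likelihood framework discussed above, the sketch below implements a one-step-late EM update with a plain quadratic neighbourhood penalty. It shows only the generic machinery; the paper's contribution, a penalty designed from detectability theory, is not reproduced here, and the system matrix A, counts y and weight beta are placeholders.

    import numpy as np

    def quadratic_penalty_grad(x, shape, beta):
        """Gradient of a quadratic penalty on differences to 4-connected neighbours."""
        img = x.reshape(shape)
        grad = np.zeros_like(img)
        for axis in (0, 1):
            grad += img - np.roll(img, 1, axis=axis)
            grad += img - np.roll(img, -1, axis=axis)
        return beta * grad.ravel()

    def osl_em(A, y, shape, beta=0.01, n_iter=50):
        """One-step-late EM for penalized Poisson maximum-likelihood reconstruction."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])                 # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)         # Poisson data term
            denom = np.maximum(sens + quadratic_penalty_grad(x, shape, beta), 1e-12)
            x = x * (A.T @ ratio) / denom
        return x.reshape(shape)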

  14. Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.

    PubMed

    Fromm, S A; Sachse, C

    2016-01-01

    Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement by the introduction of direct electron detectors has significantly enhanced the image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method. PMID:27572732

  15. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
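
    For readers unfamiliar with the transform-plus-quantization pipeline evaluated above, the following toy sketch compresses an 8 bit image block by block with a discrete cosine transform, applies uniform quantization, reconstructs, and reports the root mean square error in gray levels. The entropy coders used by the project are omitted, and the block size and quantization step are illustrative choices, not CRAF/Cassini settings.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_roundtrip(image, q=8.0, block=8):
        """Block-DCT compress/reconstruct loop; returns the reconstruction and its RMSE."""
        h, w = image.shape
        recon = np.empty((h, w), dtype=float)
        for i in range(0, h, block):
            for j in range(0, w, block):
                tile = image[i:i + block, j:j + block].astype(float)
                coeff = dctn(tile, norm='ortho')
                coeff = np.round(coeff / q) * q          # uniform quantization
                recon[i:i + block, j:j + block] = idctn(coeff, norm='ortho')
        rmse = np.sqrt(np.mean((recon - image.astype(float)) ** 2))
        return recon, rmse                               # RMSE in 8-bit gray levels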

  16. Gadgetron: an open source framework for medical image reconstruction.

    PubMed

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-06-01

    This work presents a new open source framework for medical image reconstruction called the "Gadgetron." The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or "Gadgets" from raw data to reconstructed images. The data processing pipeline is configured dynamically at run-time based on an extensible markup language configuration description. The framework promotes reuse and sharing of reconstruction modules and new Gadgets can be added to the Gadgetron framework through a plugin-like architecture without recompiling the basic framework infrastructure. Gadgets are typically implemented in C/C++, but the framework includes wrapper Gadgets that allow the user to implement new modules in the Python scripting language for rapid prototyping. In addition to the streaming framework infrastructure, the Gadgetron comes with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its application to Cartesian and non-Cartesian parallel magnetic resonance imaging.

  17. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral imaging based method enables exact hologram capture of real three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme on the graphic processing unit, accelerating the processing speed. Using the enhanced hologram capture speed, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  18. Application of accelerated acquisition and highly constrained reconstruction methods to MR

    NASA Astrophysics Data System (ADS)

    Wang, Kang

    2011-12-01

    There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high resolution coronary imaging, MR T1 or T2 mapping, etc. The requirement for fast acquisition and novel reconstruction methods is either due to clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI). The need for acquiring many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel imaging reconstruction methods to overcome the artifacts related to the undersampling. (2) Dynamic hyperpolarized 13C spectroscopic imaging. HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, and therefore it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamic of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography. The diagnosis of vascular diseases often requires large coverage of human body anatomies with high spatial resolution and sufficient temporal resolution for the separation of arterial phases from venous phases. The goal of simultaneously achieving high spatial and temporal resolution has

  19. Coronary x-ray angiographic reconstruction and image orientation

    SciTech Connect

    Sprague, Kevin; Drangova, Maria; Lehmann, Glen

    2006-03-15

    We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views. These points are utilized to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using: (1) near-intersection distances of the rays that connect x-ray sources with projected points, (2) distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) virtual phantom, (2) real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms illustrate that the vessel centerlines are reconstructed in 3D with accuracy below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and the hemodynamic impact on myocardial perfusion in patients with coronary artery disease.

  20. Coronary x-ray angiographic reconstruction and image orientation.

    PubMed

    Sprague, Kevin; Drangova, Maria; Lehmann, Glen; Slomka, Piotr; Levin, David; Chow, Benjamin; deKemp, Robert

    2006-03-01

    We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views. These points are utilized to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using: (1) near-intersection distances of the rays that connect x-ray sources with projected points, (2) distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) virtual phantom, (2) real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms illustrate that the vessel center-lines are reconstructed in 3D with accuracy below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and the hemodynamic impact on myocardial perfusion in patients with coronary artery disease.
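
    The core two-view operation described above can be sketched as follows: a traced point in each view defines a ray from its x-ray source through the detector point, and the 3D centerline point is taken at the closest approach of the two rays, with the near-intersection distance serving as the accuracy measure (1) mentioned in the abstract. This is a minimal geometric sketch, not the authors' full multiview/GA pipeline; all positions are assumed to be expressed in a common 3D frame.

    import numpy as np

    def closest_point_between_rays(s1, d1, s2, d2):
        """s1, s2: ray origins (x-ray sources); d1, d2: unit ray directions."""
        # Solve for (t1, t2) minimising |(s1 + t1*d1) - (s2 + t2*d2)|.
        a = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        b = np.array([(s2 - s1) @ d1, (s2 - s1) @ d2])
        t1, t2 = np.linalg.solve(a, b)
        p1, p2 = s1 + t1 * d1, s2 + t2 * d2
        point_3d = 0.5 * (p1 + p2)                       # reconstructed centerline point
        near_intersection = np.linalg.norm(p1 - p2)      # ray near-intersection distance
        return point_3d, near_intersection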

  1. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    PubMed

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction.
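
    A minimal sketch of the kind of Kalman recursion such methods build on is given below: the image is treated as the state, each incoming set of undersampled k-space samples as the measurement, and a random walk as the state transition. The encoding matrix E and the noise covariances Q and R are generic placeholders, not the paper's Cartesian parallel-imaging model.

    import numpy as np

    def kalman_frame_update(x, P, y, E, Q, R):
        """One predict/update step: x is the image estimate, P its covariance."""
        # Predict with a random-walk dynamic model (identity state transition).
        x_pred = x
        P_pred = P + Q

        # Update with the new k-space measurement y = E x + noise.
        S = E @ P_pred @ E.conj().T + R                  # innovation covariance
        K = P_pred @ E.conj().T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (y - E @ x_pred)
        P_new = (np.eye(len(x)) - K @ E) @ P_pred
        return x_new, P_new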

  2. Principles of MR image formation and reconstruction.

    PubMed

    Duerk, J L

    1999-11-01

    This article describes a number of concepts that provide insights into the process of MR imaging. The use of shaped, fixed-bandwidth RF pulses and magnetic field gradients is described to provide an understanding of the methods used for slice selection. Variations in the slice-excitation profile are shown as a function of the RF pulse shape used, the truncation method used, and the tip angle. It should be remembered that although the goal is to obtain uniform excitation across the slice, this goal is never achieved in practice, thus necessitating the use of slice gaps in some cases. Excitation, refocusing, and inversion pulses are described. Excitation pulses nutate the spins from the longitudinal axis into the transverse plane, where their magnetization can be detected. Refocusing pulses are used to flip the magnetization through 180 degrees once it is in the transverse plane, so that the influence of magnetic field inhomogeneities is eliminated. Inversion pulses are used to flip the magnetization from the +z to the -z direction in inversion-recovery sequences. Radiofrequency pulses can also be used to eliminate either fat or water protons from the images because of the small differences in resonant frequency between these two types of protons. Selective methods based on chemical shift and binomial methods are described. Once the desired magnetization has been tipped into the transverse plane by the slice-selection process, two imaging axes remain to be spatially encoded. One axis is easily encoded by the application of a second magnetic field gradient that establishes a one-to-one mapping between position and frequency during the time that the signal is converted from analog to digital sampling. This frequency-encoding gradient is used in combination with the Fourier transform to determine the location of the precessing magnetization. The second image axis is encoded by a process known as phase encoding. The collected data can be described as the 2D Fourier
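
    The frequency- and phase-encoding steps described above fill a two-dimensional k-space matrix line by line; once the matrix is complete, the image follows from an inverse 2D Fourier transform. A minimal sketch for a fully sampled Cartesian acquisition (assuming a single receive channel) is:

    import numpy as np

    def cartesian_recon(kspace):
        """Reconstruct a fully sampled Cartesian k-space matrix (complex 2D array)."""
        img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
        return np.abs(img)                               # magnitude image for display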

  3. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
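
    The back-projection idea mentioned above for extreme near-field operation can be sketched as a coherent sum: each voxel accumulates every (transmitter, receiver, frequency) sample after compensating the phase of the exact transmitter-voxel-receiver path. The geometry, frequency vector and data layout below are illustrative assumptions, not PNNL's implementation.

    import numpy as np

    C = 3e8  # propagation speed (m/s); free space assumed

    def backproject(data, tx_pos, rx_pos, freqs, voxels):
        """data: complex (n_tx, n_rx, n_freq); tx_pos, rx_pos, voxels: (n, 3) arrays."""
        image = np.zeros(len(voxels), dtype=complex)
        for v, p in enumerate(voxels):
            r_tx = np.linalg.norm(tx_pos - p, axis=1)            # (n_tx,)
            r_rx = np.linalg.norm(rx_pos - p, axis=1)            # (n_rx,)
            path = r_tx[:, None] + r_rx[None, :]                 # (n_tx, n_rx)
            # Phase compensation for the exact transmitter-voxel-receiver path.
            phase = np.exp(2j * np.pi * freqs[None, None, :] * path[:, :, None] / C)
            image[v] = np.sum(data * phase)                      # coherent sum
        return np.abs(image)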

  4. Optimisation techniques for digital image reconstruction from their projections

    NASA Astrophysics Data System (ADS)

    Durrani, T. S.; Goutis, C. E.

    1980-09-01

    A method is proposed for the digital reconstruction of images from their projections based on optimizing specified performance criteria. The reconstruction problem is embedded into the framework of constrained optimization and its solution is shown to lead to a relationship between the image and the one-dimensional Lagrange functions associated with each cost criterion. Two types of geometries (the parallel-beam and fan-beam systems) are considered for the acquisition of projection data and the constrained-optimization problem is solved for both. The ensuing algorithms allow the reconstruction of multidimensional objects from one-dimensional functions only. For digital data a fast reconstruction algorithm is proposed which exploits the symmetries inherent in both a circular domain of image reconstruction and in projections obtained at equispaced angles. Computational complexity is significantly reduced by the use of fast-Fourier-transform techniques, as the underlying relationship between the available projection data and the associated Lagrange multipliers is shown to possess a block circulant matrix structure.

  5. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  6. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  7. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a stepwise increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated on images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms.
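
    As an example of one of the indices listed above, a commonly used form of the global inhomogeneity (GI) index is sketched below: the sum of absolute deviations of tidal impedance changes from their median within the lung region, normalized by the total tidal impedance change. This formula and the inputs (a tidal EIT image and a lung-region mask) are stated here as an assumption for illustration rather than taken from the paper.

    import numpy as np

    def global_inhomogeneity_index(tidal_image, lung_mask):
        """GI index: normalized absolute deviation from the median tidal impedance change."""
        lung = tidal_image[lung_mask]
        return np.sum(np.abs(lung - np.median(lung))) / np.sum(lung)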

  8. Whole Mouse Brain Image Reconstruction from Serial Coronal Sections Using FIJI (ImageJ).

    PubMed

    Paletzki, Ronald; Gerfen, Charles R

    2015-10-01

    Whole-brain reconstruction of the mouse enables comprehensive analysis of the distribution of neurochemical markers, the distribution of anterogradely labeled axonal projections or retrogradely labeled neurons projecting to a specific brain site, or the distribution of neurons displaying activity-related markers in behavioral paradigms. This unit describes a method to produce whole-brain reconstruction image sets from coronal brain sections with up to four fluorescent markers using the freely available image-processing program FIJI (ImageJ).

  9. Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images

    PubMed Central

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in the images are extracted using the Canny extractor and the Hough transformation, and compared with the current model edges for necessary improvements. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the combined reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539

  10. Building facade reconstruction by fusing terrestrial laser points and images.

    PubMed

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in the images are extracted using the Canny extractor and the Hough transformation, and compared with the current model edges for necessary improvements. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the combined reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed.
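
    The line-extraction step mentioned above can be reproduced in a few lines with standard tools; the hedged sketch below uses OpenCV's Canny detector and probabilistic Hough transform with illustrative thresholds (not the authors' settings), and omits the subsequent comparison with model edges.

    import cv2
    import numpy as np

    def extract_strong_lines(image_path):
        """Return line segments (x1, y1, x2, y2) detected in a facade image."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(gray, 50, 150)                         # edge map
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=40, maxLineGap=5)
        return [] if lines is None else [tuple(l[0]) for l in lines]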

  11. Compressed hyperspectral image sensing with joint sparsity reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Li, Yunsong; Zhang, Jing; Song, Juan; Lv, Pei

    2011-10-01

    Recent compressed sensing (CS) results show that it is possible to accurately reconstruct images from a small number of linear measurements via convex optimization techniques. In this paper, based on a correlation analysis of the linear measurements of hyperspectral images, a joint sparsity reconstruction algorithm based on interband prediction and joint optimization is proposed. In the method, linear prediction is first applied to remove the correlations among successive spectral band measurement vectors. The obtained residual measurement vectors are then recovered using the proposed joint optimization based POCS (projections onto convex sets) algorithm with the steepest descent method. In addition, a pixel-guided stopping criterion is introduced to terminate the iteration. Experimental results show that the proposed algorithm outperforms other known CS reconstruction algorithms in the literature at the same measurement rates, while converging faster.
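
    The interband decorrelation step described above can be illustrated with a simple sketch in which each band's measurement vector is predicted from the previous band by a least-squares scalar gain and only the residual is kept for joint recovery. The scalar-predictor form is an assumption for illustration; the paper's predictor and its POCS recovery stage are not reproduced.

    import numpy as np

    def interband_residuals(Y):
        """Y: (n_bands, n_measurements) matrix of per-band CS measurement vectors."""
        residuals = [Y[0]]                                       # first band kept as reference
        gains = []
        for k in range(1, Y.shape[0]):
            a = (Y[k] @ Y[k - 1]) / (Y[k - 1] @ Y[k - 1])        # least-squares gain
            gains.append(a)
            residuals.append(Y[k] - a * Y[k - 1])                # decorrelated residual
        return np.array(residuals), gains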

  12. Double diffraction of quasiperiodic structures and Bayesian image reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Jian

    2006-04-01

    We study the spectrum of quasiperiodic structures by using quasiperiodic pulse trains. We find a single sharp diffraction peak when the dynamics of the incident wave matches the arrangement of the scatterers, that is, when the pulse train and the scatterers are in resonance. The maximum diffraction angle and the resonant pulse train determine the positions of the scatterers. These results may provide a methodology for identifying quasicrystals with a very large signal-to-noise ratio. We propose a double diffraction scheme to identify one-dimensional quasiperiodic structures with high precision. The scheme uses a set of scatterers to produce a sequence of quasiperiodic pulses from a single pulse, and then uses these pulses to determine the structure of the second set of scatterers. We find the maximum allowable number of target scatterers, given an experimental setup. Our calculation confirms our simulation results. The reverse problem of spectroscopy is reconstruction: that is, given an experimental image, how to reconstruct the original as faithfully as possible. We study the general image reconstruction problem under the Bayesian inference framework. We design a modified multiplicity prior distribution and use Gibbs sampling to reconstruct the latent image. In contrast with the traditional entropy prior, our modified multiplicity prior avoids the Stirling's formula approximation, incorporates an Occam's razor, and automatically adapts for the information content in the noisy input. We argue that the mean posterior image is a better representation than the maximum a posteriori (MAP) image. We also optimize the Gibbs sampling algorithm to determine the high-dimensional posterior density distribution with high efficiency. Our algorithm runs N^2 times faster than a traditional Gibbs sampler. With the knowledge of the full posterior distribution, statistical measures such as standard errors and confidence intervals can be easily generated. Our algorithm is not only useful for

  13. Lunar Surface Reconstruction from Apollo MC Images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.; Al-Jarrah, Ahmad; Walker, Kyle

    2015-07-01

    The last three Apollo lunar missions (15, 16, and 17) carried an integrated photogrammetric mapping system of a metric camera (MC), a high-resolution panoramic camera, a star camera, and a laser altimeter. Recently, images taken by the MC were scanned by Arizona State University (ASU); these images contain valuable information for scientific exploration, engineering analysis, and visualization of the Moon's surface. In this article, we took advantage of the large overlaps, the multiple viewing angles, and the high ground resolution of the images taken by the Apollo MC in generating an accurate and reliable surface of the Moon. We started by computing the relative positions and orientations of the exposure stations through a rigorous photogrammetric bundle adjustment process. We then generated a surface model using a hierarchical correlation-based matching algorithm. The matching algorithm was implemented in a multi-photo scheme and permits the exclusion of obscured pixels. The generated surface model was registered with LOLA topographic data and the comparison between the two surfaces yielded an average absolute difference of 36 m. These results look very promising and demonstrate the effectiveness of the proposed algorithm in accounting for depth discontinuities, occlusions, and image-signal noise.

  14. Cascaded diffractive optical elements for improved multiplane image reconstruction.

    PubMed

    Gülses, A Alkan; Jenkins, B Keith

    2013-05-20

    Computer-generated phase-only diffractive optical elements in a cascaded setup are designed by one deterministic and one stochastic algorithm for multiplane image formation. It is hypothesized that increasing the number of elements as wavefront modulators in the longitudinal dimension would enlarge the available solution space, thus enabling enhanced image reconstruction. Numerical results show that increasing the number of holograms improves quality at the output. Design principles, computational methods, and specific conditions are discussed.

  15. Advances in imaging technologies for planning breast reconstruction

    PubMed Central

    Mohan, Anita T.

    2016-01-01

    The role and choice of preoperative imaging for planning in breast reconstruction is still a disputed topic in the reconstructive community, with varying opinion on the necessity, the ideal imaging modality, costs and impact on patient outcomes. Since the advent of perforator flaps, their use in microsurgical breast reconstruction has grown. Perforator based flaps afford lower donor morbidity by sparing the underlying muscle, provide durable results and superior cosmesis to create a natural looking new breast, and are preferred in the context of radiation therapy. However, these surgeries are complex, more technically challenging than implant based reconstruction, and leave little room for error. Imaging in breast reconstruction can assist the surgeon in exploring or confirming flap choices based on donor site characteristics and the presence of suitable perforators. Vascular anatomical studies in the lab have provided the surgeon with a foundation of knowledge on the location and vascular territories of individual perforators, improving our understanding of flap design and safe flap harvest. The creation of a presurgical map in patients can highlight any abnormal or individual anatomical variance to optimize flap design, intraoperative decision-making and execution of flap harvest with greater predictability and efficiency. This article highlights the role and techniques for preoperative planning using the newer technologies that have been adopted in reconstructive clinical practice: computed tomographic angiography (CTA), magnetic resonance angiography (MRA), laser-assisted indocyanine green fluorescence angiography (LA-ICGFA) and dynamic infrared thermography (DIRT). The primary focus of this paper is on the application of CTA and MRA imaging modalities. PMID:27047790

  16. Advances in imaging technologies for planning breast reconstruction.

    PubMed

    Mohan, Anita T; Saint-Cyr, Michel

    2016-04-01

    The role and choice of preoperative imaging for planning in breast reconstruction is still a disputed topic in the reconstructive community, with varying opinion on the necessity, the ideal imaging modality, costs and impact on patient outcomes. Since the advent of perforator flaps, their use in microsurgical breast reconstruction has grown. Perforator based flaps afford lower donor morbidity by sparing the underlying muscle, provide durable results and superior cosmesis to create a natural looking new breast, and are preferred in the context of radiation therapy. However, these surgeries are complex, more technically challenging than implant based reconstruction, and leave little room for error. Imaging in breast reconstruction can assist the surgeon in exploring or confirming flap choices based on donor site characteristics and the presence of suitable perforators. Vascular anatomical studies in the lab have provided the surgeon with a foundation of knowledge on the location and vascular territories of individual perforators, improving our understanding of flap design and safe flap harvest. The creation of a presurgical map in patients can highlight any abnormal or individual anatomical variance to optimize flap design, intraoperative decision-making and execution of flap harvest with greater predictability and efficiency. This article highlights the role and techniques for preoperative planning using the newer technologies that have been adopted in reconstructive clinical practice: computed tomographic angiography (CTA), magnetic resonance angiography (MRA), laser-assisted indocyanine green fluorescence angiography (LA-ICGFA) and dynamic infrared thermography (DIRT). The primary focus of this paper is on the application of CTA and MRA imaging modalities. PMID:27047790

  17. RECONSTRUCTION OF HUMAN LUNG MORPHOLOGY MODELS FROM MAGNETIC RESONANCE IMAGES

    EPA Science Inventory


    Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images
    T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)

  18. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require prior camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms in an ordered pipeline, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, and is especially suitable for rapid response and precise modelling in disaster emergencies.

  19. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
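
    To give a flavour of the search loop, the sketch below runs a plain (mu, lambda) evolution strategy over a real-valued coefficient vector against a user-supplied objective (for example, the mean squared error of images reconstructed after quantization). The covariance matrix adaptation that defines CMA-ES is deliberately omitted, so this is a simplified stand-in, not the method evaluated in the paper.

    import numpy as np

    def evolve_coefficients(objective, x0, sigma=0.05, mu=5, lam=20, n_gen=100, seed=0):
        """Minimise objective(coeffs) with a plain (mu, lambda) evolution strategy."""
        rng = np.random.default_rng(seed)
        parents = x0 + sigma * rng.standard_normal((mu, len(x0)))
        for _ in range(n_gen):
            # Each offspring is a Gaussian mutation of a randomly chosen parent.
            idx = rng.integers(0, mu, size=lam)
            offspring = parents[idx] + sigma * rng.standard_normal((lam, len(x0)))
            scores = np.array([objective(c) for c in offspring])
            parents = offspring[np.argsort(scores)[:mu]]         # keep the best mu
        return min(parents, key=objective)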

  20. Super-resolution image reconstruction for ultrasonic nondestructive evaluation.

    PubMed

    Li, Shanglei; Chu, Tsuchin Philip

    2013-12-01

    Ultrasonic testing is one of the most successful nondestructive evaluation (NDE) techniques for the inspection of carbon-fiber-reinforced polymer (CFRP) materials. This paper discusses the application of the iterative backprojection (IBP) super-resolution image reconstruction technique to carbon epoxy laminates with simulated defects to obtain high-resolution images for NDE. Super-resolution image reconstruction is an approach used to overcome the inherent resolution limitations of an existing ultrasonic system. It can greatly improve the image quality and allow more detailed inspection of the region of interest (ROI) with high resolution, improving defect evaluation and accuracy. First, three artificially simulated delamination defects in a CFRP panel were considered to evaluate and validate the application of the IBP method. The results of the validation indicate that both the contrast-to-noise ratio (CNR) and the peak signal-to-noise ratio (PSNR) value of the super-resolution result are better than those of the bicubic interpolation method. Then, the IBP method was applied to the low-resolution ultrasonic C-scan image sequence with subpixel displacement of two types of defects (delamination and porosity) which were obtained by the micro-scanning imaging technique. The results demonstrated that super-resolution images achieved better visual quality with an improved image resolution compared with raw C-scan images.
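
    A hedged sketch of one iterative back-projection pass is given below: each low-resolution frame is simulated from the current high-resolution estimate (blur, shift, downsample) and the simulation error is back-projected onto the HR grid. The Gaussian blur model, shift handling and step size are illustrative assumptions, and HR dimensions are assumed to be integer multiples of the scale factor.

    import numpy as np
    from scipy import ndimage

    def ibp_step(hr, lr_frames, shifts, scale, step=1.0, sigma=1.0):
        """One IBP pass: hr is the current HR estimate, lr_frames the observed LR images,
        shifts their subpixel offsets in HR pixels, scale the integer downsampling factor."""
        correction = np.zeros_like(hr)
        for lr, (dy, dx) in zip(lr_frames, shifts):
            sim = ndimage.gaussian_filter(hr, sigma)             # assumed sensor blur
            sim = ndimage.shift(sim, (dy, dx), order=1)
            sim = sim[::scale, ::scale]                          # downsample to LR grid
            err = lr - sim                                       # simulation error
            up = np.kron(err, np.ones((scale, scale)))           # back to HR grid
            correction += ndimage.shift(up, (-dy, -dx), order=1)
        return hr + step * correction / len(lr_frames)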

  1. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging, a new approach compared with traditional processing techniques for penumbral coded-pinhole images such as Wiener, Lucy-Richardson, and blind deconvolution. In this method, for the first time, the coded-aperture data are processed independently of the point spread function of the imaging diagnostic system, overcoming the obstacle that the uncertainty of the point spread function poses for traditional coded-pinhole image processing. Building on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In the visible-light experiment, a point light source was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral images were acquired with an aperture size of about 20 mm. Finally, the CT image reconstruction technique was applied and yielded a fairly good reconstruction result.

  2. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low-resolution (LR) images differing in subpixel shifts onto a high-resolution (HR), also called super-resolution (SR), grid. The algorithm is very effective in both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection, including nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Reconstructing the SR image at a factor of two with best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low-resolution images and (ii) shifting the low-resolution images to align with the reference image and projecting them onto the high-resolution grid, based on the shift of each low-resolution image, using different interpolation techniques (a minimal sketch of this projection step is given below). Experiments are conducted by simulating low-resolution images through subpixel shifts and subsampling of an original high-resolution image and then reconstructing the high-resolution image from the simulated low-resolution images. Reconstruction accuracy is compared using the mean squared error between the original high-resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
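
    The registration step is not reproduced here, but the projection step (placing each shifted LR sample onto the HR grid and interpolating) can be sketched as follows; the nearest-neighbour gridding, the assumed-known shifts, and the 2x factor are illustrative choices rather than the authors' exact procedure.

```python
# Hypothetical sketch: project subpixel-shifted LR images onto a 2x HR grid.
# Known shifts are assumed (the paper estimates them by registration first).
import numpy as np
from scipy.interpolate import griddata

def project_to_hr(lr_images, shifts, factor=2):
    """lr_images: list of (h, w) arrays; shifts: list of (dy, dx) in LR pixels."""
    h, w = lr_images[0].shape
    points, values = [], []
    for lr, (dy, dx) in zip(lr_images, shifts):
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        points.append(np.column_stack([(yy + dy).ravel() * factor,
                                       (xx + dx).ravel() * factor]))
        values.append(lr.ravel())
    gy, gx = np.mgrid[0:h * factor, 0:w * factor]
    return griddata(np.vstack(points), np.concatenate(values), (gy, gx), method='nearest')

# Toy usage: four LR images of a ramp, shifted by half a pixel in y and/or x.
truth = np.add.outer(np.arange(8.0), np.arange(8.0))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
lr_set = [truth for _ in shifts]             # stand-in for real shifted captures
hr = project_to_hr(lr_set, shifts)
print(hr.shape)                               # (16, 16)
```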

  3. High resolution image reconstruction from projection of low resolution images differing in subpixel shifts

    NASA Astrophysics Data System (ADS)

    Mareboyana, Manohar; Le Moigne, Jacqueline; Bennett, Jerome

    2016-05-01

    In this paper, we demonstrate simple algorithms that project low-resolution (LR) images differing in subpixel shifts onto a high-resolution (HR), also called super-resolution (SR), grid. The algorithms are very effective in both accuracy and time efficiency. A number of spatial interpolation techniques, including nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), are used in the projection. Reconstructing the SR image at a factor of two with best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low-resolution images and (ii) shifting the low-resolution images to align with the reference image and projecting them onto the high-resolution grid, based on the shift of each low-resolution image, using different interpolation techniques. Experiments are conducted by simulating low-resolution images through subpixel shifts and subsampling of an original high-resolution image and then reconstructing the high-resolution image from the simulated low-resolution images. Reconstruction accuracy is compared using the mean squared error between the original high-resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP) and Maximum Likelihood (ML) algorithms. The algorithms are robust and are not overly sensitive to registration inaccuracies.

  4. Statistical reconstruction algorithms for continuous wave electron spin resonance imaging

    NASA Astrophysics Data System (ADS)

    Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon

    2013-06-01

    Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.

  5. Investigation of limited-view image reconstruction in optoacoustic tomography employing a priori structural information

    NASA Astrophysics Data System (ADS)

    Huang, Chao; Oraevsky, Alexander A.; Anastasio, Mark A.

    2010-08-01

    Optoacoustic tomography (OAT) is an emerging ultrasound-mediated biophotonic imaging modality that has exciting potential for many biomedical imaging applications. There is great interest in conducting B-mode ultrasound and OAT imaging studies for breast cancer detection using a common transducer. In this situation, the range of tomographic view angles is limited, which can result in distortions in the reconstructed OAT image if conventional reconstruction algorithms are applied to limited-view measurement data. In this work, we investigate an image reconstruction method that utilizes information regarding target boundaries to improve the quality of the reconstructed OAT images. This is accomplished by developing a boundary-constrained image reconstruction algorithm for OAT based on Bayesian image reconstruction theory. The computer-simulation studies demonstrate that the Bayesian approach can effectively reduce artifact and noise levels and preserve edges in reconstructed limited-view OAT images compared with those produced by a conventional OAT reconstruction algorithm.

  6. Reconstruction of magnetic domain structure using the reverse Monte Carlo method with an extended Fourier image

    PubMed Central

    Tokii, Maki; Kita, Eiji; Mitsumata, Chiharu; Ono, Kanta; Yanagihara, Hideto

    2015-01-01

    Visualization of the magnetic domain structure is indispensable to the investigation of magnetization processes and the coercivity mechanism. This requires a method for reconstructing the real-space image from the reciprocal-space image, which in turn requires solving the problem of the missing phase information in the reciprocal-space image. We propose an extended-Fourier-image method with mean-value padding to compensate for the missing phase information. We visualized the magnetic domain structure using the reverse Monte Carlo method with simulated annealing to accelerate the calculation. With this technique, we demonstrated restoration of the magnetic domain structure, obtained the magnetization and magnetic domain width, and reproduced the characteristic form that constitutes a magnetic domain. PMID:25991875
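
    The abstract does not detail the padding step; a minimal illustration of what mean-value padding of a measured reciprocal-space magnitude could look like (before any phase-retrieval or reverse Monte Carlo step) is given below. The padded size and the synthetic input are arbitrary assumptions.

```python
# Minimal sketch: extend a measured Fourier-magnitude image by padding the
# missing outer region with the image's mean value ("mean-value padding").
import numpy as np

def mean_value_pad(fourier_magnitude, pad):
    return np.pad(fourier_magnitude, pad, mode='constant',
                  constant_values=fourier_magnitude.mean())

measured = np.abs(np.fft.fftshift(np.fft.fft2(np.random.default_rng(1).random((32, 32)))))
extended = mean_value_pad(measured, pad=16)
print(measured.shape, "->", extended.shape)   # (32, 32) -> (64, 64)
```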

  7. Colored three-dimensional reconstruction of vehicular thermal infrared images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi

    2015-06-01

    Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.

  8. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    PubMed

    Li, Li; Xiao, Wei; Jian, Weijian

    2014-11-20

    Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it places a heavy computational burden on the subsequent processing hardware. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thereby saving a large part of the CS recovery runtime. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm has better efficiency than an existing algorithm.

  9. Atmospheric isoplanatism and astronomical image reconstruction on Mauna Kea

    SciTech Connect

    Cowie, L.L.; Songaila, A.

    1988-07-01

    Atmospheric isoplanatism for visual-wavelength image-reconstruction applications was measured on Mauna Kea in Hawaii. For most nights the correlation of the transform functions is substantially wider than the long-exposure transform function at separations up to 30 arcsec. Theoretical analysis shows that this is reasonable if the mean Fried parameter is approximately 30 cm at 5500 Å. Reconstructed image quality may be described by a Gaussian with a FWHM of λ/s_0. Under average conditions, s_0(30 arcsec) exceeds 55 cm at 7000 Å. The results show that visual image quality in the 0.1-0.2 arcsec range is obtainable over much of the sky with large ground-based telescopes on this site.

  10. Improved Subspace Estimation for Low-Rank Model-Based Accelerated Cardiac Imaging

    PubMed Central

    Hitchens, T. Kevin; Wu, Yijen L.; Ho, Chien; Liang, Zhi-Pei

    2014-01-01

    Sparse sampling methods have emerged as effective tools to accelerate cardiac magnetic resonance imaging (MRI). Low-rank model-based cardiac imaging uses a pre-determined temporal subspace for image reconstruction from highly under-sampled (k, t)-space data and has been demonstrated effective for high-speed cardiac MRI. The accuracy of the temporal subspace is a key factor in these methods, yet little work has been published on data acquisition strategies to improve subspace estimation. This paper investigates the use of non-Cartesian k-space trajectories to replace the Cartesian trajectories which are omnipresent but are highly sensitive to readout direction. We also propose “self-navigated” pulse sequences which collect both navigator data (for determining the temporal subspace) and imaging data after every RF pulse, allowing for even greater acceleration. We investigate subspace estimation strategies through analysis of phantom images and demonstrate in vivo cardiac imaging in rats and mice without the use of ECG or respiratory gating. The proposed methods achieved 3-D imaging of wall motion, first-pass myocardial perfusion, and late gadolinium enhancement in rats at 74 frames per second (fps), as well as 2-D imaging of wall motion in mice at 97 fps. PMID:24801352
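
    Methods of this low-rank family typically estimate a temporal subspace from the navigator data and then fit spatial coefficients to the imaging data. The sketch below illustrates that two-step structure on a synthetic, fully sampled Casorati matrix; the rank, matrix sizes, and plain least-squares fit are generic assumptions, not the authors' reconstruction.

```python
# Generic low-rank model sketch: temporal subspace from navigator data (SVD),
# then least-squares fit of spatial coefficients to the imaging data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_frames, rank = 200, 120, 6

# Synthetic "true" Casorati matrix X = U_s @ V_t with low-rank temporal structure.
U_s = rng.standard_normal((n_voxels, rank))
V_t = rng.standard_normal((rank, n_frames))
X_true = U_s @ V_t

# Step 1: navigator data (a few voxels sampled at every frame) -> temporal basis.
navigator = X_true[:20, :] + 0.01 * rng.standard_normal((20, n_frames))
_, _, Vh = np.linalg.svd(navigator, full_matrices=False)
temporal_basis = Vh[:rank, :]                       # (rank, n_frames)

# Step 2: fit spatial coefficients (fully sampled here for brevity; in practice
# this is a regularized fit to the undersampled (k, t)-space data).
coeffs, *_ = np.linalg.lstsq(temporal_basis.T, X_true.T, rcond=None)
X_rec = (temporal_basis.T @ coeffs).T

print("relative error:", np.linalg.norm(X_rec - X_true) / np.linalg.norm(X_true))
```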

  11. Reconstruction of pulse noisy images via stochastic resonance.

    PubMed

    Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan

    2015-01-01

    We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911

  12. Reconstruction of pulse noisy images via stochastic resonance

    NASA Astrophysics Data System (ADS)

    Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan

    2015-06-01

    We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications.

  13. Force reconstruction using the sum of weighted accelerations technique -- Max-Flat procedure

    SciTech Connect

    Carne, T.G.; Mayes, R.L.; Bateman, V.I.

    1993-12-31

    Force reconstruction is a procedure in which the externally applied force is inferred from measured structural response rather than directly measured. In a recently developed technique, the response acceleration time-histories are multiplied by scalar weights and summed to produce the reconstructed force. This reconstruction is called the Sum of Weighted Accelerations Technique (SWAT). One step in the application of this technique is the calculation of the appropriate scalar weights. In this paper a new method of estimating the weights, using measured frequency response function data, is developed and contrasted with the traditional SWAT method of inverting the mode-shape matrix. The technique uses frequency response function data, but is not based on deconvolution. An application that will be discussed as part of this paper is the impact into a rigid barrier of a weapon system with an energy-absorbing nose. The nose had been designed to absorb the energy of impact and to mitigate the shock to the interior components.
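
    The reconstruction itself is a weighted sum of the measured accelerations. The sketch below illustrates that sum together with the traditional weight computation from a pseudo-inverse of the mode-shape matrix; the synthetic mode shapes and accelerations are placeholders, the overall mass scale factor is omitted, and the paper's new FRF-based weight estimation is not reproduced.

```python
# SWAT sketch: reconstructed force F(t) = sum_i w_i * a_i(t), with weights chosen
# (traditional approach) so the sum passes the rigid-body response and cancels
# the elastic modes.  Synthetic example; mass scaling omitted.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_modes, n_samples = 8, 4, 1000

phi_elastic = rng.standard_normal((n_sensors, n_modes))   # elastic mode shapes
phi_rigid = np.ones(n_sensors)                            # rigid-body (translation) shape
modes = np.column_stack([phi_rigid, phi_elastic])         # (n_sensors, 1 + n_modes)

# Weights w with w.T @ modes = [1, 0, ..., 0]: minimum-norm solution via pinv.
target = np.zeros(1 + n_modes)
target[0] = 1.0
weights = np.linalg.pinv(modes.T) @ target

accels = rng.standard_normal((n_sensors, n_samples))      # measured accelerations (stand-in)
force = weights @ accels                                  # reconstructed force time history
print(np.allclose(weights @ modes, target), force.shape)  # True (1000,)
```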

  14. The SRT reconstruction algorithm for semiquantification in PET imaging

    SciTech Connect

    Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation-maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  15. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which

  16. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    PubMed Central

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanner. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present an LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3-D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the nonnegative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which

  17. SU-E-J-02: 4D Digital Tomosynthesis Based On Algebraic Image Reconstruction and Total-Variation Minimization for the Improvement of Image Quality

    SciTech Connect

    Kim, D; Kang, S; Kim, T; Suh, T; Kim, S

    2014-06-01

    Purpose: In this work, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve image quality. Methods: The projection data were generated by Monte Carlo simulation, assuming a cone-beam computed tomography system mounted on a linear accelerator, together with an in-house 4D digital phantom generation program. We performed 4D DTS using the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with a total-variation minimization method (TVMM); a minimal sketch of this combination is given below. To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based on SART and TVMM gave better results than the existing method, filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS should be useful for verifying intra-fraction motion during radiation therapy and could also enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP)
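
    The SART-plus-TV structure (one algebraic data-consistency update followed by a few total-variation smoothing steps per outer iteration) can be sketched generically as follows; the tiny dense system matrix, step sizes, and iteration counts are illustrative placeholders, not the authors' configuration.

```python
# Generic SART + total-variation (TV) minimization sketch for an undersampled
# linear system  A x = b  (a toy dense A stands in for the cone-beam projector).
import numpy as np

def sart_step(x, A, b, relax=0.5):
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    residual = (b - A @ x) / np.maximum(row_sums, 1e-12)
    return x + relax * (A.T @ residual) / np.maximum(col_sums, 1e-12)

def tv_step(img, weight=0.02):
    # One rough (sub)gradient descent step on the total variation of a 2D image.
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + 1e-8)
    div = (gx / mag - np.roll(gx / mag, 1, axis=0)
           + gy / mag - np.roll(gy / mag, 1, axis=1))
    return img + weight * div

rng = np.random.default_rng(3)
n = 16
truth = np.zeros((n, n))
truth[4:12, 4:12] = 1.0                                   # piecewise-constant phantom
A = rng.random((80, n * n))                               # undersampled "projector"
b = A @ truth.ravel()

x = np.zeros(n * n)
for _ in range(100):
    x = sart_step(x, A, b)                                # data consistency (SART)
    img = x.reshape(n, n)
    for _ in range(3):                                    # TV regularization steps
        img = tv_step(img)
    x = img.ravel()
print("relative error:", np.linalg.norm(x - truth.ravel()) / np.linalg.norm(truth))
```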

  18. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become quite popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. The process can be divided into two steps. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data; in this step, the 3D information is encoded onto a plane as 2D data by a lens array plate. The transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after the picture has been taken), the most popular function of light-field cameras, is essentially a sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display, on which a real 3D image is reconstructed from acquired and computer-simulated light field data. I then explain our data processing method, whose arithmetic operations are performed in the real domain rather than the Fourier domain. Finally, our 3D display system is characterized by a few notable features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are shown.

  19. Accuracy of quantitative reconstructions in SPECT/CT imaging

    NASA Astrophysics Data System (ADS)

    Shcherbinin, S.; Celler, A.; Belhocine, T.; van der Werf, R.; Driedger, A.

    2008-09-01

    The goal of this study was to determine the quantitative accuracy of our OSEM-APDI reconstruction method based on SPECT/CT imaging for Tc-99m, In-111, I-123, and I-131 isotopes. Phantom studies were performed on a SPECT/low-dose multislice CT system (Infinia-Hawkeye-4 slice, GE Healthcare) using clinical acquisition protocols. Two radioactive sources were centrally and peripherally placed inside an anthropometric Thorax phantom filled with non-radioactive water. Corrections for attenuation, scatter, collimator blurring and collimator septal penetration were applied and their contribution to the overall accuracy of the reconstruction was evaluated. Reconstruction with the most comprehensive set of corrections resulted in activity estimation with error levels of 3-5% for all the isotopes.

  20. A 32-Channel Head Coil Array with Circularly Symmetric Geometry for Accelerated Human Brain Imaging

    PubMed Central

    Chu, Ying-Hua; Hsu, Yi-Cheng; Keil, Boris; Kuo, Wen-Jui; Lin, Fa-Hsuan

    2016-01-01

    The goal of this study is to optimize a 32-channel head coil array for accelerated 3T human brain proton MRI using either a Cartesian or a radial k-space trajectory. Coils had curved trapezoidal shapes and were arranged in a circular symmetry (CS) geometry. Coils were optimally overlapped to reduce mutual inductance. Low-noise pre-amplifiers were used to further decouple between coils. The SNR and noise amplification in accelerated imaging were compared to results from a head coil array with a soccer-ball (SB) geometry. The maximal SNR in the CS array was about 120% (1070 vs. 892) and 62% (303 vs. 488) of the SB array at the periphery and the center of the FOV on a transverse plane, respectively. In one-dimensional 4-fold acceleration, the CS array has higher averaged SNR than the SB array across the whole FOV. Compared to the SB array, the CS array has a smaller g-factor at head periphery in all accelerated acquisitions. Reconstructed images using a radial k-space trajectory show that the CS array has a smaller error than the SB array in 2- to 5-fold accelerations. PMID:26909652

  1. Complications of anterior cruciate ligament reconstruction: MR imaging.

    PubMed

    Papakonstantinou, Olympia; Chung, Christine B; Chanchairujira, Kullanuch; Resnick, Donald L

    2003-05-01

    Arthroscopic reconstruction of the anterior cruciate ligament (ACL) using autografts or allografts is being performed with increasing frequency, particularly in young athletes. Although the procedure is generally well tolerated, with good success rates, early and late complications have been documented. As clinical manifestations of graft complications are often non-specific and plain radiographs cannot directly visualize the graft and the adjacent soft tissues, MR imaging has a definite role in the diagnosis of complications after ACL reconstruction and may direct subsequent therapeutic management. Our purpose is to review the normal MR imaging of the ACL graft and present the MR imaging findings of a wide spectrum of complications after ACL reconstruction, such as graft impingement, graft rupture, cystic degeneration of the graft, postoperative infection of the knee, diffuse and localized (i.e., cyclops lesion) arthrofibrosis, and associated donor site abnormalities. Awareness of the MR imaging findings of complications as well as the normal appearances of the normal ACL graft is essential for correct interpretation.

  2. Complications of anterior cruciate ligament reconstruction: MR imaging.

    PubMed

    Papakonstantinou, Olympia; Chung, Christine B; Chanchairujira, Kullanuch; Resnick, Donald L

    2003-05-01

    Arthroscopic reconstruction of the anterior cruciate ligament (ACL) using autografts or allografts is being performed with increasing frequency, particularly in young athletes. Although the procedure is generally well tolerated, with good success rates, early and late complications have been documented. As clinical manifestations of graft complications are often non-specific and plain radiographs cannot directly visualize the graft and the adjacent soft tissues, MR imaging has a definite role in the diagnosis of complications after ACL reconstruction and may direct subsequent therapeutic management. Our purpose is to review the normal MR imaging of the ACL graft and present the MR imaging findings of a wide spectrum of complications after ACL reconstruction, such as graft impingement, graft rupture, cystic degeneration of the graft, postoperative infection of the knee, diffuse and localized (i.e., cyclops lesion) arthrofibrosis, and associated donor site abnormalities. Awareness of the MR imaging findings of complications as well as the normal appearances of the normal ACL graft is essential for correct interpretation. PMID:12695835

  3. Performance validation of phase diversity image reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Hirzberger, J.; Feller, A.; Riethmüller, T. L.; Gandorfer, A.; Solanki, S. K.

    2011-05-01

    We present a performance study of a phase diversity (PD) image reconstruction algorithm based on artificial solar images obtained from MHD simulations and on seeing-free data obtained with the SuFI instrument on the Sunrise balloon-borne observatory. The artificial data were degraded by applying different levels of synthesised wavefront errors and noise. The PD algorithm was modified by changing the number of fitted polynomials, the shape of the pupil, and the applied noise filter. The obtained reconstructions are evaluated by means of the resulting rms intensity contrast and the conspicuousness of any artifacts that appear. The results show that PD is a robust method which consistently recovers the original, unaffected image content. The efficiency of the reconstruction is, however, strongly dependent on the number of fitting polynomials used and on the noise level of the images. If the maximum number of fitted polynomials is higher than 21, artifacts have to be accepted, and for noise levels higher than 10^-3 the commonly used noise-filtering techniques are not able to avoid amplification of spurious structures.

  4. Missing data reconstruction using Gaussian mixture models for fingerprint images

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary

    2016-05-01

    One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, a dependable reconstruction of fingerprint images still remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The offered fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, all-access control, and financial security, as well as for the verification of firearm purchasers, driver license applicants, etc.

  5. Parallel expectation-maximization algorithms for PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-Min

    1999-10-01

    Image reconstruction using Positron Emission Tomography (PET) involves estimating an unknown number of photon pairs emitted from the radiopharmaceuticals within the tissues of the patient's body. The generation of the photons can be described as a Poisson process, and the difficulty of image reconstruction lies in estimating the parameters of the tissue density distribution function. A significant amount of artifactual noise exists in images reconstructed with the convolution backprojection method. Using the Maximum Likelihood (ML) formulation, a better estimate can be made of the unknown image information. Despite the better image quality, the Expectation Maximization (EM) iterative algorithm is not used in practice because of the tremendous processing time. This research proposes new parallel algorithm design techniques to speed up the reconstruction process. Using the EM algorithm as an example (a minimal reference implementation of the EM update is sketched below), several general parallel techniques were studied for a distributed-memory architecture and a message-passing programming paradigm. Both intra- and inter-iteration latency-hiding schemes were designed to effectively reduce the communication time. Dependencies that exist within and between iterations were rearranged by overlapping communication and computation with MPI's non-blocking collective reduction operations. A performance model was established to estimate the processing time of the algorithms and was found to agree with the experimental results. A second strategy, a sparse matrix compaction technique, was developed to reduce the computational time of the computation-bound EM algorithm by making better use of the PET system geometry. The proposed techniques are generally applicable to many scientific computing problems that involve sparse matrix operations as well as iterative algorithms.
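
    As a single-threaded reference point for the EM iteration discussed above, the MLEM update with a sparse system matrix can be written as follows; the synthetic system matrix, count level, and iteration count are placeholders, and no parallelization or latency hiding is shown.

```python
# Minimal (single-threaded) MLEM iteration for emission tomography:
#   x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) )
# with a sparse system matrix A, as a reference for the parallel variants.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(4)
n_lors, n_voxels = 500, 100
A = sparse.random(n_lors, n_voxels, density=0.05, random_state=4, format='csr')
x_true = rng.random(n_voxels)
y = rng.poisson(A @ x_true * 50.0)                       # noisy projection counts

sens = np.asarray(A.sum(axis=0)).ravel()                 # sensitivity image A^T 1
x = np.ones(n_voxels)
for _ in range(50):
    expected = A @ x
    ratio = np.divide(y, expected, out=np.zeros_like(expected, dtype=float),
                      where=expected > 0)
    x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
print("correlation with truth:", np.corrcoef(x, x_true * 50.0)[0, 1])
```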

  6. A dual oxygenation and fluorescence imaging platform for reconstructive surgery

    NASA Astrophysics Data System (ADS)

    Ashitate, Yoshitomo; Nguyen, John N.; Venugopal, Vivek; Stockdale, Alan; Neacsu, Florin; Kettenring, Frank; Lee, Bernard T.; Frangioni, John V.; Gioux, Sylvain

    2013-03-01

    There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity, and increased healthcare costs. Because near-infrared (NIR) optical imaging is safe, noncontact, inexpensive, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. These capabilities are well illustrated through the clinical translation of fluorescence imaging during oncologic surgery. In this work, we introduce a novel imaging platform that combines two complementary NIR optical modalities: oxygenation imaging and fluorescence imaging. We validated this platform during facial reconstructive surgery on large animals approaching the size of humans. We demonstrate that NIR fluorescence imaging provides identification of perforator arteries, assesses arterial perfusion, and can detect thrombosis, while oxygenation imaging permits the passive monitoring of tissue vital status, as well as the detection and origin of vascular compromise simultaneously. Together, the two methods provide a comprehensive approach to identifying problems and intervening in real time during surgery before irreparable damage occurs. Taken together, this novel platform provides fully integrated and clinically friendly endogenous and exogenous NIR optical imaging for improved image-guided intervention during surgery.

  7. Atomic resolution tomography reconstruction of tilt series based on a GPU accelerated hybrid input-output algorithm using polar Fourier transform.

    PubMed

    Lu, Xiangwen; Gao, Wenpei; Zuo, Jian-Min; Yuan, Jiabin

    2015-02-01

    Advances in diffraction and transmission electron microscopy (TEM) have greatly improved the prospect of three-dimensional (3D) structure reconstruction from two-dimensional (2D) images or diffraction patterns recorded in a tilt series at atomic resolution. Here, we report a new graphics processing unit (GPU) accelerated iterative transformation algorithm (ITA) based on the polar fast Fourier transform for reconstructing 3D structure from 2D diffraction patterns. The algorithm also applies to image tilt series by calculating diffraction patterns from the recorded images using the projection-slice theorem. A gold icosahedral nanoparticle of 309 atoms is used as the model to test the feasibility, performance and robustness of the developed algorithm using simulations. Atomic resolution in 3D is achieved for the 309-atom Au nanoparticle using 75 diffraction patterns covering 150° of rotation. The capability demonstrated here provides an opportunity to uncover the 3D structure of small objects of nanometer size by electron diffraction.
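
    The GPU and polar-FFT machinery is specific to the paper, but the hybrid input-output (HIO) family of iterative transformation algorithms it builds on alternates a Fourier-magnitude constraint with a real-space support constraint. A generic Cartesian-grid 2D sketch is given below; the support, the β value, and the toy object are illustrative assumptions rather than the paper's implementation.

```python
# Generic Fienup hybrid input-output (HIO) sketch on a Cartesian FFT grid.
import numpy as np

def hio(measured_magnitude, support, n_iter=200, beta=0.9):
    rng = np.random.default_rng(5)
    g = rng.random(measured_magnitude.shape) * support        # random start inside support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = measured_magnitude * np.exp(1j * np.angle(G))      # enforce Fourier magnitude
        g_prime = np.real(np.fft.ifft2(G))
        outside = (~support) | (g_prime < 0)                   # support + positivity violation
        g = np.where(outside, g - beta * g_prime, g_prime)     # HIO update
    return g

# Toy usage: recover a small object from its (oversampled) Fourier magnitude.
obj = np.zeros((64, 64))
obj[28:36, 26:38] = 1.0
support = np.zeros((64, 64), dtype=bool)
support[24:40, 22:42] = True
recon = hio(np.abs(np.fft.fft2(obj)), support)
print("Fourier-magnitude misfit:",
      np.abs(np.abs(np.fft.fft2(recon)) - np.abs(np.fft.fft2(obj))).mean())
```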

  8. Anterior cruciate ligament augmentation for rotational instability following primary reconstruction with an accelerated physical therapy protocol.

    PubMed

    Carey, Timothy; Oliver, David; Pniewski, Josh; Mueller, Terry; Bojescul, John

    2013-01-01

    The purpose of the present study is to present the results of anterior cruciate ligament (ACL) augmentation for patients having rotational instability despite an intact vertical graft in lieu of conventional revision ACL reconstruction. ACL augmentation surgery with a horizontal graft was performed to augment a healed vertical graft on five patients and an accelerated rehabilitation protocol was instituted. Functional outcomes were assessed by the Lower Extremity Functional Scale (LEFS) and the Modified Cincinnati Rating System (MCRS). All patients completed physical therapy within 5 months and were able to return to full military duty without limitation. LEFS and MCRS were significantly improved. ACL augmentation with a horizontal graft provides an excellent alternative to ACL revision reconstruction for patients with an intact vertical graft, allowing an earlier return to duty for military service members.

  9. Ultrafast image reconstruction of a dual-head PET system by use of CUDA architecture

    NASA Astrophysics Data System (ADS)

    Hung, YuKai; Dong, Yun; Chern, Felix R.; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu; Chou, Cheng-Ying

    2011-03-01

    Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. For small-animal PET imaging, it is of major interest to improve the sensitivity and resolution. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads. The highly accurate system response matrix can be computed by Monte Carlo simulations and stored for iterative reconstruction methods. The detector head employs 2.1 × 2.1 × 20 mm³ LSO/LYSO crystals with a pitch of 2.4 mm, and thus produces more than 224 million lines of response (LORs). By exploiting the symmetry properties of the dual-head system, the computational demands can be dramatically reduced. Nevertheless, the tremendously large system size and the repeated reading of the system response matrix from the hard drive result in extremely long reconstruction times. An implementation of the ordered-subset expectation maximization (OSEM) algorithm on a CPU system (four Athlon x64 2.0 GHz PCs) took about 2 days for one iteration. Consequently, it is imperative to significantly accelerate the reconstruction process to make it more useful for practical applications. Specifically, the graphics processing unit (GPU), which possesses a highly parallel architecture of computing units, can be exploited to achieve a substantial speedup. In this work, we employed a state-of-the-art GPU, the NVIDIA Tesla C2050 based on the Fermi generation of the compute unified device architecture (CUDA), to reduce the reconstruction process to a few minutes. We demonstrate that reconstruction times can be drastically reduced by using the GPU. The OSEM reconstruction algorithm was implemented in both GPU-based and CPU-based code, and the computational performance was quantitatively analyzed and compared.

  10. An efficient simultaneous reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Atkinson, Callum; Soria, Julio

    2009-10-01

    To date, Tomo-PIV has involved the use of the multiplicative algebraic reconstruction technique (MART), where the intensity of each 3D voxel is iteratively corrected to satisfy one recorded projection, or pixel intensity, at a time. This results in reconstruction times of multiple hours for each velocity field and requires considerable computer memory in order to store the associated weighting coefficients and intensity values for each point in the volume. In this paper, a rapid and less memory intensive reconstruction algorithm is presented based on a multiplicative line-of-sight (MLOS) estimation that determines possible particle locations in the volume, followed by simultaneous iterative correction. Reconstructions of simulated images are presented for two simultaneous algorithms (SART and SMART) as well as the now standard MART algorithm, which indicate that the same accuracy as MART can be achieved 5.5 times faster or 77 times faster with 15 times less memory if the processing and storage of the weighting matrix is considered. Application of MLOS-SMART and MART to a turbulent boundary layer at Re_θ = 2200 using a 4 camera Tomo-PIV system with a volume of 1,000 × 1,000 × 160 voxels is discussed. Results indicate improvements in reconstruction speed of 15 times that of MART with precalculated weighting matrix, or 65 times if calculation of the weighting matrix is considered. Furthermore the memory needed to store a large weighting matrix and volume intensity is reduced by almost 40 times in this case.

  11. Research on image matching method of big data image of three-dimensional reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong

    2015-12-01

    Image matching is a core step in three-dimensional reconstruction. With the development of computer processing technology, retrieving the images to be matched from large image data sets acquired in different formats, at different scales, and at different locations places new demands on image matching. To enable three-dimensional reconstruction based on image matching over such large data sets, this paper puts forward an effective matching method based on a visual bag-of-words model. The method has two main components: building the bag-of-words model and performing image matching. First, SIFT feature points are extracted from the images in the database and clustered to generate the bag-of-words model. Inverted files are then built from the bag of words, so that each visual word indexes all images in which it appears. Matching is performed only among images that share visual words, which improves matching efficiency (a minimal sketch of this idea is given below). Finally, the three-dimensional model is built from the matched images. Experimental results indicate that this method improves matching efficiency and meets the requirements of large-scale data reconstruction.
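
    A minimal illustration of the vocabulary/inverted-file idea, using k-means on synthetic descriptors instead of real SIFT features, might look like this; the vocabulary size, descriptor dimensionality, and data generation are arbitrary choices, not the authors' pipeline.

```python
# Sketch: visual bag-of-words vocabulary + inverted file for candidate retrieval.
# Synthetic descriptors (drawn around shared "visual word" centres) stand in for SIFT.
from collections import defaultdict
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(6)
n_images, n_desc, dim, n_words = 20, 5, 32, 32
centres = rng.random((n_words, dim))
descriptors = {i: centres[rng.choice(n_words, n_desc)]
                  + 0.01 * rng.standard_normal((n_desc, dim))
               for i in range(n_images)}

# 1. Vocabulary: cluster all descriptors in the database into visual words.
codebook, _ = kmeans(np.vstack(list(descriptors.values())), n_words)

# 2. Inverted file: visual word -> set of images containing that word.
inverted = defaultdict(set)
for img_id, desc in descriptors.items():
    words, _ = vq(desc, codebook)
    for w in set(words.tolist()):
        inverted[w].add(img_id)

# 3. Candidates for a query image: only images sharing at least one visual word.
query = 0
query_words, _ = vq(descriptors[query], codebook)
candidates = set().union(*(inverted[w] for w in set(query_words.tolist()))) - {query}
print(f"image {query}: match against {len(candidates)} candidates instead of {n_images - 1}")
```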

  12. Reconstruction of hyperspectral image using matting model for classification

    NASA Astrophysics Data System (ADS)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to each spectral band of an HSI that can be decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of an HSI with more refined information than most other bands is selected, and is referred to as an alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.

  13. Limited Angle Reconstruction Method for Reconstructing Terrestrial Plasmaspheric Densities from EUV Images

    NASA Technical Reports Server (NTRS)

    Newman, Timothy; Santhanam, Naveen; Zhang, Huijuan; Gallagher, Dennis

    2003-01-01

    A new method for reconstructing the global 3D distribution of plasma densities in the plasmasphere from a limited number of 2D views is presented. The method is aimed at using data from the Extreme Ultra Violet (EUV) sensor on NASA's Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Physical properties of the plasmasphere are exploited by the method to reduce the level of inaccuracy imposed by the limited number of views. The utility of the method is demonstrated on synthetic data.

  14. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided. PMID:21934823
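
    The paper's contribution is the exploitation of inter-band correlations, which is not reproduced here; as background, the basic relation between the four microgrid polarizer orientations and the linear Stokes parameters can be sketched as follows, assuming ideal polarizers and a simple 2x2 superpixel without demosaicking.

```python
# Basic linear Stokes computation from a 2x2 microgrid superpixel with ideal
# polarizers at 0, 45, 90 and 135 degrees (no demosaicking or band fusion).
import numpy as np

def stokes_from_superpixels(frame):
    """frame: (2H, 2W) array with the repeating polarizer pattern
       [[0, 45], [90, 135]] degrees."""
    i0, i45 = frame[0::2, 0::2], frame[0::2, 1::2]
    i90, i135 = frame[1::2, 0::2], frame[1::2, 1::2]
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

# Toy frame: unpolarized light of unit intensity -> s1 = s2 = 0, s0 = 1.
frame = np.full((8, 8), 0.5)        # each ideal polarizer passes half the intensity
s0, s1, s2 = stokes_from_superpixels(frame)
print(s0.mean(), s1.mean(), s2.mean())
```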

  15. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET

    NASA Astrophysics Data System (ADS)

    Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.

    2016-05-01

    The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.

  16. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET.

    PubMed

    Goorden, Marlies C; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J

    2016-05-21

    The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning (99m)Tc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes ('multiple-pinhole paths' (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging (18)F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport. PMID:27082049

  17. Boundary conditions in photoacoustic tomography and image reconstruction.

    PubMed

    Wang, Lihong V; Yang, Xinmai

    2007-01-01

    Recently, the field of photoacoustic tomography has experienced considerable growth. Although several commercially available pure optical imaging modalities, including confocal microscopy, two-photon microscopy, and optical coherence tomography, have been highly successful, none of these technologies can penetrate beyond approximately 1 mm into scattering biological tissues because all of them are based on ballistic and quasiballistic photons. Consequently, heretofore there has been a void in high-resolution optical imaging beyond this depth limit. Photoacoustic tomography has filled this void by combining high ultrasonic resolution and strong optical contrast in a single modality. However, it has been assumed in reconstruction of photoacoustic tomography until now that ultrasound propagates in a boundary-free infinite medium. We present the boundary conditions that must be considered in certain imaging configurations; the associated inverse solutions for image reconstruction are provided and validated by numerical simulation and experiment. Partial planar, cylindrical, and spherical detection configurations with a planar boundary are covered, where the boundary can be either hard or soft. Analogously to the method of images of sources, which is commonly used in forward problems, the ultrasonic detectors are imaged about the boundary to satisfy the boundary condition in the inverse problem. PMID:17343502

  18. Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone.

    PubMed

    Cole, J M; Wood, J C; Lopes, N C; Poder, K; Abel, R L; Alatabi, S; Bryant, J S J; Jin, A; Kneip, S; Mecseki, K; Symes, D R; Mangles, S P D; Najmudin, Z

    2015-01-01

    A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications.

  19. Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone.

    PubMed

    Cole, J M; Wood, J C; Lopes, N C; Poder, K; Abel, R L; Alatabi, S; Bryant, J S J; Jin, A; Kneip, S; Mecseki, K; Symes, D R; Mangles, S P D; Najmudin, Z

    2015-01-01

    A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications. PMID:26283308

  20. POCSENSE: POCS-based reconstruction for sensitivity encoded magnetic resonance imaging.

    PubMed

    Samsonov, Alexei A; Kholmovski, Eugene G; Parker, Dennis L; Johnson, Chris R

    2004-12-01

    A novel method for iterative reconstruction of images from undersampled MRI data acquired by multiple receiver coil systems is presented. Based on Projection onto Convex Sets (POCS) formalism, the method for SENSitivity Encoded data reconstruction (POCSENSE) can be readily modified to include various linear and nonlinear reconstruction constraints. Such constraints may be beneficial for reconstructing highly and overcritically undersampled data sets to improve image quality. POCSENSE is conceptually simple and numerically efficient and can reconstruct images from data sampled on arbitrary k-space trajectories. The applicability of POCSENSE for image reconstruction with nonlinear constraining was demonstrated using a wide range of simulated and real MRI data.
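
    As a rough illustration of the POCS formalism referred to above, the sketch below alternates a data-consistency projection (re-inserting the measured k-space samples) with a simple convex image-domain constraint for a single coil. It is a toy example under simplifying assumptions, not the POCSENSE algorithm itself; coil sensitivities and arbitrary k-space trajectories are omitted.

        import numpy as np

        def pocs_recon(kspace, mask, n_iter=50, clip=(0.0, np.inf)):
            """Toy single-coil POCS reconstruction from undersampled k-space.

            kspace : measured k-space with zeros at unsampled locations.
            mask   : boolean array, True where k-space was sampled.
            """
            img = np.zeros(kspace.shape, dtype=complex)
            for _ in range(n_iter):
                # Projection 1: data consistency -- keep the measured samples.
                k = np.fft.fft2(img)
                k[mask] = kspace[mask]
                img = np.fft.ifft2(k)
                # Projection 2: convex image-domain constraint (intensity bound).
                img = np.clip(img.real, *clip).astype(complex)
            return np.abs(img)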

  1. Image reconstruction and optimization using a terahertz scanned imaging system

    NASA Astrophysics Data System (ADS)

    Yıldırım, İhsan Ozan; Özkan, Vedat A.; Idikut, Fırat; Takan, Taylan; Şahin, Asaf B.; Altan, Hakan

    2014-10-01

    Due to the limited number of array detection architectures in the millimeter wave to terahertz region of the electromagnetic spectrum, imaging schemes with scan architectures are typically employed. In these configurations the interplay between the frequencies used to illuminate the scene and the optics used plays an important role in the quality of the formed image. Using a multiplied Schottky-diode based terahertz transceiver operating at 340 GHz in a stand-off detection scheme, the image quality of a metal target was assessed as a function of the scanning speed of the galvanometer mirrors as well as the optical system that was constructed. Background effects such as leakage on the receiver were minimized by conditioning the signal at the output of the transceiver. Then, the image of the target was simulated based on known parameters of the optical system and the measured images were compared to the simulation. Using an image quality index based on the χ2 algorithm, the simulated and measured images were found to be in good agreement, with a value of χ2 = 0.14. The measurements shown here will aid in the future development of larger stand-off imaging systems that work in the terahertz frequency range.

  2. Model-based microwave image reconstruction: simulations and experiments

    SciTech Connect

    Ciocan, Razvan; Jiang Huabei

    2004-12-01

    We describe an integrated microwave imaging system that can provide spatial maps of dielectric properties of heterogeneous media with tomographically collected data. The hardware system (800-1200 MHz) was built based on a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularizations. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method coupled with the Bayliss and Turkel radiation boundary conditions was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76-mm-diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is the smallest object that can be fully characterized presently using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.

  3. PET image reconstruction with anatomical edge guided level set prior

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2011-11-01

    Acquiring both anatomical and functional images during one scan, PET/CT systems improve the ability to detect and localize abnormal uptakes. In addition, CT images provide anatomical boundary information that can be used to regularize positron emission tomography (PET) images. Here we propose a new approach to maximum a posteriori reconstruction of PET images with a level set prior guided by anatomical edges. The image prior models both the smoothness of PET images and the similarity between functional boundaries in PET and anatomical boundaries in CT. Level set functions (LSFs) are used to represent smooth and closed functional boundaries. The proposed method does not assume an exact match between PET and CT boundaries. Instead, it encourages similarity between the two boundaries, while allowing different region definition in PET images to accommodate possible signal and position mismatch between functional and anatomical images. While the functional boundaries are guaranteed to be closed by the LSFs, the proposed method does not require closed anatomical boundaries and can utilize incomplete edges obtained from an automatic edge detection algorithm. We conducted computer simulations to evaluate the performance of the proposed method. Two digital phantoms were constructed based on the Digimouse data and a human CT image, respectively. Anatomical edges were extracted automatically from the CT images. Tumors were simulated in the PET phantoms with different mismatched anatomical boundaries. Compared with existing methods, the new method achieved better bias-variance performance. The proposed method was also applied to real mouse data and achieved higher contrast than other methods.

  4. Fully three-dimensional OSEM-based image reconstruction for Compton imaging using optimized ordering schemes

    NASA Astrophysics Data System (ADS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Chun Sik; Kim, Chan Hyeong; Lee, Myung Chul; Lee, Dong Soo; Lee, Soo-Jin

    2010-09-01

    Although the ordered subset expectation maximization (OSEM) algorithm does not converge to a true maximum likelihood solution, it is known to provide a good solution if the projections that constitute each subset are reasonably balanced. The Compton scattered data can be allocated to subsets using scattering angles (SA) or detected positions (DP) or a combination of the two (AP (angles and positions)). To construct balanced subsets, the data were first arranged using three ordering schemes: the random ordering scheme (ROS), the multilevel ordering scheme (MLS) and the weighted-distance ordering scheme (WDS). The arranged data were then split into J subsets. To compare the three ordering schemes, we calculated the coefficients of variation (CVs) of angular and positional differences between the arranged data and the percentage errors between mathematical phantoms and reconstructed images. All ordering schemes showed an order-of-magnitude acceleration over the standard EM, and their computation times were similar. The SA-based MLS and the DP-based WDS led to the best-balanced subsets (they provided the largest angular and positional differences for SA- and DP-based arrangements, respectively). The WDS exhibited minimum CVs for both the SA- and DP-based arrangements (the deviation in mean angular and positional differences between the ordered subsets was smallest). The combination of AP and WDS yielded the best results with the lowest percentage errors by providing larger and more uniform angular and positional differences for the SA and DP arrangements, and is thus probably optimal for Compton camera reconstruction using OSEM.

  5. Fully three-dimensional OSEM-based image reconstruction for Compton imaging using optimized ordering schemes.

    PubMed

    Kim, Soo Mee; Lee, Jae Sung; Lee, Chun Sik; Kim, Chan Hyeong; Lee, Myung Chul; Lee, Dong Soo; Lee, Soo-Jin

    2010-09-01

    Although the ordered subset expectation maximization (OSEM) algorithm does not converge to a true maximum likelihood solution, it is known to provide a good solution if the projections that constitute each subset are reasonably balanced. The Compton scattered data can be allocated to subsets using scattering angles (SA) or detected positions (DP) or a combination of the two (AP (angles and positions)). To construct balanced subsets, the data were first arranged using three ordering schemes: the random ordering scheme (ROS), the multilevel ordering scheme (MLS) and the weighted-distance ordering scheme (WDS). The arranged data were then split into J subsets. To compare the three ordering schemes, we calculated the coefficients of variation (CVs) of angular and positional differences between the arranged data and the percentage errors between mathematical phantoms and reconstructed images. All ordering schemes showed an order-of-magnitude acceleration over the standard EM, and their computation times were similar. The SA-based MLS and the DP-based WDS led to the best-balanced subsets (they provided the largest angular and positional differences for SA- and DP-based arrangements, respectively). The WDS exhibited minimum CVs for both the SA- and DP-based arrangements (the deviation in mean angular and positional differences between the ordered subsets was smallest). The combination of AP and WDS yielded the best results with the lowest percentage errors by providing larger and more uniform angular and positional differences for the SA and DP arrangements, and is thus probably optimal for Compton camera reconstruction using OSEM.
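
    The subset construction discussed above can be sketched for the simplest of the three schemes, the random ordering scheme (ROS): events are shuffled and dealt into J subsets before the OSEM updates. The event representation and function name below are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def random_ordering_subsets(events, n_subsets, seed=0):
            """Split list-mode events into J subsets after random shuffling (ROS).

            events : sequence of event records (e.g., scattering angle, detected position).
            Returns a list of J index arrays, one per OSEM subset.
            """
            rng = np.random.default_rng(seed)
            order = rng.permutation(len(events))
            return [order[j::n_subsets] for j in range(n_subsets)]

        # Example: 10 dummy events split into 3 subsets
        subsets = random_ordering_subsets(list(range(10)), 3)
        print([s.tolist() for s in subsets])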

  6. Three-dimensional image reconstruction in object space

    SciTech Connect

    Kinahan, P.E.; Rogers, J.G.; Harrop, R.; Johnson, R.R.

    1988-02-01

    An analytic three-dimensional image reconstruction algorithm which can utilize the cross-plane gamma rays detected by a wide solid-angle PET system is presented. Unlike current analytic algorithms it does not use Fourier transform methods, although mathematical equivalence to Fourier transform methods is proven. Results of implementing the algorithm are briefly discussed. An extension of the algorithm to utilize all measured cross-plane gamma rays is discussed.

  7. LIRA: Low-counts Image Reconstruction and Analysis

    NASA Astrophysics Data System (ADS)

    Connors, Alanna; Kashyap, Vinay; Siemiginowska, Aneta; van Dyk, David; Stein, Nathan M.

    2016-01-01

    LIRA (Low-counts Image Reconstruction and Analysis) deconvolves any unknown sky components, provides a fully Poisson 'goodness-of-fit' for any best-fit model, and quantifies uncertainties on the existence and shape of unknown sky. It does this without resorting to χ2 or rebinning, which can lose high-resolution information. It is written in R and requires the FITSio package.

  8. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-11-15

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.

  9. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    PubMed Central

    Sidky, Emil Y.; Pan, Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging. PMID:19994501
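
    The total p-variation used as the regularity measure above can be written as the sum over pixels of the gradient magnitude raised to the power p, reducing to the total variation for p = 1 and to the image roughness for p = 2. A minimal numerical sketch follows (forward differences and a small smoothing constant are assumed for differentiability; this is not the authors' reconstruction code).

        import numpy as np

        def total_p_variation(img, p=1.0, eps=1e-8):
            """Total p-variation of a 2D image: sum over pixels of |grad f|**p.

            Reduces to total variation for p = 1 and to image roughness for p = 2.
            A small eps keeps the gradient magnitude differentiable at zero.
            """
            gx = np.diff(img, axis=0, append=img[-1:, :])   # forward differences
            gy = np.diff(img, axis=1, append=img[:, -1:])
            grad_mag = np.sqrt(gx**2 + gy**2 + eps)
            return np.sum(grad_mag**p)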

  10. Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging.

    PubMed

    Jung, Jae-Hyun; Hong, Keehoon; Park, Gilbae; Chung, Indeok; Park, Jae-Hyeung; Lee, Byoungho

    2010-12-01

    We proposed a reconstruction method for the occluded region of three-dimensional (3D) object using the depth extraction based on the optical flow and triangular mesh reconstruction in integral imaging. The depth information of sub-images from the acquired elemental image set is extracted using the optical flow with sub-pixel accuracy, which alleviates the depth quantization problem. The extracted depth maps of sub-image array are segmented by the depth threshold from the histogram based segmentation, which is represented as the point clouds. The point clouds are projected to the viewpoint of center sub-image and reconstructed by the triangular mesh reconstruction. The experimental results support the validity of the proposed method with high accuracy of peak signal-to-noise ratio and normalized cross-correlation in 3D image recognition.

  11. Impact of measurement precision and noise on superresolution image reconstruction.

    PubMed

    Wood, Sally L; Lee, Shu-Ting; Yang, Gao; Christensen, Marc P; Rajan, Dinesh

    2008-04-01

    The performance of uniform and nonuniform detector arrays for application to the PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor) flat camera design is analyzed for measurement noise environments including quantization noise and Gaussian and Poisson processes. Image data acquired from a commercial camera with 8 bit and 14 bit output options are analyzed, and estimated noise levels are computed. Noise variances estimated from the measurement values are used in the optimal linear estimators for superresolution image reconstruction.

  12. Image stitching and image reconstruction of intestines captured using radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau

    2012-05-01

    This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object has a shape that is similar to a cylinder, such that a triaxial platform can be used to push the RICE into the sample and capture radial images. Then four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69, relative to the original image. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images that were captured in the living animal experiment. This method is very attractive because, unlike the other methods, in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.
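
    A hedged sketch of how a Pearson-correlation score can drive the stitching step described above: candidate overlaps between neighbouring frames are scored by the correlation of the shared strip, and the best-scoring overlap is used to concatenate the frames. The strip geometry and search range are illustrative, not the RICE processing chain.

        import numpy as np

        def best_overlap(left, right, max_shift=40):
            """Find the horizontal overlap (in pixels) between two frames that
            maximizes the Pearson correlation coefficient of the shared strip."""
            best_r, best_s = -1.0, 1
            for s in range(1, max_shift + 1):
                a = left[:, -s:].ravel()          # right edge of the left frame
                b = right[:, :s].ravel()          # left edge of the right frame
                r = np.corrcoef(a, b)[0, 1]
                if r > best_r:
                    best_r, best_s = r, s
            return best_s, best_r

        def stitch(left, right, shift):
            """Concatenate the two frames, dropping the overlapping strip once."""
            return np.hstack([left, right[:, shift:]])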

  13. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  14. Sub-angstrom microscopy through incoherent imaging and image reconstruction

    NASA Astrophysics Data System (ADS)

    Pennycook, S. J.; Jesson, D. E.; Chisholm, M. F.; Ferridge, A. G.; Seddon, M. J.

    1992-03-01

    Z-contrast scanning transmission electron microscopy (STEM) with a high-angle annular detector breaks the coherence of the imaging process, and provides an incoherent image of a crystal projection. Even in the presence of strong dynamical diffraction, the image can be accurately described as a convolution between an object function, sharply peaked at the projected atomic sites, and the probe intensity profile. Such an image can be inverted intuitively without the need for model structures, and therefore provides the important capability to reveal unanticipated interfacial arrangements. It represents a direct image of the crystal projection, revealing the location of the atomic columns and their relative high-angle scattering power. Since no phase is associated with a peak in the object function or the contrast transfer function, extension to higher resolution is also straightforward. Image restoration techniques such as maximum entropy, in conjunction with the 1.3 Å probe anticipated for a 300 kV STEM, appear to provide a simple and robust route to the achievement of sub-Å resolution electron microscopy.

  15. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper-bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
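
    The Hotelling observer figure of merit used above is the mean difference image between the two hypotheses weighted by the inverse image covariance (SNR^2 = ds^T K^{-1} ds). A minimal sample-based sketch is shown below; the small regularization term added for invertibility and the pooled-covariance estimate are illustrative assumptions.

        import numpy as np

        def hotelling_snr(imgs_absent, imgs_present):
            """Hotelling observer SNR from sample images of the two hypotheses.

            imgs_* : arrays of shape (n_samples, n_pixels).
            SNR^2 = ds^T K^{-1} ds with ds the mean difference image and
            K the pooled covariance (regularized for invertibility).
            """
            ds = imgs_present.mean(axis=0) - imgs_absent.mean(axis=0)
            K = 0.5 * (np.cov(imgs_absent, rowvar=False) +
                       np.cov(imgs_present, rowvar=False))
            K += 1e-6 * np.eye(K.shape[0])        # mild regularization
            return float(np.sqrt(ds @ np.linalg.solve(K, ds)))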

  16. Iterative Self-Dual Reconstruction on Radar Image Recovery

    SciTech Connect

    Martins, Charles; Medeiros, Fatima; Ushizima, Daniela; Bezerra, Francisco; Marques, Regis; Mascarenhas, Nelson

    2010-05-21

    Imaging systems such as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subjected to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while it smooths homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.

  17. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  18. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  19. Reconstructing chromophore concentration images directly by continuous-wave diffuse optical tomography.

    PubMed

    Li, Ang; Zhang, Quan; Culver, Joseph P; Miller, Eric L; Boas, David A

    2004-02-01

    We present an algorithm to reconstruct chromophore concentration images directly rather than following the traditional two-step process of reconstructing wavelength-dependent absorption coefficient images and then calculating chromophore concentration images. This procedure imposes prior spectral information into the image reconstruction that results in a dramatic improvement in the image contrast-to-noise ratio of better than 100%. We demonstrate this improvement with simulations and a dynamic blood phantom experiment. PMID:14759043
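
    The spectral prior exploited above rests on the linear relation μa(λ) = Σi εi(λ) ci between wavelength-dependent absorption and chromophore concentrations. The sketch below only illustrates that relation as a least-squares fit; in the direct method of the abstract the relation is imposed inside the image reconstruction rather than applied afterwards, and the extinction values shown are placeholders, not real coefficients.

        import numpy as np

        # Rows: wavelengths; columns: chromophores (e.g., HbO2, Hb).
        # These extinction coefficients are illustrative placeholders only.
        extinction = np.array([[0.60, 1.40],     # e.g., 690 nm
                               [0.95, 0.95],     # e.g., 780 nm
                               [1.30, 0.70]])    # e.g., 830 nm

        def concentrations_from_mua(mua_per_wavelength):
            """Least-squares fit of chromophore concentrations from mu_a(lambda)."""
            c, *_ = np.linalg.lstsq(extinction, mua_per_wavelength, rcond=None)
            return c

        print(concentrations_from_mua(np.array([1.0, 0.95, 1.0])))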

  20. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction

    PubMed Central

    Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.

    2013-01-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has time performance slightly faster than, or approximately comparable to, that of FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, neither of which an FFT-based approach can cope with. PMID:23531763

  1. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    SciTech Connect

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution modeling image reconstruction.

  2. Superresolution image reconstruction using panchromatic and multispectral image fusion

    NASA Astrophysics Data System (ADS)

    Elbakary, M. I.; Alam, M. S.

    2008-08-01

    Hyperspectral imagery is used for a wide variety of applications, including target detection, tracking, agricultural monitoring and natural resources exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that is not available in a single band. Unfortunately, many factors such as sensor noise and atmospheric scattering degrade the spatial quality of these images. Recently, many algorithms have been introduced in the literature to improve the resolution of hyperspectral images using co-registered high spatial-resolution imagery such as panchromatic imagery. In this paper, we propose a new algorithm to enhance the spatial resolution of low resolution hyperspectral bands using strongly correlated and co-registered high spatial-resolution panchromatic imagery. The proposed algorithm constructs the superresolution bands corresponding to the low resolution bands to enhance the resolution using a global correlation enhancement technique. The global enhancement is based on least-squares regression and histogram matching to improve the estimated interpolation of the spatial resolution. The introduced algorithm is considered an improvement over Price's algorithm, which uses only the global correlation for spatial resolution enhancement. Numerous studies were conducted to investigate the effect of the proposed algorithm in achieving the enhancement compared to the traditional algorithm for superresolution enhancement. Experimental results obtained using hyperspectral data derived from an airborne imaging sensor are presented to verify the superiority of the proposed algorithm.
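
    A hedged sketch of the global-correlation step described above: the panchromatic image is regressed against an already upsampled low-resolution band by least squares, and the prediction is then matched back to the band statistics (a simple moment-based stand-in for histogram matching). The function name and the mean/std matching are illustrative assumptions, not the published algorithm.

        import numpy as np

        def enhance_band(lowres_band_up, pan):
            """Global least-squares regression of the low-resolution band onto the
            panchromatic image, followed by a simple mean/std match back to the
            band statistics."""
            a, b = np.polyfit(pan.ravel(), lowres_band_up.ravel(), 1)
            pred = a * pan + b                              # regression estimate
            pred = (pred - pred.mean()) / (pred.std() + 1e-12)
            return pred * lowres_band_up.std() + lowres_band_up.mean()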

  3. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  4. Application of DIRI dynamic infrared imaging in reconstructive surgery

    NASA Astrophysics Data System (ADS)

    Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier

    2006-04-01

    We have developed the BioScanIR System based on QWIP (Quantum Well Infrared Photodetector). Data collected by this sensor are processed using the DIRI (Dynamic Infrared Imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of the tissue and organs associated with blood perfusion due to certain diseases such as cancer, vascular disease and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan with definitive results available in minutes. The algorithms used include the fast Fourier transform (FFT), motion artifact correction, spectral analysis and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing and peripheral vascular disease management.

  5. Simultaneous reconstruction and segmentation for dynamic SPECT imaging

    NASA Astrophysics Data System (ADS)

    Burger, Martin; Rossmanith, Carolin; Zhang, Xiaoqun

    2016-10-01

    This work deals with the reconstruction of dynamic images that incorporate characteristic dynamics in certain subregions, as arising for the kinetics of many tracers in emission tomography (SPECT, PET). We make use of a basis function approach for the unknown tracer concentration by assuming that the region of interest can be divided into subregions with spatially constant concentration curves. Applying a regularised variational framework reminiscent of the Chan-Vese model for image segmentation we simultaneously reconstruct both the labelling functions of the subregions as well as the subconcentrations within each region. Our particular focus is on applications in SPECT with the Poisson noise model, resulting in a Kullback-Leibler data fidelity in the variational approach. We present a detailed analysis of the proposed variational model and prove existence of minimisers as well as error estimates. The latter apply to a more general class of problems and generalise existing results in literature since we deal with a nonlinear forward operator and a nonquadratic data fidelity. A computational algorithm based on alternating minimisation and splitting techniques is developed for the solution of the problem and tested on appropriately designed synthetic data sets. For those we compare the results to those of standard EM reconstructions and investigate the effects of Poisson noise in the data.

  6. Constrain static target kinetic iterative image reconstruction for 4D cardiac CT imaging

    NASA Astrophysics Data System (ADS)

    Alessio, Adam M.; La Riviere, Patrick J.

    2011-03-01

    Iterative image reconstruction offers improved signal to noise properties for CT imaging. A primary challenge with iterative methods is the substantial computation time. This computation time is even more prohibitive in 4D imaging applications, such as cardiac gated or dynamic acquisition sequences. In this work, we propose only updating the time-varying elements of a 4D image sequence while constraining the static elements to be fixed or slowly varying in time. We test the method with simulations of 4D acquisitions based on measured cardiac patient data from a) a retrospective cardiac-gated CT acquisition and b) a dynamic perfusion CT acquisition. We target the kinetic elements with one of two methods: 1) position a circular ROI on the heart, assuming the area outside the ROI is essentially static throughout the imaging time; and 2) select varying elements from the coefficient of variation image formed from fast analytic reconstruction of all time frames. Targeted kinetic elements are updated with each iteration, while static elements remain fixed at initial image values formed from the reconstruction of data from all time frames. Results confirm that the computation time is proportional to the number of targeted elements; our simulations suggest that <30% of elements need to be updated in each frame, leading to >3-fold reductions in reconstruction time. The images reconstructed with the proposed method have mean square error matched to that of full 4D reconstruction. The proposed method is amenable to most optimization algorithms and offers the potential for significant computation improvements, which could be traded off for more sophisticated system models or penalty terms.
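
    A minimal sketch of the targeted-update idea above: only voxels inside a kinetic mask are refreshed at each iteration, while static voxels keep their values from the initial all-frames reconstruction. The generic gradient step stands in for whatever CT update is actually used; all names are illustrative.

        import numpy as np

        def masked_iterations(x_init, mask, gradient, step=0.1, n_iter=20):
            """x_init  : initial image from reconstructing all time frames.
            mask      : boolean array, True for kinetic (time-varying) voxels.
            gradient  : callable returning the data-fit gradient for this frame."""
            x = x_init.copy()
            for _ in range(n_iter):
                g = gradient(x)
                x[mask] -= step * g[mask]      # update only the targeted elements
            return x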

  7. Spectral-overlap approach to multiframe superresolution image reconstruction.

    PubMed

    Cohen, Edward; Picard, Richard H; Crabtree, Peter N

    2016-05-20

    Various techniques and algorithms have been developed to improve the resolution of sensor-aliased imagery captured with multiple subpixel-displaced frames on an undersampled pixelated image plane. These dealiasing algorithms are typically known as multiframe superresolution (SR), or geometric SR to emphasize the role of the focal-plane array. Multiple low-resolution (LR) aliased frames of the same scene are captured and allocated to a common high-resolution (HR) reconstruction grid, leading to the possibility of an alias-free reconstruction, as long as the HR sampling rate is above the Nyquist rate. Allocating LR-frame irradiances to HR frames requires the use of appropriate weights. Here we present a novel approach in the spectral domain to calculating exact weights based on spatial overlap areas, which we call the spectral-overlap (SO) method. We emphasize that the SO method is not a spectral approach but rather an approach to calculating spatial weights that uses spectral decompositions to exploit the array properties of the HR and LR pixels. The method is capable of dealing with arbitrary aliasing factors and interframe motions consisting of in-plane translations and rotations. We calculate example reconstructed HR images (the inverse problem) from synthetic aliased images for integer and for fractional aliasing factors. We show the utility of the SO-generated overlap-area weights in both noniterative and iterative reconstructions with known or unknown aliasing factor. We show how the overlap weights can be used to generate the Green's function (pixel response function) for noniterative dealiasing. In addition, we show how the overlap-area weights can be used to generate synthetic aliased images (the forward problem). We compare the SO approach to the spatial-domain geometric approach of O'Rourke and find virtually identical high accuracy but with significant enhancements in speed for SO. We also compare the SO weights to interpolated weights and find that

  8. Improved proton computed tomography by dual modality image reconstruction

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of prior image proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360° scan.

  9. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  10. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
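
    The momentum acceleration and adaptive restart mentioned above can be sketched generically as a FISTA-type iteration with a gradient-based restart test. The sketch uses a single Lipschitz step for an explicit matrix A and an l1 penalty; it does not reproduce the B1-based majorizer that defines BARISTA, and all names are illustrative.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def fista_restart(A, y, lam, L, n_iter=200):
            """ISTA with Nesterov momentum and adaptive restart for
            min_x 0.5*||A x - y||^2 + lam*||x||_1 (step size 1/L)."""
            x = np.zeros(A.shape[1])
            z = x.copy()
            t = 1.0
            for _ in range(n_iter):
                x_old = x
                grad = A.T @ (A @ z - y)
                x = soft(z - grad / L, lam / L)
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x + ((t - 1.0) / t_new) * (x - x_old)
                # Adaptive restart: drop the momentum when it points uphill.
                if np.dot(grad, x - x_old) > 0:
                    t_new, z = 1.0, x.copy()
                t = t_new
            return x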

  11. Noise spatial nonuniformity and the impact of statistical image reconstruction in CT myocardial perfusion imaging

    SciTech Connect

    Lauzier, Pascal Theriault; Tang Jie; Speidel, Michael A.; Chen Guanghong

    2012-07-15

    Purpose: To achieve high temporal resolution in CT myocardial perfusion imaging (MPI), images are often reconstructed using filtered backprojection (FBP) algorithms from data acquired within a short-scan angular range. However, the variation in the central angle from one time frame to the next in gated short scans has been shown to create detrimental partial scan artifacts when performing quantitative MPI measurements. This study has two main purposes. (1) To demonstrate the existence of a distinct detrimental effect in short-scan FBP, i.e., the introduction of a nonuniform spatial image noise distribution; this nonuniformity can lead to unexpectedly high image noise and streaking artifacts, which may affect CT MPI quantification. (2) To demonstrate that statistical image reconstruction (SIR) algorithms can be a potential solution to address the nonuniform spatial noise distribution problem and can also lead to radiation dose reduction in the context of CT MPI. Methods: Projection datasets from a numerically simulated perfusion phantom and an in vivo animal myocardial perfusion CT scan were used in this study. In the numerical phantom, multiple realizations of Poisson noise were added to projection data at each time frame to investigate the spatial distribution of noise. Images from all datasets were reconstructed using both FBP and SIR reconstruction algorithms. To quantify the spatial distribution of noise, the mean and standard deviation were measured in several regions of interest (ROIs) and analyzed across time frames. In the in vivo study, two low-dose scans at tube currents of 25 and 50 mA were reconstructed using FBP and SIR. Quantitative perfusion metrics, namely, the normalized upslope (NUS), myocardial blood volume (MBV), and first moment transit time (FMT), were measured for two ROIs and compared to reference values obtained from a high-dose scan performed at 500 mA. Results: Images reconstructed using FBP showed a highly nonuniform spatial distribution of noise.

  12. Image reconstruction for the ClearPET™ Neuro

    NASA Astrophysics Data System (ADS)

    Weber, Simone; Morel, Christian; Simon, Luc; Krieguer, Magalie; Rey, Martin; Gundlich, Brigitte; Khodaverdi, Maryam

    2006-12-01

    ClearPET™ is a family of small-animal PET scanners which are currently under development within the Crystal Clear Collaboration (CERN). All scanners are based on the same detector block design using individual LSO and LuYAP crystals in phoswich configuration, coupled to multi-anode photomultiplier tubes. One of the scanners, the ClearPET™ Neuro, is designed for applications in neuroscience. Four detector blocks with 64 2×2×10 mm LSO and LuYAP crystals, arranged in line, form a module. Twenty modules are arranged in a ring with a ring diameter of 13.8 cm and an axial size of 11.2 cm. An insensitive region at the border of the detector heads results in gaps between the detectors axially and tangentially. The detectors rotate by 360° in step-and-shoot mode during data acquisition. Every second module is shifted axially to partly compensate for the gaps between the detector blocks in a module. This unconventional scanner geometry requires dedicated image reconstruction procedures. The data acquisition records single events that are stored with a time mark in a dedicated list mode format. Coincidences are associated offline by software. After sorting the data into 3D sinograms, image reconstruction is performed using the Ordered Subset Maximum A Posteriori One-Step Late (OSMAPOSL) iterative algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Due to the non-conventional scanner design, careful estimation of the sensitivity matrix is needed to obtain artifact-free images from the ClearPET™ Neuro.
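
    Offline coincidence sorting from a time-stamped singles list, as described above, can be sketched as follows. The record layout, the coincidence window, and the policy of discarding windows with more than two hits are illustrative assumptions, not the ClearPET software.

        def sort_coincidences(singles, window_ns=10.0):
            """Pair time-stamped single events into coincidences.

            singles : list of (time_ns, detector_id) tuples, assumed time-sorted.
            Exactly two singles on different detectors within the window form a
            coincidence; windows with more than two hits are discarded entirely.
            """
            coincidences, i, n = [], 0, len(singles)
            while i < n - 1:
                t0, d0 = singles[i]
                # Collect every single falling inside the coincidence window.
                j = i + 1
                while j < n and singles[j][0] - t0 <= window_ns:
                    j += 1
                group = singles[i:j]
                if len(group) == 2 and group[0][1] != group[1][1]:
                    coincidences.append((group[0], group[1]))
                i = j if len(group) > 1 else i + 1
            return coincidences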

  13. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944

  14. Modeling and image reconstruction in spectrally resolved bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Dehghani, Hamid; Pogue, Brian W.; Davis, Scott C.; Patterson, Michael S.

    2007-02-01

    Recent interest in modeling and reconstruction algorithms for Bioluminescence Tomography (BLT) has increased and led to the general consensus that non-spectrally resolved intensity-based BLT results in a non-unique problem. However, the light emitted from, for example, firefly Luciferase is widely distributed over the band of wavelengths from 500 nm to 650 nm and above, with the dominant fraction emitted from tissue being above 550 nm. This paper demonstrates the development of an algorithm used for multi-wavelength 3D spectrally resolved BLT image reconstruction in a mouse model. It is shown that, using single-view data, bioluminescence sources up to 15 mm deep can be successfully recovered given correct information about the underlying tissue absorption and scatter.

  15. Automatic speed of sound correction with photoacoustic image reconstruction

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Cao, Meng; Feng, Ting; Yuan, Jie; Cheng, Qian; Liu, Xiaojun; Xu, Guan; Wang, Xueding

    2016-03-01

    Sound velocity measurement is of great importance in biomedical applications, especially in research on acoustic detection and acoustic tomography. Using the correct sound velocity in each medium, rather than one unified sound propagation speed, can effectively enhance the resolution of sound-based imaging. Photoacoustic tomography (PAT) is cross-sectional or three-dimensional (3D) imaging of a material based on the photoacoustic effect, and it is a developing, non-invasive imaging method in biomedical research. This contribution proposes a method to concurrently calculate multiple acoustic speeds in different media. First, we obtain the size of the internal structure of the target by B-mode ultrasonic imaging. Then we build the photoacoustic (PA) image of the same target with different acoustic speeds in the different media. By repeatedly evaluating the quality of the reconstructed PA image, we dynamically calibrate the acoustic speeds in the different media to build the finest PA image. These speeds of sound are then taken as the correct acoustic propagation velocities in the corresponding media. Experiments show that our non-invasive method can yield the correct speed of sound with less than 0.3% error, which might benefit future research in biomedical science.
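
    A hedged sketch of the speed-sweep idea described above: the PA image is reconstructed for a grid of candidate sound speeds and the speed giving the sharpest image is kept. The beamformer is assumed to be supplied by the user, and the gradient-energy focus metric is an illustrative choice rather than the authors' quality measure.

        import numpy as np

        def sharpness(img):
            """Simple focus metric: mean squared gradient magnitude."""
            gx, gy = np.gradient(img)
            return float(np.mean(gx**2 + gy**2))

        def calibrate_speed(reconstruct, candidates_m_per_s):
            """Return the candidate sound speed whose reconstruction is sharpest.

            reconstruct : callable mapping a sound speed (m/s) to a 2D image.
            """
            scores = [sharpness(reconstruct(c)) for c in candidates_m_per_s]
            return candidates_m_per_s[int(np.argmax(scores))]

        # Usage (with a user-supplied beamformer `recon_at_speed`):
        # best_c = calibrate_speed(recon_at_speed, np.arange(1450, 1601, 5))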

  16. General Structure of Regularization Procedures in Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Titterington, D. M.

    1985-03-01

    Regularization procedures are portrayed as compromises between the conflicting aims of fidelity with the observed image and perfect smoothness. The selection of an estimated image involves the choice of a prescription, indicating the manner of smoothing, and of a smoothing parameter, which defines the degree of smoothing. Prescriptions of the minimum-penalized-distance type are considered and are shown to be equivalent to maximum-penalized-smoothness prescriptions. These include, therefore, constrained least-squares and constrained maximum entropy methods. The formal link with Bayesian statistical analysis is pointed out. Two important methods of choosing the degree of smoothing are described, one based on criteria of consistency with the data and one based on minimizing a risk function. The latter includes minimum mean-squared error criteria. Although the maximum entropy method has some practical advantages, there seems no case for it to hold a special place on philosophical grounds, in the context of image reconstruction.
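
    In its simplest quadratic form, the minimum-penalized-distance prescription discussed above amounts to minimizing a data-fidelity term plus a smoothing parameter times a roughness penalty. A small closed-form sketch follows; the finite-difference roughness operator D and the matrix formulation are assumptions made for illustration.

        import numpy as np

        def penalized_ls(A, y, D, lam):
            """Solve min_x ||A x - y||^2 + lam * ||D x||^2 in closed form,
            where D is a roughness (e.g., finite-difference) operator and
            lam is the smoothing parameter controlling the degree of smoothing."""
            lhs = A.T @ A + lam * (D.T @ D)
            return np.linalg.solve(lhs, A.T @ y)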

  17. Spectral image reconstruction by a tunable LED illumination

    NASA Astrophysics Data System (ADS)

    Lin, Meng-Chieh; Tsai, Chen-Wei; Tien, Chung-Hao

    2013-09-01

    Spectral reflectance estimation of an object via low-dimensional snapshot requires both image acquisition and a subsequent numerical estimation analysis. In this study, we set up a system incorporating a homemade cluster of LEDs with spectral modulation for scene illumination and a multi-channel CCD to acquire multichannel images by means of a fully digital process. Principal component analysis (PCA) and pseudo-inverse transformation were used to reconstruct the spectral reflectance in a constrained training set, such as the Munsell and Macbeth Color Checker sets. The average spectral reflectance RMS error from 34 patches of a standard color checker was 0.234. The purpose is to investigate the use of the system in conjunction with imaging analysis for industrial or medical inspection with fast and acceptable accuracy; the approach was preliminarily validated.
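
    A hedged sketch of the PCA-plus-pseudoinverse estimation described above: training reflectances give a low-dimensional basis, a forward matrix maps basis coefficients to camera responses, and its pseudoinverse recovers coefficients (and hence spectra) from a new measurement. Matrix names, dimensions, and the mean-subtraction convention are illustrative assumptions, not the authors' calibration.

        import numpy as np

        def train_estimator(train_reflectances, system_matrix, n_basis=6):
            """train_reflectances : (n_samples, n_wavelengths) training set.
            system_matrix         : (n_channels, n_wavelengths) combined LED/CCD response."""
            mean = train_reflectances.mean(axis=0)
            _, _, vt = np.linalg.svd(train_reflectances - mean, full_matrices=False)
            basis = vt[:n_basis].T                      # (n_wavelengths, n_basis)
            fwd = system_matrix @ basis                 # coefficients -> camera responses
            return mean, basis, np.linalg.pinv(fwd)

        def estimate_reflectance(responses, mean, basis, pinv_fwd, system_matrix):
            """Recover a reflectance spectrum from one multichannel measurement."""
            coeffs = pinv_fwd @ (responses - system_matrix @ mean)
            return mean + basis @ coeffs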

  18. A generalized Fourier penalty in prior-image-based reconstruction for cross-platform imaging

    NASA Astrophysics Data System (ADS)

    Pourmorteza, A.; Siewerdsen, J. H.; Stayman, J. W.

    2016-03-01

    Sequential CT studies present an excellent opportunity to apply prior-image-based reconstruction (PIBR) methods that leverage high-fidelity prior imaging studies to improve image quality and/or reduce x-ray exposure in subsequent studies. One major obstacle in using PIBR is that the initial and subsequent studies are often performed on different scanners (e.g. diagnostic CT followed by CBCT for interventional guidance); this results in a mismatch in attenuation values due to hardware and software differences. While improved artifact correction techniques can potentially mitigate such differences, the correction is often incomplete. Here, we present an alternate strategy where the PIBR itself is used to mitigate these differences. We define a new penalty for the previously introduced PIBR called Reconstruction of Difference (RoD). RoD differs from many other PIBRs in that it reconstructs only changes in the anatomy (vs. reconstructing the current anatomy). Direct regularization of the difference image in RoD provides an opportunity to selectively penalize spatial frequencies of the difference image (e.g. low frequency differences associated with attenuation offsets and shading artifacts) without interfering with the variations in the unchanged background image. We leverage this flexibility and introduce a novel regularization strategy using a generalized Fourier penalty within the RoD framework and develop the modified reconstruction algorithm. We evaluate the performance of the new approach in both simulation studies and in physical CBCT test-bench data. We find that the generalized Fourier penalty can be highly effective in reducing low-frequency x-ray artifacts through selective suppression of spatial frequencies in the reconstructed difference image.

  19. Statistics-based reconstruction method with high random-error tolerance for integral imaging.

    PubMed

    Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing

    2015-10-01

    A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental image array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impact of random errors, random offsets with different error levels were added to different numbers of elemental images in simulation and optical experiments. The results of the simulation and optical experiments showed that the proposed statistics-based reconstruction method has more stable and better reconstruction accuracy than the conventional reconstruction method. It is verified that the proposed method can effectively reduce the impact of random errors on the 3D reconstruction of integral imaging. The method is simple and very helpful to the development of integral imaging technology.
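
    The abstract does not state which statistic is applied to the triangulated points, so the sketch below uses a coordinate-wise median as a simple error-tolerant choice; the array shapes and the simulated gross errors are likewise illustrative.

```python
import numpy as np

def robust_point(triangulated_points):
    """Aggregate the 3D points triangulated from all corresponding image
    points of one object point into a single robust estimate.

    triangulated_points: (N, 3) array, one row per elemental-image pair.
    The coordinate-wise median is used here as a simple error-tolerant
    statistic (an assumption, not necessarily the paper's choice).
    """
    pts = np.asarray(triangulated_points, dtype=float)
    return np.median(pts, axis=0)

# Points corrupted by a few large random offsets still give a stable estimate.
pts = np.tile([1.0, 2.0, 10.0], (20, 1)) + 0.01 * np.random.randn(20, 3)
pts[:3] += 5.0 * np.random.randn(3, 3)   # simulate gross random errors
print(robust_point(pts))
```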

  20. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.

  1. LIRA: Low-Count Image Reconstruction and Analysis

    NASA Astrophysics Data System (ADS)

    Stein, Nathan; van Dyk, David; Connors, Alanna; Siemiginowska, Aneta; Kashyap, Vinay

    2009-09-01

    LIRA is a new software package for the R statistical computing language. The package is designed for multi-scale non-parametric image analysis for use in high-energy astrophysics. The code implements an MCMC sampler that simultaneously fits the image and the necessary tuning/smoothing parameters in the model (an advance from `EMC2' of Esch et al. 2004). The model-based approach allows for quantification of the standard error of the fitted image and can be used to assess the statistical significance of features in the image or to evaluate the goodness-of-fit of a proposed model. The method does not rely on Gaussian approximations, instead modeling image counts as Poisson data, making it suitable for images with extremely low counts. LIRA can include a null (or background) model and fit the departure between the observed data and the null model via a wavelet-like multi-scale component. The technique is therefore suited for problems in which some aspect of an observation is well understood (e.g., a point source), but questions remain about observed departures. To quantitatively test for the presence of diffuse structure unaccounted for by a point source null model, first, the observed image is fit with the null model. Second, multiple simulated images, generated as Poisson realizations of the point source model, are fit using the same null model. MCMC samples from the posterior distributions of the parameters of the fitted models can be compared and can be used to calibrate the misfit between the observed data and the null model. Additionally, output from LIRA includes the MCMC draws of the multi-scale component images, so that the departure of the (simulated or observed) data from the point source null model can be examined visually. To demonstrate LIRA, an example of reconstructing Chandra images of high redshift quasars with jets is presented.

  2. Interleaved Variable Density Sampling with a Constrained Parallel Imaging Reconstruction for Dynamic Contrast-Enhanced MR Angiography

    PubMed Central

    Wang, Kang; Busse, Reed F.; Holmes, James H.; Beatty, Philip J.; Brittain, Jean H.; Francois, Christopher J.; Reeder, Scott B.; Du, Jiang; Korosec, Frank R.

    2012-01-01

    For MR applications such as contrast-enhanced MR angiography (CE-MRA), it is desirable to achieve simultaneously high spatial and temporal resolution. The current clinical standard uses view sharing methods combined with parallel imaging; however, this approach still provides limited spatial and temporal resolution. To improve on the clinical standard, we present an Interleaved Variable Density (IVD) sampling method that pseudorandomly undersamples each individual frame of a 3D Cartesian ky-kz plane combined with parallel imaging acceleration. From this data set, time-resolved images are reconstructed with a method that combines parallel imaging with a multiplicative constraint. Total acceleration factors on the order of 20 are achieved for CE-MRA of the lower extremities, and improvements in temporal fidelity of the depiction of the contrast bolus passage are demonstrated relative to the clinical standard. PMID:21360740
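
    A rough sketch of generating a pseudorandom variable-density undersampling mask for one time frame of a ky-kz plane, of the kind such a scheme requires; the density profile, fully sampled centre size and target acceleration are illustrative assumptions rather than the authors' actual sampling design.

```python
import numpy as np

def ivd_mask(ny, nz, accel=20, center_radius=0.08, decay=3.0, seed=0):
    """Pseudorandom variable-density mask for one frame of a ky-kz plane.

    Sampling probability falls off with distance from the k-space centre and
    a small central region is kept fully sampled. Parameters are illustrative,
    not the paper's actual density profile.
    """
    rng = np.random.default_rng(seed)
    ky = np.linspace(-1, 1, ny)[:, None]
    kz = np.linspace(-1, 1, nz)[None, :]
    r = np.sqrt(ky**2 + kz**2)
    prob = (1 - np.minimum(r, 1)) ** decay        # variable density profile
    prob *= (ny * nz / accel) / prob.sum()        # aim at the target acceleration
    mask = rng.random((ny, nz)) < np.clip(prob, 0, 1)
    mask[r < center_radius] = True                # fully sampled centre
    return mask

mask = ivd_mask(192, 128)
print("effective acceleration:", mask.size / mask.sum())
```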

  3. Interleaved variable density sampling with a constrained parallel imaging reconstruction for dynamic contrast-enhanced MR angiography.

    PubMed

    Wang, Kang; Busse, Reed F; Holmes, James H; Beatty, Philip J; Brittain, Jean H; Francois, Christopher J; Reeder, Scott B; Du, Jiang; Korosec, Frank R

    2011-08-01

    For MR applications such as contrast-enhanced MR angiography, it is desirable to achieve simultaneously high spatial and temporal resolution. The current clinical standard uses view-sharing methods combined with parallel imaging; however, this approach still provides limited spatial and temporal resolution. To improve on the clinical standard, we present an interleaved variable density (IVD) sampling method that pseudorandomly undersamples each individual frame of a 3D Cartesian ky-kz plane combined with parallel imaging acceleration. From this dataset, time-resolved images are reconstructed with a method that combines parallel imaging with a multiplicative constraint. Total acceleration factors on the order of 20 are achieved for contrast-enhanced MR angiography of the lower extremities, and improvements in temporal fidelity of the depiction of the contrast bolus passage are demonstrated relative to the clinical standard.

  4. An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.

    PubMed

    Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan

    2015-11-01

    The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary regions (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with the state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed.
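
    The temporal prior can be illustrated by fitting a single-step piecewise-constant model to one voxel's attenuation-versus-time curve, mimicking the advancing fluid/air boundary; the stationary/dynamic segmentation and the iterative reconstruction itself are not reproduced here, and all numbers are illustrative.

```python
import numpy as np

def fit_step(curve):
    """Fit a piecewise-constant (single step) model to a voxel's attenuation
    curve over time: value a before the change point, value b after.
    Returns (change_index, a, b, residual)."""
    curve = np.asarray(curve, dtype=float)
    best = None
    for t in range(1, len(curve)):
        a, b = curve[:t].mean(), curve[t:].mean()
        res = np.sum((curve[:t] - a) ** 2) + np.sum((curve[t:] - b) ** 2)
        if best is None or res < best[3]:
            best = (t, a, b, res)
    return best

# Air (low attenuation) replaced by fluid (high attenuation) halfway through.
noisy = np.r_[np.zeros(10), np.ones(10)] + 0.05 * np.random.randn(20)
print(fit_step(noisy)[:3])
```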

  5. Adaptive reconstruction of millimeter-wave radiometric images.

    PubMed

    Sarkis, Michel

    2012-09-01

    We present a robust method to reconstruct a millimeter-wave image from a passive sensor. The method operates directly on the raw samples from the radiometer. It allocates for each pixel to be estimated a patch in the space formed by all the raw samples of the image. It then estimates the noise in the patch by measuring some distances that reflect how far the samples are from forming a piecewise smooth surface. It then allocates a weight for each sample that defines its contribution to the pixel reconstruction. This is done via a smoothing kernel that enforces the distances to have a piecewise smooth variation inside the patch. Results on real datasets show that our scheme yields more contrast, less noise, and better preservation of object shape in the reconstructed image compared with state-of-the-art schemes. The proposed scheme produces better results even with low integration times, i.e., 10% of the total integration time used in our experiments.

  6. Absolute phase image reconstruction: a stochastic nonlinear filtering approach.

    PubMed

    Leitão, J N; Figueiredo, M A

    1998-01-01

    This paper formulates and proposes solutions to the problem of estimating/reconstructing the absolute (not simply modulo-2pi) phase of a complex random field from noisy observations of its real and imaginary parts. This problem is representative of a class of important imaging techniques such as interferometric synthetic aperture radar, optical interferometry, magnetic resonance imaging, and diffraction tomography. We follow a Bayesian approach; then, not only a probabilistic model of the observation mechanism, but also prior knowledge concerning the (phase) image to be reconstructed, are needed. We take as prior a nonsymmetrical half plane autoregressive (NSHP AR) Gauss-Markov random field (GMRF). Based on a reduced order state-space formulation of the (linear) NSHP AR model and on the (nonlinear) observation mechanism, a recursive stochastic nonlinear filter is derived. The corresponding estimates are compared with those obtained by the extended Kalman-Bucy filter, a classical linearizing approach to the same problem. A set of examples illustrates the effectiveness of the proposed approach. PMID:18276299

  7. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

    The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) Mission and in the future the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation which is then followed by a tiepoint refinement, stereo-matching using region growing and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphic hardware independence, the stereo application has been implemented using JPL's JADIS graphic library which is written in JAVA and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase the code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often required to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, it is obvious that the other criterion to assess the quality of reconstruction is the density (or completeness) of reconstruction, which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL

  8. A new reconstruction strategy for image improvement in pinhole SPECT.

    PubMed

    Zeniya, Tsutomu; Watabe, Hiroshi; Aoi, Toshiyuki; Kim, Kyeong Min; Teramoto, Noboru; Hayashi, Takuya; Sohlberg, Antti; Kudo, Hiroyuki; Iida, Hidehiro

    2004-08-01

    Pinhole single-photon emission computed tomography (SPECT) is able to provide information on the biodistribution of several radioligands in small laboratory animals, but has limitations associated with non-uniform spatial resolution or axial blurring. We have hypothesised that this blurring is due to incompleteness of the projection data acquired by a single circular pinhole orbit, and have evaluated a new strategy for accurate image reconstruction with better spatial resolution uniformity. A pinhole SPECT system using two circular orbits and a dedicated three-dimensional ordered subsets expectation maximisation (3D-OSEM) reconstruction method were developed. In this system, not the camera but the object rotates, and the two orbits are at 90 degrees and 45 degrees relative to the object's axis. This system satisfies Tuy's condition, and is thus able to provide complete data for 3D pinhole SPECT reconstruction within the whole field of view (FOV). To evaluate this system, a series of experiments was carried out using a multiple-disk phantom filled with 99mTc solution. The feasibility of the proposed method for small animal imaging was tested with a mouse bone study using 99mTc-hydroxymethylene diphosphonate. Feldkamp's filtered back-projection (FBP) method and the 3D-OSEM method were applied to these data sets, and the visual and statistical properties were examined. Axial blurring, which was still visible at the edge of the FOV even after applying the conventional 3D-OSEM instead of FBP for single-orbit data, was not visible after application of 3D-OSEM using two-orbit data. 3D-OSEM using two-orbit data dramatically reduced the resolution non-uniformity and statistical noise, and also demonstrated considerably better image quality in the mouse scan. This system may be of use in quantitative assessment of bio-physiological functions in small animals.

  9. Cardiac-state-driven CT image reconstruction algorithm for cardiac imaging

    NASA Astrophysics Data System (ADS)

    Cesmeli, Erdogan; Edic, Peter M.; Iatrou, Maria; Hsieh, Jiang; Gupta, Rajiv; Pfoh, Armin H.

    2002-05-01

    Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval. The implicit assumption made is that the duration of events in an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of a percentage or time delay from the beginning or the end of RR intervals. The projection data were then tagged with the cardiac phase and processed using temporal weights that are functions of their cardiac phases. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and an 8-slice (GE LightSpeed Plus) CT scanner, with heart rates of 40 to 80 bpm. The quality of reconstruction is assessed by the visualization of the major arteries, e.g. RCA, LAD, LC, in the reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.

  10. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic 3D watertight surfaces of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for the generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-Pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on Samples per node (SN), with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the Ball-Pivoting algorithm is found to be highly dependent upon the Clustering radius and Angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of different control parameters on the reconstructed surface quality. PMID:27386376

  11. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic 3D watertight surfaces of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for the generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-Pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on Samples per node (SN), with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the Ball-Pivoting algorithm is found to be highly dependent upon the Clustering radius and Angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of different control parameters on the reconstructed surface quality.

  12. Algorithms for image reconstruction from projections in optical tomography

    NASA Astrophysics Data System (ADS)

    Zhu, Lin-Sheng; Huang, Su-Yi

    1993-09-01

    It is well known that the determination of the temperature field by holographic interferometry is a successful method in thermophysics measurement. In this paper some practical algorithms for image reconstruction from projections are presented to produce the temperature field. The algorithms developed solve the Radon transform integral equation directly by a grid method and evaluate the Radon inversion formula numerically by a two-dimensional Fourier transform technique. Some examples are given to verify the validity of these methods in practice.
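
    For comparison with such direct Radon-inversion schemes, a compact filtered back-projection baseline can be set up with scikit-image; this is a generic reference implementation (an assumption of this note), not the grid or two-dimensional Fourier transform algorithms of the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Forward projection (Radon transform) of a test field, then inversion.
image = rescale(shepp_logan_phantom(), 0.25)          # small test "field"
theta = np.linspace(0.0, 180.0, 90, endpoint=False)   # projection angles
sinogram = radon(image, theta=theta)                  # projections
reco = iradon(sinogram, theta=theta)                  # filtered back-projection

err = np.sqrt(np.mean((reco - image) ** 2))
print("RMS reconstruction error:", err)
```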

  13. Image reconstruction by the speckle-masking method.

    PubMed

    Weigelt, G; Wirnitzer, B

    1983-07-01

    Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography.
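
    A one-dimensional toy version of the triple-correlation (bispectrum) idea: because the triple product is shift-invariant, the object's Fourier phases can be recovered recursively from the averaged bispectrum without a reference star. Real speckle masking works on 2D interferograms and must compensate photon-noise bias, which this sketch omits; the object, frame count and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
obj = np.zeros(n)
obj[20], obj[27] = 1.0, 0.6                      # a "close double star"

# Simulated frames: randomly shifted, slightly noisy copies of the object.
# The triple product F(u)F(v)F*(u+v) is invariant to these shifts.
frames = [np.fft.fft(np.roll(obj, rng.integers(0, n))
                     + 1e-3 * rng.standard_normal(n)) for _ in range(200)]

u = np.arange(n)
idx = (u[:, None] + u[None, :]) % n
P = np.zeros(n)
B = np.zeros((n, n), dtype=complex)
for F in frames:                                  # average power spectrum / bispectrum
    P += np.abs(F) ** 2
    B += F[:, None] * F[None, :] * np.conj(F[idx])
P /= len(frames)
B /= len(frames)

# Recursive phase reconstruction: psi(u+v) = psi(u) + psi(v) - arg B(u, v),
# along the simplest path (u = 1); psi(0) = psi(1) = 0 fixes the image position.
psi = np.zeros(n)
for w in range(2, n // 2 + 1):
    psi[w] = psi[1] + psi[w - 1] - np.angle(B[1, w - 1])
psi[n // 2 + 1:] = -psi[1:n // 2][::-1]           # Hermitian symmetry

reco = np.fft.ifft(np.sqrt(P) * np.exp(1j * psi)).real   # object up to a shift

# Check: recovered phases match the true phases up to a linear ramp
# (an overall image shift, which the bispectrum cannot determine).
true_phase = np.angle(np.fft.fft(obj))
u_signed = np.where(u <= n // 2, u, u - n)
resid = np.angle(np.exp(1j * (psi + u_signed * true_phase[1] - true_phase)))
print("max phase error (rad):", np.abs(resid).max())
```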

  14. Image reconstruction by the speckle-masking method.

    PubMed

    Weigelt, G; Wirnitzer, B

    1983-07-01

    Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography. PMID:19718124

  15. Fast Multigrid Techniques in Total Variation-Based Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Oman, Mary Ellen

    1996-01-01

    Existing multigrid techniques are used to effect an efficient method for reconstructing an image from noisy, blurred data. Total Variation minimization yields a nonlinear integro-differential equation which, when discretized using cell-centered finite differences, yields a full matrix equation. A fixed point iteration is applied with the intermediate matrix equations solved via a preconditioned conjugate gradient method which utilizes multi-level quadrature (due to Brandt and Lubrecht) to apply the integral operator and a multigrid scheme (due to Ewing and Shen) to invert the differential operator. With effective preconditioning, the method presented seems to require O(n) operations. Numerical results are given for a two-dimensional example.
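
    The Total Variation objective underlying the method can be illustrated with a plain (and deliberately naive) gradient-descent denoiser using the usual smoothing parameter beta; the multigrid and multi-level quadrature machinery that makes the actual method efficient is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def tv_denoise(f, lam=1.0, beta=1e-3, step=5e-3, iters=300):
    """Gradient descent on  J(u) = sum sqrt(|grad u|^2 + beta) + lam/2 ||u - f||^2.
    A slow illustration of the TV functional only; the paper's contribution is
    the fast multigrid/fixed-point solver, not this naive iteration."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + beta)
        px, py = ux / mag, uy / mag
        # Discrete divergence of the normalized gradient field.
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        u -= step * (-div + lam * (u - f))             # gradient step
    return u

noisy = np.zeros((64, 64))
noisy[16:48, 16:48] = 1.0
noisy += 0.3 * np.random.randn(64, 64)
clean = tv_denoise(noisy)
```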

  16. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I

    2015-01-01

    This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of the current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce 970 GTX device. PMID:26737231

  17. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I

    2015-01-01

    This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of the current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce 970 GTX device.

  18. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439

  19. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  20. Accelerated nanoscale magnetic resonance imaging through phase multiplexing

    SciTech Connect

    Moores, B. A.; Eichler, A.; Takahashi, H.; Navaretti, P.; Degen, C. L.; Tao, Y.

    2015-05-25

    We report a method for accelerated nanoscale nuclear magnetic resonance imaging by detecting several signals in parallel. Our technique relies on phase multiplexing, where the signals from different nuclear spin ensembles are encoded in the phase of an ultrasensitive magnetic detector. We demonstrate this technique by simultaneously acquiring statistically polarized spin signals from two different nuclear species (¹H, ¹⁹F) and from up to six spatial locations in a nanowire test sample using a magnetic resonance force microscope. We obtain one-dimensional imaging resolution better than 5 nm, and subnanometer positional accuracy.

  1. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging

  2. Image reconstruction from limited angle projections collected by multisource interior x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Liu, Baodong; Wang, Ge; Ritman, Erik L.; Cao, Guohua; Lu, Jianping; Zhou, Otto; Zeng, Li; Yu, Hengyong

    2011-10-01

    A multisource x-ray interior imaging system with limited-angle scanning is investigated to study the possibility of building an ultrafast micro-CT for dynamic small animal imaging, and two methods are employed to perform interior reconstruction from a limited number of projections collected by the multisource interior x-ray system. The first is total variation minimization with a steepest descent search (TVM-SD) and the second is total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and demonstrate the feasibility and application of the proposed system configuration. The image reconstruction results show that both reconstruction methods can significantly improve the image quality and that the TDM-STF is slightly superior to the TVM-SD. Finally, quantitative image analysis shows that it is possible to make an ultrafast micro-CT using a multisource interior x-ray system scheme combined with state-of-the-art interior tomography.

  3. Image reconstruction from limited angle projections collected by multisource interior x-ray imaging systems.

    PubMed

    Liu, Baodong; Wang, Ge; Ritman, Erik L; Cao, Guohua; Lu, Jianping; Zhou, Otto; Zeng, Li; Yu, Hengyong

    2011-10-01

    A multisource x-ray interior imaging system with limited-angle scanning is investigated to study the possibility of building an ultrafast micro-CT for dynamic small animal imaging, and two methods are employed to perform interior reconstruction from a limited number of projections collected by the multisource interior x-ray system. The first is total variation minimization with a steepest descent search (TVM-SD) and the second is total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and demonstrate the feasibility and application of the proposed system configuration. The image reconstruction results show that both reconstruction methods can significantly improve the image quality and that the TDM-STF is slightly superior to the TVM-SD. Finally, quantitative image analysis shows that it is possible to make an ultrafast micro-CT using a multisource interior x-ray system scheme combined with state-of-the-art interior tomography.

  4. Bayesian Super-Resolved Surface Reconstruction From Multiple Images

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Cheesman, P.; Maluf, D. A.; Morris, R. D.; Swanson, Keith (Technical Monitor)

    1999-01-01

    Bayesian inference has been used successfully for many problems where the aim is to infer the parameters of a model of interest. In this paper we formulate the three-dimensional reconstruction problem as the problem of inferring the parameters of a surface model from image data, and show how Bayesian methods can be used to estimate the parameters of this model given the image data. Thus we recover the three-dimensional description of the scene. This approach also gives great flexibility. We can specify the geometrical properties of the model to suit our purpose, and can also use different models for how the surface reflects the light incident upon it. In common with other Bayesian inference problems, the estimation methodology requires that we can simulate the data that would have been recorded for any values of the model parameters. In this application this means that if we have image data we must be able to render the surface model. However it also means that we can infer the parameters of a model whose resolution can be chosen irrespective of the resolution of the images, and may be super-resolved. We present results of the inference of surface models from simulated aerial photographs for the case of super-resolution, where many surface elements project into a single pixel in the low-resolution images.

  5. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  6. A High Precision Terahertz Wave Image Reconstruction Algorithm.

    PubMed

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performances of PMA are studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  7. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
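
    The link mentioned above between the proximal map of an lp norm and that of a Schatten norm can be illustrated for p = 1, where the lp prox is elementwise soft-thresholding and the Schatten prox becomes singular-value soft-thresholding; this is a generic sketch, not the authors' ADMM solver.

```python
import numpy as np

def prox_l1(x, tau):
    """Proximal map of tau * ||x||_1 : elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_schatten1(M, tau):
    """Proximal map of tau * (Schatten 1-norm): apply the l1 prox to the
    singular values of M (singular-value soft-thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * prox_l1(s, tau)) @ Vt

# Example: shrinking a noisy 2x2 "Hessian" block.
H = np.array([[2.0, 0.3], [0.3, -1.5]])
print(prox_schatten1(H, tau=0.5))
```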

  8. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  9. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework. PMID:23846472

  10. 3D reconstruction of concave surfaces using polarisation imaging

    NASA Astrophysics Data System (ADS)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
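
    The classical Lambertian photometric-stereo step that such a pipeline builds on, recovering per-pixel normals by least squares from images under known light directions, can be sketched as follows; the polarisation-based separation of the specular component, which is the paper's contribution, is not reproduced, and the synthetic check is illustrative.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Least-squares surface normals from K images under K known lights.

    images: (K, H, W) intensities, lights: (K, 3) unit light directions.
    Lambertian model: I = rho * (N . L); solve for b = rho * N per pixel.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                       # (K, H*W)
    b = np.linalg.lstsq(lights, I, rcond=None)[0]   # (3, H*W)
    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.maximum(albedo, 1e-12)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Synthetic check: a single known normal lit from three directions.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.0, 0.0, 1.0])
imgs = (L @ n_true).reshape(3, 1, 1)
N, rho = photometric_stereo(imgs, L)
print(N[:, 0, 0], rho[0, 0])
```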

  11. Reconstruction of mechanically recorded sound by image processing

    SciTech Connect

    Fadeyev, Vitaliy; Haber, Carl

    2003-03-26

    Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with no or minimal contact, by measuring the groove shape using precision metrology methods and digital image processing. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Various aspects of this approach are discussed. A feasibility test is reported which used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record. Comparisons are presented with stylus playback of the record and with a digitally re-mastered version of the original magnetic recording. A more extensive implementation of this approach, with dedicated hardware and software, is considered.

  12. ISIS: Image reconstruction experiments and comparison of various array configurations

    NASA Astrophysics Data System (ADS)

    Reinheimer, T.; Hofmann, K.-H.; Weigelt, G.

    1987-08-01

    The application of speckle masking (triple correlation processing) to coherent telescope arrays in space is introduced. True diffraction-limited images are obtained since speckle masking is the solution of the phase problem in speckle interferometry. For example, a 14 m array can yield a resolution of 0.004 arcsec at 200 nm wavelength. Resolution of 0.000001 arcsec can be obtained with a 40 km array at 200 nm. Computer simulations of optical aperture synthesis by speckle masking are shown. Simulations of a two-dimensional ring-shaped array and of a linear one-dimensional array are described. The dependence of the signal-to-noise ratio in the reconstructed image on photon noise is discussed.

  13. High-quality image reconstruction method for ptychography with partially coherent illumination

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wang, Shouyu; Veetil, Suhas; Gao, Shumei; Liu, Cheng; Zhu, Jianqiang

    2016-06-01

    The influence of partial coherence on image reconstruction in ptychography is analyzed, and a simple method is proposed to reconstruct a clear image of a weakly scattering object under partially coherent illumination. It is demonstrated numerically and experimentally that, by illuminating a weakly scattering object with a divergent radiation beam and performing the reconstruction only from the bright-field diffraction data, the mathematical ambiguity and the corresponding reconstruction errors related to partial coherence can be remarkably suppressed; thus clear reconstructed images can be generated even under severely incoherent illumination.

  14. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
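
    The spectrum-estimation step can be sketched as a regularized non-negative least-squares problem, since the measured transmission of a wedge of thickness t is linear in the spectrum weights through exp(-mu(E) t); the attenuation values, bin counts and Tikhonov weight below are made up for illustration, and a simple regularizer stands in for the paper's dimension-reduction and cross-validation machinery.

```python
import numpy as np
from scipy.optimize import nnls

# Energy bins (keV) and made-up attenuation coefficients mu(E) of the wedge
# material (1/cm); thicknesses t (cm) of the step-wedge phantom.
E = np.linspace(20, 120, 26)
mu = 5.0 * (30.0 / E) ** 2.5
t = np.linspace(0.0, 10.0, 40)

# Linear model: measured transmission  T(t) = sum_j S_j * exp(-mu_j * t).
A = np.exp(-np.outer(t, mu))                  # (n_thickness, n_energy)
S_true = np.exp(-0.5 * ((E - 70) / 20) ** 2)  # "true" spectrum for the demo
S_true /= S_true.sum()
T = A @ S_true + 1e-4 * np.random.randn(len(t))

# Regularized non-negative least squares (a Tikhonov term stands in for the
# ill-conditioning handling described in the abstract).
lam = 1e-2
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(E))])
b_aug = np.concatenate([T, np.zeros(len(E))])
S_est, _ = nnls(A_aug, b_aug)
print("spectrum estimate sums to", S_est.sum())
```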

  15. Iterative PET Image Reconstruction Using Translation Invariant Wavelet Transform

    PubMed Central

    Zhou, Jian; Senhadji, Lotfi; Coatrieux, Jean-Louis; Luo, Limin

    2009-01-01

    The present work describes a Bayesian maximum a posteriori (MAP) method using a statistical multiscale wavelet prior model. Rather than using the orthogonal discrete wavelet transform (DWT), this prior is built on the translation invariant wavelet transform (TIWT). The statistical modeling of wavelet coefficients relies on the generalized Gaussian distribution. Image reconstruction is performed in spatial domain with a fast block sequential iteration algorithm. We study theoretically the TIWT MAP method by analyzing the Hessian of the prior function to provide some insights on noise and resolution properties of image reconstruction. We adapt the key concept of local shift invariance and explore how the TIWT MAP algorithm behaves with different scales. It is also shown that larger support wavelet filters do not offer better performance in contrast recovery studies. These theoretical developments are confirmed through simulation studies. The results show that the proposed method is more attractive than other MAP methods using either the conventional Gibbs prior or the DWT-based wavelet prior. PMID:21869846

  16. High-efficiency imaging through scattering media in noisy environments via sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Tengfei; Shao, Xiaopeng; Gong, Changmei; Li, Huijuan; Liu, Jietao

    2015-11-01

    High-efficiency imaging through highly scattering media is urgently desired for various applications. Imaging speed and imaging quality, which determine the imaging efficiency, are two essential indices for any optical imaging area. Based on random-walk analysis in statistical optics, the elements of a transmission matrix (TM) obey a Gaussian distribution. Instead of dealing with the large amounts of data contained in the TM and the speckle pattern, imaging can be achieved with only a small fraction of the data via sparse representation. We make a detailed mathematical analysis of the distribution of the elements of the TM of a scattering imaging system and study the imaging method of sparse image reconstruction (SIR). More specifically, we focus on analyzing the optimum sampling rates for imaging targets of different structures, which significantly influence both imaging speed and imaging quality. Results show that an optimum sampling rate exists in any noise-level environment if a target can be sparsely represented, and that searching for this optimum sampling rate effectively balances imaging quality and imaging speed, maximizing the imaging efficiency. This work is helpful for practical applications of imaging through highly scattering media with the SIR method.
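
    A toy sketch of sparse recovery through a Gaussian transmission matrix, with the sampling rate swept to expose the point of diminishing returns discussed above; ISTA is used here as a generic sparse solver (an assumption, not necessarily the SIR solver of the paper), and all sizes are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=300):
    """Iterative shrinkage-thresholding for  min ||Ax - y||^2/2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, k = 256, 8                               # signal length, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0

for rate in (0.1, 0.25, 0.5):               # sampling rates to compare
    m = int(rate * n)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian "TM" rows
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    err = np.linalg.norm(ista(A, y) - x_true) / np.linalg.norm(x_true)
    print(f"sampling rate {rate:.2f}: relative error {err:.3f}")
```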

  17. WE-G-18C-08: Real Time Tumor Imaging Using a Novel Dynamic Keyhole MRI Reconstruction Technique

    SciTech Connect

    Lee, D; Pollock, S; Whelan, B; Keall, P; Greer, P; Kim, T

    2014-06-15

    Purpose: To test the hypothesis that the novel dynamic keyhole MRI reconstruction technique can accelerate image acquisition whilst maintaining high image quality for lung cancer patients. Methods: 18 MRI datasets from 5 lung cancer patients were acquired using a 3T MRI scanner. These datasets were retrospectively reconstructed using (A) the novel dynamic keyhole technique, (B) the conventional keyhole technique and (C) the conventional zero-filling technique. The dynamic keyhole technique in MRI refers to techniques in which previously acquired k-space data are used to supplement under-sampled data obtained in real time. The novel dynamic keyhole technique utilizes a previously acquired library of k-space datasets in conjunction with central k-space datasets acquired in real time. A simultaneously acquired respiratory signal is utilized to sort, match and combine the two k-space streams with respect to respiratory displacement. Reconstruction performance was quantified by (1) comparing the keyhole size (which corresponds to imaging speed) required to achieve the same image quality, and (2) maintaining a constant keyhole size across the three reconstruction methods to compare the resulting image quality to the ground truth image. Results: (1) The dynamic keyhole method required a mean keyhole size which was 48% smaller than the conventional keyhole technique and 60% smaller than the zero-filling technique to achieve the same image quality. This directly corresponds to faster imaging. (2) When a constant keyhole size was utilized, the dynamic keyhole technique resulted in the smallest difference in the tumor region compared to the ground truth. Conclusion: The dynamic keyhole is a simple and adaptable technique for clinical applications requiring real-time imaging and tumor monitoring such as MRI-guided radiotherapy. Based on the results from this study, the dynamic keyhole method could increase the imaging frequency by a factor of five compared with full k
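
    The core of the dynamic keyhole idea, filling the centre of k-space from the real-time acquisition and the periphery from the library frame with the closest respiratory displacement, can be sketched directly; the array layout, matching rule and keyhole size below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dynamic_keyhole(kspace_center, library_kspace, library_resp, resp_now,
                    keyhole_frac=0.2):
    """Merge real-time central k-space lines with peripheral k-space taken
    from the library frame acquired at the closest respiratory displacement.

    kspace_center:  (ny, nx) real-time k-space; only the central
                    ny*keyhole_frac phase-encode lines are assumed valid.
    library_kspace: (n_frames, ny, nx) fully sampled prior frames.
    library_resp:   (n_frames,) respiratory displacement of each library frame.
    Assumes k-space arrays are stored with DC at the centre.
    """
    ny = kspace_center.shape[0]
    nk = int(round(keyhole_frac * ny))             # number of central lines
    lo, hi = ny // 2 - nk // 2, ny // 2 + (nk - nk // 2)
    best = int(np.argmin(np.abs(library_resp - resp_now)))   # respiratory match
    merged = library_kspace[best].copy()           # periphery from the library
    merged[lo:hi, :] = kspace_center[lo:hi, :]     # centre from real time
    return np.fft.ifft2(np.fft.ifftshift(merged))  # reconstructed image

# Usage: image_now = dynamic_keyhole(k_now, k_lib, resp_lib, resp_now)
```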

  18. A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction

    PubMed Central

    Konkle, Justin J.; Goodwill, Patrick W.; Hensley, Daniel W.; Orendorff, Ryan D.; Lustig, Michael; Conolly, Steven M.

    2015-01-01

    Magnetic Particle Imaging (MPI) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative MPI across rat-sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the MPI signal is lost during direct-feedthrough signal filtering, MPI reconstruction algorithms must recover this zero-frequency value. Prior x-space MPI recovery techniques were limited to 1D approaches which could introduce artifacts when reconstructing a 3D image. In this paper, we formulate x-space reconstruction as a 3D convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications. PMID:26495839

  19. A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction.

    PubMed

    Konkle, Justin J; Goodwill, Patrick W; Hensley, Daniel W; Orendorff, Ryan D; Lustig, Michael; Conolly, Steven M

    2015-01-01

    Magnetic Particle Imaging (MPI) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative MPI across rat-sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the MPI signal is lost during direct-feedthrough signal filtering, MPI reconstruction algorithms must recover this zero-frequency value. Prior x-space MPI recovery techniques were limited to 1D approaches which could introduce artifacts when reconstructing a 3D image. In this paper, we formulate x-space reconstruction as a 3D convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications.

  20. 3D Dose reconstruction: Banding artefacts in cine mode EPID images during VMAT delivery

    NASA Astrophysics Data System (ADS)

    Woodruff, H. C.; Greer, P. B.

    2013-06-01

    Cine (continuous) mode images obtained during VMAT delivery are heavily degraded by banding artefacts. We have developed a method to reconstruct the pulse sequence (and hence dose deposited) from open field images. For clinical VMAT fields we have devised a frame averaging strategy that greatly improves image quality and dosimetric information for three-dimensional dose reconstruction.

  1. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  2. Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images

    NASA Astrophysics Data System (ADS)

    Deng, Yuanbo; Chu, Daping

    2016-03-01

    The present study evaluates the effect of masking a phase-only hologram with apertures of different filling factors on the corresponding reconstructed image. A square aperture with a given filling factor is applied to the phase-only hologram of the target image, and the average cross-section intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the image. The Lena image is used as the target image, and the RMSE and SSIM metrics are used to assess the quality of the reconstructed image. The results show that the PSF of the image agrees with the PSF of the Fourier transform of the mask, and that as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be useful in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.
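    The masking experiment lends itself to a quick numerical check: the sketch below forms a phase-only hologram of an arbitrary random target, applies a centred square aperture of a given filling factor, and reconstructs by inverse FFT. It is only a schematic of the setup; the target, the aperture placement and the error metric are assumptions rather than the authors' exact pipeline.

      import numpy as np

      def reconstruct_with_mask(target, filling_factor):
          """Phase-only hologram of `target`, masked by a centred square aperture."""
          n = target.shape[0]
          spectrum = np.fft.fftshift(np.fft.fft2(target))
          hologram = np.exp(1j * np.angle(spectrum))       # keep the phase only
          side = max(1, int(round(n * np.sqrt(filling_factor))))
          mask = np.zeros((n, n))
          lo = (n - side) // 2
          mask[lo:lo + side, lo:lo + side] = 1.0            # square aperture
          recon = np.fft.ifft2(np.fft.ifftshift(hologram * mask))
          return np.abs(recon)

      rng = np.random.default_rng(0)
      target = rng.random((256, 256))
      for ff in (1.0, 0.5, 0.25, 0.1):
          recon = reconstruct_with_mask(target, ff)
          err = np.sqrt(np.mean((recon / recon.max() - target / target.max()) ** 2))
          print(f"filling factor {ff:4.2f}  ->  RMSE {err:.3f}")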

  3. Characterization and Reconstruction of Nanolipoprotein Particles (Nlps) by Cryo-EM and Image Reconstruction

    SciTech Connect

    Pesavento, J B; Morgan, D; Bermingham, R; Zamora, D; Chromy, B; Segelke, B; Coleman, M; Xing, L; Cheng, H; Bench, G; Hoeprich, P

    2007-06-07

    Nanolipoprotein particles (NLPs) are small, 10-20 nm diameter assemblies of apolipoproteins and lipids. At Lawrence Livermore National Laboratory (LLNL), they have constructed multiple variants of these assemblies. NLPs have been generated from a variety of lipoproteins, including apolipoprotein A-I, apolipophorin III, apolipoprotein E4 22K, and MSP1T2 (Nanodisc, Inc.). Lipids used included DMPC (the bulk of the bilayer material), DMPE (in various amounts), and DPPC. NLPs were made in either the absence or presence of the detergent cholate. They have collected electron microscopy data as part of the characterization component of this research. Although purified by size exclusion chromatography (SEC), the samples are somewhat heterogeneous when analyzed at the nanoscale by negative-stain cryo-EM. Images reveal a broad range of shape heterogeneity, suggesting variability in conformational flexibility; in fact, modeling studies point to dynamics of inter-helical loop regions within apolipoproteins as a possible source of the observed variation in NLP size. Initial attempts at three-dimensional reconstructions have proven challenging due to this size and shape disparity. They are pursuing a strategy of computational size exclusion to group particles into subpopulations based on average particle diameter. They show here results from their ongoing efforts to statistically and computationally subdivide NLP populations in order to realize greater homogeneity and then generate 3D reconstructions.

  4. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.

  5. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up the image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that the reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as to whether it is a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts that follow the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
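    A minimal 2D toy of the origin-ensemble idea is sketched below: each event keeps a current origin along its LOR, and random shifts along the LOR are accepted stochastically based on how many origins already occupy the destination voxel (uniform sensitivity assumed, with the acceptance ratio written from memory and therefore to be treated as an assumption). It illustrates the point-cloud mechanism only, not the event-by-event variant described above.

      import numpy as np

      rng = np.random.default_rng(0)
      grid, n_events, n_sweeps = 64, 5000, 20

      # Hypothetical list-mode data: emission points drawn from a disc phantom,
      # each detected along a random line of response (LOR) through that point.
      centre, radius = np.array([40.0, 28.0]), 10.0
      phi = rng.uniform(0, 2 * np.pi, n_events)
      r = radius * np.sqrt(rng.uniform(0, 1, n_events))
      emit = centre + np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
      theta = rng.uniform(0, np.pi, n_events)
      direction = np.stack([np.cos(theta), np.sin(theta)], axis=1)

      def voxel(point):
          ij = np.clip(np.floor(point).astype(int), 0, grid - 1)
          return tuple(ij)

      # Initialise each event's origin at a random position along its LOR.
      t = rng.uniform(-15.0, 15.0, n_events)
      counts = np.zeros((grid, grid), dtype=int)
      for i in range(n_events):
          counts[voxel(emit[i] + t[i] * direction[i])] += 1

      for _ in range(n_sweeps):
          for i in range(n_events):
              t_new = t[i] + rng.normal(0.0, 3.0)          # random shift along the LOR
              p_old = emit[i] + t[i] * direction[i]
              p_new = emit[i] + t_new * direction[i]
              if not (0 <= p_new[0] < grid and 0 <= p_new[1] < grid):
                  continue                                  # stay inside the image
              v_old, v_new = voxel(p_old), voxel(p_new)
              if v_old == v_new:
                  t[i] = t_new
                  continue
              # Assumed uniform-sensitivity acceptance ratio: (n_dest + 1) / n_src.
              if rng.uniform() < min(1.0, (counts[v_new] + 1) / counts[v_old]):
                  counts[v_old] -= 1
                  counts[v_new] += 1
                  t[i] = t_new

      box = counts[30:50, 18:38].sum()
      print("events in a 20x20 box around the phantom:", int(box), "of", n_events)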

  6. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    PubMed

    Bindu, G; Semenov, S

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown implying its robustness.

  7. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831

  8. Transaxial system models for jPET-D4 image reconstruction.

    PubMed

    Yamaya, Taiga; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki; Kitamura, Keishi; Hasegawa, Tomoyuki; Haneishi, Hideaki; Yoshida, Eiji; Inadama, Naoko; Murayama, Hideo

    2005-11-21

    A high-performance brain PET scanner, jPET-D4, which provides four-layer depth-of-interaction (DOI) information, is being developed to achieve not only high spatial resolution but also high scanner sensitivity. One technical issue to be dealt with is the data dimensions, which increase in proportion to the square of the number of DOI layers. It is, therefore, difficult to apply algebraic or statistical image reconstruction methods directly to DOI-PET, though they improve image quality through accurate system modelling. The process that requires the most computational time and storage space is the calculation of the huge number of system matrix elements. The DOI compression (DOIC) method, which we have previously proposed, reduces the data dimensions to one-fifth. In this paper, we propose a transaxial imaging system model optimized for jPET-D4 with the DOIC method. The proposed model assumes that detector response functions (DRFs) are uniform along lines of response (LORs). Each element of the system matrix is then calculated as the summed intersection lengths between a pixel and sub-LORs, weighted by a value from the DRF look-up table. 2D numerical simulation results showed that the proposed model cut the calculation time by a factor of several hundred while maintaining image quality, compared with the accurate system model. A 3D image reconstruction with on-the-fly calculation of the system matrix becomes practically feasible by incorporating the proposed model and the DOIC method with one-pass accelerated iterative methods.
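    The system-matrix element described here, i.e. summed intersection lengths of sub-LORs with a pixel weighted by a detector-response look-up value, can be sketched directly; the pixel geometry, the sub-LOR positions and the DRF weights below are invented placeholders rather than jPET-D4 values.

      import numpy as np

      def length_in_pixel(p0, p1, pixel_min, pixel_max):
          """Length of the segment p0->p1 that lies inside an axis-aligned pixel."""
          d = p1 - p0
          t0, t1 = 0.0, 1.0
          for axis in range(2):                      # clip against the two pixel slabs
              if abs(d[axis]) < 1e-12:
                  if not (pixel_min[axis] <= p0[axis] <= pixel_max[axis]):
                      return 0.0
              else:
                  ta = (pixel_min[axis] - p0[axis]) / d[axis]
                  tb = (pixel_max[axis] - p0[axis]) / d[axis]
                  lo, hi = min(ta, tb), max(ta, tb)
                  t0, t1 = max(t0, lo), min(t1, hi)
                  if t0 >= t1:
                      return 0.0
          return (t1 - t0) * np.linalg.norm(d)

      def system_matrix_element(sub_lors, drf_weights, pixel_min, pixel_max):
          """Sum of sub-LOR intersection lengths with one pixel, weighted by the DRF."""
          return sum(w * length_in_pixel(p0, p1, pixel_min, pixel_max)
                     for (p0, p1), w in zip(sub_lors, drf_weights))

      # One pixel and three parallel sub-LORs with made-up detector-response weights.
      pixel_min, pixel_max = np.array([10.0, 10.0]), np.array([11.0, 11.0])
      sub_lors = [(np.array([0.0, 10.2 + k * 0.3]), np.array([20.0, 10.2 + k * 0.3]))
                  for k in range(3)]
      drf_weights = [0.2, 0.6, 0.2]                  # placeholder look-up-table values
      print(system_matrix_element(sub_lors, drf_weights, pixel_min, pixel_max))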

  9. Transaxial system models for jPET-D4 image reconstruction

    NASA Astrophysics Data System (ADS)

    Yamaya, Taiga; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki; Kitamura, Keishi; Hasegawa, Tomoyuki; Haneishi, Hideaki; Yoshida, Eiji; Inadama, Naoko; Murayama, Hideo

    2005-11-01

    A high-performance brain PET scanner, jPET-D4, which provides four-layer depth-of-interaction (DOI) information, is being developed to achieve not only high spatial resolution but also high scanner sensitivity. One technical issue to be dealt with is the data dimensions, which increase in proportion to the square of the number of DOI layers. It is, therefore, difficult to apply algebraic or statistical image reconstruction methods directly to DOI-PET, though they improve image quality through accurate system modelling. The process that requires the most computational time and storage space is the calculation of the huge number of system matrix elements. The DOI compression (DOIC) method, which we have previously proposed, reduces the data dimensions to one-fifth. In this paper, we propose a transaxial imaging system model optimized for jPET-D4 with the DOIC method. The proposed model assumes that detector response functions (DRFs) are uniform along lines of response (LORs). Each element of the system matrix is then calculated as the summed intersection lengths between a pixel and sub-LORs, weighted by a value from the DRF look-up table. 2D numerical simulation results showed that the proposed model cut the calculation time by a factor of several hundred while maintaining image quality, compared with the accurate system model. A 3D image reconstruction with on-the-fly calculation of the system matrix becomes practically feasible by incorporating the proposed model and the DOIC method with one-pass accelerated iterative methods.

  10. A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction

    PubMed Central

    Peng, Xi; Liu, Jianbo; Yang, Dingcheng

    2014-01-01

    Nonconvex optimization has been shown to need substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model while enforcing fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) solves the model of pursuing the approximated lp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values. PMID:25431583
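    The reweighting idea at the core of the method can be shown in isolation: the sketch below approximates an lp-norm penalty on a small underdetermined system by iteratively reweighted l2 (FOCUSS-style) minimization. It deliberately leaves out the dictionary-learning and Bregman layers of WTBMDU, so it is a generic illustration rather than the proposed algorithm.

      import numpy as np

      def irls_lp(A, b, p=0.7, n_iter=50, eps=1e-8):
          """Approximate  min ||x||_p  s.t.  A x = b  by iteratively reweighted l2:
          each step solves a weighted minimum-norm problem with weights from x."""
          x = A.T @ np.linalg.solve(A @ A.T, b)      # start from the l2 solution
          for _ in range(n_iter):
              w = np.abs(x) ** (2.0 - p) + eps       # reweighting from the current x
              AW = A * w                              # A @ diag(w)
              x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
          return x

      rng = np.random.default_rng(3)
      n, m, k = 100, 40, 5
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      b = A @ x_true

      x_hat = irls_lp(A, b, p=0.7)
      print("max reconstruction error:", float(np.max(np.abs(x_hat - x_true))))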

  11. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    SciTech Connect

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater, for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications.

  12. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    SciTech Connect

    Virador, Patrick R.G.

    2000-04-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully 3D tomographic reconstruction using a septa-less, stationary (i.e., no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier-method-based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points instead of rebinning the events into predefined crystal-face LORs, which is the only other method proposed thus far to handle DOI information. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width, evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable zero-efficiency bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins and (b) employ a semi-iterative procedure to estimate the unsampled data.

  13. Bayesian 3D X-ray computed tomography image reconstruction with a scaled Gaussian mixture prior model

    SciTech Connect

    Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali

    2015-01-13

    In order to improve the quality of 3D X-ray tomography reconstruction for Non Destructive Testing (NDT), we investigate in this paper hierarchical Bayesian methods. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only the volume is estimated thanks to the prior model of the volume, but also the hyperparameters of this prior. This additional complexity of the reconstruction methods, when applied to large volumes (from 512^3 to 8192^3 voxels), results in an increasing computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper lead to an algorithmic acceleration by Variational Bayesian Approximation (VBA) [1] and hardware acceleration thanks to projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image implemented in a hierarchical way [3, 4, 1]. The operators H (forward or projection) and H^t (adjoint or back-projection) implemented on multi-GPU [2] have been used in this study. Different methods are evaluated on the synthetic 'Shepp and Logan' volume in terms of quality and time of reconstruction. We used several simple regularizations of order 1 and order 2. Other prior models also exist [5]. Sometimes, for a discrete image, segmentation and reconstruction can be done at the same time, in which case the reconstruction can be done with fewer projections.

  14. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction

    SciTech Connect

    Eck, Brendan L.; Fahmi, Rachid; Miao, Jun; Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun; Wilson, David L.

    2015-10-15

    Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, PC. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit and model complexity.

  15. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction

    PubMed Central

    Eck, Brendan L.; Fahmi, Rachid; Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun; Miao, Jun; Wilson, David L.

    2015-01-01

    Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, PC. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit and model complexity.

  16. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years to be of potential utility for CT imaging applications. In this work, we investigate tailoring optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was also explored with respect to different data conditions and activity uptakes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.

  17. Modulus reconstruction from prostate ultrasound images using finite element modeling

    NASA Astrophysics Data System (ADS)

    Yan, Zhennan; Zhang, Shaoting; Alam, S. Kaisar; Metaxas, Dimitris N.; Garra, Brian S.; Feleppa, Ernest J.

    2012-03-01

    Elastography is becoming increasingly useful in medical diagnosis. However, most treatments assume a planar compression applied to the tissue surface and measure the resulting deformation. The stress distribution is relatively uniform close to the surface when using a large, flat compressor, but it diverges gradually with tissue depth. In prostate elastography, the transrectal probes used for scanning and compression are generally cylindrical side-fire or rounded end-fire probes, and the force is applied through the rectal wall. This makes it very difficult to detect cancer in the prostate, since the rounded contact surfaces exaggerate the non-uniformity of the applied stress, especially for the distal, anterior prostate. We have developed a preliminary 2D Finite Element Model (FEM) to simulate prostate deformation in elastography. The model includes a homogeneous prostate with a stiffer tumor in the proximal, posterior region of the gland. A force is applied to the rectal wall to deform the prostate, and strain and stress distributions can be computed from the resultant displacements. We then use the displacements as boundary conditions and reconstruct the modulus distribution (the inverse problem) using a linear perturbation method. The FEM simulation shows that strain and strain contrast (of the lesion) decrease very rapidly with increasing depth and lateral distance. Therefore, lesions would not be clearly visible if located far away from the probe. However, the reconstructed modulus image can better depict a relatively stiff lesion wherever the lesion is located.

  18. Nonlinear algorithm for task-specific tomosynthetic image reconstruction

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Underhill, Hunter A.; Hemler, Paul F.; Lavery, John E.

    1999-05-01

    This investigation defines and tests a simple, nonlinear, task-specific method for rapid tomosynthetic reconstruction of radiographic images designed to allow an increase in specificity at the expense of sensitivity. Representative lumpectomy specimens containing cancer from human breasts were radiographed with a digital mammographic machine. The resulting projective data were processed to yield a series of tomosynthetic slices distributed throughout the breast. Five board-certified radiologists compared tomographic displays of these tissues processed both linearly (control) and nonlinearly (test) and ranked them in terms of their perceived interpretability. In another task, a different set of nine observers estimated the relative depths of six holes bored in a solid Lucite block as perceived when observed in three dimensions as a tomosynthesized series of test and control slices. All participants preferred the nonlinearly generated tomosynthetic mammograms to those produced conventionally, with or without subsequent deblurring by means of iterative deconvolution. The result was similar (p less than 0.015) when the hole-depth experiment was performed objectively. We therefore conclude that, for certain tasks that are unduly compromised by tomosynthetic blurring, the nonlinear tomosynthetic reconstruction method described here may improve diagnostic performance with a negligible increase in cost or complexity.

  19. Theory of Adaptive Acquisition Method for Image Reconstruction from Projections and Application to EPR Imaging

    NASA Astrophysics Data System (ADS)

    Placidi, G.; Alecci, M.; Sotgiu, A.

    1995-07-01

    An adaptive method for selecting the projections to be used for image reconstruction is presented. The method starts with the acquisition of four projections at angles of 0°, 45°, 90°, 135° and selects the new angles by computing a function of the previous projections. This makes it possible to adapt the selection of projections to the arbitrary shape of the sample, thus measuring a more informative set of projections. When the sample is smooth or has internal symmetries, this technique allows a reduction in the number of projections required to reconstruct the image without loss of information. The method has been tested on simulated data at different values of signal-to-noise ratio (S/N) and on experimental data recorded by an EPR imaging apparatus.
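    One plausible way to realize the 'function of the previous projections' is to repeatedly split the angular gap whose neighbouring projections differ the most; the sketch below applies that criterion to a simple numerical phantom using a rotate-and-sum projector. Both the selection criterion and the stopping rule are assumptions for illustration, not necessarily the rule used in the paper.

      import numpy as np
      from scipy.ndimage import rotate

      def projection(image, angle_deg):
          """Parallel-beam projection at the given angle (simple rotate-and-sum)."""
          return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

      # Simple phantom: an off-centre ellipse.
      n = 128
      yy, xx = np.mgrid[0:n, 0:n]
      phantom = (((xx - 70) / 30.0) ** 2 + ((yy - 50) / 15.0) ** 2 <= 1.0).astype(float)

      # Start with four projections, then repeatedly add the angle bisecting the
      # adjacent pair of projections that differ the most (assumed selection rule).
      angles = [0.0, 45.0, 90.0, 135.0]
      projections = {a: projection(phantom, a) for a in angles}
      for _ in range(12):
          ordered = sorted(angles)
          pairs = list(zip(ordered, ordered[1:] + [ordered[0] + 180.0]))  # wrap at 180
          diffs = [np.linalg.norm(projections[a % 180.0] - projections[b % 180.0])
                   for a, b in pairs]
          a, b = pairs[int(np.argmax(diffs))]
          new_angle = 0.5 * (a + b) % 180.0
          angles.append(new_angle)
          projections[new_angle] = projection(phantom, new_angle)

      print(sorted(round(a, 1) for a in angles))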

  20. Partial scene reconstruction using Time-of-Flight imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yuchen; Xiong, Hongkai

    2014-11-01

    This paper is devoted to recovering the coordinates of partial 3D points in scene reconstruction from time-of-flight (ToF) images. Assuming the camera does not move, only the coordinates of the points in the images are accessible. The exposure time is two trillionths of a second, and the synthetic visualization shows light propagation at half a trillion frames per second. In global light transport, direct components signify that the light is emitted from a light point and reflected from a scene point only once. Considering that the camera and the light source are the two foci of an ellipsoid and correspond to a constant total path length at a given time, we take into account two constraints: (1) this total path length is the sum of the distances the light travels between the two foci and the scene point; and (2) the focal point of the camera, the scene point and the corresponding image point lie on a line. It is worth mentioning that calibration is necessary to obtain the coordinates of the light point. The calibration can be done in two steps: (1) choose a scene that contains some pairs of points at the same depth, whose positions are known; and (2) substitute these positions into the two constraints and obtain the coordinates of the light point. After calculating the coordinates of the scene points, MeshLab is used to build the partial scene model. The proposed approach is well suited to estimating the exact distance between two scene points.
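    Under the two constraints stated above (known total path length, and the scene point lying on the camera ray), the depth along the ray has a closed form; the short sketch below derives and checks it with made-up coordinates for the camera, the light source and the path length.

      import numpy as np

      def scene_point(camera, source, ray_dir, path_length):
          """Scene point p on the camera ray with |p - source| + |p - camera| = path_length.

          With p = camera + t * u (|u| = 1) and |p - source| = path_length - t,
          squaring gives  t = (path_length**2 - |camera - source|**2)
                              / (2 * (path_length + u . (camera - source))).
          """
          u = ray_dir / np.linalg.norm(ray_dir)
          cs = camera - source
          t = (path_length ** 2 - cs @ cs) / (2.0 * (path_length + u @ cs))
          return camera + t * u

      # Made-up geometry (all coordinates in metres).
      camera = np.array([0.0, 0.0, 0.0])
      source = np.array([0.5, 0.0, 0.0])
      true_point = np.array([0.2, 0.1, 2.0])
      d = np.linalg.norm(true_point - source) + np.linalg.norm(true_point - camera)
      print(scene_point(camera, source, true_point - camera, d))   # ~[0.2, 0.1, 2.0]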

  1. Reconstructing Images in Astrophysics, an Inverse Problem Point of View

    NASA Astrophysics Data System (ADS)

    Theys, Céline; Aime, Claude

    2016-04-01

    After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem. In the general form, the observed image is given by a Fredholm integral containing the object and the response of the instrument. Its inversion is formulated using linear algebra. The discretized object and image of size N × N are stored in vectors x and y of length N^2. They are related to one another by the linear relation y = Hx, where H is a matrix of size N^2 × N^2 that contains the elements of the instrument response. This matrix presents particular properties for a shift-invariant point spread function, for which the Fredholm integral reduces to a convolution relation. The presence of noise complicates the resolution of the problem. It is shown that minimum variance unbiased solutions fail to give good results because H is badly conditioned, leading to the need for a regularized solution. The relative strength of regularization versus fidelity to the data is discussed and briefly illustrated on an example using L-curves. The origins and construction of iterative algorithms are explained, and illustrations are given for the algorithms ISRA, for Gaussian additive noise, and Richardson-Lucy, for a pure photodetected image (Poisson statistics). In this latter case, the way the algorithm modifies the spatial frequencies of the reconstructed image is illustrated for a diluted array of apertures in space. Throughout the chapter, the inverse problem is formulated in matrix form for the general case of the Fredholm integral, while numerical illustrations are limited to the deconvolution case, allowing the use of discrete Fourier transforms, because of computer limitations.
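    For the deconvolution case discussed at the end (Poisson statistics, discrete Fourier transforms), the Richardson-Lucy iteration has a compact form; the sketch below applies it to a synthetic object blurred by a Gaussian PSF. The PSF, object and iteration count are arbitrary choices for illustration.

      import numpy as np

      def richardson_lucy(y, psf, n_iter=50):
          """Richardson-Lucy: x <- x * H^T(y / Hx), with H a circular convolution."""
          psf_hat = np.fft.fft2(np.fft.ifftshift(psf))
          H = lambda img: np.real(np.fft.ifft2(np.fft.fft2(img) * psf_hat))
          Ht = lambda img: np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(psf_hat)))
          x = np.full_like(y, y.mean())                 # flat, positive initial estimate
          for _ in range(n_iter):
              ratio = y / np.maximum(H(x), 1e-12)
              x = x * Ht(ratio)
          return x

      # Synthetic test: blur a sparse object with a Gaussian PSF, add Poisson noise.
      n = 128
      yy, xx = np.mgrid[0:n, 0:n]
      obj = np.zeros((n, n))
      obj[40, 40] = 500.0
      obj[80:90, 60:70] = 20.0
      sigma = 2.0
      psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * sigma ** 2))
      psf /= psf.sum()
      blur = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
      rng = np.random.default_rng(0)
      y = rng.poisson(np.maximum(blur, 0)).astype(float)
      x_hat = richardson_lucy(y, psf)
      print("peak location:", np.unravel_index(np.argmax(x_hat), x_hat.shape))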

  2. Automatic lumen segmentation in IVOCT images using binary morphological reconstruction

    PubMed Central

    2013-01-01

    Background Atherosclerosis causes millions of deaths annually and yields billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality which displays high resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many different segmentation methods, available in the literature for other modalities, could be successfully applied to IVOCT images, improving accuracy and broadening uses. Method An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards made by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions In conclusion, by segmenting a number of IVOCT images with various features, the proposed technique showed itself to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. PMID:23937790
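    The thresholding and binary-reconstruction stages of such a pipeline can be illustrated with scikit-image on a synthetic cross-section; the wavelet preprocessing and the IVOCT-specific details of the published method are omitted, and the ring-shaped test image below is an invented stand-in for a real frame.

      import numpy as np
      from skimage.filters import gaussian, threshold_otsu
      from skimage.morphology import reconstruction

      # Synthetic stand-in for an IVOCT cross-section: a bright, noisy vessel wall
      # (a ring) around a dark lumen.
      n = 256
      yy, xx = np.mgrid[0:n, 0:n]
      r = np.hypot(xx - 130, yy - 120)
      rng = np.random.default_rng(0)
      frame = 0.8 * ((r > 45) & (r < 75)) + 0.15 * rng.random((n, n))

      # 1) Pre-processing: mild smoothing (the published pipeline uses wavelets here).
      smoothed = gaussian(frame, sigma=2)

      # 2) Feature extraction: Otsu threshold separates tissue from background.
      tissue = smoothed > threshold_otsu(smoothed)

      # 3) Binary morphological reconstruction: fill the region enclosed by the wall
      #    (reconstruction by erosion from the image border), then keep what is
      #    enclosed but not wall itself, i.e. the lumen object.
      tissue_f = tissue.astype(float)
      seed = tissue_f.copy()
      seed[1:-1, 1:-1] = 1.0
      filled = reconstruction(seed, tissue_f, method='erosion') > 0.5
      lumen = filled & ~tissue
      print("lumen area (pixels):", int(lumen.sum()))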

  3. Terrain reconstruction from Chang'e-3 PCAM images

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Rui; Ren, Xin; Wang, Fen-Fei; Liu, Jian-Jun; Li, Chun-Lai

    2015-07-01

    The existing terrain models that describe the local lunar surface have limited resolution and accuracy, which can hardly meet the needs of rover navigation, positioning and geological analysis. China launched the lunar probe Chang'e-3 in December, 2013. Chang'e-3 encompassed a lander and a lunar rover called “Yutu” (Jade Rabbit). A set of panoramic cameras were installed on the rover mast. After acquiring panoramic images of four sites that were explored, the terrain models of the local lunar surface with resolution of 0.02m were reconstructed. Compared with other data sources, the models derived from Chang'e-3 data were clear and accurate enough that they could be used to plan the route of Yutu. Supported by the National Natural Science Foundation of China.

  4. Optimization-based image reconstruction with artifact reduction in C-arm CBCT

    NASA Astrophysics Data System (ADS)

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-10-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g. data truncation; and the Chambolle–Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility.

  5. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of signal processing steps: beamforming, followed by envelope detection, and ending with log compression. However, the image will be defocused when PA signals are the input, due to the incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as the input, we first recover the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to reintroduce carrier frequency information. Then, the US post-beamformed RF data is utilized as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focal depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, due to information loss during envelope detection and convolution of the RF information.
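    The first two recovery steps (log decompression and reinsertion of the carrier by convolution with an impulse response) can be sketched on a single scan line as below; the display mapping, carrier frequency and pulse shape are assumptions, since they depend on the particular scanner.

      import numpy as np

      def bmode_line_to_rf(bmode_line, dynamic_range_db=60.0, fc=5e6, fs=40e6, cycles=3):
          """Approximate recovery of post-beamformed RF from one log-compressed B-mode line.

          Assumed display mapping: 8-bit pixel value = envelope in dB scaled to
          [-dynamic_range_db, 0]. The carrier is reintroduced by convolving the
          recovered envelope with a short Gaussian-windowed tone burst.
          """
          db = bmode_line / 255.0 * dynamic_range_db - dynamic_range_db
          envelope = 10.0 ** (db / 20.0)                     # log decompression
          n = int(cycles * fs / fc)
          t = (np.arange(n) - n / 2) / fs
          pulse = np.cos(2 * np.pi * fc * t) * np.exp(-(t * fc / cycles * 2) ** 2)
          return np.convolve(envelope, pulse, mode="same")   # reinsert carrier content

      # Example on a synthetic line with two bright reflectors.
      line = np.zeros(1024)
      line[300], line[700] = 255, 180
      rf = bmode_line_to_rf(line)
      print(rf.shape, float(np.max(np.abs(rf))))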

  6. Optoacoustic Imaging and Tomography: Reconstruction Approaches and Outstanding Challenges in Image Performance and Quantification

    PubMed Central

    Lutzweiler, Christian; Razansky, Daniel

    2013-01-01

    This paper comprehensively reviews the emerging topic of optoacoustic imaging from the image reconstruction and quantification perspective. Optoacoustic imaging combines highly attractive features, including rich contrast and high versatility in sensing diverse biological targets, excellent spatial resolution not compromised by light scattering, and relatively low cost of implementation. Yet, living objects present a complex target for optoacoustic imaging due to the presence of a highly heterogeneous tissue background in the form of strong spatial variations of scattering and absorption. Extracting quantified information on the actual distribution of tissue chromophores and other biomarkers constitutes therefore a challenging problem. Image quantification is further compromised by some frequently-used approximated inversion formulae. In this review, the currently available optoacoustic image reconstruction and quantification approaches are assessed, including back-projection and model-based inversion algorithms, sparse signal representation, wavelet-based approaches, methods for reduction of acoustic artifacts as well as multi-spectral methods for visualization of tissue bio-markers. Applicability of the different methodologies is further analyzed in the context of real-life performance in small animal and clinical in-vivo imaging scenarios. PMID:23736854

  7. Real-time maximum a-posteriori image reconstruction for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.

    2015-08-01

    Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of small processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node, multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU based systems.

  8. Improved least squares MR image reconstruction using estimates of k-space data consistency.

    PubMed

    Johnson, Kevin M; Block, Walter F; Reeder, Scott B; Samsonov, Alexey

    2012-06-01

    This study describes a new approach to reconstruct data that has been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least squares-based image estimation techniques including parallel imaging and constrained reconstruction/compressed sensing applications. Performance of the several variants of the algorithm tailored for fast spin echo and self-gated respiratory gating applications was evaluated in simulations, phantom experiments, and in vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches.
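    At its core such a scheme is a weighted least-squares solve in which each sample's weight reflects how consistent it is judged to be; the sketch below shows only that algebra on a small random problem, with the weights supplied by hand rather than estimated from the data as in the paper.

      import numpy as np

      def weighted_least_squares(A, y, w):
          """Solve min_x ||W^(1/2) (A x - y)||^2, with per-sample weights w = diag(W)."""
          Aw = A * w[:, None]                 # rows of A scaled by their weights
          return np.linalg.solve(A.T @ Aw, A.T @ (w * y))

      rng = np.random.default_rng(0)
      m, n = 200, 50
      A = rng.standard_normal((m, n))
      x_true = rng.standard_normal(n)
      y = A @ x_true + 0.01 * rng.standard_normal(m)
      y[::20] += 5.0                          # a few corrupted (inconsistent) samples

      w_uniform = np.ones(m)
      w_consistency = np.ones(m)
      w_consistency[::20] = 1e-3              # down-weight samples judged inconsistent

      for w, label in [(w_uniform, "uniform"), (w_consistency, "consistency-weighted")]:
          x_hat = weighted_least_squares(A, y, w)
          print(f"{label:22s} error: {np.linalg.norm(x_hat - x_true):.4f}")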

  9. Improved Least Squares MR Image Reconstruction Using Estimates of k-Space Data Consistency

    PubMed Central

    Johnson, Kevin M.; Block, Walter F.; Reeder, Scott. B.; Samsonov, Alexey

    2011-01-01

    This work describes a new approach to reconstruct data that has been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least-squares based image estimation techniques including parallel imaging and constrained reconstruction/compressed sensing applications. Performance of the several variants of the algorithm tailored for fast spin echo (FSE) and self gated respiratory gating applications was evaluated in simulations, phantom experiments, and in-vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches. PMID:22135155

  10. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images and address the problems of insufficient spatial resolution, excessive noise, and low image quality. Firstly, we introduce the image degradation model to show that, in essence, the super-resolution reconstruction process is an ill-posed inverse problem in mathematics. Secondly, we analyze the causes of blurring in the optical imaging process: light diffraction and small-angle scattering are the main reasons for the blur. We propose an image point spread function estimation method and an improved projection onto convex sets (POCS) algorithm, whose effectiveness is indicated by analyzing the changes between the time-domain and frequency-domain algorithms in the reconstruction process; we point out that the improved POCS algorithm, based on prior knowledge, is able to restore and approach the high-frequency content of the original image scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method and the super-resolution algorithm, it is obvious that the improved POCS algorithm can restrain the noise and enhance the image resolution, so the algorithm is shown to be effective. This study of super-resolution image reconstruction by the improved POCS algorithm is therefore shown to be an effective method with important significance and broad application prospects, for example in CT medical image processing and in SRCT analysis of the microstructure evolution mechanism in ceramic sintering.

  11. Hardware acceleration of image recognition through a visual cortex model

    NASA Astrophysics Data System (ADS)

    Rice, Kenneth L.; Taha, Tarek M.; Vutsinas, Christopher N.

    2008-09-01

    Recent findings in neuroscience have led to the development of several new models describing the processes in the neocortex. These models excel at cognitive applications such as image analysis and movement control. This paper presents a hardware architecture to speed up image content recognition through a recently proposed model of the visual cortex. The system is based on a set of parallel computation nodes implemented in an FPGA. The design was optimized for hardware by reducing the data storage requirements, and removing the need for multiplies and divides. The reconfigurable logic hardware implementation running at 121 MHz provided a speedup of 148 times over a 2 GHz AMD Opteron processor. The results indicate the feasibility of specialized hardware to accelerate larger biological scale implementations of the model.

  12. Applying a PC accelerator board for medical imaging.

    PubMed

    Gray, J; Grenzow, F; Siedband, M

    1990-01-01

    An AT-compatible computer was used to expand X-ray images that had been compressed and stored on optical data cards. Initially, execution time for expansion of a single X-ray image was 25 min. The requirements were for an expansion time of under 10 s and costs of under $1000 for computing hardware. This meant a computational speed increase of over 150 times was needed. Tests showed that incorporating an 80287 coprocessor would only give a speed increase of five times. The DSP32-PC-160 floating-point accelerator board was selected as a cost-effective solution to the need for more computing power. This board provided adequate processor speed, onboard memory, and data bus width; floating-point math precision; and a high-level language compiler for code development.

  13. Iterative image reconstruction for planar small animal imaging with a multipinhole collimator

    SciTech Connect

    Smith, Mark

    2002-07-01

    A method to reconstruct images from planar multipinhole single photon projection data using the iterative maximum likelihood expectation maximization algorithm has been developed. The projection and backprojection steps in the algorithm are accomplished by raytracing through each of the pinholes. The method was applied to a Tc-99m syringe and box phantom study and to a Tc-99m sestamibi cardiac study in a Sprague-Dawley rat. Projection data were acquired using a custom-built detector whose key components were a pixellated array of 1 x 1 x 5 mm NaI(Tl) scintillation crystals, a position-sensitive photomultiplier tube and a four-pinhole array. For the phantom study the syringe:box activity concentration ratio was 5:1. Syringe contrast with background was 0.22 in the projection image and 0.59 in the reconstructed image. The reconstructed rat cardiac image shows improved visualization of the left ventricular myocardium. These results are a step toward the practical application of high-resolution small animal imaging.
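    The MLEM update used here has a compact matrix form; the sketch below applies it to a small random system standing in for the pinhole ray-tracing projector (the matrix A is a placeholder, not a modelled pinhole geometry).

      import numpy as np

      def mlem(A, counts, n_iter=100):
          """MLEM: x <- x * A^T(counts / (A x)) / A^T 1, elementwise, keeping x >= 0."""
          sens = A.sum(axis=0)                        # A^T 1, the sensitivity image
          x = np.ones(A.shape[1])
          for _ in range(n_iter):
              expected = A @ x
              x *= (A.T @ (counts / np.maximum(expected, 1e-12))) / np.maximum(sens, 1e-12)
          return x

      rng = np.random.default_rng(0)
      n_bins, n_pix = 400, 100
      A = rng.random((n_bins, n_pix)) * (rng.random((n_bins, n_pix)) < 0.1)  # sparse-ish projector
      x_true = np.zeros(n_pix)
      x_true[40:60] = 50.0
      counts = rng.poisson(A @ x_true).astype(float)
      x_hat = mlem(A, counts)
      print("correlation with truth:", round(float(np.corrcoef(x_hat, x_true)[0, 1]), 3))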

  14. Globally accelerated reconstruction algorithm for diffusion tomography with continuous-wave source in an arbitrary convex shape domain.

    PubMed

    Pantong, Natee; Su, Jianzhong; Shan, Hua; Klibanov, Michael V; Liu, Hanli

    2009-03-01

    A new numerical imaging algorithm is presented for reconstruction of optical absorption coefficients from near-infrared light data with a continuous-wave source. As a continuation of our earlier efforts in developing a series of methods called "globally convergent reconstruction methods" [J. Opt. Soc. Am. A23, 2388 (2006)], this numerical algorithm solves the inverse problem through solution of a boundary-value problem for a Volterra-type integral partial differential equation. We deal here with the particular issues in solving the inverse problems in an arbitrary convex shape domain. It is demonstrated in numerical studies that this reconstruction technique is highly efficient and stable with respect to the complex distribution of actual unknown absorption coefficients. The method is particularly useful for reconstruction from a large data set obtained from a tissue or organ of particular shape, such as the prostate. Numerical reconstructions of a simulated prostate-shaped phantom with three different settings of absorption-inclusions are presented.

  15. Does thorax EIT image analysis depend on the image reconstruction method?

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2013-04-01

    Different methods were proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine if the analysis methods based on back-projection deliver the same results when applied on images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from the routine CT images. EIT raw data was reconstructed offline with (1) filtered back-projection with circular forward model (BPC); (2) GREIT reconstruction method with circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ±interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences of the GI index based on different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with filtered back-projection and GREITC.

  16. Design of the FEM-FIR filter for displacement reconstruction using accelerations and displacements measured at different sampling rates

    NASA Astrophysics Data System (ADS)

    Hong, Yun Hwa; Lee, Se Gun; Lee, Hae Sung

    2013-07-01

    This paper presents a displacement reconstruction scheme using acceleration measured at a high sampling rate and displacement measured at a considerably low sampling rate. The governing equation and the boundary conditions for the reconstruction are derived using the variational statement of an inverse problem to minimize the errors between measured and reconstructed responses. The transfer function of the governing equation is identically 1 over whole frequency domain, and the proposed scheme would not result in any reconstruction error. A finite impulse response filter (FIR filter) is formulated through the finite element discretization of the governing equation. The Hermitian shape function is adopted to interpolate the displacement in a finite element. The transfer functions of the FIR filter are derived, and their characteristics are thoroughly discussed. It is recommended that the displacement sampling rate should be higher than the Nyquist rate of the target frequency, which is the lowest physically meaningful frequency in measured acceleration. In case the displacement sampling rate is lower than the recommended rate, the use of a higher target accuracy, which is the predefined accuracy at the target frequency, is required. The reconstruction of velocity with the proposed scheme is also presented. The validity of the proposed scheme is demonstrated with a numerical simulation study and a field test on a simply-supported railway bridge.

  17. Iterative image reconstruction algorithms using wave-front intensity and phase variation

    NASA Astrophysics Data System (ADS)

    Quiney, H. M.; Nugent, K. A.; Peele, A. G.

    2005-07-01

    Iterative algorithms that reconstruct images from far-field x-ray diffraction data are plagued with convergence difficulties. An iterative image reconstruction algorithm is described that ameliorates these convergence difficulties through the use of diffraction data obtained with illumination modulated in both intensity and phase.
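
    The abstract does not spell out the algorithm; as background, the sketch below shows a plain error-reduction (Gerchberg-Saxton-type) loop that alternates between a measured far-field magnitude constraint and an object-domain support constraint. This is the generic kind of iteration whose convergence problems motivate the paper, not the authors' intensity-and-phase-variation method; all names are illustrative.

      import numpy as np

      def error_reduction(measured_magnitude, support, n_iter=200, seed=0):
          """Generic error-reduction phase retrieval (illustration only).

          measured_magnitude : |F(object)| sampled on the detector grid
          support            : boolean array where the object may be nonzero
          """
          rng = np.random.default_rng(seed)
          # Start from the measured magnitude with random phases
          field = measured_magnitude * np.exp(1j * 2 * np.pi * rng.random(measured_magnitude.shape))
          for _ in range(n_iter):
              obj = np.fft.ifft2(field)
              # Object-domain constraints: support and non-negativity of the real part
              obj = np.where(support, np.maximum(obj.real, 0), 0)
              field = np.fft.fft2(obj)
              # Fourier-domain constraint: keep the phase, impose the measured magnitude
              field = measured_magnitude * np.exp(1j * np.angle(field))
          return np.fft.ifft2(field).real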

  18. Digital aberration correction of fluorescent images with coherent holographic image reconstruction by phase transfer (CHIRPT)

    NASA Astrophysics Data System (ADS)

    Field, Jeffrey J.; Bartels, Randy A.

    2016-03-01

    Coherent holographic image reconstruction by phase transfer (CHIRPT) is an imaging method that permits digital holographic propagation of fluorescent light. The image formation process in CHIRPT is based on illuminating the specimen with a precisely controlled spatio-temporally varying intensity pattern. This pattern is formed by focusing a spatially coherent illumination beam to a line focus on a spinning modulation mask, and image relaying the mask plane to the focal plane of an objective lens. Deviations from the designed spatio-temporal illumination pattern due to imperfect mounting of the circular modulation mask onto the rotation motor induce aberrations in the recovered image. Here we show that these aberrations can be measured and removed non-iteratively by measuring the disk aberration phase externally. We also demonstrate measurement and correction of systematic optical aberrations in the CHIRPT microscope.

  19. Preoperative digital mammography imaging in conservative mastectomy and immediate reconstruction

    PubMed Central

    Angrigiani, Claudio; Hammond, Dennis; Nava, Maurizio; Gonzalez, Eduardo; Rostagno, Roman; Gercovich, Gustavo

    2016-01-01

    Background: Digital mammography clearly distinguishes gland tissue density from the overlying non-glandular breast tissue coverage, which corresponds to the existing tissue between the skin and the Cooper’s ligaments surrounding the gland (i.e., dermis and subcutaneous fat). Preoperative digital imaging can determine the thickness of this breast tissue coverage, thus facilitating planning of the most adequate surgical techniques and reconstructive procedures for each case. Methods: This study aimed to describe the results of a retrospective study of 352 digital mammograms in 176 patients with different breast volumes who underwent preoperative conservative mastectomies. The breast tissue coverage thickness and its relationship with the breast volume were evaluated. Results: The breast tissue coverage thickness ranged from 0.233 to 4.423 cm, with a mean value of 1.952 cm. A comparison of tissue coverage and breast volume revealed a non-direct relationship between these factors. Conclusions: Preoperative planning should not depend only on breast volume. Flap evaluations based on preoperative imaging measurements might be helpful when planning a conservative mastectomy. Accordingly, we propose a breast tissue coverage classification (BTCC). PMID:26855903

  20. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
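
    The specific adaptive weighting function is not given in the abstract, but the exposure-weighted merge that such methods build on can be stated compactly. The sketch below assumes linear (raw-like) input frames normalized to [0, 1] and uses a simple hat-shaped weight; the exposure- and motion-adaptive terms and the ghost exclusion described above are deliberately omitted, and all names are illustrative.

      import numpy as np

      def merge_hdr(frames, exposure_times):
          """Minimal exposure-weighted HDR merge for linear (raw-like) frames.

          frames         : list of 2D arrays with values in [0, 1], same scene
          exposure_times : list of exposure times (seconds) matching `frames`
          """
          def weight(v):
              # Hat function: trust mid-range pixels, distrust near-black/near-clipped ones
              return np.clip(1.0 - np.abs(2.0 * v - 1.0), 1e-3, None)

          num = np.zeros_like(frames[0], dtype=np.float64)
          den = np.zeros_like(frames[0], dtype=np.float64)
          for frame, t in zip(frames, exposure_times):
              w = weight(frame)
              num += w * frame / t      # radiance estimate contributed by this exposure
              den += w
          return num / den              # per-pixel weighted average of radiance estimates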

  1. Image artefact propagation in motion estimation and reconstruction in interventional cardiac C-arm CT.

    PubMed

    Müller, K; Maier, A K; Schwemmer, C; Lauritsch, G; De Buck, S; Wielandts, J-Y; Hornegger, J; Fahrig, R

    2014-06-21

    The acquisition of data for cardiac imaging using a C-arm computed tomography system requires several seconds and multiple heartbeats. Hence, incorporation of motion correction in the reconstruction step may improve the resulting image quality. Cardiac motion can be estimated by deformable three-dimensional (3D)/3D registration performed on initial 3D images of different heart phases. This motion information can be used for a motion-compensated reconstruction allowing the use of all acquired data for image reconstruction. However, the result of the registration procedure and hence the estimated deformations are influenced by the quality of the initial 3D images. In this paper, the sensitivity of the 3D/3D registration step to the image quality of the initial images is studied. Different reconstruction algorithms are evaluated for a recently proposed cardiac C-arm CT acquisition protocol. The initial 3D images are all based on retrospective electrocardiogram (ECG)-gated data. ECG-gating of data from a single C-arm rotation provides only a few projections per heart phase for image reconstruction. This view sparsity leads to prominent streak artefacts and a poor signal to noise ratio. Five different initial image reconstructions are evaluated: (1) cone beam filtered-backprojection (FDK), (2) cone beam filtered-backprojection and an additional bilateral filter (FFDK), (3) removal of the shadow of dense objects (catheter, pacing electrode, etc) before reconstruction with a cone beam filtered-backprojection (cathFDK), (4) removal of the shadow of dense objects before reconstruction with a cone beam filtered-backprojection and a bilateral filter (cathFFDK). The last method (5) is an iterative few-view reconstruction (FV), the prior image constrained compressed sensing combined with the improved total variation algorithm. All reconstructions are investigated with respect to the final motion-compensated reconstruction quality. The algorithms were tested on a mathematical

  2. Image artefact propagation in motion estimation and reconstruction in interventional cardiac C-arm CT

    NASA Astrophysics Data System (ADS)

    Müller, K.; Maier, A. K.; Schwemmer, C.; Lauritsch, G.; De Buck, S.; Wielandts, J.-Y.; Hornegger, J.; Fahrig, R.

    2014-06-01

    The acquisition of data for cardiac imaging using a C-arm computed tomography system requires several seconds and multiple heartbeats. Hence, incorporation of motion correction in the reconstruction step may improve the resulting image quality. Cardiac motion can be estimated by deformable three-dimensional (3D)/3D registration performed on initial 3D images of different heart phases. This motion information can be used for a motion-compensated reconstruction allowing the use of all acquired data for image reconstruction. However, the result of the registration procedure and hence the estimated deformations are influenced by the quality of the initial 3D images. In this paper, the sensitivity of the 3D/3D registration step to the image quality of the initial images is studied. Different reconstruction algorithms are evaluated for a recently proposed cardiac C-arm CT acquisition protocol. The initial 3D images are all based on retrospective electrocardiogram (ECG)-gated data. ECG-gating of data from a single C-arm rotation provides only a few projections per heart phase for image reconstruction. This view sparsity leads to prominent streak artefacts and a poor signal to noise ratio. Five different initial image reconstructions are evaluated: (1) cone beam filtered-backprojection (FDK), (2) cone beam filtered-backprojection and an additional bilateral filter (FFDK), (3) removal of the shadow of dense objects (catheter, pacing electrode, etc) before reconstruction with a cone beam filtered-backprojection (cathFDK), (4) removal of the shadow of dense objects before reconstruction with a cone beam filtered-backprojection and a bilateral filter (cathFFDK). The last method (5) is an iterative few-view reconstruction (FV), the prior image constrained compressed sensing combined with the improved total variation algorithm. All reconstructions are investigated with respect to the final motion-compensated reconstruction quality. The algorithms were tested on a mathematical

  3. Electrodynamics sensor for the image reconstruction process in an electrical charge tomography system.

    PubMed

    Rahmat, Mohd Fua'ad; Isa, Mohd Daud; Rahim, Ruzairi Abdul; Hussin, Tengku Ahmad Raja

    2009-01-01

    Electrical charge tomography (EChT) is a non-invasive imaging technique that aims to reconstruct an image of the material being conveyed from data measured by an electrodynamics sensor installed around the pipe. Image reconstruction in electrical charge tomography is vital and has not been widely studied before. Three methods have previously been introduced, namely the linear back projection method, the filtered back projection method and the least squares method. These methods normally face ill-posed problems, and their solutions are unstable and inaccurate. To ensure stability and accuracy, a special solution should be applied to obtain a meaningful image reconstruction result. In this paper, a new image reconstruction method, least squares with regularization (LSR), is introduced to reconstruct the image of material in a gravity-mode conveyor pipeline for electrical charge tomography. Numerical analysis based on simulation data indicated that this algorithm efficiently overcomes the numerical instability. The results show that the accuracy of the reconstructed images obtained using the proposed algorithm was enhanced and similar to images captured by a CCD camera. As a result, an efficient method for electrical charge tomography image reconstruction has been introduced.
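
    The LSR step described here amounts to Tikhonov-regularized least squares. A minimal sketch is shown below, assuming the sensitivity matrix S, the measured sensor data b and the regularization weight lam are given; how S is constructed for the electrodynamics sensor is outside the scope of this sketch, and the names are illustrative.

      import numpy as np

      def lsr_reconstruct(S, b, lam=1e-2):
          """Least squares with regularization: argmin_x ||S x - b||^2 + lam ||x||^2."""
          n = S.shape[1]
          # Normal equations with a ridge term to stabilize the ill-posed inversion
          return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ b)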

  4. Choice of reconstructed tissue properties affects interpretation of lung EIT images.

    PubMed

    Grychtol, Bartłomiej; Adler, Andy

    2014-06-01

    Electrical impedance tomography (EIT) estimates an image of change in electrical properties within a body from stimulations and measurements at surface electrodes. There is significant interest in EIT as a tool to monitor and guide ventilation therapy in mechanically ventilated patients. In lung EIT, the EIT inverse problem is commonly linearized and only changes in electrical properties are reconstructed. Early algorithms reconstructed changes in resistivity, while most recent work using the finite element method reconstructs conductivity. Recently, we demonstrated that EIT images of ventilation can be misleading if the electrical contrasts within the thorax are not taken into account during the image reconstruction process. In this paper, we explore the effect of the choice of the reconstructed electrical properties (resistivity or conductivity) on the resulting EIT images. We show in simulation and experimental data that EIT images reconstructed with the same algorithm but with different parametrizations lead to large and clinically significant differences in the resulting images, which persist even after attempts to eliminate the impact of the parameter choice by recovering volume changes from the EIT images. Since there is no consensus among the most popular reconstruction algorithms and devices regarding the parametrization, this finding has implications for potential clinical use of EIT. We propose a program of research to develop reconstruction techniques that account for both the relationship between air volume and electrical properties of the lung and artefacts introduced by the linearization. PMID:24844670

  5. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for human geometry reconstruction from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and containing about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method was available to reduce the number of cells needed to describe an organ. In this study, the voxels on the organ surface were taken into account in the merging, which produces fewer cells per organ, and an index-based sorting algorithm was put forward to speed up the merging. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, which largely retains the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its general applicability and high performance were experimentally verified.
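
    As a rough illustration of the merging idea (not the index-based sorting algorithm referred to above), the sketch below collapses runs of identically labeled voxels along one axis into boxes, which is the simplest way to reduce the number of cells describing an organ; the names and the convention that label 0 means "outside the phantom" are assumptions.

      import numpy as np

      def merge_runs_along_x(labels):
          """Merge runs of equal voxel labels along the x axis into boxes.

          labels : 3D integer array of organ IDs (0 assumed to mean outside the phantom)
          Returns a list of (organ_id, (x0, x1, y, z)) half-open runs along x.
          """
          boxes = []
          nx, ny, nz = labels.shape
          for z in range(nz):
              for y in range(ny):
                  x = 0
                  while x < nx:
                      organ = labels[x, y, z]
                      x0 = x
                      while x < nx and labels[x, y, z] == organ:
                          x += 1            # extend the run while the label is unchanged
                      if organ != 0:
                          boxes.append((int(organ), (x0, x, y, z)))
          return boxes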

  6. Reconstruction of lattice parameters and beam momentum distribution from turn-by-turn beam position monitor readings in circular accelerators

    NASA Astrophysics Data System (ADS)

    Edmonds, C. S.; Gratus, J.; Hock, K. M.; Machida, S.; Muratori, B. D.; Torromé, R. G.; Wolski, A.

    2014-05-01

    In high chromaticity circular accelerators, rapid decoherence of the betatron motion of a particle beam can make the measurement of lattice and bunch values, such as Courant-Snyder parameters and betatron amplitude, difficult. A method for reconstructing the momentum distribution of a beam from beam position measurements is presented. Further analysis of the same beam position monitor data allows estimates to be made of the Courant-Snyder parameters and the amplitude of coherent betatron oscillation of the beam. The methods are tested through application to data taken on the linear nonscaling fixed field alternating gradient accelerator, EMMA.

  7. Field experiment and image reconstruction using a Fourier telescopy imaging system over a 600-m-long horizontal path.

    PubMed

    Yu, Shu-Hai; Dong, Lei; Liu, Xin-Yue; Lin, Xu-Dong; Meng, Hao-Ran; Zhong, Xing

    2016-08-20

    To confirm the effect of uplink atmospheric turbulence on Fourier telescopy (FT), we designed a system for far-field imaging, utilizing a T-type laser transmitting configuration with commercially available hardware, except for a green imaging laser. The horizontal light transmission distance for both uplink and downlink was ∼300 m. For both the transmitting and received beams, the height above the ground was below 1 m. The imaging laser's pointing accuracy was ∼9.3 μrad. A novel image reconstruction approach was proposed, yielding significantly improved quality and Strehl ratio of the reconstructed images. From the reconstruction result, we observed that the tip/tilt aberration is tolerated by the FT system even for Changchun's atmospheric coherence length parameter (r0) below 3 cm. The resolution of the reconstructed images was ∼0.615 μrad. PMID:27556991

  8. SU-E-I-73: Clinical Evaluation of CT Image Reconstructed Using Interior Tomography

    SciTech Connect

    Zhang, J; Ge, G; Winkler, M; Cong, W; Wang, G

    2014-06-01

    Purpose: Radiation dose reduction has been a long-standing challenge in CT imaging of obese patients. Recent advances in interior tomography (reconstruction of an interior region of interest (ROI) from line integrals associated only with paths through the ROI) promise to achieve significant radiation dose reduction without compromising image quality. This study investigates the application of this technique in CT imaging by evaluating the quality of images reconstructed from patient data. Methods: Projection data were obtained directly from patients who had CT examinations on a dual-source CT scanner (DSCT). The two detectors in a DSCT acquired projection data simultaneously. One detector provided projection data for the full field of view (FOV, 50 cm) while the other detector provided truncated projection data for a FOV of 26 cm. Full-FOV CT images were reconstructed using both filtered back projection and an iterative algorithm, while the interior tomography algorithm was implemented to reconstruct ROI images. For comparison, FBP was also used to reconstruct ROI images. Reconstructed CT images were evaluated by radiologists and compared with images from the CT scanner. Results: The reconstructed ROI image was in excellent agreement with the truth inside the ROI, obtained from the scanner's images, and the detailed features in the ROI were quantitatively accurate. The radiologists' evaluation shows that CT images reconstructed with interior tomography met diagnostic requirements. Radiation dose may be reduced by up to 50% using interior tomography, depending on patient size. Conclusion: This study shows that interior tomography can be readily employed in CT imaging for radiation dose reduction. It may be especially useful in imaging obese patients, whose subcutaneous tissue is less clinically relevant but may significantly increase radiation dose.

  9. Reconstruction of dynamic contrast enhanced magnetic resonance imaging of the breast with temporal constraints.

    PubMed

    Chen, Liyong; Schabel, Matthias C; DiBella, Edward V R

    2010-06-01

    A number of methods using temporal and spatial constraints have been proposed for reconstruction of undersampled dynamic magnetic resonance imaging (MRI) data. The complex data can be constrained or regularized in a number of different ways; for example, the time derivatives of the magnitude and phase image voxels can be constrained separately or jointly. Intuitively, the performance of different regularizations will depend on both the data and the chosen temporal constraints. Here, a complex temporal total variation (TV) constraint was compared to the use of separate real and imaginary constraints, and to a magnitude constraint alone. Projection onto Convex Sets (POCS) with a gradient descent method was used to implement the diverse temporal constraints in reconstructions of DCE MRI data. For breast DCE data, serial POCS with separate real and imaginary TV constraints was found to give relatively poor results, while serial/parallel POCS with a complex temporal TV constraint and serial POCS with a magnitude-only temporal TV constraint performed well with an acceleration factor as large as R=6. In the tumor area, the best method was found to be parallel POCS with the complex temporal TV constraint. This method resulted in estimates for the pharmacokinetic parameters that were linearly correlated with those estimated from the fully sampled data, with Ktrans(R=6) = 0.97 Ktrans(R=1) + 0.00 (correlation coefficient r = 0.98) and kep(R=6) = 0.95 kep(R=1) + 0.00 (r = 0.85). These results suggest that it is possible to acquire highly undersampled breast DCE-MRI data with improved spatial and/or temporal resolution with minimal loss of image quality.
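
    To make the temporal constraint concrete, the sketch below shows, in isolation, a gradient-descent step on a smoothed complex temporal total-variation penalty applied to a dynamic image series. The data-consistency (POCS) projections and the separate real/imaginary variants are omitted, constant factors are absorbed into the step size, and the names are illustrative.

      import numpy as np

      def temporal_tv_gradient(x, eps=1e-8):
          """Gradient of the smoothed temporal TV penalty sum_t |x[t+1] - x[t]|.

          x : (possibly complex) array of shape (nt, ny, nx), a dynamic image series.
          """
          d = np.diff(x, axis=0)                    # forward temporal differences
          w = d / np.sqrt(np.abs(d) ** 2 + eps)     # d / |d| with smoothing near zero
          g = np.zeros_like(x)
          g[:-1] -= w                               # contribution with respect to x[t]
          g[1:] += w                                # contribution with respect to x[t+1]
          return g

      def tv_descent_step(x, step=0.1):
          """One gradient-descent step on the temporal TV term (data fidelity omitted)."""
          return x - step * temporal_tv_gradient(x)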

  10. Image reconstruction from phased-array data based on multichannel blind deconvolution.

    PubMed

    She, Huajun; Chen, Rong-Rong; Liang, Dong; Chang, Yuchou; Ying, Leslie

    2015-11-01

    In this paper we consider image reconstruction from fully sampled multichannel phased array MRI data without knowledge of the coil sensitivities. To overcome the non-uniformity of the conventional sum-of-square reconstruction, a new framework based on multichannel blind deconvolution (MBD) is developed for joint estimation of the image function and the sensitivity functions in image domain. The proposed approach addresses the non-uniqueness of the MBD problem by exploiting the smoothness of both functions in the image domain through regularization. Results using simulation, phantom and in vivo experiments demonstrate that the reconstructions by the proposed algorithm are more uniform than those by the existing methods.

  11. WE-G-18A-05: Cone-Beam CT Reconstruction with Deformed Prior Image

    SciTech Connect

    Zhang, H; Huang, J; Ma, J; Chen, W; Ouyang, L; Wang, J

    2014-06-15

    Purpose: A prior image can be incorporated into the image reconstruction process to improve the quality of on-treatment cone-beam CT (CBCT) from sparse-view or low-dose projections. However, the deformation between the prior image and the on-treatment CBCT is not considered in current prior-image-based reconstructions (e.g., prior image constrained compressed sensing (PICCS)). The purpose of this work is to develop a deformed-prior-image-based reconstruction (DPIR) strategy to address the mismatch between the prior image and the target image. Methods: The deformed prior image is obtained by a projection-based registration approach. Specifically, the deformation vector fields (DVF) used to deform the prior image are estimated by matching the forward projection of the prior image and the measured on-treatment projection. The deformed prior image is then used as the prior image in the standard PICCS algorithm. Simulation studies on the XCAT phantom were conducted to evaluate the performance of the projection-based registration procedure and the proposed DPIR strategy. Results: The deformed prior image matches the geometry of the on-treatment CBCT more closely than the original prior image. Using the deformed prior image, the quality of the image reconstructed by DPIR from few-view projection data is greatly improved compared with the standard PICCS algorithm. The relative image reconstruction error is reduced to 11.13% in the proposed DPIR from 17.57% in the original PICCS. Conclusion: The proposed DPIR approach can solve the mismatch problem between the prior image and the target image, which overcomes the limitation of the original PICCS algorithm for CBCT reconstruction from sparse-view or low-dose projections.

  12. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental, and PSF uncertainties on the reconstructed source distribution was evaluated, with the jaw positioning presenting the dominant effect.
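
    The MLEM update underlying this kind of source reconstruction is standard; the sketch below shows the generic iteration for a given system matrix A (rays from source-plane bins to detector bins) and a measured fluence profile y. It is an illustration only and does not reproduce the paper's ray tracing, film-to-fluence conversion or jaw-position estimation; all names are assumptions.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Generic MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)).

          A : system matrix mapping source-plane bins to detector bins
          y : measured (non-negative) fluence profile
          """
          eps = 1e-12
          x = np.ones(A.shape[1])                  # flat initial source estimate
          sens = A.T @ np.ones(A.shape[0]) + eps   # sensitivity image (column sums)
          for _ in range(n_iter):
              ratio = y / (A @ x + eps)            # measured over predicted projections
              x = x / sens * (A.T @ ratio)         # multiplicative EM update
          return x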

  13. Texture-preserving Bayesian image reconstruction for low-dose CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Han, Hao; Hu, Yifan; Liu, Yan; Ma, Jianhua; Li, Lihong; Moore, William; Liang, Zhengrong

    2016-03-01

    The Markov random field (MRF) model has been widely used in Bayesian image reconstruction to reconstruct piecewise smooth images in the presence of noise, such as in low-dose X-ray computed tomography (LdCT). While it can preserve edge sharpness via an edge-preserving potential function, its regional smoothing may sacrifice tissue image textures, which have been recognized as useful imaging biomarkers, and thus it compromises clinical tasks such as differentiating malignant vs. benign lesions, e.g., lung nodules or colon polyps. This study aims to shift the edge-preserving regional noise smoothing paradigm to a texture-preserving framework for LdCT image reconstruction while retaining the advantage of the MRF's neighborhood system for edge preservation. Specifically, we adapted the MRF model to incorporate the image textures of lung, bone, fat, muscle, etc. from a previous full-dose CT scan as a priori knowledge for texture-preserving Bayesian reconstruction of current LdCT images. To show the feasibility of the proposed reconstruction framework, experiments using clinical patient scans (with lung nodule or colon polyp) were conducted. The experimental outcomes showed a noticeable gain from the a priori knowledge for LdCT image reconstruction in terms of the well-known Haralick texture measures. Thus, it is conjectured that texture-preserving LdCT reconstruction has advantages over the edge-preserving regional smoothing paradigm for texture-specific clinical applications.

  14. Image reconstruction enables high resolution imaging at large penetration depths in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dilipkumar, Shilpa; Montalescot, Sandra; Mondal, Partha Pratim

    2013-10-01

    Imaging thick specimens at a large penetration depth is a challenge in biophysics and materials science. Refractive index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise in the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread function modeling with correction for spherical aberration and, second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, whereas the side-lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 μm. The proposed technique can be integrated into sophisticated fluorescence imaging methods to render high resolution beyond the 150 μm mark.
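
    The maximum-likelihood reconstruction referred to here is commonly realized as a Richardson-Lucy iteration for Poisson noise; a minimal 2D sketch with a supplied PSF follows. The depth-dependent, aberration-corrected PSF modeling that is central to the paper is not reproduced, and the names are illustrative.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=30):
          """Richardson-Lucy maximum-likelihood deconvolution for Poisson noise."""
          eps = 1e-12
          estimate = np.full_like(observed, observed.mean(), dtype=np.float64)
          psf_flipped = psf[::-1, ::-1]              # adjoint of convolution with the PSF
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same") + eps
              ratio = observed / blurred             # measured over predicted image
              estimate *= fftconvolve(ratio, psf_flipped, mode="same")
          return estimate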

  15. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
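
    To make the parameter discussion concrete, here is a naive CPU reference of a 2D bilateral filter exposing the quantities the study tunes: the stencil radius and the spatial and range scaling parameters. It is a readability-first sketch, not the GPU-tiled implementation evaluated above, and the parameter names are illustrative.

      import numpy as np

      def bilateral_filter(img, radius=2, sigma_spatial=1.5, sigma_range=0.1):
          """Naive 2D bilateral filter (reference implementation, not GPU-optimized)."""
          ny, nx = img.shape
          padded = np.pad(img, radius, mode="edge")
          out = np.zeros_like(img, dtype=np.float64)
          # Precompute the spatial Gaussian over the (2r+1) x (2r+1) stencil
          ax = np.arange(-radius, radius + 1)
          dx, dy = np.meshgrid(ax, ax)
          spatial_w = np.exp(-(dx**2 + dy**2) / (2 * sigma_spatial**2))
          for i in range(ny):
              for j in range(nx):
                  patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                  range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_range**2))
                  w = spatial_w * range_w
                  out[i, j] = np.sum(w * patch) / np.sum(w)
          return out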

  16. Effects of scatter radiation on reconstructed images in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Liu, Bob; Li, Xinhua

    2009-02-01

    We evaluated the effects of scatter radiation on the reconstructed images in digital breast tomosynthesis. Projection images of a 6 cm anthropomorphic breast phantom were acquired using a Hologic prototype digital breast tomosynthesis system. Scatter intensities in the projection images were sampled with a beam stop method. The scatter intensity at any pixel was obtained by two-dimensional fitting. Primary-only projection images were generated by subtracting the scatter contributions from the original projection images. The three-dimensional breast volume was first reconstructed, using three different reconstruction algorithms, from the original projection images, which contained contributions from both primary rays and scattered radiation. The same breast volume was reconstructed again using the same algorithms but based on the primary-only projection images. The image artifacts, pixel value difference to noise ratio (PDNR), and detected image features in these two sets of reconstructed slices were compared to evaluate the effects of scatter radiation. It was found that the scatter radiation caused inaccurate reconstruction of the x-ray attenuation properties of the tissue. X-ray attenuation coefficients could be significantly underestimated in regions where the scatter intensity is high. This phenomenon is similar to the cupping artifacts found in computed tomography. Scatter correction is important if accurate x-ray attenuation of the tissues is needed. No significant improvement in terms of the number of detected image features was observed after scatter correction. A more sophisticated phantom dedicated to digital breast tomosynthesis may be needed for further evaluation.

  17. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.

  18. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the previously proposed two-step reconstruction algorithm based on the geometric information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.

  19. Region-of-interest image reconstruction with intensity weighting in circular cone-beam CT for image-guided radiation therapy.

    PubMed

    Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A; Pan, Xiaochuan

    2009-04-01

    Imaging plays a vital role in radiation therapy and with recent advances in technology considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also an increasing concern for patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, has been developed and its feasibility in clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still visualized with a lower contrast to noise ratio. Image artifacts due to transverse data truncation, which would have occurred in conventional reconstruction algorithms, are avoided and image noise levels of the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm. The proposed IWROI technique can play an important role in image-guided radiation therapy. PMID:19472624

  20. Region-of-interest image reconstruction with intensity weighting in circular cone-beam CT for image-guided radiation therapy

    SciTech Connect

    Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A.; Pan, Xiaochuan

    2009-04-15

    Imaging plays a vital role in radiation therapy and with recent advances in technology considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also an increasing concern for patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, has been developed and its feasibility in clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still visualized with a lower contrast to noise ratio. Image artifacts due to transverse data truncation, which would have occurred in conventional reconstruction algorithms, are avoided and image noise levels of the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm. The proposed IWROI technique can play an important role in image-guided radiation therapy.

  1. Improving the accuracy of volumetric segmentation using pre-processing boundary detection and image reconstruction.

    PubMed

    Archibald, Rick; Hu, Jiuxiang; Gelb, Anne; Farin, Gerald

    2004-04-01

    The concentration edge-detection and Gegenbauer image-reconstruction methods were previously shown to improve the quality of segmentation in magnetic resonance imaging. In this study, these methods are utilized as a pre-processing step for the Weibull E-SD field segmentation. It is demonstrated that the combination of the concentration edge-detection and Gegenbauer reconstruction methods improves the accuracy of segmentation for the simulated test data and real magnetic resonance images used in this study. PMID:15376580

  2. Cone beam filtered backprojection (CB-FBP) image reconstruction by tracking re-sampled projection data

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A.; Mcolash, Scott M.

    2006-08-01

    Tomographic images reconstructed from cone beam projection data with a slice thickness larger than the nominal detector row width (namely, thick images) are of practical importance in clinical CT imaging, such as in neurological and trauma applications as well as in treatment planning for image-guided radiation therapy. To balance image quality and computational efficiency, this paper proposes a cone beam filtered backprojection (CB-FBP) algorithm that reconstructs a thick image by tracking adaptively up-sampled cone beam projections of virtual reconstruction planes. Theoretically, a thick image is a weighted summation of a number of images with a slice thickness corresponding to the nominal detector row width (namely, thin images), and each thin image corresponds to a virtual reconstruction plane. To obtain the highest achievable computational efficiency, the weighted summation has to be carried out in the projection domain. However, it has been found experimentally that, to obtain a thick image with a reconstruction accuracy comparable to that of a thin image, the CB-FBP reconstruction algorithm has to be applied by tracking adaptively up-sampled cone beam projection data, which is the novelty of the proposed algorithm. The tracking process uses only the cone beam projection data corresponding to the involved virtual reconstruction planes, while the adaptive up-sampling is implemented by interpolation along the z-direction at an adequate up-sampling rate. By using a helical body phantom, the performance of the proposed cone beam reconstruction algorithm, particularly its capability of suppressing artifacts, is experimentally evaluated and verified.
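
    The starting point stated above, that a thick image is a weighted summation of thin images reconstructed on neighbouring virtual planes, can be written directly as below. The adaptive up-sampling and projection-domain tracking that make this efficient are the paper's contribution and are not sketched; the weight vector is assumed given (e.g. a slice-profile kernel), and the names are illustrative.

      import numpy as np

      def thick_image(thin_images, weights):
          """Weighted summation of thin images into one thick image.

          thin_images : array of shape (n_planes, ny, nx), one image per virtual plane
          weights     : non-negative weights per plane (e.g. a slice-profile kernel)
          """
          w = np.asarray(weights, dtype=np.float64)
          w /= w.sum()                               # normalize so the thick image keeps scale
          return np.tensordot(w, thin_images, axes=1)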

  3. In vivo microelectrode track reconstruction using magnetic resonance imaging

    PubMed Central

    Fung, S.H.; Burstein, D.; Born, R.T.

    2010-01-01

    To obtain more precise anatomical information about cortical sites of microelectrode recording and microstimulation experiments in alert animals, we have developed a non-invasive, magnetic resonance imaging (MRI) technique for reconstructing microelectrode tracks. We made microelectrode penetrations in the brains of anesthetized rats and marked sites along them by depositing metal, presumably iron, with anodic monophasic or biphasic current from the tip of a stainless steel microelectrode. The metal deposits were clearly visible in the living animal as approximately 200 μm wide hypointense punctate marks using gradient echo sequences in a 4.7T MRI scanner. We confirmed the MRI findings by comparing them directly to the postmortem histology in which the iron in the deposits could be rendered visible with a Prussian blue reaction. MRI-visible marks could be created using currents as low as 1 μA (anodic) for 5 s, and they remained stable in the brains of living rats for up to nine months. We were able to make marks using either direct current or biphasic current pulses. Biphasic pulses caused less tissue damage and were similar to those used by many laboratories for functional microstimulation studies in the brains of alert monkeys. PMID:9667395

  4. A knowledge-based modeling for plantar pressure image reconstruction.

    PubMed

    Ostadabbas, Sarah; Nourani, Mehrdad; Saeed, Adnan; Yousefi, Rasoul; Pompeo, Matthew

    2014-10-01

    It is known that prolonged pressure on the plantar area is one of the main factors in developing foot ulcers. With current technology, electronic pressure monitoring systems can be placed as an insole into regular shoes to continuously monitor the plantar area and provide evidence on ulcer formation process as well as insight for proper orthotic footwear design. The reliability of these systems heavily depends on the spatial resolution of their sensor platforms. However, due to the cost and energy constraints, practical wireless in-shoe pressure monitoring systems have a limited number of sensors, i.e., typically K < 10. In this paper, we present a knowledge-based regression model (SCPM) to reconstruct a spatially continuous plantar pressure image from a small number of pressure sensors. This model makes use of high-resolution pressure data collected clinically to train a per-subject regression function. SCPM is shown to outperform all other tested interpolation methods for K < 60 sensors, with less than one-third of the error for K = 10 sensors. SCPM bridges the gap between the technological capability and medical need and can play an important role in the adoption of sensing insole for a wide range of medical applications.

  5. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  6. An image reconstruction framework based on boundary voltages for ultrasound modulated electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2016-11-01

    A new image reconstruction framework based on boundary voltages is presented for ultrasound modulated electrical impedance tomography (UMEIT). Combining the electric and acoustic modalities, UMEIT reconstructs the conductivity distribution from more measurements that carry position information. The proposed image reconstruction framework begins by approximately constructing the sensitivity matrix of the imaging object with an inclusion. Then the conductivity is recovered from the boundary voltages of the imaging object. To solve the nonlinear inverse problem, an optimization method is adopted and the iterative method is tested. Compared with that for electrical resistance tomography (ERT), the newly constructed sensitivity matrix is more sensitive to the inclusion, even in the center of the imaging object, and it contains more effective information about the inclusions. Finally, image reconstruction is carried out by the conjugate gradient algorithm, and results show that reconstructed images with higher quality can be obtained for UMEIT with a faster convergence rate. Both theory and image reconstruction results validate the feasibility of the proposed framework for UMEIT and confirm that UMEIT is a potential imaging technique.

  7. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation–maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation–maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
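
    For context, the standard (sPatlak) graphical analysis referred to above fits a straight line whose slope is the influx rate Ki; a minimal post-reconstruction sketch on time-activity curves follows. The paper's contribution, performing this estimation (and its generalized form) directly inside 4D EM reconstruction, is not shown; the input function is assumed positive, the linear regime is assumed to start at a user-chosen time, and the names are illustrative.

      import numpy as np

      def patlak_ki(tissue_tac, plasma_tac, frame_times, t_start=20.0):
          """Standard Patlak graphical analysis on reconstructed time-activity curves.

          tissue_tac  : tissue activity C_T(t) per frame
          plasma_tac  : plasma input function C_p(t) per frame (assumed positive)
          frame_times : frame mid-times (minutes)
          t_start     : frames after this time are assumed to be in the linear regime
          """
          t = np.asarray(frame_times, dtype=float)
          cp = np.asarray(plasma_tac, dtype=float)
          ct = np.asarray(tissue_tac, dtype=float)
          # Patlak coordinates: x = cumulative integral of C_p divided by C_p, y = C_T / C_p
          x = np.array([np.trapz(cp[: i + 1], t[: i + 1]) for i in range(len(t))]) / cp
          y = ct / cp
          late = t >= t_start
          ki, intercept = np.polyfit(x[late], y[late], 1)   # slope = influx rate Ki
          return ki, intercept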

  8. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were

  9. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction.

    PubMed

    Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were

  10. Investigation of discrete imaging models and iterative image reconstruction in differential X-ray phase-contrast tomography.

    PubMed

    Xu, Qiaofeng; Sidky, Emil Y; Pan, Xiaochuan; Stampanoni, Marco; Modregger, Peter; Anastasio, Mark A

    2012-05-01

    Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.

  11. Tomographic reconstruction of damage images in hollow cylinders using Lamb waves.

    PubMed

    Hu, Bin; Hu, Ning; Li, Leilei; Li, Weiguo; Tang, Shan; Li, Yuan; Peng, Xianghe; Homma, Atsushi; Liu, Yaolu; Wu, Liangke; Ning, Huiming

    2014-09-01

    Lamb wave tomography (LWT) is a potential and efficient technique for non-destructive tomographic reconstruction of damage images in structural components or materials. A two-stage inverse algorithm proposed by the authors for quickly reconstructing the damage images was applied to hollow cylinders. An aluminum hollow cylinder with an internal surface pit and a Carbon Fiber Reinforced Plastic (CFRP) laminated hollow cylinder with an artificial internal surface damage were used to validate the proposed method. The results show that the present method is capable of successfully reconstructing the images of the above damages in a larger inspection area with much less experimental data compared to some conventional ultrasonic tomography techniques.

  12. Image Alignment for Tomography Reconstruction from Synchrotron X-Ray Microscopic Images

    PubMed Central

    Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai

    2014-01-01

    A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the “projected feature points” in the sequence of images. The matched projected feature points, when plotted against the rotation angle, should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx. PMID:24416264
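
    The second alignment step, fitting each feature locus to a sine wave of the rotation angle, can be done by ordinary linear least squares because A*sin(theta) + B*cos(theta) + c is linear in (A, B, c). The sketch below illustrates that step only, with hypothetical names; the feature matching step and the final shift correction are not shown.

      import numpy as np

      def fit_sine_locus(angles_rad, positions):
          """Fit x(theta) = A*sin(theta) + B*cos(theta) + c to one feature locus.

          angles_rad : projection angles (radians)
          positions  : measured lateral positions of the feature at each angle
          Returns (A, B, c); the fit residuals give the per-projection shift to correct.
          """
          theta = np.asarray(angles_rad, dtype=float)
          design = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
          coeffs, *_ = np.linalg.lstsq(design, np.asarray(positions, dtype=float), rcond=None)
          return coeffs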

  13. Feature-based face representations and image reconstruction from behavioral and neural data

    PubMed Central

    Nestor, Adrian; Plaut, David C.; Behrmann, Marlene

    2016-01-01

    The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach. PMID:26711997

  14. Three-dimensional reconstruction of light microscopy image sections: present and future.

    PubMed

    Wang, Yuzhen; Xu, Rui; Luo, Gaoxing; Wu, Jun

    2015-03-01

    Three-dimensional (3D) image reconstruction technologies can reveal previously hidden microstructures in human tissue. However, the lack of ideal, non-destructive cross-sectional imaging techniques is still a problem. Despite some drawbacks, histological sectioning remains one of the most powerful methods for accurate high-resolution representation of tissue structures. Computer technologies can produce 3D representations of interesting human tissue and organs that have been serial-sectioned, dyed or stained, imaged, and segmented for 3D visualization. 3D reconstruction also has great potential in the fields of tissue engineering and 3D printing. This article outlines the most common methods for 3D tissue section reconstruction. We describe the most important academic concepts in this field, and provide critical explanations and comparisons. We also note key steps in the reconstruction procedures, and highlight recent progress in the development of new reconstruction methods.

  15. Magnetoacoustic Tomography with Magnetic Induction: Bioimpedance reconstruction through vector source imaging

    PubMed Central

    Mariappan, Leo; He, Bin

    2013-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a technique proposed to reconstruct the conductivity distribution in biological tissue at ultrasound imaging resolution. A magnetic pulse is used to generate eddy currents in the object, which, in the presence of a static magnetic field, induce Lorentz-force-based acoustic waves in the medium. These time-resolved acoustic waves are collected with ultrasound transducers and, in the present work, are used to reconstruct the current source that gives rise to the MAT-MI acoustic signal using vector imaging point spread functions. The reconstructed source is then used to estimate the conductivity distribution of the object. Computer simulations and phantom experiments are performed to demonstrate conductivity reconstruction through vector source imaging in a circular scanning geometry with a limited-bandwidth, finite-size piston transducer. The results demonstrate that the MAT-MI approach is capable of conductivity reconstruction in a physical setting. PMID:23322761

  16. Three-dimensional reconstruction of light microscopy image sections: present and future.

    PubMed

    Wang, Yuzhen; Xu, Rui; Luo, Gaoxing; Wu, Jun

    2015-03-01

    Three-dimensional (3D) image reconstruction technologies can reveal previously hidden microstructures in human tissue. However, the lack of ideal, non-destructive cross-sectional imaging techniques is still a problem. Despite some drawbacks, histological sectioning remains one of the most powerful methods for accurate high-resolution representation of tissue structures. Computer technologies can produce 3D representations of interesting human tissue and organs that have been serial-sectioned, dyed or stained, imaged, and segmented for 3D visualization. 3D reconstruction also has great potential in the fields of tissue engineering and 3D printing. This article outlines the most common methods for 3D tissue section reconstruction. We describe the most important academic concepts in this field, and provide critical explanations and comparisons. We also note key steps in the reconstruction procedures, and highlight recent progress in the development of new reconstruction methods. PMID:24952302

  17. High resolution reconstruction of solar prominence images observed by the New Vacuum Solar Telescope

    NASA Astrophysics Data System (ADS)

    Xiang, Yong-yuan; Liu, Zhong; Jin, Zhen-yu

    2016-11-01

    A high resolution image showing fine structures is crucial for understanding the nature of solar prominences. In this paper, high resolution imaging of solar prominences with the New Vacuum Solar Telescope (NVST) using speckle masking is introduced. Each step of the data reduction, especially the image alignment, is discussed. Accurate alignment of all frames and the non-isoplanatic calibration of each image are the keys to a successful reconstruction. Reconstructed high resolution images from the NVST also indicate that, under normal seeing conditions, it is feasible to carry out high resolution observations of solar prominences with a ground-based solar telescope, even in the absence of adaptive optics.

  18. Neural portraits of perception: reconstructing face images from evoked brain activity.

    PubMed

    Cowen, Alan S; Chun, Marvin M; Kuhl, Brice A

    2014-07-01

    Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity. While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex. However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions. Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network. Thus, we investigated (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and (b) whether this could be achieved even when excluding activity within occipital cortex. Our approach involved four steps. (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces. (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces. (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores. (4) Finally, these scores were transformed into reconstructed images. Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex. This methodology not only represents a novel and promising approach for investigating face perception, but also suggests avenues for reconstructing 'offline' visual experiences (including dreams, memories, and imagination) that are chiefly represented in higher-level cortical areas.
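
    A minimal sketch of the four-step pipeline summarized above, using scikit-learn and synthetic arrays in place of real face images and fMRI patterns; the array sizes, the ridge regression used for the mapping step, and the number of components are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Placeholder data: 200 training faces of 32x32 pixels, 500 fMRI voxels.
train_faces = rng.random((200, 32 * 32))
train_fmri  = rng.random((200, 500))
test_fmri   = rng.random((10, 500))

# (1) Identify components that efficiently represent the training faces.
pca = PCA(n_components=50).fit(train_faces)
train_scores = pca.transform(train_faces)

# (2) Map fMRI activity to component scores with a regularized linear model.
decoder = Ridge(alpha=1.0).fit(train_fmri, train_scores)

# (3) Predict component scores from activity elicited by new test faces.
test_scores = decoder.predict(test_fmri)

# (4) Transform the predicted scores back into reconstructed face images.
reconstructions = pca.inverse_transform(test_scores).reshape(-1, 32, 32)
```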

  19. Fast computational integral imaging reconstruction by combined use of spatial filtering and rearrangement of elemental image pixels

    NASA Astrophysics Data System (ADS)

    Jang, Jae-Young; Cho, Myungjin

    2015-12-01

    In this paper, we propose a new fast computational integral imaging reconstruction (CIIR) scheme that combines spatial filtering with rearrangement of elemental image pixels without degrading the spatial filtering effect. In the proposed scheme, the elemental image array (EIA) recorded by the lenslet array is spatially filtered through convolution with a depth-dependent delta function array for a given depth. The spatially filtered EIA is then reconstructed into a 3D slice image using the elemental-image pixel rearrangement technique. Our scheme provides both fast calculation with the same properties as conventional CIIR and improved visual quality of the reconstructed 3D slice image. To verify the scheme, we perform preliminary experiments and compare it with other techniques.

  20. AIRY: a complete tool for the simulation and the reconstruction of astronomical images

    NASA Astrophysics Data System (ADS)

    La Camera, Andrea; Carbillet, Marcel; Olivieri, Chiara; Boccacci, Patrizia; Bertero, Mario

    2012-07-01

    The Software Package AIRY (acronym for Astronomical Image Restoration in interferometrY) is a software tool designed to perform simulation and/or deconvolution of images from Fizeau interferometers as well as from any kind of optical telescope. AIRY is written in IDL and is a Software Package of the CAOS Problem Solving Environment (PSE): it is made of a set of modules, each one representing a specific task. We present here the latest version of the software, arrived at its sixth release after 10 years of development. This version of AIRY summarizes the work done in recent years by our group, both on AIRY and on AIRY-LN, the version of the software dedicated to the image restoration of LINC-NIRVANA (LN), the Fizeau interferometer of the Large Binocular Telescope (LBT). AIRY v.6.0 includes a renewed deconvolution module implementing regularizations, accelerations, and stopping criteria of standard algorithms, such as OSEM and Richardson-Lucy. Several modules of AIRY have also been improved, in particular the one used for extraction and extrapolation.
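
    As a pointer to what the deconvolution module does, here is a minimal, unaccelerated Richardson-Lucy iteration (one of the standard algorithms named above) written in plain NumPy/SciPy; it is not the AIRY IDL module itself and omits the regularizations and stopping criteria mentioned in the record.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Plain Richardson-Lucy deconvolution (no regularization or acceleration)."""
    estimate = np.full_like(image, image.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: blur a point source with a Gaussian PSF and deconvolve it.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2)); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, n_iter=100)
```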

  1. Investigation of iterative image reconstruction in low-dose breast CT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan

    2014-06-01

    There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has emerged suggesting that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto convex sets (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
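
    The sketch below illustrates, under strong simplifications, the kind of alternation used by ASD-POCS: algebraic data-consistency updates (ART/POCS steps with a non-negativity constraint) interleaved with steepest-descent steps on the image total variation. A small dense random matrix stands in for the CT projector, and the fixed step sizes replace the adaptive step control of the actual algorithm.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    return -((gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0)))

def asd_pocs_like(A, b, shape, n_iter=50, n_tv=10, data_step=1.0, tv_step=0.2):
    """Alternate ART data-consistency sweeps with TV steepest-descent steps."""
    x = np.zeros(A.shape[1])
    row_norms = (A**2).sum(axis=1) + 1e-12
    for _ in range(n_iter):
        for i in range(A.shape[0]):                       # ART/POCS sweep
            x += data_step * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.clip(x, 0.0, None)                         # non-negativity constraint
        for _ in range(n_tv):                             # TV descent on the image
            x -= tv_step * tv_gradient(x.reshape(shape)).ravel()
    return x.reshape(shape)

# Toy usage with a random matrix standing in for the breast-CT projector.
rng = np.random.default_rng(0)
shape = (16, 16)
truth = np.zeros(shape); truth[4:12, 4:12] = 1.0
A = rng.random((120, truth.size))
b = A @ truth.ravel()
recon = asd_pocs_like(A, b, shape)
```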

  2. Joint reconstruction of absorption and refractive properties in propagation-based x-ray phase-contrast tomography via a non-linear image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Wang, Kun; Gursoy, Doga; Soriano, Carmen; De Carlo, Francesco; Anastasio, Mark A.

    2016-03-01

    Propagation-based X-ray phase-contrast tomography (XPCT) provides the opportunity to image weakly absorbing objects and is being explored actively for a variety of important pre-clinical applications. Quantitative XPCT image reconstruction methods typically involve a phase retrieval step followed by application of an image reconstruction algorithm. Most approaches to phase retrieval require either acquiring multiple images at different object-to-detector distances or introducing simplifying assumptions, such as a single-material assumption, to linearize the imaging model. In order to overcome these limitations, a non-linear image reconstruction method has been proposed previously that jointly estimates the absorption and refractive properties of an object from XPCT projection data acquired at a single propagation distance, without the need to linearize the imaging model. However, the numerical properties of the associated non-convex optimization problem remain largely unexplored. In this study, computer simulations are conducted to investigate the feasibility of the joint reconstruction problem in practice. We demonstrate that the joint reconstruction problem is ill-posed and sensitive to system inconsistencies. In particular, the method can generate accurate refractive index images only if the object is thin and the data contain no phase wrapping. However, we also observed that, for weakly absorbing objects, the refractive index images reconstructed by the joint reconstruction method are, in general, more accurate than those reconstructed using methods that simply ignore the object's absorption.

  3. Modifications in SIFT-based 3D reconstruction from image sequence

    NASA Astrophysics Data System (ADS)

    Wei, Zhenzhong; Ding, Boshen; Wang, Wei

    2014-11-01

    In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT) has been proposed and improved over the years as a feature extraction and matching algorithm, and has been widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT feature extraction and matching, we use it to find correspondences between images, and we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching stage, we modify the procedure for finding correct correspondences and obtain a satisfying matching result: rejecting questionable points before initial matching makes the final matching more reliable. Given that SIFT is invariant to image scale, rotation and changes in the imaging environment, we propose a way to delete the duplicate reconstructed points that arise in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation, and our approach does not require that all reprojected points be visible at all times. Small inaccuracies can produce large changes as the number of images increases, so the paper contrasts results obtained with and without the proposed modifications. Moreover, we present an approach to evaluate the reconstruction by comparing reconstructed angles and length ratios with their actual values, using a calibration target in the scene. The proposed evaluation method is easy to carry out and of great practical value; even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the internet and from our own shots.
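
    A small sketch of the SIFT matching stage using OpenCV (cv2.SIFT_create requires OpenCV 4.4 or later); the paper's own rejection of "questioned" points is only approximated here by the standard ratio test plus a RANSAC epipolar check, and the file paths are placeholders.

```python
import cv2
import numpy as np

def match_sift(path_a, path_b, ratio=0.75):
    """Detect SIFT keypoints in two images and keep matches passing the ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # Reject remaining outliers with the epipolar constraint (RANSAC).
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC)
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers]

# correspondences = match_sift("frame_000.png", "frame_001.png")  # placeholder paths
```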

  4. Three-dimensional mammography reconstruction using low-dose projection images

    NASA Astrophysics Data System (ADS)

    Wu, Tao

    A method is described for the reconstruction of three-dimensional distribution of attenuation coefficient of the breast using a limited number of low dose projection images. This method uses the cone beam x-ray geometry, a digital detector and a constrained iterative reconstruction algorithm. The method has been tested on a digital tomosynthesis mammography system. The total radiation dose to the patient is comparable to that used for one conventional mammogram. The reconstructed image has intrinsically high resolution (˜0.1mm) in two dimensions and lower resolution in the third dimension (˜1mm). Using this method, a breast that is projected into one two-dimensional image in conventional mammography is separated into layers parallel to the two high-resolution dimensions. The thickness of the layer is in the low-resolution dimension. The three-dimensional reconstruction increases the conspicuity of features that are often obscured by overlapping tissues in a single projection. Factors affecting the quality of reconstruction have been investigated by computer simulations. These factors include the scatter, the projection angular range, the shape of the breast and the x-ray energy. Non-uniform distribution of x-ray exposures among projection images and non-uniform-resolution image acquisition are explored to optimize the image quality within an x-ray dose limit. The method is validated with reconstruction images of mammography phantoms, mastectomy specimens, computer simulations and volunteer patients.

  5. Optimization of SPECT-CT Hybrid Imaging Using Iterative Image Reconstruction for Low-Dose CT: A Phantom Study

    PubMed Central

    Grosser, Oliver S.; Kupitz, Dennis; Ruf, Juri; Czuczwara, Damian; Steffen, Ingo G.; Furth, Christian; Thormann, Markus; Loewenthal, David; Ricke, Jens; Amthauer, Holger

    2015-01-01

    Background Hybrid imaging combines nuclear medicine imaging such as single photon emission computed tomography (SPECT) or positron emission tomography (PET) with computed tomography (CT). Through this hybrid design, scanned patients accumulate radiation exposure from both applications. Imaging modalities have been the subject of long-term optimization efforts, focusing on diagnostic applications. It was the aim of this study to investigate the influence of an iterative CT image reconstruction algorithm (ASIR) on the image quality of the low-dose CT images. Methodology/Principal Findings Examinations were performed with a SPECT-CT scanner with standardized CT and SPECT-phantom geometries and CT protocols with systematically reduced X-ray tube currents. Analyses included image quality with respect to photon flux. Results were compared to the standard FBP reconstructed images. The general impact of the CT-based attenuation maps used during SPECT reconstruction was examined for two SPECT phantoms. Using ASIR for image reconstructions, image noise was reduced compared to FBP reconstructions for the same X-ray tube current. The Hounsfield unit (HU) values reconstructed by ASIR were correlated to the FBP HU values (R² ≥ 0.88) and the contrast-to-noise ratio (CNR) was improved by ASIR. However, for a phantom with increased attenuation, the HU values shifted for low X-ray tube currents I ≤ 60 mA (p ≤ 0.04). In addition, the shift of the HU values was observed within the attenuation corrected SPECT images for very low X-ray tube currents (I ≤ 20 mA, p ≤ 0.001). Conclusion/Significance In general, the decrease in X-ray tube current up to 30 mA in combination with ASIR led to a reduction of CT-related radiation exposure without a significant decrease in image quality. PMID:26390216
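
    A minimal sketch of how the reported metrics (noise as the ROI standard deviation, and the contrast-to-noise ratio) are typically computed from a reconstructed phantom slice; the ROI coordinates and HU values below are invented for illustration only.

```python
import numpy as np

def roi_stats(image, roi):
    """Mean and standard deviation of HU values inside a rectangular ROI.

    roi = (row_start, row_stop, col_start, col_stop), placeholder coordinates.
    """
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1]
    return patch.mean(), patch.std()

def contrast_to_noise(image, roi_object, roi_background):
    """CNR = |mean_object - mean_background| / noise_background."""
    mu_o, _ = roi_stats(image, roi_object)
    mu_b, sd_b = roi_stats(image, roi_background)
    return abs(mu_o - mu_b) / sd_b

# Toy usage on a synthetic "phantom" slice.
img = np.random.normal(0, 10, (256, 256))       # background, HU noise sigma = 10
img[100:140, 100:140] += 50                     # low-contrast insert of +50 HU
cnr = contrast_to_noise(img, (105, 135, 105, 135), (10, 60, 10, 60))
```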

  6. [Super-resolution image reconstruction algorithm based on projection onto convex sets and wavelet fusion].

    PubMed

    Cao, Yuzhen; Liu, Xiaoting; Wang, Wei; Xing, Zhanfeng

    2009-10-01

    In this paper a new super-resolution image reconstruction algorithm is proposed. Building on an improved version of the classical projection onto convex sets (POCS) algorithm and combining POCS with wavelet fusion, a high resolution CT image was restored from a group of low resolution CT images. The experimental results showed that the proposed algorithm improves the fusion of complementary information, renders image details more prominent, and yields better overall image quality.
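
    A brief sketch of the wavelet-fusion step using PyWavelets; the fusion rule chosen here (average the approximation band, keep the larger-magnitude detail coefficients) is a common default and is assumed rather than taken from the paper, which combines this step with an improved POCS restoration.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2"):
    """Fuse two equally sized images: average the approximation band, keep the
    larger-magnitude detail coefficients, then invert the transform."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = ((cA_a + cA_b) / 2.0,
             (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))
    return pywt.idwt2(fused, wavelet)

# In the paper's pipeline, fusion of this kind would be applied to POCS-restored
# high-resolution estimates obtained from the group of low-resolution CT images.
```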

  7. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been primarily used to image non-conductive media only, since if the conductivity of the imaged object is high, the capacitance measuring circuit will be almost short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
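
    For the linear single-step Tikhonov method mentioned above, the time-difference reconstruction reduces to one regularized least-squares solve; the sketch below uses a random matrix as a stand-in for the linearized ECT sensitivity (Jacobian) matrix and an identity regularization matrix, both of which are simplifying assumptions.

```python
import numpy as np

def tikhonov_single_step(J, dC, alpha=1e-3):
    """One-step regularized solve for the permittivity change d_eps from the
    capacitance change dC, given the linearized sensitivity matrix J:
        d_eps = (J^T J + alpha I)^{-1} J^T dC
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dC)

# Toy usage: 66 independent capacitance measurements, 32x32-pixel image grid.
rng = np.random.default_rng(0)
J = rng.random((66, 32 * 32))
dC = J @ (rng.random(32 * 32) * 0.01)          # simulated time-difference data
d_eps = tikhonov_single_step(J, dC).reshape(32, 32)
```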

  8. Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.

    PubMed

    Li, Liang; Wang, Bigong; Wang, Ge

    2016-01-01

    In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images on the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities, and obtain a good initial MRI estimation. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than those reconstructed using the DL algorithm from MRI data alone.

  9. A new nonlinear reconstruction method based on total variation regularization of neutron penumbral imaging

    SciTech Connect

    Qian Weixin; Qi Shuangxi; Wang Wanli; Cheng Jinming; Liu Dongbing

    2011-09-15

    Neutron penumbral imaging is a significant diagnostic technique in laser-driven inertial confinement fusion experiments. It is very important to develop a new reconstruction method to improve the resolution of neutron penumbral imaging. A new nonlinear reconstruction method based on total variation (TV) regularization is proposed in this paper. In the new method, a TV norm is used as the regularization term to construct a smoothing functional, so that penumbral image reconstruction is transformed into a functional minimization problem. In addition, a fixed-point iteration scheme is introduced to solve the minimization problem. The numerical experimental results show that, compared to the linear reconstruction method based on the Wiener filter, the TV-regularized nonlinear reconstruction method improves the quality of the reconstructed image, with better noise smoothing and edge preservation. Meanwhile, it achieves a spatial resolution of 5 μm, which is higher than that of the Wiener method.
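
    The paper solves the TV-regularized functional with a fixed-point iteration; the sketch below instead minimizes an equivalent smoothed objective by plain gradient descent, which is simpler but conveys the same idea of balancing data fidelity against a TV penalty. The aperture PSF, step size and regularization weight are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def smoothed_tv_grad(x, eps=1e-3):
    """Gradient of the smoothed TV term sum(sqrt(|grad x|^2 + eps))."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    return -((gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0)))

def tv_deconvolve(penumbral, psf, lam=0.01, step=0.5, n_iter=200):
    """Minimize 0.5*||psf * x - penumbral||^2 + lam*TV(x) by gradient descent.

    penumbral : recorded penumbral image (float array)
    psf       : normalized aperture point spread function
    """
    x = penumbral.astype(float).copy()
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        residual = fftconvolve(x, psf, mode="same") - penumbral
        grad = fftconvolve(residual, psf_flip, mode="same") + lam * smoothed_tv_grad(x)
        x = np.clip(x - step * grad, 0.0, None)     # keep the source non-negative
    return x
```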

  10. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  11. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    PubMed

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm-radius field of view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which unavoidably emerges in the reconstructed image when peak normalization is employed in the GA fitness calculation because it amplifies statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and a way to identify the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  12. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    SciTech Connect

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei

    2012-11-15

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm-radius field of view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A 'residual watermark,' which unavoidably emerges in the reconstructed image when peak normalization is employed in the GA fitness calculation because it amplifies statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and a way to identify the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  13. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses, as its initial condition, a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm.

  14. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses, as its initial condition, a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272

  15. The actual measurements at the tide gauges do not support strongly accelerating twentieth-century sea-level rise reconstructions

    NASA Astrophysics Data System (ADS)

    Parker, A.

    2016-03-01

    Contrary to what is claimed by reconstructions of the Global Mean Sea Level (GMSL) indicating accelerating rates of sea level rise over the twentieth century, the actual measurements at the tide gauges show that sea levels have neither risen nor accelerated that much. The most recent estimate of twentieth-century GMSL rise, by Hay et al., is the latest attempt to give exact reconstructions without having enough information on the state of the world's oceans over a century in which, unfortunately, good measurements were scarce. The information on relative rates of rise at the tide gauges and on land subsidence of global positioning system (GPS) domes suggests that the relative rate of rise is about 0.25 mm/year (the naïve average over all the world's tide gauges of sufficient quality and record length in the Permanent Service for Mean Sea Level (PSMSL) database), without any detectable acceleration. Both the relative rates of rise at the tide gauges and the land vertical velocities of GPS domes in the Système d'Observation du Niveau des Eaux Littorales (SONEL) database are so strongly variable in space and time as to make the GMSL estimation meaningless.

  16. Coherent-weighted three-dimensional image reconstruction in linear-array-based photoacoustic tomography.

    PubMed

    Wang, Depeng; Wang, Yuehang; Zhou, Yang; Lovell, Jonathan F; Xia, Jun

    2016-05-01

    While the majority of photoacoustic imaging systems use custom-made transducer arrays, commercially available linear transducer arrays hold the benefits of affordable price, handheld convenience and wide clinical recognition. They are not widely used in photoacoustic imaging primarily because of their poor elevation resolution. Here, without modifying the imaging geometry and system, we propose addressing this limitation purely through image reconstruction. Our approach is based on the integration of two advanced image reconstruction techniques: focal-line-based three-dimensional image reconstruction and coherent weighting. We first numerically validated our approach through simulation and then experimentally tested it in phantom and in vivo. Both simulation and experimental results proved that the method can significantly improve the elevation resolution (up to 4 times in our experiment) and enhance object contrast. PMID:27231634

  17. Coherent-weighted three-dimensional image reconstruction in linear-array-based photoacoustic tomography

    PubMed Central

    Wang, Depeng; Wang, Yuehang; Zhou, Yang; Lovell, Jonathan F.; Xia, Jun

    2016-01-01

    While the majority of photoacoustic imaging systems use custom-made transducer arrays, commercially available linear transducer arrays hold the benefits of affordable price, handheld convenience and wide clinical recognition. They are not widely used in photoacoustic imaging primarily because of their poor elevation resolution. Here, without modifying the imaging geometry and system, we propose addressing this limitation purely through image reconstruction. Our approach is based on the integration of two advanced image reconstruction techniques: focal-line-based three-dimensional image reconstruction and coherent weighting. We first numerically validated our approach through simulation and then experimentally tested it in phantom and in vivo. Both simulation and experimental results proved that the method can significantly improve the elevation resolution (up to 4 times in our experiment) and enhance object contrast. PMID:27231634

  18. Visual image reconstruction from human brain activity: A modular decoding approach

    NASA Astrophysics Data System (ADS)

    Miyawaki, Yoichi; Uchida, Hajime; Yamashita, Okito; Sato, Masa-aki; Morito, Yusuke; Tanabe, Hiroki C.; Sadato, Norihiro; Kamitani, Yukiyasu

    2009-12-01

    Brain activity represents our perceptual experience. But the potential for reading out perceptual contents from human brain activity has not been fully explored. In this study, we demonstrate constraint-free reconstruction of visual images perceived by a subject, from the brain activity pattern. We reconstructed visual images by combining local image bases at multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 × 10-patch images (2^100 possible states) were accurately reconstructed without any image prior by measuring brain activity only for several hundred random images. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multi-voxel patterns.
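
    A sketch of the final combination step: once the contrast of each local image basis has been decoded from fMRI activity, the reconstruction is a weighted sum of those bases. The patch scales, decoded contrasts and combination weights below are synthetic placeholders; the study used several patch shapes and determined its combination coefficients from training data.

```python
import numpy as np

def build_patch_bases(grid=10, scales=(1, 2)):
    """Local square patch bases of several scales on a grid x grid image."""
    bases = []
    for s in scales:
        for r in range(grid - s + 1):
            for c in range(grid - s + 1):
                b = np.zeros((grid, grid))
                b[r:r + s, c:c + s] = 1.0
                bases.append(b)
    return np.stack(bases)                      # shape: (n_bases, grid, grid)

def combine(decoded_contrasts, weights, bases):
    """Weighted sum of local bases, each scaled by its decoded contrast."""
    return np.tensordot(decoded_contrasts * weights, bases, axes=1)

bases = build_patch_bases()
rng = np.random.default_rng(0)
contrasts = rng.random(bases.shape[0])          # stand-in for the decoder outputs
weights = np.full(bases.shape[0], 1.0 / bases.shape[0])
reconstruction = combine(contrasts, weights, bases)   # a 10 x 10 reconstructed image
```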

  19. Temporal resolved x-ray penumbral imaging technique using heuristic image reconstruction procedure and wide dynamic range x-ray streak camera

    SciTech Connect

    Fujioka, Shinsuke; Shiraga, Hiroyuki; Azechi, Hiroshi; Nishimura, Hiroaki; Izawa, Yasukazu; Nozaki, Shinya; Chen, Yen-wei

    2004-10-01

    Temporally resolved x-ray penumbral imaging has been developed using an image reconstruction procedure based on the heuristic method and a wide dynamic range x-ray streak camera (XSC). The penumbral image reconstruction procedure is inherently intolerant of noise: a reconstructed image is strongly distorted by artifacts caused by noise in the penumbral image. Statistical fluctuation in the number of detected photons is the dominant source of noise in an x-ray image; however, the acceptable brightness of an image is limited by the dynamic range of the XSC. The wide dynamic range XSC was used to obtain penumbral images bright enough to be reconstructed. Additionally, the heuristic method was introduced into the penumbral image reconstruction procedure. Distortion of the reconstructed images is sufficiently suppressed by these improvements. Density profiles of laser-driven brominated plastic and tin plasmas were measured with this technique.

  20. Reconstruction of craniofacial image using rational cubic Ball interpolant and soft computing technique

    NASA Astrophysics Data System (ADS)

    Majeed, Abdul; Piah, Abd Rahni Mt

    2015-10-01

    Spline has been used extensively in engineering design and modelling for representation, analysis and manufacturing purposes. This paper presents an application of spline methods in bio-medical modelling. We reconstruct craniofacial fractured skull bone images using rational cubic Ball interpolant with two free parameters. The free parameters are optimized with the help of genetic algorithm. Our emphasis is placed on the accuracy and smoothness of the reconstructed images.

  1. CT x-ray tube voltage optimisation and image reconstruction evaluation using visual grading analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaoming; Kim, Ted M.; Davidson, Rob; Lee, Seongju; Shin, Cheongil; Yang, Sook

    2014-03-01

    The purposes of this work were to find an optimal x-ray voltage for CT imaging and to determine the diagnostic effectiveness of image reconstruction techniques by using visual grading analysis (VGA). Images of the PH-5 CT abdomen phantom (Kagaku Co, Kyoto) were acquired by the Toshiba Aquilion ONE 320-slice CT system with various exposures (from 10 to 580 mAs) under different tube peak voltages (80, 100 and 120 kVp). The images were reconstructed by employing the FBP and the AIDR 3D iterative reconstructions with Mild, Standard and Strong FBP blending. Image quality was assessed by measuring noise, contrast to noise ratio and human observers' VGA scores. The CT dose index CTDIv was obtained from the values displayed on the images. The best fit to the curves of image quality (VGA) versus dose (CTDIv), estimated in SPSS, was a logistic function. A threshold dose Dt is defined as the CTDIv at which image quality is just acceptable for diagnosis, and a figure of merit (FOM) is defined as the slope of the standardised logistic function. The Dt and FOM were found to be 5.4, 8.1 and 9.1 mGy and 0.47, 0.51 and 0.38 under the tube voltages of 80, 100 and 120 kVp, respectively, from images reconstructed by the FBP technique. The Dt and FOM values were lower from the images reconstructed by the AIDR 3D in comparison with the FBP technique. The optimal x-ray peak voltage for the imaging of the PH-5 abdomen phantom by the Aquilion ONE CT system was found to be 100 kVp. The images reconstructed by the FBP are more diagnostically effective than those by the AIDR 3D, but at a higher threshold dose Dt to the patients.
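
    A sketch of fitting the VGA-versus-dose curve with a logistic function and reading off the threshold dose and figure of merit. The three-parameter logistic form, the invented data points and the "just acceptable" score are illustrative assumptions (the paper does not spell out its exact parametrisation), and the FOM is read here as the midpoint slope of the standardised (0 to 1) logistic curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, vmax, k, d50):
    """Three-parameter logistic: VGA(d) = vmax / (1 + exp(-k*(d - d50)))."""
    return vmax / (1.0 + np.exp(-k * (d - d50)))

# Illustrative data: CTDIv (mGy) vs mean VGA score at one tube voltage.
dose = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 16.0, 24.0])
vga  = np.array([1.1, 1.4, 2.0, 2.6, 3.1, 3.6, 3.8, 3.9])

(vmax, k, d50), _ = curve_fit(logistic, dose, vga, p0=[4.0, 0.5, 6.0])

vga_acceptable = 3.0                    # "just acceptable" score (assumed)
# Threshold dose Dt: dose at which the fitted curve crosses the acceptable score.
Dt = d50 - np.log(vmax / vga_acceptable - 1.0) / k
# Figure of merit: slope of the standardised logistic at its midpoint, i.e. k/4.
FOM = k / 4.0
```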

  2. A study on the evaluation of usefulness depending on the image reconstruction method in bone SPECT scan

    NASA Astrophysics Data System (ADS)

    Kim, W. H.; Kim, H. S.; Chung, W. K.; Cho, J. H.; Joo, K. J.; Dong, K. R.

    2013-05-01

    The recent advances in image-processing techniques have led to the development of many methods to reduce the scan time without degrading the image quality. In particular, tomography has benefited from improved image reconstruction methods and the concomitant improvement in image quality. In this study, a PRECEDENCE 16 system was used to reconstruct images using the commonly used filtered back projection method as well as the Astonish method and the three-dimensional ordered-subsets expectation maximization method, which are based on iterative techniques. In qualitative and quantitative analyses of the reconstructed images, comparisons were made between images with different acquisition times and between images with the same acquisition time, with the aim of determining the optimal method for reconstructing high-quality images. A blind test for qualitative analysis confirmed almost no difference in image quality depending on the image acquisition time, and quantitative analysis likewise showed no significant difference with acquisition time. On the other hand, the comparison of reconstruction methods at the same acquisition time demonstrated a significant difference: the images reconstructed by the Astonish method, which uses an iterative technique, are considered excellent because they have high resolution and provide clinical diagnostic information. This study confirmed that iterative reconstruction methods, which until recently were not in general use because of the lengthy time needed for image reconstruction and the lack of storage space, can be used to improve image quality and reduce the scan time.

  3. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction.

    PubMed

    Min, Junhong; Holden, Seamus J; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-11-01

    Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging since PSFs are far more similar along the axial dimension than the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum.

  4. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  5. Bilateral bad pixel and Stokes image reconstruction for microgrid polarimetric imagers

    NASA Astrophysics Data System (ADS)

    LeMaster, Daniel A.; Ratliff, Bradley M.

    2015-09-01

    Uncorrected or poorly corrected bad pixels reduce the effectiveness of polarimetric clutter suppression. In conventional microgrid processing, bad pixel correction is accomplished as a separate step from Stokes image reconstruction. Here, these two steps are combined to speed processing and provide better estimates of the entire image, including missing samples. A variation on the bilateral filter enables both edge preservation in the Stokes imagery and bad pixel suppression. Understanding the newly presented filter requires two key insights. First, the adaptive nature of the bilateral filter is extended to correct for bad pixels by simply incorporating a bad pixel mask. Second, the bilateral filter for Stokes estimation is the sum of the normalized bilateral filters for estimating each analyzer channel individually. This paper describes the new approach and compares it to our legacy method using simulated imagery.
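
    A single-channel sketch of the central idea: a bilateral filter whose weights are multiplied by a bad-pixel validity mask, so that defective samples contribute nothing and are replaced by an edge-preserving estimate from their valid neighbours. The actual filter in the paper operates on microgrid polarimetric data and sums normalized per-analyzer filters; the parameters below are illustrative.

```python
import numpy as np

def masked_bilateral(image, good_mask, radius=3, sigma_s=1.5, sigma_r=0.1):
    """Bilateral filter that ignores bad pixels via a binary validity mask."""
    img = np.pad(image.astype(float), radius, mode="reflect")
    msk = np.pad(good_mask.astype(float), radius, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            patch = img[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            valid = msk[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            centre = img[r + radius, c + radius]
            if msk[r + radius, c + radius] >= 0.5:
                range_w = np.exp(-((patch - centre)**2) / (2 * sigma_r**2))
                w = spatial * range_w * valid    # bad neighbours get zero weight
            else:
                w = spatial * valid              # centre itself is bad: spatial weights only
            out[r, c] = (w * patch).sum() / (w.sum() + 1e-12)
    return out
```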

  6. Fully 4D motion-compensated reconstruction of cardiac SPECT images

    NASA Astrophysics Data System (ADS)

    Gravier, Erwan; Yang, Yongyi; King, Michael A.; Jin, Mingwu

    2006-09-01

    In this paper, we investigate the benefits of a spatiotemporal approach for reconstruction of image sequences. In the proposed approach, we introduce a temporal prior in the form of motion compensation to account for the statistical correlations among the frames in a sequence, and reconstruct all the frames collectively as a single function of space and time. The reconstruction algorithm is derived based on the maximum a posteriori estimate, for which the one-step late expectation-maximization algorithm is used. We demonstrated the method in our experiments using simulated single photon emission computed tomography (SPECT) cardiac perfusion images. The four-dimensional (4D) gated mathematical cardiac-torso phantom was used for simulation of gated SPECT perfusion imaging with Tc-99m-sestamibi. In addition to bias-variance analysis and time activity curves, we also used a channelized Hotelling observer to evaluate the detectability of perfusion defects in the reconstructed images. Our experimental results demonstrated that the incorporation of temporal regularization into image reconstruction could significantly improve the accuracy of cardiac images without causing any significant cross-frame blurring that may arise from the cardiac motion. This could lead to not only improved detection of perfusion defects, but also improved reconstruction of the heart wall which is important for functional assessment of the myocardium. This work was supported in part by the National Institutes of Health under grant no HL65425.

  7. A statistical examination of SENSE image reconstruction via an isomorphism representation.

    PubMed

    Bruce, Iain P; Karaman, M Muge; Rowe, Daniel B

    2011-11-01

    In magnetic resonance imaging, the parallel acquisition of subsampled spatial frequencies from an array of multiple receiver coils has become a common means of reducing data acquisition time. SENSitivity Encoding (SENSE) is a popular parallel image reconstruction model that uses a complex-valued least squares estimation process to unfold aliased images. In this article, the linear mathematical framework derived in Rowe et al. [J Neurosci Meth 159 (2007) 361-369] is built upon to perform image reconstruction with subsampled data acquired from multiple receiver coils, where the SENSE model is represented as a real-valued isomorphism. A statistical analysis is performed of the various image reconstruction operators utilized in the SENSE model, with an emphasis placed on the effects of each operator on voxel means, variances and correlations. It is shown that, despite the attractiveness of models that unfold the aliased images from subsampled data, there is an artificial correlation induced between reconstructed voxels from the different folds of aliased images. As such, the mathematical framework outlined in this manuscript could be further developed to provide a means of accounting for this unavoidable correlation induced by image reconstruction operators. PMID:21908127
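
    For reference, a sketch of the basic SENSE unfolding that the article analyses: at every aliased pixel, the R overlapping voxel values are recovered by least squares from the coil measurements and coil sensitivities. Noise-covariance weighting is omitted for brevity, and the array shapes (with ny divisible by R) are assumptions.

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """Unfold SENSE-aliased images.

    aliased : (n_coils, ny//R, nx) complex coil images from subsampled k-space
    sens    : (n_coils, ny, nx) complex coil sensitivity maps
    R       : acceleration (reduction) factor along y
    Returns the (ny, nx) unaliased image (least-squares solve per aliased pixel).
    """
    n_coils, ny, nx = sens.shape
    ny_alias = ny // R
    out = np.zeros((ny, nx), dtype=complex)
    for y in range(ny_alias):
        folded_rows = [y + k * ny_alias for k in range(R)]   # rows folded onto y
        for x in range(nx):
            S = sens[:, folded_rows, x]            # (n_coils, R) encoding matrix
            a = aliased[:, y, x]                   # (n_coils,) measured values
            v, *_ = np.linalg.lstsq(S, a, rcond=None)
            out[folded_rows, x] = v
    return out
```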

  8. Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views

    NASA Astrophysics Data System (ADS)

    Xie, Huiqiao; Niu, Tianye; Yang, Yi; Ren, Yi; Tang, Xiangyang

    2016-03-01

    The optimization-based image reconstruction (OBIR) has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) through acquiring sparse projection views. However, OBIR usually generates images with a quite different noise texture compared to the clinically widely used reconstruction method (i.e., filtered back-projection, FBP). This may make radiologists/physicians less confident while they are making clinical decisions. Recognizing the fact that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam forming devices (e.g. bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In the texture enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized projection views of noise. Then, the TxE-OBIR image is generated by adding the texture image into the OBIR reconstruction. As qualitatively confirmed by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold standard FBP images.

  9. Local and Non-local Regularization Techniques in Emission (PET/SPECT) Tomographic Image Reconstruction Methods.

    PubMed

    Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem

    2016-06-01

    Emission tomographic image reconstruction is an ill-posed problem due to limited and noisy data and various image-degrading effects affecting the data, which lead to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered better to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise. However, these methods produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced, to overcome these problems, by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and emission computed tomography in particular, for improved quality of the resultant images.

  10. Three-dimensional reconstruction of live embryos using robotic macroscope images.

    PubMed

    Brodland, G W; Veldhuis, J H

    1998-09-01

    To determine the three-dimensional (3-D) shape of a live embryo is a technically challenging task. We show that reconstructions of live embryos can be done by collecting images from different viewing angles using a robotic macroscope, establishing point correspondences between these views by block matching, and using a new 3-D reconstruction algorithm that accommodates camera positioning errors. The algorithm assumes that the images are orthographic projections of the object and that the camera scaling factors are known. Point positions and camera errors are found simultaneously. Reconstructions of test objects and embryos show that meaningful reconstructions are possible only when camera positioning and alignment errors are accommodated since these errors can be substantial. Reconstructions of early-stage axolotl embryos were made from sets of 33 images. In a typical reconstruction, 781 points, each visible in at least three different views, were used to form 1511 triangles to represent the embryo surface. The resulting reconstruction had a mean radius of error of 0.27 pixels (1.1 microns). Mathematical properties of the reconstruction algorithm are identified and discussed. PMID:9735567

  11. Reconstruction of burst activity from calcium imaging of neuronal population via Lq minimization and interval screening.

    PubMed

    Quan, Tingwei; Lv, Xiaohua; Liu, Xiuli; Zeng, Shaoqun

    2016-06-01

    Calcium imaging is becoming an increasingly popular technology to indirectly measure activity patterns in local neuronal networks. Based on the dependence of calcium fluorescence on neuronal spiking, two-photon calcium imaging affords single-cell resolution of neuronal population activity. However, it is still difficult to reconstruct neuronal activity from complex calcium fluorescence traces, particularly for traces contaminated by noise. Here, we describe a robust and efficient neuronal-activity reconstruction method that utilizes Lq minimization and interval screening (IS), which we refer to as LqIS. The simulation results show that LqIS performs satisfactorily in terms of both accuracy and speed of reconstruction. Reconstruction of simulation and experimental data also shows that LqIS has advantages in terms of the recall rate, precision rate, and timing error. Finally, LqIS is demonstrated to effectively reconstruct neuronal burst activity from calcium fluorescence traces recorded from large-size neuronal population. PMID:27375930

  12. Reconstruction of burst activity from calcium imaging of neuronal population via Lq minimization and interval screening

    PubMed Central

    Quan, Tingwei; Lv, Xiaohua; Liu, Xiuli; Zeng, Shaoqun

    2016-01-01

    Calcium imaging is becoming an increasingly popular technology to indirectly measure activity patterns in local neuronal networks. Based on the dependence of calcium fluorescence on neuronal spiking, two-photon calcium imaging affords single-cell resolution of neuronal population activity. However, it is still difficult to reconstruct neuronal activity from complex calcium fluorescence traces, particularly for traces contaminated by noise. Here, we describe a robust and efficient neuronal-activity reconstruction method that utilizes Lq minimization and interval screening (IS), which we refer to as LqIS. The simulation results show that LqIS performs satisfactorily in terms of both accuracy and speed of reconstruction. Reconstruction of simulation and experimental data also shows that LqIS has advantages in terms of the recall rate, precision rate, and timing error. Finally, LqIS is demonstrated to effectively reconstruct neuronal burst activity from calcium fluorescence traces recorded from large-size neuronal population. PMID:27375930

  13. ANL CT Image Reconstruction Algorithm for Utilizing Digital X-ray Detector Array

    2004-08-05

    Reconstructs X-ray computed tomographic images from large data sets known as 16-bit binary sinograms. The algorithm uses the concept of generating an image from carefully obtained multiple 1-D or 2-D X-ray projections. The individual projections are filtered using a digital Fast Fourier Transform; the literature refers to this as filtered back projection. The software is capable of processing a large file for reconstructing single images or volumetric (3-D) images from large area, high resolution digital X-ray detectors.
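
    The ANL source itself is not reproduced here, but the filtered back projection idea the summary describes can be sketched in a few lines of NumPy for the parallel-beam case: ramp-filter each 1-D projection in the Fourier domain, then smear it back across the image grid. The sinogram layout (rows = angles, columns = detector bins) is an assumption of this sketch.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp (Ram-Lak) filter to each projection (rows = angles)."""
    n = sinogram.shape[1]
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct an n x n image from an (n_angles, n) parallel-beam sinogram."""
    filtered = ramp_filter(sinogram)
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    centre = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n] - centre
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of every image pixel for this viewing angle.
        t = xx * np.cos(ang) + yy * np.sin(ang) + centre
        recon += np.interp(t.ravel(), np.arange(n), proj, left=0, right=0).reshape(n, n)
    return recon * np.pi / (2 * len(angles_deg))
```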

  14. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Gu, Xuejun

    2014-03-01

    Image reconstruction and motion model estimation in four dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). The proposed SMEIR algorithm consists of two alternating steps: 1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and 2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and back-projection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.
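
    A sketch of the SART data-consistency update at the core of the reconstruction step, written for a generic non-negative linear projector A (a dense toy matrix standing in for the deformed-and-projected CBCT system); the motion-model (DVF) estimation and the total-variation minimization that SMEIR alternates with are omitted.

```python
import numpy as np

def sart_update(x, A, b, relax=0.5):
    """One SART pass: simultaneous update over all rays, with the row and column
    sums of A as the usual SART normalization factors."""
    row_sums = A.sum(axis=1) + 1e-12     # per-ray intersection lengths
    col_sums = A.sum(axis=0) + 1e-12     # per-voxel accumulated ray weights
    residual = (b - A @ x) / row_sums
    return x + relax * (A.T @ residual) / col_sums

# In SMEIR this update would use projections from all respiratory phases, each
# mapped to the primary phase through the current deformation vector fields,
# alternated with total-variation minimization and DVF re-estimation.
```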

  15. A robust state-space kinetics-guided framework for dynamic PET image reconstruction.

    PubMed

    Tong, S; Alessio, A M; Kinahan, P E; Liu, H; Shi, P

    2011-04-21

    Dynamic PET image reconstruction is a challenging problem due to the low SNR and the large quantity of spatio-temporal data. We propose a robust state-space image reconstruction (SSIR) framework for activity reconstruction in dynamic PET. Unlike statistically-based frame-by-frame methods, tracer kinetic modeling is incorporated to provide physiological guidance for the reconstruction, harnessing the temporal information in the dynamic data. Dynamic reconstruction is formulated in a state-space representation, where a compartmental model describes the kinetic processes in a continuous-time system equation and the imaging data are expressed in a discrete measurement equation. Tracer activity concentrations are treated as the state variables and are estimated from the dynamic data. Sampled-data H(∞) filtering is adopted for robust estimation. H(∞) filtering makes no assumptions about the system and measurement statistics, and guarantees bounded estimation error for finite-energy disturbances, leading to robust performance for dynamic data with low SNR and/or errors. This alternative reconstruction approach could help deal with unpredictable situations in imaging (e.g. data corruption from failed detector blocks) or inaccurate noise models. Experiments on synthetic phantom and patient PET data are performed to demonstrate the feasibility of the SSIR framework and to explore its potential advantages over frame-by-frame statistical reconstruction approaches.
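
    The record describes a state-space formulation in which a compartmental model supplies the system equation and a filter estimates the activity states. The sketch below discretizes a one-tissue compartment model and tracks activity with a standard Kalman filter as a stand-in for the sampled-data H-infinity filter used in the paper (which, unlike a Kalman filter, requires no noise statistics); rate constants, noise levels, and function names are hypothetical.

```python
import numpy as np

def one_compartment_state_space(K1, k2, dt):
    """Discretize dC/dt = K1*Cp(t) - k2*C(t) into x[k+1] = F x[k] + G u[k]."""
    F = np.array([[np.exp(-k2 * dt)]])
    G = np.array([[K1 * (1 - np.exp(-k2 * dt)) / k2]])
    return F, G

def kalman_track(frames, Cp, K1=0.1, k2=0.05, dt=10.0, q=1e-4, r=1e-2):
    """Track tracer concentration from noisy frame measurements.
    NOTE: a Kalman filter is substituted here purely to illustrate the
    state-space predict/update structure; the paper uses H-infinity filtering."""
    F, G = one_compartment_state_space(K1, k2, dt)
    x, P = np.zeros((1, 1)), np.eye(1)
    H = np.eye(1)                        # measurement: activity observed directly
    estimates = []
    for z, u in zip(frames, Cp):
        # Predict with the kinetic (compartmental) model.
        x = F @ x + G * u
        P = F @ P @ F.T + q
        # Update with the measured frame value.
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(1) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return np.array(estimates)

# Made-up frame measurements and plasma input, just to exercise the filter.
frames = np.array([0.2, 0.5, 0.8, 1.0, 1.1])
Cp = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
print(kalman_track(frames, Cp))
```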

  16. Applying the uniform resampling (URS) algorithm to a Lissajous trajectory: fast image reconstruction with optimal gridding.

    PubMed

    Moriguchi, H; Wendt, M; Duerk, J L

    2000-11-01

    Various kinds of nonrectilinear Cartesian k-space trajectories have been studied, such as spiral, circular, and rosette trajectories. Although nonrectilinear Cartesian sampling techniques generally have the advantage of fast data acquisition, the gridding process prior to 2D-FFT image reconstruction usually requires a number of additional calculations, increasing the computation time. Further, the reconstructed image often exhibits artifacts resulting from both the k-space sampling pattern and the gridding procedure. To date, only a few studies have demonstrated that the special geometric sampling patterns of certain trajectories facilitate fast image reconstruction. In other words, the inherent link among the trajectory, the sampling scheme, and the associated complexity of the regridding/reconstruction process has been investigated only to a limited extent. In this study, it is demonstrated that a Lissajous trajectory has the special geometric characteristics necessary for rapid reconstruction of nonrectilinear Cartesian k-space data acquired at constant sampling time intervals. Because a uniform resampling (URS) algorithm can be applied, a high-quality image is reconstructed in a shorter time than with other gridding algorithms. PMID:11064412
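
    As a baseline for the cost the URS algorithm avoids, the sketch below performs generic gridding of non-Cartesian samples onto a Cartesian grid followed by an inverse 2-D FFT. The Lissajous trajectory parameters, the hit-count density compensation, and the function names are illustrative assumptions; the exact URS resampling is not reproduced.

```python
import numpy as np

def grid_and_reconstruct(kx, ky, samples, grid_size=128):
    """Crude gridding reconstruction: accumulate non-Cartesian k-space
    samples onto the nearest Cartesian cell, compensate by hit count,
    then inverse-FFT. URS exploits the Lissajous sampling geometry for
    an exact, faster resampling; this is only the generic baseline."""
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    hits = np.zeros((grid_size, grid_size))
    # Map k-space coordinates (assumed in [-0.5, 0.5)) to grid indices.
    ix = np.clip(((kx + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    iy = np.clip(((ky + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    np.add.at(grid, (iy, ix), samples)
    np.add.at(hits, (iy, ix), 1.0)
    grid[hits > 0] /= hits[hits > 0]                # simple density compensation
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
    return np.abs(image)

# Hypothetical Lissajous trajectory covering k-space.
t = np.linspace(0, 2 * np.pi, 20000)
kx, ky = 0.48 * np.sin(9 * t), 0.48 * np.sin(8 * t + np.pi / 4)
samples = np.ones_like(t, dtype=complex)            # stand-in for acquired data
img = grid_and_reconstruct(kx, ky, samples)
```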

  17. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    SciTech Connect

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a nov