Accelerated Compressed Sensing Based CT Image Reconstruction
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
An accelerated and convergent iterative algorithm in image reconstruction
NASA Astrophysics Data System (ADS)
Yan, Jianhua; Yu, Jun
2007-05-01
Positron emission tomography (PET) is becoming increasingly important in the fields of medicine and biology. The maximum-likelihood expectation-maximization (ML-EM) algorithm is becoming more important than the filtered back-projection (FBP) algorithm because it can incorporate various physical models into the image reconstruction scheme. However, ML-EM converges slowly. In this paper, we propose a new algorithm named AC-ML-EM (accelerated and convergent maximum-likelihood expectation maximization), obtained by introducing a gradually decreasing correction factor into ML-EM. AC-ML-EM has a higher speed of convergence. Through experiments with computer-simulated phantom data and real phantom data, AC-ML-EM is shown to be quantitatively faster and better than the conventional ML-EM algorithm.
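The acceleration idea can be sketched as a standard ML-EM loop whose multiplicative correction is raised to an exponent that decays toward 1; the specific decay schedule p_k = 1 + c/(k+1) used below is an illustrative assumption, not the paper's exact correction factor:

```python
import numpy as np

def ac_ml_em(A, y, n_iter=50, c=0.5):
    """ML-EM with a gradually decreasing acceleration exponent.

    p_k = 1 + c/(k+1) amplifies early multiplicative corrections and
    decays toward 1, at which point each step is a plain ML-EM update.
    """
    m, n = A.shape
    x = np.ones(n)                       # positive initial estimate
    sens = A.T @ np.ones(m)              # sensitivity image A^T 1
    for k in range(n_iter):
        p = 1.0 + c / (k + 1)            # decreasing correction exponent
        ratio = y / np.maximum(A @ x, 1e-12)
        corr = (A.T @ ratio) / np.maximum(sens, 1e-12)
        x = x * corr ** p                # p -> 1 recovers standard ML-EM
    return x
```

With c = 0 the loop reduces to plain ML-EM, so the exponent only changes the transient, not the fixed points.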
Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.
2013-01-01
Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778
NASA Astrophysics Data System (ADS)
Velikina, J. V.; Samsonov, A. A.
2016-02-01
Advanced MRI techniques often require sampling in additional (non-spatial) dimensions such as time or parametric dimensions, which significantly elongates scan time. Our purpose was to develop novel iterative image reconstruction methods to reduce the amount of acquired data in such applications using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely, time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27×) may be achieved using our proposed iterative reconstruction techniques.
Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging
Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.
2014-01-01
Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083
Accelerated 4D Quantitative Single Point EPR Imaging Using Model-based Reconstruction
Jang, Hyungseok; Matsumoto, Shingo; Devasahayam, Nallathamby; Subramanian, Sankaran; Zhuo, Jiachen; Krishna, Murali C.; McMillan, Alan B
2014-01-01
Purpose EPRI has surfaced as a promising non-invasive imaging modality that is capable of imaging tissue oxygenation. Due to the extremely short spin-spin relaxation time, EPRI benefits from single point imaging but inherently suffers from limited spatial and temporal resolution, preventing localization of small hypoxic tissues and differentiation of hypoxia dynamics, making accelerated imaging a crucial issue. Method In this study, methods for accelerated single point imaging were developed by combining a bilateral k-space extrapolation technique with model-based reconstruction that benefits from dense sampling in the parameter domain (measurement of the T2* decay of an FID). In bilateral k-space extrapolation, more k-space samples are obtained in a sparsely sampled region by bilaterally extrapolating data from temporally neighboring k-spaces. To improve the accuracy of T2* estimation, a principal component analysis (PCA)-based method was implemented. Result In a computer simulation and a phantom experiment, the proposed methods showed their capability for reliable T2* estimation with high acceleration (8-fold, 15-fold, and 30-fold accelerations for 61×61×61, 95×95×95, and 127×127×127 matrices, respectively). Conclusion By applying bilateral k-space extrapolation and model-based reconstruction, improved scan times with higher spatial resolution can be achieved in the current SP-EPRI modality. PMID:24803382
ACCELERATION OF CORONAL MASS EJECTIONS FROM THREE-DIMENSIONAL RECONSTRUCTION OF STEREO IMAGES
Joshi, Anand D.; Srivastava, Nandita
2011-09-20
We employ a three-dimensional (3D) reconstruction technique for the first time to study the kinematics of six coronal mass ejections (CMEs), using images obtained from the COR1 and COR2 coronagraphs on board the twin STEREO spacecraft, and also the eruptive prominences (EPs) associated with three of them using images from the Extreme UltraViolet Imager. A feature in the EPs and leading edges (LEs) of all the CMEs was identified and tracked in images from the two spacecraft, and a stereoscopic reconstruction technique was used to determine the 3D coordinates of these features. True velocity and acceleration were determined from the temporal evolution of the true height of the CME features. Our study of the kinematics of the CMEs in 3D reveals that the CME LE undergoes maximum acceleration typically below 2 R_sun. The acceleration profiles of CMEs associated with flares and prominences exhibit different behaviors. While the CMEs not associated with prominences show a bimodal acceleration profile, those associated with prominences do not. Two of the three associated prominences in the study show a high and increasing value of acceleration up to a distance of almost 4 R_sun, but acceleration of the corresponding CME LE does not show the same behavior, suggesting that the two may not always be driven by the same mechanism. One of the CMEs, although associated with a C-class flare, showed unusually high acceleration of over 1500 m s^-2. Our results therefore suggest that only the flare-associated CMEs undergo residual acceleration, which indicates that the flux injection theoretical model holds well for the flare-associated CMEs, but a different mechanism should be considered for EP-associated CMEs.
Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng
2012-04-01
As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and the physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which is, on the whole, completed without consideration of the underlying physiology. In this paper, we propose a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both the statistical model and the physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints into a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments with Zubal thorax phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application. PMID:22472843
Defrise, Michel; Gullberg, Grant T.
2006-04-05
We give an overview of the role of Physics in Medicine and Biology in the development of tomographic reconstruction algorithms. We focus on imaging modalities involving ionizing radiation (CT, PET and SPECT) and cover a wide spectrum of reconstruction problems, starting with classical 2D tomography in the 1970s up to 4D and 5D problems involving dynamic imaging of moving organs.
Accelerating Advanced MRI Reconstructions on GPUs
Stone, S.S.; Haldar, J.P.; Tsao, S.C.; Hwu, W.-m.W.; Sutton, B.P.; Liang, Z.-P.
2008-01-01
Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA’s Quadro FX 5600. The reconstruction of a 3D image with 128^3 voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%. PMID:21796230
Acceleration of iterative tomographic image reconstruction by reference-based back projection
NASA Astrophysics Data System (ADS)
Cheng, Chang-Chieh; Li, Ping-Hui; Ching, Yu-Tai
2016-03-01
The purpose of this paper is to design and implement an efficient iterative reconstruction algorithm for computed tomography. We accelerate the reconstruction speed of the algebraic reconstruction technique (ART), an iterative reconstruction method, by using the result of filtered backprojection (FBP), a widely used analytical reconstruction algorithm, as the initial guess for the first iteration and as the reference for each back-projection stage. Both improvements reduce the error between the forward projection of each iteration and the measurements. We use three methods of quantitative analysis, root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural content (SC), to show that our method can reduce the number of iterations by more than half while producing results of better quality than the original ART.
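The first of the two improvements, warm-starting ART with an analytical reconstruction, can be sketched as a Kaczmarz-style sweep over the rays that accepts an initial image; the paper's reference-based back-projection stage is not reproduced here, and the relaxation parameter is an illustrative choice:

```python
import numpy as np

def art(A, y, x0=None, n_sweeps=10, relax=0.5):
    """Kaczmarz-style ART over the rays (rows of A).

    Passing an FBP image as x0 warm-starts the iteration, so fewer
    sweeps are needed before the forward projection A @ x matches the
    measurements y.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    row_norms = np.einsum('ij,ij->i', A, A)   # ||a_i||^2 for each ray
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            r = y[i] - A[i] @ x               # residual of one ray
            x = x + relax * (r / row_norms[i]) * A[i]
    return x
```

For a consistent system the sweeps converge to a solution regardless of x0; a good initial guess mainly shortens the transient, which is the effect the paper exploits.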
GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.
Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H
2012-09-01
Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
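The factored projectors amount to applying the three matrices right-to-left (and their transposes in reverse for back-projection), so the much denser product matrix is never formed. A minimal dense-NumPy sketch; in practice each factor would be stored in a sparse format such as scipy.sparse CSR:

```python
import numpy as np

def factored_forward(B_sino, G, B_img, x):
    """Forward projection y = B_sino @ G @ B_img @ x, applied
    right-to-left so the full product matrix is never materialized."""
    return B_sino @ (G @ (B_img @ x))

def factored_backward(B_sino, G, B_img, y):
    """Matched back-projector: the transpose of the factored chain."""
    return B_img.T @ (G.T @ (B_sino.T @ y))
```

Keeping the back-projector as the exact transpose of the forward chain preserves the adjoint relationship that statistical reconstruction algorithms rely on.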
An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction
Mundy, Daniel W.; Herman, Michael G.
2011-01-15
Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
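The threshold idea can be sketched for one image slice by marking voxels whose approximate distance to the cone surface falls below a threshold; using angular deviation from the cone's half-angle times range as the distance proxy is an illustrative assumption, since the paper builds its solution matrix from the exact cone-plane intersection:

```python
import numpy as np

def cone_slice_backprojection(apex, axis, half_angle, grid, thresh):
    """Binary back-projection of one Compton cone onto voxel centers.

    grid has shape (..., 3); a voxel is marked when its (approximate)
    distance to the cone surface, estimated as |angle - half_angle| * r,
    is below thresh.
    """
    u = np.asarray(axis, float)
    u = u / np.linalg.norm(u)                    # unit cone axis
    d = grid - np.asarray(apex, float)           # vectors apex -> voxel
    r = np.linalg.norm(d, axis=-1)
    cosang = np.clip((d @ u) / np.maximum(r, 1e-12), -1.0, 1.0)
    dist = np.abs(np.arccos(cosang) - half_angle) * r
    return dist < thresh
```

Summing such binary masks over all detector events and slices yields the three-dimensional back-projection image the abstract describes.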
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-10-01
Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirement. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processor unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometrical projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far less nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.
GPU-accelerated 3D Bayesian image reconstruction from Compton scattered data
NASA Astrophysics Data System (ADS)
Nguyen, Van-Giang; Lee, Soo-Jin; Lee, Mi No
2011-05-01
This paper describes the development of fast Bayesian reconstruction methods for Compton cameras using commodity graphics hardware. For fast iterative reconstruction, not only is it important to increase the convergence rate, but also it is equally important to accelerate the computation of time-consuming and repeated operations, such as projection and backprojection. Since the size of the system matrix for a typical Compton camera is intractably large, it is impractical to use a conventional caching scheme that stores the pre-calculated elements of a system matrix and uses them for the calculation of projection and backprojection. In this paper we propose GPU (graphics processing unit)-accelerated methods that can rapidly perform conical projection and backprojection on the fly. Since the conventional ray-based backprojection method is inefficient for parallel computing on GPUs, we develop voxel-based conical backprojection methods using two different approximation schemes. In the first scheme, we approximate the intersecting chord length of the ray passing through a voxel by the perpendicular distance from the center to the ray. In the second scheme, each voxel is regarded as a dimensionless point rather than a cube so that the backprojection can be performed without the need for calculating intersecting chord lengths or their approximations. Our simulation studies show that the GPU-based method dramatically improves the computational speed with only minor loss of accuracy in reconstruction. With the development of high-resolution detectors, the difference in the reconstruction accuracy between the GPU-based method and the CPU-based method will eventually be negligible.
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-01
The FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of R^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
Convex accelerated maximum entropy reconstruction
NASA Astrophysics Data System (ADS)
Worley, Bradley
2016-04-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Li, Changqing
2016-01-01
Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of the fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm combined with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors, and the speed gain from a larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve the optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.
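The first-order momentum mechanism can be sketched as a Nesterov/FISTA-style extrapolation wrapped around a generic update step; the paper's nonuniform multiplicative update and OS partitioning are not reproduced, so `grad_step` below is a placeholder for whichever per-iteration update is being accelerated:

```python
import numpy as np

def momentum_solve(grad_step, x0, n_iter=100):
    """Nesterov/FISTA-style momentum: each new iterate is extrapolated
    from the previous two, which improves the worst-case convergence
    rate of a plain first-order update from O(1/k) to O(1/k^2)."""
    x_prev = x0.copy()
    z = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = grad_step(z)                              # plain update at z
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation
        x_prev, t = x, t_next
    return x_prev
```

With the extrapolation weight fixed at zero, the loop reduces to repeated application of `grad_step`, which makes the speedup from the momentum term easy to isolate in experiments.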
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853
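The TV-minimization pass interleaved with the accelerated OS updates can be sketched as a single explicit descent step on a smoothed isotropic TV; this is a generic regularization sketch, not the paper's exact solver or its power-factor scheme:

```python
import numpy as np

def tv_smooth_step(img, weight=0.1, eps=1e-8):
    """One explicit gradient-descent step on smoothed isotropic TV.

    Moves each pixel along the divergence of the normalized image
    gradient, which is the negative (sub)gradient direction of TV.
    """
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    norm = np.sqrt(gx**2 + gy**2 + eps)             # eps avoids 0-division
    px, py = gx / norm, gy / norm                   # normalized gradient
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return img + weight * div
```

In an OS + TV scheme of this kind, a few such steps are typically applied after each data-fidelity update, trading a small amount of resolution for noise and streak suppression.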
Overview of Image Reconstruction
Marr, R. B.
1980-04-01
Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on R^{n} is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references.
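The first algorithm class in this taxonomy, summation or simple back-projection, can be sketched for 2D parallel-beam data in a few lines; the nearest-neighbor interpolation and the detector parameterization t = x·cosθ + y·sinθ are illustrative choices:

```python
import numpy as np

def simple_backprojection(sinogram, angles, n):
    """Unfiltered summation back-projection: smear each parallel-beam
    projection back across the image along its view angle and average.

    sinogram has shape (n_angles, n); the detector coordinate of a
    pixel is t = (x - c) cos(theta) + (y - c) sin(theta) about the
    image center c.
    """
    img = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    for proj, th in zip(sinogram, angles):
        t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th) + c
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        img += proj[idx]                 # nearest-neighbor smearing
    return img / len(angles)
```

Without the ramp filter of the convolution class, the result is the true image blurred by 1/r, which is why plain summation is mainly of historical and pedagogical interest.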
Accelerating reconstruction of reference digital tomosynthesis using graphics hardware
Yan Hui; Ren Lei; Godfrey, Devon J.; Yin Fangfang
2007-10-15
The successful implementation of digital tomosynthesis (DTS) for on-board image guided radiation therapy (IGRT) requires fast DTS image reconstruction. Both target and reference DTS image sets are required to support an image registration application for IGRT. Target images are usually DTS image sets reconstructed from on-board projections, which can be accomplished quickly using the conventional filtered backprojection algorithm. Reference images are DTS image sets reconstructed from digitally reconstructed radiographs (DRRs) previously generated from conventional planning CT data. Generating a set of DRRs from planning CT is relatively slow using the conventional ray-casting algorithm. In order to facilitate DTS reconstruction within a clinically acceptable period of time, we implemented a high performance DRR reconstruction algorithm on a graphics processing unit of commercial PC graphics hardware. The performance of this new algorithm was evaluated and compared with that which is achieved using the conventional software-based ray-casting algorithm. DTS images were reconstructed from DRRs previously generated by both hardware and software algorithms. On average, the DRR reconstruction efficiency using the hardware method is improved by a factor of 67 over the software method. The image quality of the DRRs was comparable to those generated using the software-based ray-casting algorithm. Accelerated DRR reconstruction significantly reduces the overall time required to produce a set of reference DTS images from planning CT and makes this technique clinically practical for target localization for radiation therapy.
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are later used for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction. PMID:26208310
LOFAR sparse image reconstruction
NASA Astrophysics Data System (ADS)
Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.
2015-03-01
Context: The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Accounting for the instrumental direction-dependent effects (DDEs) of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) with simulated and real LOFAR data. Results: We show that sparse reconstruction i) performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than that of the CLEAN images. Conclusions: Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved realistic structures of extended sources (of simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.
Gordon, Jeremy W.; Niles, David J.; Fain, Sean B.; Johnson, Kevin M.
2014-01-01
Purpose To develop a novel imaging technique to reduce the number of excitations and required scan time for hyperpolarized 13C imaging. Methods A least-squares based optimization and reconstruction is developed to simultaneously solve for both spatial and spectral encoding. By jointly solving both domains, spectral imaging can potentially be performed with a spatially oversampled single echo spiral acquisition. Digital simulations, phantom experiments, and initial in vivo hyperpolarized [1-13C]pyruvate experiments were performed to assess the performance of the algorithm as compared to a multi-echo approach. Results Simulations and phantom data indicate that accurate single echo imaging is possible when coupled with oversampling factors greater than six (corresponding to a worst case of pyruvate to metabolite ratio < 9%), even in situations of substantial T2* decay and B0 heterogeneity. With lower oversampling rates, two echoes are required for similar accuracy. These results were confirmed with in vivo data experiments, showing accurate single echo spectral imaging with an oversampling factor of 7 and two echo imaging with an oversampling factor of 4. Conclusion The proposed k-t approach increases data acquisition efficiency by reducing the number of echoes required to generate spectroscopic images, thereby allowing accelerated acquisition speed, preserved polarization, and/or improved temporal or spatial resolution. Magn Reson Med PMID:23716402
Bayesian image reconstruction in astronomy
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1990-09-01
This paper presents the development and testing of a new iterative reconstruction algorithm for astronomy. A maximum a posteriori method of image reconstruction in the Bayesian statistical framework is proposed for the Poisson-noise case. The method uses the entropy with an adjustable 'sharpness parameter' to define the prior probability and the likelihood with 'data increment' parameters to define the conditional probability. The method makes it possible to obtain reconstructions with neither the problem of the 'grey' reconstructions associated with the pure Bayesian reconstructions nor the problem of image deterioration, typical of the maximum-likelihood method. The present iterative algorithm is fast and stable, maintains positivity, and converges to feasible images.
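For contrast with the Bayesian method above, the underlying Poisson-likelihood iteration it modifies, plain ML-EM, can be sketched as follows. The 3x2 system matrix and count data are illustrative, and the entropy prior with its 'sharpness parameter' is omitted.

```python
import numpy as np

# Minimal ML-EM sketch for Poisson data: multiplicative update
# x <- x * A^T(y / Ax) / A^T 1, which preserves positivity.

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])              # hypothetical forward projector
y = np.array([10.0, 20.0, 15.0])        # measured counts (consistent here)
x = np.ones(2)                          # positive initial estimate

for _ in range(200):
    ratio = y / (A @ x)                 # measured over predicted counts
    x = x * (A.T @ ratio) / A.sum(axis=0)
```

With consistent data the iterates converge to the maximum-likelihood solution; the MAP method above changes the fixed point by adding the entropy prior term.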
Cheng, Lishui; Hobbs, Robert F.; Sgouros, George; Frey, Eric C.
2014-01-01
Purpose: Three-dimensional (3D) dosimetry has the potential to provide better prediction of response of normal tissues and tumors and is based on 3D estimates of the activity distribution in the patient obtained from emission tomography. Dose–volume histograms (DVHs) are an important summary measure of 3D dosimetry and a widely used tool for treatment planning in radiation therapy. Accurate estimates of the radioactivity distribution in space and time are desirable for accurate 3D dosimetry. The purpose of this work was to develop and demonstrate the potential of penalized SPECT image reconstruction methods to improve DVHs estimates obtained from 3D dosimetry methods. Methods: The authors developed penalized image reconstruction methods, using maximum a posteriori (MAP) formalism, which intrinsically incorporate regularization in order to control noise and, unlike linear filters, are designed to retain sharp edges. Two priors were studied: one is a 3D hyperbolic prior, termed single-time MAP (STMAP), and the second is a 4D hyperbolic prior, termed cross-time MAP (CTMAP), using both the spatial and temporal information to control noise. The CTMAP method assumed perfect registration between the estimated activity distributions and projection datasets from the different time points. Accelerated and convergent algorithms were derived and implemented. A modified NURBS-based cardiac-torso phantom with a multicompartment kidney model and organ activities and parameters derived from clinical studies were used in a Monte Carlo simulation study to evaluate the methods. Cumulative dose-rate volume histograms (CDRVHs) and cumulative DVHs (CDVHs) obtained from the phantom and from SPECT images reconstructed with both the penalized algorithms and OS-EM were calculated and compared both qualitatively and quantitatively. The STMAP method was applied to patient data and CDRVHs obtained with STMAP and OS-EM were compared qualitatively. Results: The results showed that the
Image Contrast in Holographic Reconstructions
ERIC Educational Resources Information Center
Russell, B. R.
1969-01-01
The fundamental concepts of holography are explained using elementary wave ideas. Discusses wavefront reconstruction and contrast in holographic images. The consequence of recording only the intensity at a given surface and using an oblique reference wave is shown to be an incomplete reconstruction, resulting in an image of low contrast. (LC)
Li, Shu; Chan, Cheong; Stockmann, Jason P.; Tagare, Hemant; Adluru, Ganesh; Tam, Leo K.; Galiana, Gigi; Constable, R. Todd; Kozerke, Sebastian; Peters, Dana C.
2014-01-01
Purpose To investigate algebraic reconstruction technique (ART) for parallel imaging reconstruction of radial data, applied to accelerated cardiac cine. Methods A GPU-accelerated ART reconstruction was implemented and applied to simulations, point spread functions (PSFs), and twelve subjects imaged with radial cardiac cine acquisitions. Cine images were reconstructed with radial ART at multiple undersampling levels (Nr = 192, Np = 96 to 16). Images were qualitatively and quantitatively analyzed for sharpness and artifacts, and compared to filtered back-projection (FBP) and conjugate gradient SENSE (CG SENSE). Results Radial ART provided reduced artifacts and mainly preserved spatial resolution, for both simulations and in vivo data. Artifacts were qualitatively and quantitatively lower with ART than with FBP using 48, 32, and 24 Np, although FBP provided quantitatively sharper images at undersampling levels of 48-24 Np (all p<0.05). Use of undersampled radial data for generating auto-calibrated coil-sensitivity profiles resulted in slightly reduced quality. ART was comparable to CG SENSE. GPU-acceleration increased ART reconstruction speed 15-fold, with little impact on the images. Conclusion GPU-accelerated ART is an alternative approach to image reconstruction for parallel radial MR imaging, providing reduced artifacts while mainly maintaining sharpness compared to FBP, as shown by its first application in cardiac studies. PMID:24753213
Reconstruction techniques for optoacoustic imaging
NASA Astrophysics Data System (ADS)
Frenz, Martin; Koestli, Kornel P.; Paltauf, Guenther; Schmidt-Kloiber, Heinz; Weber, Heinz P.
2001-06-01
Optoacoustics is a method of gaining information from inside a tissue. This is done by irradiating the tissue with a short light pulse, which generates a pressure distribution inside the tissue that mirrors the absorber distribution. By applying a back-projection method to the pressure distribution measured on the tissue surface, a tomographic image of the absorber distribution can be calculated. This study presents a novel computational algorithm based on the Fourier transform, which, at least in principle, yields an exact 3D reconstruction of the distribution of absorbed energy density inside turbid media. The reconstruction is based on 2D pressure distributions captured outside the tissue at different times. The FFT reconstruction algorithm is first tested in the back projection of simulated pressure transients of small model absorbers, and finally applied to reconstruct the distribution of artificial blood vessels in three dimensions.
Image processing and reconstruction
Chartrand, Rick
2012-06-15
This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
Fast reconstruction of digital tomosynthesis using on-board images
Yan Hui; Godfrey, Devon J.; Yin Fangfang
2008-05-15
Digital tomosynthesis (DTS) is a method to reconstruct pseudo three-dimensional (3D) volume images from two-dimensional x-ray projections acquired over limited scan angles. Compared with cone-beam computed tomography, which is frequently used for 3D image guided radiation therapy, DTS requires less imaging time and a lower dose. Successful implementation of DTS for fast target localization requires the reconstruction process to be accomplished within tight clinical time constraints (usually within 2 min). To achieve this goal, substantial improvement of reconstruction efficiency is necessary. In this study, a reconstruction process based upon the algorithm proposed by Feldkamp, Davis, and Kress was implemented on graphics hardware for the purpose of acceleration. The performance of the novel reconstruction implementation was tested for phantom and real patient cases. The efficiency of DTS reconstruction was improved by a factor of 13 on average, without compromising image quality. With acceleration of the reconstruction algorithm, the whole DTS generation process including data preprocessing, reconstruction, and DICOM conversion is accomplished within 1.5 min, which meets the clinical requirement for on-line target localization.
Imaging using accelerated heavy ions
Chu, W.T.
1982-05-01
Several methods for imaging using accelerated heavy ion beams are being investigated at Lawrence Berkeley Laboratory. Using the HILAC (Heavy-Ion Linear Accelerator) as an injector, the Bevalac can accelerate fully stripped atomic nuclei from carbon (Z = 6) to krypton (Z = 36), and partly stripped ions up to uranium (Z = 92). Radiographic studies to date have been conducted with helium (from the 184-inch cyclotron), carbon, oxygen, and neon beams. Useful ranges in tissue of 40 cm or more are available. To investigate the potential of heavy-ion projection radiography and computed tomography (CT), several methods and instruments have been studied.
Filtering in SPECT Image Reconstruction
Lyra, Maria; Ploussi, Agapi
2011-01-01
Single photon emission computed tomography (SPECT) imaging is widely implemented in nuclear medicine, as its clinical role in the diagnosis and management of several diseases is often very helpful (e.g., myocardial perfusion imaging). The quality of SPECT images is degraded by several factors, such as noise due to the limited number of counts, attenuation, and scatter of photons. Image filtering is necessary to compensate for these effects and thereby improve image quality. The goal of filtering in tomographic images is to suppress statistical noise while preserving spatial resolution and contrast. The aim of this work is to describe the most widely used filters in SPECT applications and how they affect image quality. The choice of filter type, cut-off frequency, and order is a major problem in clinical routine. In many clinical cases, information on specific parameters is not provided, and findings cannot be extrapolated to other similar SPECT imaging applications. A literature review to determine the most widely used filters in cardiac, brain, bone, liver, kidney, and thyroid applications is also presented. As the overview shows, no filter is perfect, and the proper filter is usually selected empirically. Standardization of image-processing results may limit the filter types for each SPECT examination to a few filters and some of their parameters. Standardization also helps reduce image-processing time, as the filters and their parameters must be standardized before being put to clinical use. Commercial reconstruction software selections lead to comparable results across departments. The manufacturers normally supply default filters/parameters, but these may not be appropriate in all clinical situations. After proper standardization, it is possible to use several suitable filters or one optimal filter. PMID:21760768
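The two filter parameters discussed above, cut-off frequency and order, define the Butterworth low-pass filter that is among the most common choices in SPECT. A minimal frequency-domain sketch, with an illustrative 1D profile, noise level, and parameter values:

```python
import numpy as np

# Butterworth low-pass filter: |H(f)| = 1 / sqrt(1 + (f / fc)^(2n)),
# applied to a noisy profile in the frequency domain.

def butterworth(freqs, cutoff, order):
    return 1.0 / np.sqrt(1.0 + (np.abs(freqs) / cutoff) ** (2 * order))

rng = np.random.default_rng(1)
n = 128
signal = np.sin(2 * np.pi * 4 * np.arange(n) / n)    # smooth structure
noisy = signal + 0.5 * rng.standard_normal(n)        # count-limited noise

freqs = np.fft.fftfreq(n)                            # cycles per pixel
H = butterworth(freqs, cutoff=0.1, order=5)
filtered = np.real(np.fft.ifft(np.fft.fft(noisy) * H))
```

Lowering the cut-off suppresses more noise at the cost of resolution, while a higher order sharpens the transition band; this is exactly the trade-off the review describes as being settled empirically in clinical routine.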
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.
Compressed sensing sparse reconstruction for coherent field imaging
NASA Astrophysics Data System (ADS)
Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen
2016-04-01
Return signal processing and reconstruction plays a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as little as 10% of traditional samples, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).
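The compressed-sensing idea described above can be sketched as follows: a signal that is sparse in a Fourier-like (here, DCT) domain is recovered from a small random subset of samples. The 20% sampling rate, dictionary, amplitudes, and the greedy orthogonal matching pursuit solver are illustrative choices, not the paper's method.

```python
import numpy as np

# Sparse recovery from randomly projected samples via orthogonal
# matching pursuit (OMP): greedily select atoms, refit by least squares.

rng = np.random.default_rng(2)
n, m, k = 200, 40, 3
t = np.arange(n)
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n)   # DCT-II atoms
x_true = np.zeros(n)
x_true[[7, 31, 60]] = [3.0, -2.0, 1.5]                    # sparse spectrum
signal = D @ x_true

rows = rng.choice(n, size=m, replace=False)               # random sampling
y = signal[rows]
Phi = D[rows]                                             # sensing matrix

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))     # most correlated atom
    sub = Phi[:, support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)        # refit on support
    r = y - sub @ coef                                    # update residual

x_hat = np.zeros(n)
x_hat[support] = coef
```

Because the residual is kept orthogonal to all previously chosen atoms, each greedy step picks a new component, and the sparse spectrum is recovered from well under half the samples.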
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
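The backprojection mode described above can be sketched in a toy form: for each coincidence event, a ray is traced between the centers of the two detector pixels (the LOR) and counts are allocated by nearest-pixel interpolation. The geometry and event list are hypothetical, and the geometric and attenuation corrections are omitted.

```python
import numpy as np

# Event-based LOR backprojection with nearest-pixel allocation.

n = 32
image = np.zeros((n, n))

def backproject_lor(img, p0, p1, counts=1.0, steps=200):
    # Sample points uniformly along the LOR; allocate to the nearest pixel.
    for s in np.linspace(0.0, 1.0, steps):
        i = int(round(p0[0] + s * (p1[0] - p0[0])))
        j = int(round(p0[1] + s * (p1[1] - p0[1])))
        if 0 <= i < img.shape[0] and 0 <= j < img.shape[1]:
            img[i, j] += counts / steps

# Two LORs that cross at (16, 16): their backprojections reinforce there.
backproject_lor(image, (16.0, 0.0), (16.0, 31.0))
backproject_lor(image, (0.0, 16.0), (31.0, 16.0))

peak = np.unravel_index(np.argmax(image), image.shape)
```

The overlap method mentioned in the patent would instead weight each pixel by the length of ray it intersects, which reduces the discretization noise of nearest-pixel allocation.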
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
Structured image reconstruction for three-dimensional ghost imaging lidar.
Yu, Hong; Li, Enrong; Gong, Wenlin; Han, Shensheng
2015-06-01
A structured image reconstruction method has been proposed to obtain high quality images in three-dimensional ghost imaging lidar. By considering the spatial structure relationship between recovered images of scene slices at different longitudinal distances, an orthogonality constraint has been incorporated to reconstruct the three-dimensional scenes in remote sensing. Numerical simulations have been performed to demonstrate that scene slices with various sparse ratios can be recovered more accurately by applying the orthogonality constraint, and the enhancement is significant especially for ghost imaging with fewer measurements. A simulated three-dimensional city scene has been successfully reconstructed by using structured image reconstruction in three-dimensional ghost imaging lidar. PMID:26072814
Accelerated median root prior reconstruction for pinhole single-photon emission tomography (SPET).
Sohlberg, Antti; Ruotsalainen, Ulla; Watabe, Hiroshi; Iida, Hidehiro; Kuikka, Jyrki T
2003-07-01
Pinhole collimation can be used to improve spatial resolution in SPET. However, the resolution improvement is achieved at the cost of reduced sensitivity, which leads to projection images with poor statistics. Images reconstructed from these projections using the maximum likelihood expectation maximization (ML-EM) algorithms, which have been used to reduce the artefacts generated by the filtered backprojection (FBP) based reconstruction, suffer from noise/bias trade-off: noise contaminates the images at high iteration numbers, whereas early abortion of the algorithm produces images that are excessively smooth and biased towards the initial estimate of the algorithm. To limit the noise accumulation we propose the use of the pinhole median root prior (PH-MRP) reconstruction algorithm. MRP is a Bayesian reconstruction method that has already been used in PET imaging and shown to possess good noise reduction and edge preservation properties. In this study the PH-MRP algorithm was accelerated with the ordered subsets (OS) procedure and compared to the FBP, OS-EM and conventional Bayesian reconstruction methods in terms of noise reduction, quantitative accuracy, edge preservation and visual quality. The results showed that the accelerated PH-MRP algorithm was very robust. It provided visually pleasing images with lower noise level than the FBP or OS-EM and with smaller bias and sharper edges than the conventional Bayesian methods. PMID:12884928
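The median root prior (MRP) correction at the heart of the method above penalizes each pixel's deviation from the median of its neighborhood, which suppresses noise while preserving edges (the local median of a clean step edge equals the pixel value itself). The 3x3 window and beta value below are illustrative, and the EM step that precedes the correction is omitted.

```python
import numpy as np

# One MRP correction pass: x <- x / (1 + beta * (x - med) / med),
# where med is the median over the pixel's 3x3 neighborhood.

def mrp_correction(image, beta=0.3):
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            med = np.median(padded[i:i + 3, j:j + 3])
            out[i, j] = image[i, j] / (1.0 + beta * (image[i, j] - med) / med)
    return out

# An isolated noise spike is pulled toward the local median ...
flat = np.full((5, 5), 10.0)
flat[2, 2] = 30.0
corrected = mrp_correction(flat)

# ... while a clean step edge passes through unchanged.
edge = np.full((5, 5), 10.0)
edge[:, 3:] = 20.0
edge_corrected = mrp_correction(edge)
```

This edge-preserving behavior is what distinguishes MRP from the quadratic smoothing priors it was compared against.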
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.
Redler, Gage; Qiao, Zhiwei; Epel, Boris; Halpern, Howard J.
2015-01-01
The importance of tissue oxygenation has led to great interest in methods for imaging pO2 in vivo. Electron paramagnetic resonance imaging (EPRI) provides noninvasive, near-absolute, 1 mm-resolved 3D images of pO2 in the tissues and tumors of living animals. Current EPRI image reconstruction methods tend to be time consuming and preclude real-time visualization of the information. Methods are presented to significantly accelerate the reconstruction process in order to enable real-time reconstruction of EPRI pO2 images. These methods are graphics processing unit (GPU)-based 3D filtered back-projection and lookup table parameter fitting. Their combination leads to acceleration factors of over 650 compared to current methods and allows real-time reconstruction of EPRI images of pO2 in vivo. PMID:26167137
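Lookup-table parameter fitting, the second acceleration technique above, replaces an iterative per-voxel fit with a precomputed table of model curves: each measured signal is matched to its closest entry. The Lorentzian lineshape, parameter grid, and width below are illustrative stand-ins for the actual EPRI spectral model.

```python
import numpy as np

# Lookup-table fitting: precompute normalized model curves over a grid
# of the parameter, then fit by maximum normalized correlation.

def lorentzian(x, width):
    return width / (x ** 2 + width ** 2)

x = np.linspace(-5.0, 5.0, 101)
widths = np.linspace(0.1, 2.0, 400)                     # parameter grid
table = np.stack([lorentzian(x, w) for w in widths])    # precomputed curves
table /= np.linalg.norm(table, axis=1, keepdims=True)   # unit-norm rows

def fit_width(signal):
    s = signal / np.linalg.norm(signal)
    return widths[int(np.argmax(table @ s))]            # best-matching entry

true_width = 0.73
est = fit_width(lorentzian(x, true_width))
```

The fit is a single matrix-vector product per voxel, so it vectorizes trivially over an entire volume; accuracy is limited only by the grid spacing of the table.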
Restoration and reconstruction from overlapping images
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Kaiser, Daniel J.; Hanson, Andrew L.; Li, Jing
1997-01-01
This paper describes a technique for restoring and reconstructing a scene from overlapping images. In situations where there are multiple, overlapping images of the same scene, it may be desirable to create a single image that most closely approximates the scene, based on all of the data in the available images. For example, successive swaths acquired by NASA's planned Moderate Imaging Spectrometer (MODIS) will overlap, particularly at wide scan angles, creating a severe visual artifact in the output image. Resampling the overlapping swaths to produce a more accurate image on a uniform grid requires restoration and reconstruction. The one-pass restoration and reconstruction technique developed in this paper yields mean-square-optimal resampling, based on a comprehensive end-to-end system model that accounts for image overlap, and subject to user-defined and data-availability constraints on the spatial support of the filter.
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466
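The second proposed method above, a Tikhonov-regularized pseudoinverse with respect to a dictionary of training signals, can be sketched in 1D. The smooth Gaussian training family, the regular 2x undersampling, and lambda are illustrative stand-ins for the diffusion pdfs and q-space sampling.

```python
import numpy as np

# Dictionary-based Tikhonov reconstruction of an undersampled signal:
# a = (Dk^T Dk + lam I)^-1 Dk^T y, then reconstruct as D a.

n, n_train = 64, 300
t = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.0, 1.0, n_train)
D = np.exp(-((t[:, None] - centers[None, :]) / 0.1) ** 2)  # training dictionary
D /= np.linalg.norm(D, axis=0)

signal = D[:, 100] + 0.3 * D[:, 200]       # test signal in the span of D
keep = np.arange(0, n, 2)                  # keep every other sample
y = signal[keep]

lam = 1e-4
Dk = D[keep]                               # dictionary at the sampled points
a = np.linalg.solve(Dk.T @ Dk + lam * np.eye(n_train), Dk.T @ y)
signal_hat = D @ a

rel_err = np.linalg.norm(signal_hat - signal) / np.linalg.norm(signal)
```

Unlike the iterative CS solvers, this closed-form solve runs in a single step, which is the source of the two-orders-of-magnitude speedup the abstract reports; the trade-off is that quality rests on the test signal lying close to the span of the training set.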
Image Reconstruction Using Analysis Model Prior.
Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping
2016-01-01
The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
Spectral image reconstruction through the PCA transform
NASA Astrophysics Data System (ADS)
Ma, Long; Qiu, Xuewei; Cong, Yangming
2015-12-01
Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform in spectral reflectance image reconstruction. The performance is evaluated by the mean, median and standard deviation of color difference values. The mean, median and standard deviation of the root mean square (RMS) errors between the reconstructed and the actual spectral images were also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images show the performance of the suggested method.
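A hedged sketch of the PCA approach described above, using synthetic training reflectances and a hypothetical six-channel sensitivity matrix; the test spectrum is deliberately chosen inside the PCA subspace, so recovery is exact up to numerics:

```python
import numpy as np

rng = np.random.default_rng(1)

n_bands, n_train, n_channels, n_pc = 31, 200, 6, 6

# synthetic smooth "training" reflectances (stand-ins for measured spectra)
t = np.linspace(0.0, 1.0, n_bands)
freqs = rng.uniform(0.5, 2.0, n_train)
phases = rng.uniform(0.0, 1.0, n_train)
train = 0.5 + 0.4 * np.sin(2 * np.pi * (freqs[:, None] * t + phases[:, None]))

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:n_pc].T                                   # (n_bands, n_pc) PCA basis

M = rng.uniform(size=(n_channels, n_bands))       # hypothetical camera sensitivities

# test reflectance chosen inside the PCA subspace
r_true = mean + B @ (0.1 * rng.standard_normal(n_pc))
c = M @ r_true                                    # six-channel camera signal

# recover the PCA weights from the camera signal, then rebuild the spectrum
w, *_ = np.linalg.lstsq(M @ B, c - M @ mean, rcond=None)
r_hat = mean + B @ w

rms = np.sqrt(np.mean((r_hat - r_true) ** 2))
```

Real reflectances are only approximately captured by a few principal components, so in practice the RMS error depends on how well the training set spans the test spectra.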
4D image reconstruction for emission tomography
NASA Astrophysics Data System (ADS)
Reader, Andrew J.; Verhaeghe, Jeroen
2014-11-01
An overview of the theory of 4D image reconstruction for emission tomography is given along with a review of the current state of the art, covering both positron emission tomography and single photon emission computed tomography (SPECT). By viewing 4D image reconstruction as a matter of either linear or non-linear parameter estimation for a set of spatiotemporal functions chosen to approximately represent the radiotracer distribution, the areas of so-called ‘fully 4D’ image reconstruction and ‘direct kinetic parameter estimation’ are unified within a common framework. Many choices of linear and non-linear parameterization of these functions are considered (including the important case where the parameters have direct biological meaning), along with a review of the algorithms which are able to estimate these often non-linear parameters from emission tomography data. The other crucial components to image reconstruction (the objective function, the system model and the raw data format) are also covered, but in less detail due to the relatively straightforward extension from their corresponding components in conventional 3D image reconstruction. The key unifying concept is that maximum likelihood or maximum a posteriori (MAP) estimation of either linear or non-linear model parameters can be achieved in image space after carrying out a conventional expectation maximization (EM) update of the dynamic image series, using a Kullback-Leibler distance metric (comparing the modeled image values with the EM image values), to optimize the desired parameters. For MAP, an image-space penalty for regularization purposes is required. The benefits of 4D and direct reconstruction reported in the literature are reviewed, and furthermore demonstrated with simple simulation examples. It is clear that the future of reconstructing dynamic or functional emission tomography images, which often exhibit high levels of spatially correlated noise, should ideally exploit these 4D
Image reconstruction for robot assisted ultrasound tomography
NASA Astrophysics Data System (ADS)
Aalamifar, Fereshteh; Zhang, Haichong K.; Rahmim, Arman; Boctor, Emad M.
2016-04-01
An investigation of several image reconstruction methods for robot-assisted ultrasound (US) tomography setup is presented. In the robot-assisted setup, an expert moves the US probe to the location of interest, and a robotic arm automatically aligns another US probe with it. The two aligned probes can then transmit and receive US signals which are subsequently used for tomographic reconstruction. This study focuses on reconstruction of the speed of sound. In various simulation evaluations as well as in an experiment with a millimeter-range inaccuracy, we demonstrate that the limited data provided by two probes can be used to reconstruct pixel-wise images differentiating between media with different speeds of sound. Combining the results of this investigation with the developed robot-assisted US tomography setup, we envision feasibility of this setup for tomographic imaging in applications beyond breast imaging, with potentially significant efficacy in cancer diagnosis.
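The speed-of-sound reconstruction from two aligned probes can be sketched as a straight-ray time-of-flight inversion; the path-length matrix below is a random stand-in for the actual two-probe ray geometry:

```python
import numpy as np

rng = np.random.default_rng(2)

n_pix, n_rays = 25, 80   # 5x5 slowness grid, rays between the two probes

# hypothetical path-length matrix: L[i, j] = length of ray i inside pixel j
L = rng.random((n_rays, n_pix)) * (rng.random((n_rays, n_pix)) < 0.4)

# ground-truth slowness (s/m): water background with a faster inclusion
slow_true = np.full(n_pix, 1.0 / 1500.0)
slow_true[12] = 1.0 / 1600.0

tof = L @ slow_true                               # measured times of flight

# least-squares slowness image from the limited two-probe data
slow_hat, *_ = np.linalg.lstsq(L, tof, rcond=None)
speed_hat = 1.0 / slow_hat
```

With real probes the rays are nearly parallel and the system is far worse conditioned, which is why the limited-data aspect is the interesting part of the study.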
Implementation of GPU-Accelerated Back Projection for EPR imaging
Qiao, Zhiwei; Redler, Gage; Epel, Boris; Qian, Yuhua; Halpern, Howard
2016-01-01
Electron paramagnetic resonance imaging (EPRI) is a robust method for measuring in vivo oxygen concentration (pO2). For 3D pulse EPRI, a commonly used reconstruction algorithm is the filtered backprojection (FBP) algorithm, in which the backprojection step is computationally intensive and may be time consuming when implemented on a CPU. A multistage implementation of the backprojection can be used for acceleration; however, it is not flexible (it requires an equal linear angle projection distribution) and may still be time consuming. In this work, single-stage backprojection is implemented on a GPU (graphics processing unit) with 1152 cores to accelerate the process. The GPU implementation results in acceleration by over a factor of 200 overall and by over a factor of 3500 if only the computing time is considered. Some important experiences regarding the implementation of GPU-accelerated backprojection for EPRI are summarized. The resulting accelerated image reconstruction is useful for real-time image reconstruction monitoring and other time sensitive applications. PMID:26410654
Kim, Joshua; Ionascu, Dan; Zhang, Tiezhi
2013-01-01
Purpose: To accelerate iterative algebraic reconstruction algorithms using a cylindrical image grid. Methods: Tetrahedron beam computed tomography (TBCT) is designed to overcome the scatter and detector problems of cone beam computed tomography (CBCT). Iterative algebraic reconstruction algorithms have been shown to mitigate approximate reconstruction artifacts that appear at large cone angles, but clinical implementation is limited by their high computational cost. In this study, a cylindrical voxelization method on a cylindrical grid is developed in order to take advantage of the symmetries of the cylindrical geometry. The cylindrical geometry is a natural fit for the circular scanning trajectory employed in volumetric CT methods such as CBCT and TBCT. This method was implemented in combination with the simultaneous algebraic reconstruction technique (SART). Both two- and three-dimensional numerical phantoms as well as a patient CT image were utilized to generate the projection sets used for reconstruction. The reconstructed images were compared to the original phantoms using a set of three figures of merit (FOM). Results: The cylindrical voxelization on a cylindrical reconstruction grid was successfully implemented in combination with the SART reconstruction algorithm. The FOM results showed that the cylindrical reconstructions were able to maintain the accuracy of the Cartesian reconstructions. In three dimensions, the cylindrical method provided better accuracy than the Cartesian methods. At the same time, the cylindrical method was able to provide a speedup factor of approximately 40 while also reducing the system matrix storage size by 2 orders of magnitude. Conclusions: TBCT image reconstruction using a cylindrical image grid was able to provide a significant improvement in the reconstruction time and a more compact system matrix for storage on the hard drive and in memory while maintaining the image quality provided by the Cartesian voxelization on a
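The simultaneous algebraic reconstruction technique (SART) used above can be sketched in a few lines; the grid here is a generic voxel vector rather than the paper's cylindrical voxelization, and the system matrix is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

n_pix, n_rays = 16, 48
A = rng.random((n_rays, n_pix))        # stand-in system matrix (ray/voxel weights)
x_true = rng.random(n_pix)
y = A @ x_true                         # noiseless projections

row = A.sum(axis=1)                    # per-ray normalization
col = A.sum(axis=0)                    # per-voxel normalization

x = np.zeros(n_pix)
relax = 1.0
for _ in range(5000):
    # SART update: backproject the row-normalized residual, then normalize per voxel
    x += relax * (A.T @ ((y - A @ x) / row)) / col

rel_res = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The cylindrical-grid speedup in the paper comes from exploiting rotational symmetry so that one stored set of matrix weights serves every projection angle; the update formula itself is unchanged.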
Nuclear norm-regularized k-space-based parallel imaging reconstruction
NASA Astrophysics Data System (ADS)
Xu, Lin; Liu, Xiaoyun
2014-04-01
Parallel imaging reconstruction suffers from serious noise amplification at high accelerations, which can be alleviated with regularization by imposing prior information or constraints on the image. Nevertheless, point-wise interpolation of missing k-space data restricts the use of prior information in k-space-based parallel imaging reconstructions such as generalized autocalibrating partially parallel acquisitions (GRAPPA). In this study, a regularized k-space-based parallel imaging reconstruction is presented. We first formulate the reconstruction of missing data within a patch as a linear inverse problem. Instead of exploiting prior information on the image or its transform domain, the proposed method exploits the rank deficiency of a structured matrix consisting of vectorized patches from the entire k-space, which leads to a nuclear norm-regularized problem solved iteratively by numerical algorithms. Brain imaging studies are performed, demonstrating that the proposed method is capable of mitigating noise at high accelerations in GRAPPA reconstruction.
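The workhorse of nuclear norm regularization is singular-value soft-thresholding, the proximal operator of the nuclear norm. Below is a generic proximal-gradient matrix-completion toy (a synthetic low-rank matrix standing in for the structured patch matrix, not the GRAPPA construction itself):

```python
import numpy as np

rng = np.random.default_rng(4)

m, n, rank = 40, 40, 3
X = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))  # low-rank target
mask = rng.random((m, n)) < 0.5                                      # observed entries

def svt(A, tau):
    """Singular-value soft-thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

Z = np.where(mask, X, 0.0)
for _ in range(1000):
    G = np.where(mask, Z - X, 0.0)   # gradient of the data-fidelity term
    Z = svt(Z - G, 0.1)              # proximal gradient step toward low rank

rel_err = np.linalg.norm(Z - X) / np.linalg.norm(X)
```

Filling unobserved entries of a rank-deficient matrix is the same mechanism that, in the paper, recovers missing k-space samples from the acquired ones.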
Image Reconstruction for Prostate Specific Nuclear Medicine imagers
Mark Smith
2007-01-11
There is increasing interest in the design and construction of nuclear medicine detectors for dedicated prostate imaging. These include detectors designed for imaging the biodistribution of radiopharmaceuticals labeled with single gamma as well as positron-emitting radionuclides. New detectors and acquisition geometries present challenges and opportunities for image reconstruction. In this contribution various strategies for image reconstruction for these special purpose imagers are reviewed. Iterative statistical algorithms provide a framework for reconstructing prostate images from a wide variety of detectors and acquisition geometries for PET and SPECT. The key to their success is modeling the physics of photon transport and data acquisition and the Poisson statistics of nuclear decay. Analytic image reconstruction methods can be fast and are useful for favorable acquisition geometries. Future perspectives on algorithm development and data analysis for prostate imaging are presented.
Bayesian image reconstruction: Application to emission tomography
Nunez, J.; Llacer, J.
1989-02-01
In this paper we propose a maximum a posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with pre-Bayesian reconstructions. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the Maximum Likelihood Estimate method. We have successfully applied the new method to the case of emission tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.
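The maximum-likelihood EM baseline against which the MAP method is compared can be sketched as follows (synthetic system matrix and Poisson counts; the entropy prior and sharpness parameters of the proposed method are omitted):

```python
import numpy as np

rng = np.random.default_rng(5)

n_pix, n_det = 20, 60
A = rng.random((n_det, n_pix))              # system matrix (detection probabilities)
x_true = 10.0 * rng.random(n_pix)
y = rng.poisson(A @ x_true).astype(float)   # Poisson-distributed projection counts

x = np.ones(n_pix)                          # strictly positive start
sens = A.sum(axis=0)                        # sensitivity image A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens               # multiplicative EM update; preserves positivity
```

The multiplicative form is what guarantees the positivity that the abstract highlights; a MAP variant adds the prior's gradient to the denominator of the update.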
Knee imaging after anterior cruciate ligament reconstruction.
Rodrigues, M B; Silva, J J; Homsi, C; Stump, X M; Lecouvet, F E
2001-01-01
An increasing number of reconstructions of the anterior cruciate ligament (ACL) are performed every year, due to both the increasing occurrence of sport-related injuries and the development of diagnostic and surgical techniques. The most commonly used surgical procedure for reconstructing a torn ACL employs autogenous material, most often the patellar and semitendinosus tendons. Magnetic resonance (MR) imaging and spiral CT performed after arthrography with multiplanar reconstructions are the imaging methods of choice for post-operative evaluation of ACL ligamentoplasty. This paper provides a brief bibliographic and more extensive pictorial review of the normal evolution and possible complications after ACL repair. PMID:11817479
MODEL-BASED IMAGE RECONSTRUCTION FOR MRI
Fessler, Jeffrey A.
2010-01-01
Magnetic resonance imaging (MRI) is a sophisticated and versatile medical imaging modality. Traditionally, MR images are reconstructed from the raw measurements by a simple inverse 2D or 3D fast Fourier transform (FFT). However, there are a growing number of MRI applications where a simple inverse FFT is inadequate, e.g., due to non-Cartesian sampling patterns, non-Fourier physical effects, nonlinear magnetic fields, or deliberate under-sampling to reduce scan times. Such considerations have led to increasing interest in methods for model-based image reconstruction in MRI. PMID:21135916
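The traditional reconstruction mentioned above is literally an inverse FFT; for fully sampled Cartesian data it is exact, which is the baseline that model-based methods depart from:

```python
import numpy as np

rng = np.random.default_rng(6)

img = rng.random((8, 8))             # stand-in for the object's transverse magnetization
kspace = np.fft.fft2(img)            # fully sampled Cartesian k-space
recon = np.fft.ifft2(kspace).real    # the "simple inverse FFT" reconstruction
```

Once the sampling is non-Cartesian or deliberately incomplete, `ifft2` no longer inverts the acquisition, and the reconstruction must instead be posed as an inverse problem with an explicit forward model.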
Colorful holographic imaging reconstruction based on one thin phase plate
NASA Astrophysics Data System (ADS)
Zhu, Jing; Song, Qiang; Wang, Jian; Yue, Weirui; Zhang, Fang; Huang, Huijie
2014-11-01
A method of realizing color holographic imaging with a single thin diffractive optical element (DOE) is proposed. This method can reconstruct a two-dimensional color image with one phase plate at a user-defined distance from the DOE. To improve the resolution of the reproduced color images, the DOE is optimized by combining the Gerchberg-Saxton algorithm with a compensation algorithm. To accelerate the computation, a graphics processing unit (GPU) is used. Finally, the simulation results were analyzed to verify the validity of the method.
Joint image reconstruction and sensitivity estimation in SENSE (JSENSE).
Ying, Leslie; Sheng, Jinhua
2007-06-01
Parallel magnetic resonance imaging (pMRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various applications. However, the issue of accurate estimation of coil sensitivities has not been fully addressed, which limits the level of speed enhancement achievable with the technology. The self-calibrating (SC) technique for sensitivity extraction has been well accepted, especially for dynamic imaging, and complements the common calibration technique that uses a separate scan. However, the existing method to extract the sensitivity information from the SC data is not accurate enough when the amount of data is small, and thus erroneous sensitivities affect the reconstruction quality when they are directly applied to the reconstruction equation. This paper considers this problem of error propagation in the sequential procedure of sensitivity estimation followed by image reconstruction in existing methods, such as sensitivity encoding (SENSE) and simultaneous acquisition of spatial harmonics (SMASH), and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative optimization algorithm. The proposed method was tested on various data sets. The results from a set of in vivo data are shown to demonstrate the effectiveness of the proposed method, especially when a rather large net acceleration factor is used. PMID:17534910
Hardware-accelerated cone-beam reconstruction on a mobile C-arm
NASA Astrophysics Data System (ADS)
Churchill, Michael; Pope, Gordon; Penman, Jeffrey; Riabkov, Dmitry; Xue, Xinwei; Cheryauka, Arvi
2007-03-01
The three-dimensional image reconstruction process used in interventional CT imaging is computationally demanding. Implementation on general-purpose computational platforms requires substantial time, which is undesirable during time-critical surgical and minimally invasive procedures. Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) have been studied as platforms to accelerate 3-D imaging. FPGA and GPU devices offer a reprogrammable hardware architecture, configurable for pipelining and high levels of parallel processing to increase computational throughput, as well as the benefits of being off-the-shelf and effective 'performance-per-watt' solutions. The main focus of this paper is on the backprojection step of the image reconstruction process, since it is the most computationally intensive part. Using the popular Feldkamp-Davis-Kress (FDK) cone-beam algorithm, our studies indicate the entire 256³ image reconstruction process can be accelerated to real or near real-time (i.e., immediately after a finished scan of 15-30 seconds duration) on a mobile X-ray C-arm system using available resources on the built-in FPGA board. High resolution 512³ image backprojection can also be accomplished within the same scanning time on a high-end GPU board comprising up to 128 streaming processors.
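The backprojection hot spot can be sketched with a parallel-beam toy; this uses nearest-bin interpolation and omits the FDK filtering and cone-beam weighting, so the result is an unfiltered, blurred disc rather than a faithful reconstruction:

```python
import numpy as np

n = 64
n_ang = 90
angles = np.linspace(0.0, np.pi, n_ang, endpoint=False)

# toy phantom: a centered disc (parallel-beam stand-in for the cone-beam case)
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = (xx**2 + yy**2 < 0.4**2).astype(float)

n_bins = n
bins = np.linspace(-1.0, 1.0, n_bins)

def project(img, theta):
    """Nearest-bin forward projection of every pixel onto the detector."""
    t = xx * np.cos(theta) + yy * np.sin(theta)   # detector coordinate of each pixel
    idx = np.clip(np.searchsorted(bins, t), 0, n_bins - 1)
    return np.bincount(idx.ravel(), weights=img.ravel(), minlength=n_bins)

sino = np.array([project(phantom, th) for th in angles])

# pixel-driven backprojection: the O(n^2 * n_ang) loop that FPGA/GPU boards parallelize
recon = np.zeros_like(phantom)
for th, p in zip(angles, sino):
    t = xx * np.cos(th) + yy * np.sin(th)
    idx = np.clip(np.searchsorted(bins, t), 0, n_bins - 1)
    recon += p[idx]
recon /= n_ang
```

Every pixel in the inner loop is independent, which is exactly why backprojection maps so well onto pipelined FPGA logic and GPU streaming processors.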
Accelerating the reconstruction of genome-scale metabolic networks
Notebaart, Richard A; van Enckevort, Frank HJ; Francke, Christof; Siezen, Roland J; Teusink, Bas
2006-01-01
Background The genomic information of a species allows for the genome-scale reconstruction of its metabolic capacity. Such a metabolic reconstruction gives support to metabolic engineering, but also to integrative bioinformatics and visualization. Sequence-based automatic reconstructions require extensive manual curation, which can be very time-consuming. Therefore, we present a method to accelerate the time-consuming process of network reconstruction for a query species. The method exploits the availability of well-curated metabolic networks and uses high-resolution predictions of gene equivalency between species, allowing the transfer of gene-reaction associations from curated networks. Results We have evaluated the method using Lactococcus lactis IL1403, for which a genome-scale metabolic network was published recently. We recovered most of the gene-reaction associations (i.e. 74 – 85%) which are incorporated in the published network. Moreover, we predicted over 200 additional genes to be associated to reactions, including genes with unknown function, genes for transporters and genes with specific metabolic reactions, which are good candidates for an extension to the previously published network. In a comparison of our developed method with the well-established approach Pathologic, we predicted 186 additional genes to be associated to reactions. We also predicted a relatively high number of complete conserved protein complexes, which are derived from curated metabolic networks, illustrating the potential predictive power of our method for protein complexes. Conclusion We show that our methodology can be applied to accelerate the reconstruction of genome-scale metabolic networks by taking optimal advantage of existing, manually curated networks. As orthology detection is the first step in the method, only the translated open reading frames (ORFs) of a newly sequenced genome are necessary to reconstruct a metabolic network. When more manually curated metabolic
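At its core, the transfer of gene-reaction associations along orthology predictions reduces to a mapping lookup; all gene and reaction identifiers below are hypothetical illustrations, not entries from the L. lactis network:

```python
# curated network: gene -> associated reactions (hypothetical identifiers)
curated = {
    "llmg_0271": ["R_PGI"],
    "llmg_0272": ["R_PFK", "R_FBA"],
}

# high-confidence orthology predictions: query gene -> curated gene
orthologs = {
    "query_geneA": "llmg_0271",
    "query_geneB": "llmg_0272",
}

# transfer gene-reaction associations along the orthology mapping
draft = {q: curated[c] for q, c in orthologs.items() if c in curated}
```

The value of the method lies in the quality of the two input maps: a well-curated source network and high-resolution gene equivalency predictions, since the transfer step itself is trivial.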
Heuristic optimization in penumbral image for high resolution reconstructed image
Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.
2010-10-15
Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector can be determined mathematically from the aperture size, object size, and magnification. Conventional reconstruction methods are very sensitive to noise. On the other hand, the heuristic reconstruction method is very tolerant of noise. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose optimizing the aperture size for neutron penumbral imaging.
BIOFILM IMAGE RECONSTRUCTION FOR ASSESSING STRUCTURAL PARAMETERS
Renslow, Ryan; Lewandowski, Zbigniew; Beyenal, Haluk
2011-01-01
The structure of biofilms can be numerically quantified from microscopy images using structural parameters. These parameters are used in biofilm image analysis to compare biofilms, to monitor temporal variation in biofilm structure, to quantify the effects of antibiotics on biofilm structure and to determine the effects of environmental conditions on biofilm structure. It is often hypothesized that biofilms with similar structural parameter values will have similar structures; however, this hypothesis has never been tested. The main goal was to test the hypothesis that the commonly used structural parameters can characterize the differences or similarities between biofilm structures. To achieve this goal 1) biofilm image reconstruction was developed as a new tool for assessing structural parameters, 2) independent reconstructions using the same starting structural parameters were tested to see how they differed from each other, 3) the effect of the original image parameter values on reconstruction success was evaluated and 4) the effect of the number and type of the parameters on reconstruction success was evaluated. It was found that two biofilms characterized by identical commonly used structural parameter values may look different, that the number and size of clusters in the original biofilm image affect image reconstruction success and that, in general, a small set of arbitrarily selected parameters may not reveal relevant differences between biofilm structures. PMID:21280029
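Two of the commonly used structural parameters, areal porosity and the layer-wise biomass profile, can be computed directly from a binary image; the image below is random noise standing in for a thresholded microscopy image, not a real biofilm:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical binary biofilm image: 1 = biomass pixel, 0 = void
img = (rng.random((64, 64)) < 0.4).astype(int)

porosity = 1.0 - img.mean()          # areal porosity: void fraction of the image
profile = img.mean(axis=1)           # biomass fraction in each horizontal layer
```

The reconstruction experiments in the paper ask the converse question: given parameter values like these, how many visually distinct images share them, which is why matching parameters does not guarantee matching structures.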
Approach for reconstructing anisoplanatic adaptive optics images.
Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J
2007-08-20
Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement of the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
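The Lucy-Richardson scheme used as one of the two recovery methods can be sketched in 1-D with a Gaussian stand-in PSF and two point sources (a noiseless toy; the paper's version uses the predicted, field-dependent PSF per block):

```python
import numpy as np

rng = np.random.default_rng(8)

n = 64
obj = np.zeros(n)
obj[20], obj[40] = 5.0, 3.0                       # two point sources ("stars")

# Gaussian blur standing in for the local PSF
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf /= psf.sum()

img = np.convolve(obj, psf, mode="same")          # blurred observation

x = np.full(n, img.mean())                        # flat positive start
for _ in range(200):
    ratio = img / np.maximum(np.convolve(x, psf, mode="same"), 1e-12)
    x *= np.convolve(ratio, psf[::-1], mode="same")   # multiply by backprojected ratio
```

Like EM in emission tomography, the multiplicative update keeps the estimate non-negative, which suits photon-count imagery.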
Improving Tritium Exposure Reconstructions Using Accelerator Mass Spectrometry
Love, A; Hunt, J; Knezovich, J
2003-06-01
Exposure reconstructions for radionuclides are inherently difficult. As a result, most reconstructions are based primarily on mathematical models of environmental fate and transport. These models can have large uncertainties, as important site-specific information is unknown, missing, or crudely estimated. Alternatively, surrogate environmental measurements of exposure can be used for site-specific reconstructions. In cases where environmental transport processes are complex, well-chosen environmental surrogates can have smaller exposure uncertainty than mathematical models. Because existing methodologies have significant limitations, the development or improvement of methodologies for reconstructing exposure from environmental measurements would provide important additional tools in assessing the health effects of chronic exposure. As an example, the direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples, which permit greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Tritium AMS was previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases.
Atkinson, David; Buerger, Christian; Schaeffter, Tobias; Prieto, Claudia
2015-01-01
Purpose Develop a nonrigid motion corrected reconstruction for highly accelerated free‐breathing three‐dimensional (3D) abdominal images without external sensors or additional scans. Methods The proposed method accelerates the acquisition by undersampling and performs motion correction directly in the reconstruction using a general matrix description of the acquisition. Data are acquired using a self‐gated 3D golden radial phase encoding trajectory, enabling a two stage reconstruction to estimate and then correct motion of the same data. In the first stage total variation regularized iterative SENSE is used to reconstruct highly undersampled respiratory resolved images. A nonrigid registration of these images is performed to estimate the complex motion in the abdomen. In the second stage, the estimated motion fields are incorporated in a general matrix reconstruction, which uses total variation regularization and incorporates k‐space data from multiple respiratory positions. The proposed approach was tested on nine healthy volunteers and compared against a standard gated reconstruction using measures of liver sharpness, gradient entropy, visual assessment of image sharpness and overall image quality by two experts. Results The proposed method achieves similar quality to the gated reconstruction with nonsignificant differences for liver sharpness (1.18 and 1.00, respectively), gradient entropy (1.00 and 1.00), visual score of image sharpness (2.22 and 2.44), and visual rank of image quality (3.33 and 3.39). An average reduction of the acquisition time from 102 s to 39 s could be achieved with the proposed method. Conclusion In vivo results demonstrate the feasibility of the proposed method showing similar image quality to the standard gated reconstruction while using data corresponding to a significantly reduced acquisition time. Magn Reson Med, 2015. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of
Efficient MR image reconstruction for compressed MR imaging.
Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris
2011-10-01
In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least squares data fitting, total variation (TV) and L1 norm regularization. This has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of solutions from the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542
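The decompose-solve-average structure can be sketched on a 1-D denoising toy (data-fitting operator equal to the identity, a single round of the split rather than the full outer iteration). The TV proximal operator here uses a simple dual projected-gradient scheme, which is an assumption of this sketch, not necessarily the subproblem solver of the paper:

```python
import numpy as np

rng = np.random.default_rng(9)

n = 100
signal = np.zeros(n)
signal[50:] = 2.0                            # piecewise-constant ground truth
z = signal + 0.3 * rng.standard_normal(n)    # noisy observation

def prox_l1(v, t):
    """Soft thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_tv(v, lam, iters=400, step=0.25):
    """1-D TV proximal operator via projected gradient on the dual problem."""
    p = np.zeros(v.size - 1)
    for _ in range(iters):
        x = v.copy()
        x[:-1] += p
        x[1:] -= p                           # x = v - D^T p
        p = np.clip(p + step * np.diff(x), -lam, lam)
    x = v.copy()
    x[:-1] += p
    x[1:] -= p
    return x

# solve each regularized subproblem separately, then average the solutions
x_l1 = prox_l1(z, 0.2)
x_tv = prox_tv(z, 0.3)
x = 0.5 * (x_l1 + x_tv)
```

Averaging the two proximal solutions is what lets each regularizer be handled by its own fast solver instead of tackling the joint TV + L1 problem directly.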
Efficient MR image reconstruction for compressed MR imaging.
Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris
2010-01-01
In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least squares data fitting, total variation (TV) and L1 norm regularization. This has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of solutions from the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224
Edge-Preserving PET Image Reconstruction Using Trust Optimization Transfer
Wang, Guobao; Qi, Jinyi
2014-01-01
Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization. The most commonly used quadratic penalty often over-smoothes sharp edges and fine features in reconstructed images, while non-quadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or non-smooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees the algorithm to descend monotonically; Optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than the current state-of-art algorithms for the non-smooth ℓ1 regularization. PMID:25438302
Similarity-regulation of OS-EM for accelerated SPECT reconstruction.
Vaissier, P E B; Beekman, F J; Goorden, M C
2016-06-01
Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level. PMID:27206135
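The subset-wise EM update at the heart of OS-EM (without the similarity regulation proposed here) can be sketched as follows, with a random stand-in system matrix:

```python
import numpy as np

rng = np.random.default_rng(10)

n_pix, n_det, n_sub = 16, 64, 4
A = rng.random((n_det, n_pix))              # stand-in system matrix
x_true = 5.0 * rng.random(n_pix)
y = rng.poisson(A @ x_true).astype(float)   # Poisson projection counts

# partition the projections into ordered subsets
subsets = np.array_split(rng.permutation(n_det), n_sub)

x = np.ones(n_pix)
for _ in range(25):                         # 25 passes; each pass does n_sub sub-updates
    for s in subsets:
        As, ys = A[s], y[s]
        ratio = ys / np.maximum(As @ x, 1e-12)
        x *= (As.T @ ratio) / As.sum(axis=0)   # EM update on the current subset only
```

Each pass over the data performs `n_sub` image updates instead of one, which is the source of the near-`n_sub` speedup over ML-EM; the artifacts discussed above arise when a subset contains too few counts for a reliable update, motivating the per-voxel subset regulation of CR-OS-EM and SR-OS-EM.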
Geometric reconstruction using tracked ultrasound strain imaging
NASA Astrophysics Data System (ADS)
Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.
2013-03-01
The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real-time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and also of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and then tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were between approximately 1.5 and 2.4 cm3, which was similar to the CT volumes of 1.0 to 2.3 cm3. Future work will be done to robustly characterize the reconstruction accuracy of the system.
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel-based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
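A minimal sketch of the kernelized EM idea (image modeled as x = Kα, with the multiplicative EM update moved onto the kernel coefficients α) might look like this; `A` and `K` are small hypothetical stand-ins for the projector and the prior-derived kernel matrix:

```python
import numpy as np

def kernel_em(A, K, y, n_iter=100):
    """Kernelized ML-EM sketch: x = K @ alpha, EM update on alpha."""
    alpha = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(A.shape[0]))        # K^T A^T 1
    for _ in range(n_iter):
        proj = A @ (K @ alpha)                      # forward-project current image
        ratio = y / np.maximum(proj, 1e-12)
        alpha = alpha * (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
    return K @ alpha                                # return the image, not alpha
```

Setting `K` to the identity recovers ordinary ML-EM; the prior enters entirely through how `K` couples neighboring coefficients.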
CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization
Chen, Zijia; Zhou, Linghong
2015-01-01
Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections used to reconstruct CT images, which is also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examination. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better than other existing reconstruction methods. PMID:26089962
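The FISTA acceleration this abstract relies on is easiest to see for the plain ℓ1 penalty; the paper's adaptive TpV penalty would replace the shrinkage step, so the following is a simplified sketch, not the authors' algorithm:

```python
import numpy as np

def fista_l1(A, y, lam=0.1, n_iter=200):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        v = z - A.T @ (A @ z - y) / L       # gradient step at extrapolated point
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft-threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum step on `z` is what lifts the O(1/k) rate of plain ISTA to O(1/k²), which is the "accelerated convergence" the abstract refers to.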
Image reconstruction algorithms with wavelet filtering for optoacoustic imaging
NASA Astrophysics Data System (ADS)
Gawali, S.; Leggio, L.; Broadway, C.; González, P.; Sánchez, M.; Rodríguez, S.; Lamela, H.
2016-03-01
Optoacoustic imaging (OAI) is a hybrid biomedical imaging modality based on the generation and detection of ultrasound by illuminating the target tissue with laser light. Typically, laser light in the visible or near-infrared spectrum is used as an excitation source. OAI relies on image reconstruction algorithms to recover the spatial distribution of optical absorption in tissues. In this work, we apply a time-domain back-projection (BP) reconstruction algorithm and wavelet filtering for point and line detection, respectively. A comparative study between point detection and integrated line detection has been carried out by evaluating their effects on the reconstructed image. Our results demonstrate that the proposed back-projection algorithm is efficient for reconstructing high-resolution images of absorbing spheres embedded in a non-absorbing medium when it is combined with the wavelet filtering.
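For ideal point detectors, the time-domain back-projection referred to above is a delay-and-sum operation; here is a minimal sketch that omits the wavelet filtering step and the solid-angle weights a full implementation would apply:

```python
import numpy as np

def delay_and_sum(signals, t, sensors, grid, c=1500.0):
    """Sum each sensor trace at the acoustic time of flight from every
    image point; signals is (n_sensors, n_t), sensors/grid are (k,2)/(m,2)."""
    dt = t[1] - t[0]
    image = np.zeros(len(grid))
    for pos, trace in zip(sensors, signals):
        d = np.linalg.norm(grid - pos, axis=1)     # distances pixel -> sensor
        idx = np.clip(np.round((d / c - t[0]) / dt).astype(int), 0, len(t) - 1)
        image += trace[idx]                        # pick trace value at that delay
    return image
```

Only at a true acoustic source do all traces align at their peaks, so the summed image concentrates there.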
Multi-contrast magnetic resonance image reconstruction
NASA Astrophysics Data System (ADS)
Liu, Meng; Chen, Yunmei; Zhang, Hao; Huang, Feng
2015-03-01
In clinical exams, multi-contrast images from conventional MRI are scanned with the same field of view (FOV) for complementary diagnostic information, such as proton density- (PD-), T1- and T2-weighted images. Their sharable information can be utilized for more robust and accurate image reconstruction. In this work, we propose a novel model and an efficient algorithm for joint image reconstruction and coil sensitivity estimation in multi-contrast partially parallel imaging (PPI) in MRI. Our algorithm restores the multi-contrast images by minimizing an energy function consisting of an L2-norm fidelity term to reduce reconstruction errors caused by motion, a regularization term on the underlying images that preserves common anatomical features via a vectorial total variation (VTV) regularizer, and a Tikhonov smoothness term for updating the sensitivity maps based on their physical properties. We present numerical results including T1- and T2-weighted MR images recovered from partially scanned k-space data and provide comparisons between our results and those obtained from related existing works. Our numerical results indicate that the proposed method, using vectorial TV and penalties on the sensitivities, is promising for multi-contrast multi-channel MR image reconstruction.
CUDA accelerated uniform re-sampling for non-Cartesian MR reconstruction.
Feng, Chaolu; Zhao, Dazhe
2015-01-01
A grid-driven gridding (GDG) method is proposed to uniformly re-sample non-Cartesian raw data acquired in PROPELLER, in which a trajectory window for each Cartesian grid is first computed. The intensity of the reconstructed image at this grid is the weighted average of the raw data in this window. Taking advantage of the single instruction multiple data (SIMD) property of the proposed GDG, a CUDA accelerated method is then proposed to improve the performance of the proposed GDG. Two groups of raw data sampled by PROPELLER in two resolutions are reconstructed by the proposed method. To balance computation resources of the GPU and obtain the best performance improvement, four thread-block strategies are adopted. Experimental results demonstrate that although the proposed GDG is more time-consuming than traditional data-driven gridding (DDG), the CUDA accelerated GDG is almost 10 times faster than traditional DDG. PMID:26406102
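A toy version of the grid-driven idea, in which each Cartesian grid point gathers the non-Cartesian samples inside its trajectory window (rather than each sample scattering onto grids), might look like the following; the window shape and distance weights here are illustrative assumptions, not the paper's:

```python
import numpy as np

def grid_driven_gridding(coords, data, grid_x, grid_y, win=0.5):
    """For every Cartesian point, average the raw k-space samples whose
    coordinates fall inside a square window around it (SIMD-friendly:
    each output grid point is computed independently)."""
    img = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for iy, gy in enumerate(grid_y):
        for ix, gx in enumerate(grid_x):
            m = (np.abs(coords[:, 0] - gx) <= win) & (np.abs(coords[:, 1] - gy) <= win)
            if m.any():
                # simple inverse-distance weighting (always positive)
                w = 1.0 / (1.0 + np.hypot(coords[m, 0] - gx, coords[m, 1] - gy))
                img[iy, ix] = np.sum(w * data[m]) / np.sum(w)
    return img
```

Because every grid point is independent, the double loop maps directly onto one CUDA thread per grid point, which is the parallelization the abstract exploits.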
Wave-CAIPI for Highly Accelerated 3D Imaging
Bilgic, Berkin; Gagoski, Borjan A.; Cauley, Stephen F.; Fan, Audrey P.; Polimeni, Jonathan R.; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To introduce the Wave-CAIPI (Controlled Aliasing in Parallel Imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. Methods The Wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line, while modifying the 3D phase encoding strategy to incur inter-slice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high quality image reconstruction with Wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and Quantitative Susceptibility Mapping (QSM). Results Wave-CAIPI enables full-brain gradient echo (GRE) acquisition at 1 mm isotropic voxel size and R=3×3 acceleration with maximum g-factors of 1.08 at 3T, and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies 2D-CAIPI and Bunched Phase Encoding, Wave-CAIPI yields up to 2-fold reduction in maximum g-factor for 9-fold acceleration at both field strengths. Conclusion Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. PMID:24986223
Medical Imaging Inspired Vertex Reconstruction at LHC
NASA Astrophysics Data System (ADS)
Hageböck, S.; von Toerne, E.
2012-12-01
Three-dimensional image reconstruction in medical applications (PET or X-ray CT) applies sophisticated filter algorithms to the linear trajectories of coincident photon pairs or x-rays. The goal is to reconstruct an image of an emitter density distribution. In a similar manner, tracks in particle physics originate from vertices that need to be distinguished from background track combinations. In this study it is investigated whether vertex reconstruction in high energy proton collisions may benefit from medical imaging methods. A new method of vertex finding, the Medical Imaging Vertexer (MIV), is presented based on a three-dimensional filtered backprojection algorithm. It is compared to the open-source RAVE vertexing package. The performance of the vertex finding algorithms is evaluated as a function of instantaneous luminosity using simulated LHC collisions. Tracks in these collisions are described by a simplified detector model which is inspired by the tracking performance of the LHC experiments. At high luminosities (25 pileup vertices and more), the medical imaging approach finds vertices with a higher efficiency and purity than the RAVE "Adaptive Vertex Reconstructor" algorithm. It is also much faster if more than 25 vertices are to be reconstructed, because the amount of CPU time rises linearly with the number of tracks, whereas it rises quadratically for the adaptive vertex fitter AVR.
Image reconstructions with the rotating RF coil.
Trakic, A; Wang, H; Weber, E; Li, B K; Poole, M; Liu, F; Crozier, S
2009-12-01
Recent studies have shown that rotating a single RF transceive coil (RRFC) provides a uniform coverage of the object and brings a number of hardware advantages (i.e. requires only one RF channel, averts coil-coil coupling interactions and facilitates large-scale multi-nuclear imaging). Motion of the RF coil sensitivity profile however violates the standard Fourier Transform definition of a time-invariant signal, and the images reconstructed in this conventional manner can be degraded by ghosting artifacts. To overcome this problem, this paper presents Time Division Multiplexed-Sensitivity Encoding (TDM-SENSE), as a new image reconstruction scheme that exploits the rotation of the RF coil sensitivity profile to facilitate ghost-free image reconstructions and reductions in image acquisition time. A transceive RRFC system for head imaging at 2 Tesla was constructed and applied in a number of in vivo experiments. In this initial study, alias-free head images were obtained in half the usual scan time. It is hoped that new sequences and methods will be developed by taking advantage of coil motion. PMID:19800824
3D EIT image reconstruction with GREIT.
Grychtol, Bartłomiej; Müller, Beat; Adler, Andy
2016-06-01
Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
Fast Image Reconstruction with L2-Regularization
Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar
2014-01-01
Purpose We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speedup relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: 1) fast Quantitative Susceptibility Mapping (QSM), 2) lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and 3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two to three orders of magnitude speedup is demonstrated with similar reconstruction quality. Results The closed-form solution developed for regularized QSM allows processing of a 3D volume in under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, while the PCA based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice in under 30 seconds, all running in Matlab using a standard workstation. Conclusion For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or L1-based iterative algorithms, without compromising image quality. PMID:24395184
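The computational appeal of L2 regularization is the closed-form normal-equations solution, in contrast to the iterative solvers L1 penalties require; schematically (the paper's further speedups from FFT-diagonalizable operators are omitted here):

```python
import numpy as np

def tikhonov(A, y, lam):
    """Closed-form minimizer of ||Ax - y||^2 + lam*||x||^2:
    x = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

One linear solve replaces hundreds of gradient or shrinkage iterations, which is where the quoted two to three orders of magnitude come from when the solve itself is cheap.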
Optimal Statistical Approach to Optoacoustic Image Reconstruction
NASA Astrophysics Data System (ADS)
Zhulina, Yulia V.
2000-11-01
An optimal statistical approach is applied to the task of image reconstruction in photoacoustics. The physical essence of the task is as follows: pulsed laser irradiation induces an ultrasound wave at the inhomogeneities inside the investigated volume. This acoustic wave is received by a set of receivers outside this volume. It is necessary to reconstruct a spatial image of these inhomogeneities. Mathematical techniques developed in radiolocation theory are used for solving the task. A maximum-likelihood algorithm is synthesized for the image reconstruction. The obtained algorithm is investigated by digital modeling. The number of receivers and their disposition in space are arbitrary. Results of the synthesis are applied to noninvasive medical diagnostics (breast cancer). The capability of the algorithm is tested on real signals. The image is built using signals obtained in vitro. The essence of the algorithm includes (i) summing of all signals in the image plane with the transform from the time coordinates of signals to the spatial coordinates of the image and (ii) optimal spatial filtration of this sum. The results are shown in the figures.
Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner
NASA Astrophysics Data System (ADS)
Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.
2004-10-01
We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc.), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
Improving tritium exposure reconstructions using accelerator mass spectrometry
Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.
2010-01-01
Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274
Stochastic image reconstruction for a dual-particle imaging system
NASA Astrophysics Data System (ADS)
Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.
2016-02-01
Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.
Simultaneous algebraic reconstruction technique based on guided image filtering.
Ji, Dongjiang; Qu, Gangrong; Liu, Baodong
2016-07-11
The challenge of computed tomography is to reconstruct high-quality images from few-view projections. Using a prior guidance image, guided image filtering smoothes images while preserving edge features. The prior guidance image can be incorporated into the image reconstruction process to improve image quality. We propose a new simultaneous algebraic reconstruction technique based on guided image filtering. Specifically, the prior guidance image is updated in the image reconstruction process, merging information iteratively. To validate the algorithm's practicality and efficiency, experiments were performed with numerical phantom projection data and real projection data. The results demonstrate that the proposed method is effective and efficient for nondestructive testing and rock mechanics. PMID:27410859
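For reference, the basic SART sweep that the guided-filter prior is wrapped around can be sketched as follows (the guided-filtering step itself is omitted, so this is only the unregularized baseline):

```python
import numpy as np

def sart(A, y, n_iter=50, relax=1.0):
    """Basic SART: a simultaneous update of all pixels, with the residual
    normalized by row sums and the backprojection by column sums of A."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    row[row == 0] = 1.0
    col[col == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = (y - A @ x) / row          # normalized projection residual
        x = x + relax * (A.T @ r) / col
    return x
```

In the paper's scheme, a guided-filtering step using the prior image would be interleaved between such sweeps.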
Tomographic image reconstruction and rendering with texture-mapping hardware
Azevedo, S.G.; Cabral, B.K.; Foran, J.
1994-07-01
The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
Optimal Discretization Resolution in Algebraic Image Reconstruction
NASA Astrophysics Data System (ADS)
Sharif, Behzad; Kamalabadi, Farzad
2005-11-01
In this paper, we focus on data-limited tomographic imaging problems where the underlying linear inverse problem is ill-posed. A typical regularized reconstruction algorithm uses an algebraic formulation with a predetermined discretization resolution. If the selected resolution is too low, we may lose useful details of the underlying image; if it is too high, the reconstruction will be unstable and the representation will fit irrelevant features. In this work, two approaches are introduced to address this issue. The first approach uses Mallows' CL method or generalized cross-validation. For each of the two methods, a joint estimator of the regularization parameter and discretization resolution is proposed and its asymptotic optimality is investigated. The second approach is a Bayesian estimator of the model order using a complexity-penalizing prior. Numerical experiments focus on a space imaging application from a set of limited-angle tomographic observations.
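One of the two selection rules mentioned, generalized cross-validation, scores a candidate regularization parameter from the data alone; a ridge-type sketch (the paper studies the joint choice of the parameter and the discretization resolution, which this sketch does not attempt):

```python
import numpy as np

def gcv_score(A, y, lam):
    """GCV(lam) = n * ||(I - H)y||^2 / trace(I - H)^2, with the ridge
    influence matrix H = A (A^T A + lam I)^{-1} A^T."""
    n = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    r = y - H @ y                      # leave-out-style residual
    return n * (r @ r) / (n - np.trace(H)) ** 2
```

In practice one evaluates this over a grid of `lam` values and picks the minimizer, trading data fit (the numerator) against effective model complexity (the trace term).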
Advances in the reconstruction of LBT LINC-NIRVANA images
NASA Astrophysics Data System (ADS)
La Camera, A.; Desiderá, G.; Arcidiacono, C.; Boccacci, P.; Bertero, M.
2007-09-01
Context: LINC-NIRVANA, the Fizeau interferometer of the Large Binocular Telescope (LBT), will require routine use of image reconstruction methods for data reduction. To this purpose our group has already developed the software package AIRY (Astronomical Image Restoration in interferometrY). Aims: Observations of a target, with different orientations of the baseline of LINC-NIRVANA, will provide images with different orientations with respect to the CCD camera. This rotation effect was not taken into account in our previous work. Therefore in this paper we propose a method able to compensate for the rotation of the field of view. Moreover, we investigate acceleration techniques for reducing the computational burden of multiple image deconvolution. Methods: The basic method is a suitable modification of the Richardson-Lucy algorithm, also implementing an approach we proposed for reducing boundary effects. Acceleration techniques, proposed by Biggs & Andrews, are extended and applied to this new algorithm. Finally, a method for estimating the unknown point spread function (PSF) by extracting and extrapolating the image of a reference star is developed and implemented. Results: The method introduced for compensating object rotation and reducing boundary effects, as well as its accelerated versions, are tested on simulated LINC-NIRVANA images, using the VLT image of the Crab Nebula as test object. The results are very promising. Moreover, the method for PSF extraction is tested on simulated images, derived from the LBT image of the galaxy NGC 6946 and obtained by convolving this image with PSFs computed by means of the numerical code LOST (Layer Oriented Simulation Tool).
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
Matenine, Dmitri; Goussard, Yves
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
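The general interleaving pattern behind methods of the OSC-TV family (a subset-based data-fidelity pass alternated with a few total-variation descent steps) can be sketched on a toy 1-D system. This is a hedged illustration only: a SART-like convex update stands in for the full transmission OSC model, and the system, subset count, and step sizes are invented.

```python
import numpy as np

# Each outer iteration: (1) an ordered-subsets data-fidelity pass with a
# SART-like update and nonnegativity, (2) a few gradient steps on a
# smoothed total variation (TV) penalty.
rng = np.random.default_rng(1)
m, n = 48, 64
A = rng.random((m, n))
x_true = np.zeros(n); x_true[20:40] = 1.0        # piecewise-constant object
b = A @ x_true

def tv_grad(x, eps=1e-6):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)                 # gradient of sum sqrt(d^2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

x = np.zeros(n)
subsets = np.array_split(rng.permutation(m), 8)  # 8 ordered subsets
for it in range(30):
    for sub in subsets:                          # data-fidelity pass
        As, bs = A[sub], b[sub]
        x += As.T @ ((bs - As @ x) / As.sum(1)) / np.maximum(As.sum(0), 1e-12)
        x = np.maximum(x, 0)                     # nonnegativity constraint
    for _ in range(5):                           # TV regularization pass
        x -= 0.01 * tv_grad(x)
```

The paper's reduction pattern for the number of subsets, and its GPU parallelization of the forward/back projections, are omitted here for brevity.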
Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi
2010-01-01
Among recent parallel MR imaging reconstruction advances, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve signal to noise ratio (SNR) compared to the conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high dynamic range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide similar image quality to EPIGRAM and maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by a factor of 25-50. PMID:20939095
Building reconstruction from images and laser scanning
NASA Astrophysics Data System (ADS)
Brenner, Claus
2005-03-01
The automatic extraction of objects from laser scans and images has been a topic of research for decades. Nowadays, with new services expected, especially in the area of navigation systems, location based services, and augmented reality, the need for automated, efficient extraction systems becomes more urgent than ever. This paper reviews a number of automatic and semi-automatic reconstruction methods in more detail in order to reveal their underlying principles. It then discusses some general properties of reconstruction approaches which have evolved. This shows that, although research is still far from the goal of the initially envisioned fully automatic reconstruction systems, there is now a much better understanding of the problem and the ways it can be tackled.
Propagation phasor approach for holographic image reconstruction
NASA Astrophysics Data System (ADS)
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-03-01
To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears.
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image Reconstruction has been mostly confined to context free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context dependent interpolation is computed through ensemble average statistics using high resolution training imagery from which the lower resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori spatial character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected by the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context dependent image quality models are needed and a suggestion is made for using a quality model which is now finding application in data compression.
Performance-based assessment of reconstructed images
Hanson, Kenneth
2009-01-01
During the early 1990s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that the performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
Niu, Tianye; Zhu, Lei
2012-01-01
Purpose: Recent advances in compressed sensing (CS) enable accurate CT image reconstruction from highly undersampled and noisy projection measurements, due to the sparsifiable feature of most CT images using total variation (TV). These novel reconstruction methods have demonstrated advantages in clinical applications where radiation dose reduction is critical, such as onboard cone-beam CT (CBCT) imaging in radiation therapy. The image reconstruction using CS is formulated as either a constrained problem to minimize the TV objective within a small and fixed data fidelity error, or an unconstrained problem to minimize the data fidelity error with TV regularization. However, the conventional solutions to the above two formulations are either computationally inefficient or involve inconsistent regularization parameter tuning, which significantly limits the clinical use of CS-based iterative reconstruction. In this paper, we propose an optimization algorithm for CS reconstruction which overcomes the above two drawbacks. Methods: The data fidelity tolerance of CS reconstruction can be well estimated based on the measured data, as most of the projection errors are from Poisson noise after effective data correction for scatter and beam-hardening effects. We therefore adopt the TV optimization framework with a data fidelity constraint. To accelerate the convergence, we first convert such a constrained optimization using a logarithmic barrier method into a form similar to that of the conventional TV regularization based reconstruction but with an automatically adjusted penalty weight. The problem is then solved efficiently by gradient projection with an adaptive Barzilai–Borwein step-size selection scheme. The proposed algorithm is referred to as accelerated barrier optimization for CS (ABOCS), and evaluated using both digital and physical phantom studies. Results: ABOCS directly estimates the data fidelity tolerance from the raw projection data. Therefore, as
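The gradient-projection machinery with an adaptive Barzilai-Borwein step that ABOCS builds on can be shown in isolation. The sketch below is hypothetical: it applies the scheme to a nonnegativity-constrained least-squares toy problem rather than the paper's barrier-penalized TV objective, and all sizes are invented.

```python
import numpy as np

# Projected gradient descent with the Barzilai-Borwein (BB1) step size,
# minimizing 0.5*||Ax - b||^2 subject to x >= 0.
rng = np.random.default_rng(2)
m, n = 80, 60
A = rng.standard_normal((m, n))
x_true = np.maximum(rng.standard_normal(n), 0)
b = A @ x_true                                   # consistent data

x = np.zeros(n)
g = A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # conservative first step
for k in range(200):
    x_new = np.maximum(x - step * g, 0)          # gradient step + projection
    g_new = A.T @ (A @ x_new - b)
    s_vec, y_vec = x_new - x, g_new - g
    denom = s_vec @ y_vec
    if denom > 1e-12:
        step = (s_vec @ s_vec) / denom           # adaptive BB step
    x, g = x_new, g_new
```

The BB step approximates curvature from successive iterate/gradient differences, which is what gives the scheme its fast, nonmonotone convergence without a line search.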
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Hyperspectral image reconstruction for diffuse optical tomography
Larusson, Fridrik; Fantini, Sergio; Miller, Eric L.
2011-01-01
We explore the development and performance of algorithms for hyperspectral diffuse optical tomography (DOT) for which data from hundreds of wavelengths are collected and used to determine the concentration distribution of chromophores in the medium under investigation. An efficient method is detailed for forming the images using iterative algorithms applied to a linearized Born approximation model assuming the scattering coefficient is spatially constant and known. The L-surface framework is employed to select optimal regularization parameters for the inverse problem. We report image reconstructions using 126 wavelengths with estimation error in simulations as low as 0.05 and mean square error of experimental data of 0.18 and 0.29 for ink and dye concentrations, respectively, an improvement over reconstructions using fewer specifically chosen wavelengths. PMID:21483616
Deep Reconstruction Models for Image Set Classification.
Hayat, Munawar; Bennamoun, Mohammed; An, Senjian
2015-04-01
Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms existing state-of-the-art methods. PMID:26353289
HYPR: constrained reconstruction for enhanced SNR in dynamic medical imaging
NASA Astrophysics Data System (ADS)
Mistretta, C.; Wieben, O.; Velikina, J.; Wu, Y.; Johnson, K.; Korosec, F.; Unal, O.; Chen, G.; Fain, S.; Christian, B.; Nalcioglu, O.; Kruger, R. A.; Block, W.; Samsonov, A.; Speidel, M.; Van Lysel, M.; Rowley, H.; Supanich, M.; Turski, P.; Wu, Yan; Holmes, J.; Kecskemeti, S.; Moran, C.; O'Halloran, R.; Keith, L.; Alexander, A.; Brodsky, E.; Lee, J. E.; Hall, T.; Zagzebski, J.
2008-03-01
During the last eight years our group has developed radial acquisitions with angular undersampling factors of several hundred that accelerate MRI in selected applications. As with all previous acceleration techniques, SNR typically falls at least as fast as the inverse square root of the undersampling factor. This limits the SNR available to support the small voxels that these methods can image over short time intervals in applications like time-resolved contrast-enhanced MR angiography (CE-MRA). Instead of processing each time interval independently, we have developed constrained reconstruction methods that exploit the significant correlation between temporal sampling points. A broad class of methods, termed HighlY Constrained Back PRojection (HYPR), generalizes this concept to other modalities and sampling dimensions.
Shih, Cheng-Ting; Chang, Yuan-Jen; Hsu, Jui-Ting; Chuang, Keh-Shih; Chang, Shu-Jun; Wu, Jay
2015-12-01
Optical computed tomography (optical CT) has been proven to be a useful tool for dose readouts of polymer gel dosimeters. In this study, the algebraic reconstruction technique (ART) for image reconstruction of gel dosimeters was used to improve the image quality of optical CT. Cylindrical phantoms filled with N-isopropyl-acrylamide polymer gels were irradiated using a medical linear accelerator. A circular dose distribution and a hexagonal dose distribution were produced by applying the VMAT technique and the six-field dose delivery, respectively. The phantoms were scanned using optical CT, and the images were reconstructed using the filtered back-projection (FBP) algorithm and the ART. For the circular dose distribution, the ART successfully reduced the ring artifacts and noise in the reconstructed image. For the hexagonal dose distribution, the ART reduced the hot spots at the entrances of the beams and increased the dose uniformity in the central region. Within 50% isodose line, the gamma pass rates for the 2 mm/3% criteria for the ART and FBP were 99.2% and 88.1%, respectively. The ART could be used for the reconstruction of optical CT images to improve image quality and provide accurate dose conversion for polymer gel dosimeters. PMID:26165178
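The ART used above is, at its core, Kaczmarz's method: cycle through the measurement equations and project the current image estimate onto each hyperplane in turn. A minimal hedged sketch on a toy dense linear system (not actual optical-CT projection data; sizes and the relaxation value are invented):

```python
import numpy as np

# ART (Kaczmarz) sketch: for each row equation a_i . x = b_i, project the
# current estimate onto that hyperplane. A relaxation factor below 1 is
# common for noisy data; exact, noise-free data is used here.
rng = np.random.default_rng(3)
m, n = 120, 64
A = rng.standard_normal((m, n))
x_true = rng.random(n)
b = A @ x_true                                   # consistent data

x = np.zeros(n)
relax = 1.0
for sweep in range(200):
    for i in range(m):
        ai = A[i]
        x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
```

Compared with FBP, this row-action structure makes it easy to fold in constraints and irregular sampling, which is why ART suits the ring-artifact-prone optical-CT geometry described above.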
Tomographic image reconstruction using systolic array algorithms
Azevedo, S.G.; DeGroot, A.J.; Schneberk, D.J.; Brase, J.M.; Martz, H.E.; Jain, A.K.; Current, K.W.; Hurst, P.J.
1988-12-22
Image reconstruction for Computed Tomography (CT) is a time consuming operation on current uniprocessor computers and even on array processors. This is particularly true for three-dimensional data sets or for limited-data reconstructions requiring iterative procedures. In these cases, the projection operation (Radon transform) and its inverse (filtered back-projection) are major computational tasks that are performed many times. Multiprocessor computers, especially in systolic array configurations, can provide dramatic improvements in reconstruction times at reasonable costs. An in-house systolic processor, called SPRINT, has been programmed to demonstrate these improved speeds while achieving near 100% efficiency of all processor elements. We report on these results in this paper. In addition, two proposed hardware implementations of a new architecture are shown to have even greater speedup possibilities. One, using standard DSP chips, has been simulated to give a factor of three improvement over SPRINT, while the other, using custom VLSI that is now in the early stages of design, could potentially perform 512² reconstructions at video rates (100 times further speedup). These processors are also interconnected in a systolic array configuration. Experimental and projected results, with future plans, are also reported in this paper. 11 refs., 5 figs., 1 tab.
A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.
Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst
2015-01-01
Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested to reconstruct an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
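The l1-regularized least-squares formulation above can be solved, in its simplest form, by iterative soft thresholding (ISTA). The sketch below is hypothetical: a random matrix stands in for the acoustic forward model, the reflector positions and amplitudes are invented, and the authors' actual solver may differ.

```python
import numpy as np

# ISTA for min_x 0.5*||Hx - y||^2 + lam*||x||_1: a gradient step on the
# quadratic term followed by componentwise soft thresholding.
rng = np.random.default_rng(4)
m, n = 60, 128
H = rng.standard_normal((m, n))                  # stand-in for the acoustic model
x_true = np.zeros(n)
x_true[[30, 70, 100]] = [1.0, -0.5, 0.8]         # sparse point-like reflectors
y = H @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(H, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for k in range(500):
    z = x - (H.T @ (H @ x - y)) / L              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
```

The l1 penalty drives most coefficients exactly to zero, which is what yields the sharp point-reflector resolution reported in the abstract.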
Highly accelerated cardiac MRI using iterative SENSE reconstruction: initial clinical experience.
Allen, Bradley D; Carr, Maria; Botelho, Marcos P F; Rahsepar, Amir Ali; Markl, Michael; Zenge, Michael O; Schmidt, Michaela; Nadar, Mariappan S; Spottiswoode, Bruce; Collins, Jeremy D; Carr, James C
2016-06-01
To evaluate the qualitative and quantitative performance of an accelerated cardiovascular MRI (CMR) protocol that features iterative SENSE reconstruction and spatio-temporal L1-regularization (IS SENSE). Twenty consecutively recruited patients and 9 healthy volunteers were included. 2D steady state free precession cine images including 3-chamber, 4-chamber, and short axis slices were acquired using standard parallel imaging (GRAPPA, acceleration factor = 2), spatio-temporal undersampled TSENSE (acceleration factor = 4), and IS SENSE techniques (acceleration factor = 4). Acquisition times, quantitative cardiac functional parameters, wall motion abnormalities (WMA), and qualitative performance (scale: 1-poor to 5-excellent for overall image quality, noise, and artifact) were compared. Breath-hold times for IS SENSE (3.0 ± 0.6 s) and TSENSE (3.3 ± 0.6) were both reduced relative to GRAPPA (8.4 ± 1.7 s, p < 0.001). No difference in quantitative cardiac function was present between the three techniques (p = 0.89 for ejection fraction). GRAPPA and IS SENSE had similar image quality (4.7 ± 0.4 vs. 4.5 ± 0.6, p = 0.09), while both techniques were superior to TSENSE (quality: 4.1 ± 0.7, p < 0.001). GRAPPA WMA agreement with IS SENSE was good (κ > 0.60, p < 0.001), while agreement with TSENSE was poor (κ < 0.40, p < 0.001). IS SENSE is a viable clinical CMR acceleration approach to reduce acquisition times while maintaining satisfactory qualitative and quantitative performance. PMID:26894256
Accelerator Test of an Imaging Calorimeter
NASA Technical Reports Server (NTRS)
Christl, Mark J.; Adams, James H., Jr.; Binns, R. W.; Derrickson, J. H.; Fountain, W. F.; Howell, L. W.; Gregory, J. C.; Hink, P. L.; Israel, M. H.; Kippen, R. M.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The Imaging Calorimeter for ACCESS (ICA) utilizes a thin sampling calorimeter concept for direct measurements of high-energy cosmic rays. The ICA design uses arrays of small scintillating fibers to measure the energy and trajectory of the produced cascades. A test instrument has been developed to study the performance of this concept at accelerator energies and for comparison with simulations. Two test exposures have been completed using a CERN test beam. Some results from the accelerator tests are presented.
Bayesian Image Reconstruction in Quantitative Photoacoustic Tomography.
Tarvainen, Tanja; Pulkkinen, Aki; Cox, Ben; Kaipio, Jari; Arridge, Simon
2013-08-30
Quantitative photoacoustic tomography is an emerging imaging technique aimed at estimating chromophore concentrations inside tissues from photoacoustic images, which are formed by combining optical information and ultrasonic propagation. This is a hybrid imaging problem in which the solution of one inverse problem acts as the data for another ill-posed inverse problem. In the optical reconstruction of quantitative photoacoustic tomography, the data is obtained as a solution of an acoustic inverse initial value problem. Thus, both the data and the noise are affected by the method applied to solve the acoustic inverse problem. In this paper, the noise of optical data is modelled as Gaussian distributed with mean and covariance approximated by solving several acoustic inverse initial value problems using acoustic noise samples as data. Furthermore, Bayesian approximation error modelling is applied to compensate for the modelling errors in the optical data caused by the acoustic solver. The results show that modelling of the noise statistics and the approximation errors can improve the optical reconstructions. PMID:24001987
NASA Astrophysics Data System (ADS)
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
Wavelet-based stereo images reconstruction using depth images
NASA Astrophysics Data System (ADS)
Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried
2007-09-01
It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth image based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about distance from camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of the DIBR is that depth maps can be coded more efficiently than two streams corresponding to left and right view of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for the transmission of 3D TV. This technique can also be applied for other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate
Multiscale likelihood analysis and image reconstruction
NASA Astrophysics Data System (ADS)
Willett, Rebecca M.; Nowak, Robert D.
2003-11-01
The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.
Pulsed holography for combustion diagnostics. [image reconstruction
NASA Technical Reports Server (NTRS)
Klein, N.; Dewilde, M. A.
1980-01-01
Image reconstruction and data extraction techniques were considered with respect to their application to combustion diagnostics. A system was designed and constructed that possesses sufficient stability and resolution to make quantitative data extraction possible. Example data were manually processed using the system to demonstrate its feasibility for the purpose intended. The system was interfaced with a PDP-11/04 computer for maximum design capability. It was concluded that the use of specialized digital hardware controlled by a relatively small computer provides the best combination of accuracy, speed, and versatility for this particular problem area.
Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu
2015-07-21
Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy. PMID:26110788
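The computational advantage of an orthogonal dictionary mentioned above can be made concrete: when D satisfies DᵀD = I, the otherwise NP-hard synthesis sparse coding step has a closed-form solution by simple thresholding of the analysis coefficients. A minimal sketch (illustrative names, not the authors' implementation):

```python
import numpy as np

def sparse_code_orthogonal(patches, D, k):
    """Exact k-sparse coding under an orthogonal dictionary (sketch).

    With D orthogonal (D.T @ D = I), minimizing ||x - D a||^2 subject to
    a being k-sparse reduces to computing coefficients D.T @ x and
    keeping the k largest-magnitude entries in each column.
    """
    coeffs = D.T @ patches  # analysis coefficients; exact since D is orthogonal
    # indices of all but the k largest-magnitude coefficients per column
    idx = np.argsort(np.abs(coeffs), axis=0)[:-k, :]
    sparse = coeffs.copy()
    np.put_along_axis(sparse, idx, 0.0, axis=0)  # zero the small entries
    return sparse

def reconstruct(sparse, D):
    """Synthesize patches from their sparse codes."""
    return D @ sparse
```

This closed form is what lets the reconstruction alternate cheaply between coefficient updates, dictionary updates, and k-space data updates, instead of running an expensive greedy pursuit as in K-SVD.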
Regularized image reconstruction for continuously self-imaging gratings.
Horisaki, Ryoichi; Piponnier, Martin; Druart, Guillaume; Guérineau, Nicolas; Primot, Jérôme; Goudail, François; Taboury, Jean; Tanida, Jun
2013-06-01
In this paper, we demonstrate two image reconstruction schemes for continuously self-imaging gratings (CSIGs). CSIGs are diffractive optical elements that generate a depth-invariant propagation pattern and sample objects with a sparse spatial frequency spectrum. To compensate for the sparse sampling, we apply two methods with different regularizations for CSIG imaging. The first method employs continuity of the spatial frequency spectrum, and the second one uses sparsity of the intensity pattern. The two methods are demonstrated with simulations and experiments. PMID:23736336
Accelerated image processing on FPGAs.
Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica
2003-01-01
The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709
Acceleration of iterative image restoration algorithms.
Biggs, D S; Andrews, M
1997-03-10
A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
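The extrapolation idea can be sketched as follows; this is a simplified rendering of a Biggs-Andrews-style accelerated Richardson-Lucy loop, with our own bookkeeping and illustrative clipping constants, not the paper's exact algorithm:

```python
import numpy as np

def rl_accelerated(blurred, psf_conv, psf_corr, n_iter=20):
    """Richardson-Lucy deconvolution with vector-extrapolation acceleration
    (sketch in the spirit of Biggs-Andrews; names are illustrative).

    psf_conv(x) convolves x with the PSF; psf_corr(x) correlates x with it
    (convolution with the flipped PSF). The prediction
    y = x + alpha * (x - x_prev) extrapolates along the recent update
    direction; alpha is estimated from the correlation of successive
    correction vectors and clipped to [0, 1) for stability.
    """
    x = np.full_like(blurred, float(blurred.mean()))
    x_prev = x.copy()
    g1 = np.zeros_like(x)  # most recent RL correction
    g2 = np.zeros_like(x)  # correction before that
    for _ in range(n_iter):
        denom = float(np.sum(g2 * g2))
        alpha = min(max(float(np.sum(g1 * g2)) / denom, 0.0), 0.999) if denom > 0 else 0.0
        y = np.maximum(x + alpha * (x - x_prev), 1e-12)  # extrapolated prediction
        ratio = blurred / np.maximum(psf_conv(y), 1e-12)
        x_prev, x = x, y * psf_corr(ratio)               # RL update at the prediction
        g1, g2 = x - y, g1                               # shift correction history
    return x
```

Because alpha is computed from quantities the iteration already produces, the per-iteration overhead is negligible, which is the point of the method.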
Prior image constrained image reconstruction in emerging computed tomography applications
NASA Astrophysics Data System (ADS)
Brunner, Stephen T.
Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost. PMID:26390452
Fluorescence molecular tomographic image reconstruction based on reduced measurement data
NASA Astrophysics Data System (ADS)
Zou, Wei; Wang, Jiajun; Feng, David Dagan; Fang, Erxi
2015-07-01
The analysis of fluorescence molecular tomography is important for medical diagnosis and treatment. Although the quality of reconstructed results can be improved with the increasing number of measurement data, the scale of the matrices involved in the reconstruction of fluorescence molecular tomography will also become larger, which may slow down the reconstruction process. A new method is proposed where measurement data are reduced according to the rows of the Jacobian matrix and the projection residual error. To further accelerate the reconstruction process, the global inverse problem is solved with level-by-level Schur complement decomposition. Simulation results demonstrate that the speed of the reconstruction process can be improved with the proposed algorithm.
Image reconstruction of IRAS survey scans
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. Romke
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density, and composition) can be made directly from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-06-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they can be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to the widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
Iterative feature refinement for accurate undersampled MR image reconstruction.
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CS-MRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches. PMID:27032527
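The alternation between a sparsity-promoting denoising step and consistency with the acquired k-space samples, on which CS-MRI methods of this kind are built, can be sketched as follows. This is an illustrative iterative-soft-thresholding skeleton with a simple image-domain sparsity model, not the authors' IFR-CS algorithm:

```python
import numpy as np

def cs_mri_iterate(kspace, mask, n_iter=30, lam=0.05):
    """Skeleton of an iterative CS-MRI reconstruction (illustrative).

    kspace holds the acquired samples (zeros where unmeasured) and mask
    is a boolean array marking the measured locations. Each iteration
    (1) soft-thresholds the image magnitude as a stand-in for a
    sparsity-promoting denoiser, then (2) restores the measured k-space
    samples to enforce data consistency.
    """
    x = np.fft.ifft2(kspace)  # zero-filled initial estimate
    for _ in range(n_iter):
        # 1) sparsity-promoting step: complex soft-thresholding
        mag = np.abs(x)
        x = x * (np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12))
        # 2) data consistency: put back the acquired samples
        k = np.fft.fft2(x)
        k[mask] = kspace[mask]
        x = np.fft.ifft2(k)
    return x
```

Methods such as IFR-CS replace step (1) with a stronger prior (fixed transforms plus a feature-refinement stage) but keep this alternating structure, so the data-consistency step guarantees the output agrees exactly with the measured samples.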
Imaging, Reconstruction, And Display Of Corneal Topography
NASA Astrophysics Data System (ADS)
Klyce, Stephen D.; Wilson, Steven E.
1989-12-01
The cornea is the major refractive element in the eye; even minor surface distortions can produce a significant reduction in visual acuity. Standard clinical methods used to evaluate corneal shape include keratometry, which assumes the cornea is ellipsoidal in shape, and photokeratoscopy, which images a series of concentric light rings on the corneal surface. These methods fail to document many of the corneal distortions that can degrade visual acuity. Algorithms have been developed to reconstruct the three dimensional shape of the cornea from keratoscope images, and to present these data in the clinically useful display of color-coded contour maps of corneal surface power. This approach has been implemented on a new generation video keratoscope system (Computed Anatomy, Inc.) with rapid automatic digitization of the image rings by a rule-based approach. The system has found clinical use in the early diagnosis of corneal shape anomalies such as keratoconus and contact lens-induced corneal warpage, in the evaluation of cataract and corneal transplant procedures, and in the assessment of corneal refractive surgical procedures. Currently, ray tracing techniques are being used to correlate corneal surface topography with potential visual acuity in an effort to more fully understand the tolerances of corneal shape consistent with good vision and to help determine the site of dysfunction in the visually impaired.
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
Total variation minimization-based multimodality medical image reconstruction
NASA Astrophysics Data System (ADS)
Cui, Xuelin; Yu, Hengyong; Wang, Ge; Mili, Lamine
2014-09-01
Since its recent inception, simultaneous image reconstruction for multimodality fusion has received a great deal of attention due to its superior imaging performance. On the other hand, compressed sensing (CS)-based image reconstruction methods have undergone rapid development because of their ability to significantly reduce the amount of raw data. In this work, we combine computed tomography (CT) and magnetic resonance imaging (MRI) into a single CS-based reconstruction framework. From a theoretical viewpoint, CS-based reconstruction methods require prior sparsity knowledge to perform reconstruction. In addition to the conventional data fidelity term, the multimodality imaging information is utilized to improve the reconstruction quality. The prior information in this context is that most medical images can be approximated by a piecewise constant model, so that the discrete gradient transform (DGT), whose l1 norm is the total variation (TV), can serve as a sparse representation. More importantly, multimodality images of the same object must share structural similarity, which can be captured by the DGT. The prior information on similar distributions of the sparse DGTs is employed to improve the CT and MRI image quality synergistically for a CT-MRI scanner platform. Numerical simulation with undersampled CT and MRI datasets is conducted to demonstrate the merits of the proposed hybrid image reconstruction approach. Our preliminary results confirm that the proposed method outperforms the conventional CT and MRI reconstructions when they are applied separately.
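The DGT and its TV norm mentioned above can be written down directly; a minimal sketch using forward differences (the boundary handling chosen here, replicating the last row/column, is one of several common conventions):

```python
import numpy as np

def discrete_gradient(u):
    """Forward-difference discrete gradient transform (DGT) of a 2D image.

    Returns horizontal and vertical difference images of the same shape
    as u; the last column/row is replicated so the differences there
    are zero.
    """
    gx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    gy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return gx, gy

def total_variation(u):
    """Isotropic TV: the l1 norm of the pointwise gradient magnitude."""
    gx, gy = discrete_gradient(u)
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))
```

For a piecewise constant image the gradient is nonzero only on edges, so the DGT is sparse, which is exactly the property the reconstruction framework exploits as a regularizer.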
Block-based reconstructions for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Correa, Claudia V.; Arguello, Henry; Arce, Gonzalo R.
2013-05-01
Coded Aperture Snapshot Spectral Imaging system (CASSI) captures spectral information of a scene using a reduced amount of focal plane array (FPA) projections. These projections are highly structured and localized such that each measurement contains information of a small portion of the data cube. Compressed sensing reconstruction algorithms are then used to recover the underlying 3-dimensional (3D) scene. The computational burden to recover a hyperspectral scene in CASSI is overwhelming for some applications such that reconstructions can take hours in desktop architectures. This paper presents a new method to reconstruct a hyperspectral signal from its compressive measurements using several overlapped block reconstructions. This approach exploits the structure of the CASSI sensing matrix to separately reconstruct overlapped regions of the 3D scene. The resultant reconstructions are then assembled to obtain the full recovered data cube. Typically, block-processing causes undesired artifacts in the recovered signal. Vertical and horizontal overlaps between adjacent blocks are then used to avoid these artifacts and increase the quality of reconstructed images. The reconstruction time and the quality of the reconstructed images are calculated as a function of the block-size and the amount of overlapped regions. Simulations show that the quality of the reconstructions is increased up to 6 dB and the reconstruction time is reduced up to 4 times when using block-based reconstruction instead of full data cube recovery at once. The proposed method is suitable for multi-processor architectures in which each core recovers one block at a time.
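The assembly step, in which separately reconstructed overlapped blocks are averaged to suppress blocking artifacts, can be sketched as follows (the CASSI-specific sensing model and per-block CS recovery are omitted; names are illustrative):

```python
import numpy as np

def assemble_overlapping_blocks(blocks, positions, shape, block_size):
    """Re-assemble overlapping square block reconstructions into a full image.

    blocks is a list of (block_size x block_size) arrays and positions the
    list of their top-left (row, col) corners. Overlapped regions are
    averaged, which smooths the seams between independently reconstructed
    blocks.
    """
    out = np.zeros(shape)
    weight = np.zeros(shape)
    b = block_size
    for blk, (r, c) in zip(blocks, positions):
        out[r:r + b, c:c + b] += blk       # accumulate block contributions
        weight[r:r + b, c:c + b] += 1.0    # count how many blocks cover a pixel
    return out / np.maximum(weight, 1.0)   # average; guard uncovered pixels
```

Because each block is recovered independently, this step is what makes the scheme naturally parallel: each core reconstructs one block and only the cheap accumulation above is serial.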
Tree STEM Reconstruction Using Vertical Fisheye Images: a Preliminary Study
NASA Astrophysics Data System (ADS)
Berveglieri, A.; Tommaselli, A. M. G.
2016-06-01
A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images and with the advantage of using fewer images and taken from one single station.
NASA Astrophysics Data System (ADS)
Plotnikov, Illya; Vourlidas, Angelos; Tylka, Allan J.; Pinto, Rui; Rouillard, Alexis; Tirole, Margot
2016-07-01
Identifying the physical mechanisms that produce the most energetic particles is a long-standing observational and theoretical challenge in astrophysics. Strong pressure waves have been proposed as efficient accelerators in both the solar and astrophysical contexts via various mechanisms such as diffusive-shock/shock-drift acceleration and betatron effects. In diffusive-shock acceleration, the efficacy of the process relies on shock waves being super-critical, i.e. moving several times faster than the characteristic speed of the medium they propagate through (a high Alfvén Mach number), and on the orientation of the magnetic field upstream of the shock front. High-cadence, multipoint imaging using the NASA STEREO, SOHO and SDO spacecraft now permits the 3-D reconstruction of pressure waves formed during the eruption of coronal mass ejections. Using these unprecedented capabilities, some recent studies have provided new insights into the timing and longitudinal extent of solar energetic particles, including the first derivations of the time-dependent 3-dimensional distribution of the expansion speed and Mach numbers of coronal shock waves. We will review these recent developments by focusing on particle events that occurred between 2011 and 2015. These new techniques also provide the opportunity to investigate the enigmatic long-duration gamma-ray events.
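The shock-criticality condition above involves the Alfvén Mach number, the ratio of the shock speed to the local Alfvén speed v_A = B/√(μ₀ρ). A minimal sketch in SI units; the coronal values used in the example are typical order-of-magnitude numbers of ours, not from the paper:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, SI units

def alfven_mach(v_shock, B, rho):
    """Alfvén Mach number of a shock (sketch).

    v_shock: shock speed [m/s]; B: magnetic field magnitude [T];
    rho: mass density [kg/m^3]. Efficient diffusive-shock acceleration
    requires this ratio to be well above unity (super-critical shock).
    """
    v_alfven = B / np.sqrt(MU0 * rho)
    return v_shock / v_alfven
```

For a field of 1 G (1e-4 T) and a proton density of 1e14 m^-3, v_A comes out to a few hundred km/s, so a 1000 km/s coronal shock is super-Alfvénic by a factor of several.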
Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging
NASA Astrophysics Data System (ADS)
Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke
2011-12-01
In this paper we present the results of experimental investigations using two important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary approach to the investigation of artworks which is noninvasive and nondestructive and can be applied in situ. Four major projects are discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan periods) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creation of a database of the elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging can complement the results of high-energy-based techniques.
Concurrent image and dose reconstruction for image guided radiation therapy
NASA Astrophysics Data System (ADS)
Sheng, Ke
Knowing the patient's actual position is essential for intensity-modulated radiation therapy (IMRT), a procedure that uses tightened margins and an escalated tumor dose. In order to eliminate geometric uncertainty in IMRT, daily imaging is preferred. The imaging dose, limited field of view, and imaging concurrency of MVCT (mega-voltage computerized tomography) are investigated in this work. By applying partial volume imaging (PVI), the imaging dose can be reduced for region-of-interest (ROI) imaging. The imaging dose and the image quality are quantitatively balanced with inverse imaging dose planning. With PVI, a 72% average imaging dose reduction was observed on a typical prostate patient case. The algebraic reconstruction technique (ART) based projection onto convex sets (POCS) shows higher robustness than filtered back projection when the available imaging data are not complete and continuous. However, when the projection is continuous as in the actual delivery, a non-iterative wavelet-based multiresolution local tomography (WMLT) is able to achieve 1% accuracy within the ROI. The reduction of imaging dose depends on the size of the ROI. The improvement of concurrency is also discussed based on the combination of PVI and WMLT. Useful target images were acquired with treatment beams, and the temporal resolution can be increased to 20 seconds in tomotherapy. The data truncation problem with the portal imager was also studied. Results show that the image quality is not adversely affected by truncation when WMLT is employed. When online imaging is available, a perturbation dose calculation (PDC) that estimates the actual delivered dose is proposed. Derived as a correction to Fano's theorem, PDC accounts for the first-order term in the density variation to calculate the internal and external anatomy change. Although the change in the dose distribution caused by internal organ motion is less than 1% for 6 MV beams, the external anatomy change has
Active catheter tracking using parallel MRI and real-time image reconstruction.
Bock, Michael; Müller, Sven; Zuehlsdorff, Sven; Speier, Peter; Fink, Christian; Hallscheidt, Peter; Umathum, Reiner; Semmler, Wolfhard
2006-06-01
In this work active MR catheter tracking with automatic slice alignment was combined with an autocalibrated parallel imaging technique. Using an optimized generalized autocalibrating partially parallel acquisitions (GRAPPA) algorithm with an acceleration factor of 2, we were able to reduce the acquisition time per image by 34%. To accelerate real-time GRAPPA image reconstruction, the coil sensitivities were updated only after slice reorientation. For a 2D trueFISP acquisition (160 x 256 matrix, 80% phase matrix, half Fourier acquisition, TR = 3.7 ms, GRAPPA factor = 2) real-time image reconstruction was achieved with up to six imaging coils. In a single animal experiment the method was used to steer a catheter from the vena cava through the beating heart into the pulmonary vasculature at an image update rate of about five images per second. Under all slice orientations, parallel image reconstruction was accomplished with only minor image artifacts, and the increased temporal resolution provided a sharp delineation of intracardial structures, such as the papillary muscle. PMID:16683261
Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan
2016-02-01
Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.
Synergistic image reconstruction for hybrid ultrasound and photoacoustic computed tomography
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Wang, Kun; Wang, Lihong V.; Anastasio, Mark A.
2015-03-01
Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Consequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.
Numerical modelling and image reconstruction in diffuse optical tomography
Dehghani, Hamid; Srinivasan, Subhadra; Pogue, Brian W.; Gibson, Adam
2009-01-01
The development of diffuse optical tomography as a functional imaging modality has relied largely on the use of model-based image reconstruction. The recovery of optical parameters from boundary measurements of light propagation within tissue is an inherently difficult problem, because it is nonlinear, ill-posed and ill-conditioned. Additionally, although the measured near-infrared signals of light transmission through tissue provide high imaging contrast, the reconstructed images suffer from poor spatial resolution due to the diffuse propagation of light in biological tissue. The application of model-based image reconstruction is reviewed in this paper, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data. A comprehensive review and details of the basis for using spatial and structural prior information are also discussed, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution. PMID:19581256
Reconstruction of biofilm images: combining local and global structural parameters.
Resat, Haluk; Renslow, Ryan S; Beyenal, Haluk
2014-10-01
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process. PMID:25377487
Undersampled MR Image Reconstruction with Data-Driven Tight Frame
Liu, Jianbo; Wang, Shanshan; Peng, Xi; Liang, Dong
2015-01-01
Undersampled magnetic resonance image reconstruction employing sparsity regularization has attracted many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack the adaptability to capture structure information or suffer from a high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of a data-driven tight frame, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared to two state-of-the-art methods on four datasets and encouraging performance has been achieved by DDTF-MRI. PMID:26199641
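The kind of sparsity-regularized reconstruction that DDTF-MRI builds on can be illustrated with a minimal iterative soft-thresholding (ISTA) loop. Here the image itself is treated as sparse, a deliberate simplification standing in for sparsity under a learned tight frame; all names and parameter values are assumptions, not the paper's two-level Bregman algorithm.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding, the proximal map of tau*||.||_1 (works for complex x)."""
    mag = np.abs(x)
    return x * np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)

def ista_mri(kspace, mask, tau=0.01, n_iter=50):
    """Recover an image assumed sparse in the image domain itself (a toy
    stand-in for sparsity under a learned tight frame) from undersampled
    k-space data: gradient step on the data term, then soft-threshold."""
    x = np.zeros_like(kspace)
    for _ in range(n_iter):
        # gradient step on 0.5*||mask*F(x) - kspace||^2 (F unitary with norm="ortho")
        resid = mask * np.fft.fft2(x, norm="ortho") - kspace
        x = x - np.fft.ifft2(mask * resid, norm="ortho")
        x = soft_threshold(x, tau)   # proximal (sparsity-enforcing) step
    return x
```

Replacing the identity sparsifying transform with an adaptively learned tight frame, and the ISTA loop with Bregman iterations, recovers the general shape of the DDTF-MRI scheme.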
Research on THz CT system and image reconstruction algorithm
NASA Astrophysics Data System (ADS)
Li, Ming-liang; Wang, Cong; Cheng, Hong
2009-07-01
Terahertz computed tomography (THz CT) offers not only high spatial and density resolution without image overlap, but also data that can be used directly for digital processing and spectral analysis, making it a good choice for parameter detection in process control. However, diffraction and scattering of the THz wave can blur or distort the reconstructed image, so an effective reconstruction method is needed to build a THz CT model. Because of the high hardware cost, a fan-beam THz CT industrial detection system scanning model, consisting of 8 emitters and 32 receivers, was established based on a study of infrared CT technology. The model contains control and interface, data collection, and image reconstruction sub-systems. After analyzing all the functional sub-modules, images are reconstructed with an algebraic reconstruction algorithm. The experimental results show it to be an effective, efficient algorithm with high resolution, performing even better than the back-projection method.
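The algebraic reconstruction algorithm referred to above can be sketched as cyclic Kaczmarz sweeps over a linear system W x = p. The tiny 4-pixel system below is purely illustrative, not the 8-emitter/32-receiver geometry of the described scanner.

```python
import numpy as np

def art(W, p, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (cyclic Kaczmarz sweeps) for W x = p.

    W : (n_rays, n_pixels) system matrix of ray/pixel intersection weights
    p : measured projection values
    """
    x = np.zeros(W.shape[1])
    row_norms = np.sum(W * W, axis=1)
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            if row_norms[i] == 0:
                continue
            # project the current estimate onto the hyperplane of ray i
            x += relax * (p[i] - W[i] @ x) / row_norms[i] * W[i]
    return x

# Tiny consistent system standing in for a 2x2 "image" probed by 4 rays
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0, 4.0])
p = W @ x_true
x_rec = art(W, p, n_sweeps=200)
```

Started from zero on a consistent system, the sweeps converge to the minimum-norm solution; the relaxation factor trades convergence speed against noise sensitivity.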
Evaluation of back projection methods for breast tomosynthesis image reconstruction.
Zhou, Weihua; Lu, Jianping; Zhou, Otto; Chen, Ying
2015-06-01
Breast cancer is the most common cancer among women in the USA. Compared to mammography, digital breast tomosynthesis is a new imaging technique that may improve the diagnostic accuracy by removing the ambiguities of overlapped tissues and providing 3D information of the breast. Tomosynthesis reconstruction algorithms generate 3D reconstructed slices from a few limited angle projection images. Among different reconstruction algorithms, back projection (BP) is considered an important foundation of quite a few reconstruction techniques with deblurring algorithms such as filtered back projection. In this paper, two BP variants, including α-trimmed BP and principal component analysis-based BP, were proposed to improve the image quality against that of traditional BP. Computer simulations and phantom studies demonstrated that the α-trimmed BP may improve signal response performance and suppress noise in breast tomosynthesis image reconstruction. PMID:25384538
Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)
NASA Technical Reports Server (NTRS)
Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy
2012-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approximately 70 km swath width with approximately 3 km spatial resolution. From this, ocean surface wind speed and column averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.
On multigrid methods for image reconstruction from projections
Henson, V.E.; Robinson, B.T.; Limber, M.
1994-12-31
The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L^1 → R^N. The image reconstruction problem is: given a vector b ∈ R^N, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φ_j that span a particular subspace Ω ⊂ L^1, and model R : Ω → R^N. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L^1. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φ_j are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
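A plain Gauss-Seidel iteration for B w = b, the smoother whose stalling behaviour the authors analyze, can be sketched as follows. The diagonally dominant test matrix is an assumption for illustration, not one of the strip-discretization matrices studied in the paper.

```python
import numpy as np

def gauss_seidel(B, b, w0=None, n_iter=100):
    """Plain Gauss-Seidel sweeps for B w = b (B square, nonzero diagonal)."""
    n = B.shape[0]
    w = np.zeros(n) if w0 is None else w0.copy()
    for _ in range(n_iter):
        for i in range(n):
            # update w[i] using the newest available values of the other components
            w[i] = (b[i] - B[i, :i] @ w[:i] - B[i, i+1:] @ w[i+1:]) / B[i, i]
    return w

# Diagonally dominant test system, for which Gauss-Seidel converges
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([6.0, 10.0, 17.0])
w = gauss_seidel(B, b)
```

On the reconstruction matrices of the abstract, error components in the near null space are barely damped by such sweeps, which is exactly what the multilevel scheme is designed to correct.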
Chord-based image reconstruction from clinical projection data
NASA Astrophysics Data System (ADS)
King, Martin; Xia, Dan; Pan, Xiaochuan; Vannier, Michael; Köhler, Thomas; La Riviére, Patrick; Sidky, Emil; Giger, Maryellen
2008-03-01
Chord-based algorithms can eliminate cone-beam artifacts in images reconstructed from a clinical computed tomography (CT) scanner. The feasibility of using chord-based reconstruction algorithms was evaluated with three clinical CT projection data sets. The first projection data set was acquired using a clinical 64-channel CT scanner (Philips Brilliance 64) that consisted of an axial scan from a quality assurance phantom. Images were reconstructed using (1) a full-scan FDK algorithm, (2) a short-scan FDK algorithm, and (3) the chord-based backprojection filtration algorithm (BPF) using full-scan data. The BPF algorithm was capable of reproducing the morphology of the phantom quite well, but exhibited significantly less noise than the two FDK reconstructions as well as the reconstruction obtained from the clinical scanner. The second and third data sets were obtained from scans of a head phantom and a patient's thorax. For both of these data sets, the BPF reconstructions were comparable to the short-scan FDK reconstructions in terms of image quality, although sharper features were indistinct in the BPF reconstructions. This research demonstrates the feasibility of chord-based algorithms for reconstructing images from clinical CT projection data sets and provides a framework for implementing and testing algorithmic innovations.
Super-Resolution Image Reconstruction Using Diffuse Source Models
Ellis, Michael A.; Viola, Francesco; Walker, William F.
2010-01-01
Image reconstruction is central to many scientific fields, from medical ultrasound and sonar to computed tomography and computer vision. While lenses play a critical reconstruction role in these fields, digital sensors enable more sophisticated computational approaches. A variety of computational methods have thus been developed, with the common goal of increasing contrast and resolution to extract the greatest possible information from raw data. This paper describes a new image reconstruction method named the Diffuse Time-domain Optimized Near-field Estimator (dTONE). dTONE represents each hypothetical target in the system model as a diffuse region of targets rather than a single discrete target, which more accurately represents the experimental data that arise from signal sources in continuous space, with no additional computational requirements at the time of image reconstruction. Simulation and experimental ultrasound images of animal tissues show that dTONE achieves image resolution and contrast far superior to those of conventional image reconstruction methods. We also demonstrate the increased robustness of the diffuse target model to major sources of image degradation, through the addition of electronic noise, phase aberration, and magnitude aberration to ultrasound simulations. Using experimental ultrasound data from a tissue-mimicking phantom containing a 3 mm diameter anechoic cyst, the conventionally reconstructed image has a cystic contrast of −6.3 dB whereas the dTONE image has a cystic contrast of −14.4 dB. PMID:20447760
Quantitative image quality evaluation for cardiac CT reconstructions
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.
2016-03-01
Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary type phantom synchronized with an ECG signal was used. Plaques of three different percentages embedded in a 3 mm vessel phantom were imaged multiple times under motion-free, 60 bpm, and 80 bpm heart rates. Static (motion-free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.
Iterative image reconstruction techniques: cardiothoracic computed tomography applications.
Cho, Young Jun; Schoepf, U Joseph; Silverman, Justin R; Krazinski, Aleksander W; Canstein, Christian; Deak, Zsuzsanna; Grimm, Jochen; Geyer, Lucas L
2014-07-01
Iterative image reconstruction algorithms provide significant improvements over traditional filtered back projection in computed tomography (CT). Clinically available through recent advances in modern CT technology, iterative reconstruction enhances image quality through cyclical image calculation, suppressing image noise and artifacts, particularly blooming artifacts. The advantages of iterative reconstruction are apparent in traditionally challenging cases-for example, in obese patients, those with significant artery calcification, or those with coronary artery stents. In addition, as clinical use of CT has grown, so have concerns over ionizing radiation associated with CT examinations. Through noise reduction, iterative reconstruction has been shown to permit radiation dose reduction while preserving diagnostic image quality. This approach is becoming increasingly attractive as the routine use of CT for pediatric and repeated follow-up evaluation grows ever more common. Cardiovascular CT in particular, with its focus on detailed structural and functional analyses, stands to benefit greatly from the promising iterative solutions that are readily available. PMID:24662334
Takata, Tadanori; Ichikawa, Katsuhiro; Hayashi, Hiroyuki; Mitsui, Wataru; Sakuta, Keita; Koshida, Haruka; Yokoi, Tomohiro; Matsubara, Kousuke; Horii, Jyunsei; Iida, Hiroji
2012-01-01
The purpose of this study was to evaluate the image quality of an iterative reconstruction method, the iterative reconstruction in image space (IRIS), which was implemented in a 128-slice multi-detector computed tomography (MDCT) system, the Siemens Somatom Definition Flash (Definition). We evaluated image noise by standard deviation (SD), as in many previous studies, and in addition we measured the modulation transfer function (MTF), noise power spectrum (NPS), and perceptual low-contrast detectability, using a water phantom including a low-contrast object with a 10 Hounsfield unit (HU) contrast, to evaluate whether the noise reduction of IRIS was effective. The SD and NPS were measured from the images of a water phantom. The MTF was measured from images of a thin metal wire and a bar pattern phantom with a bar contrast of 125 HU. The NPS of IRIS was lower than that of FBP (filtered back projection) in the middle and high frequency regions. The SD values were reduced by 21%. The MTFs of IRIS and FBP measured with the wire phantom coincided precisely. However, for the bar pattern phantom, the MTF values of IRIS at 0.625 and 0.833 cycle/mm were lower than those of FBP. Despite the reduction of the SD and the NPS, the low-contrast detectability study indicated no significant difference between IRIS and FBP. From these results, it was demonstrated that IRIS reduced noise while exactly preserving high-contrast resolution and only slightly degrading middle-contrast resolution, and that it slightly improved low-contrast detectability, though without statistical significance. PMID:22516592
SART-Type Image Reconstruction from Overlapped Projections
Yu, Hengyong; Ji, Changguo; Wang, Ge
2011-01-01
To maximize the time-integrated X-ray flux from multiple X-ray sources and shorten the data acquisition process, a promising approach is to allow overlapped projections from multiple simultaneously active sources without resorting to source multiplexing technology. The most challenging task in this configuration is to perform image reconstruction effectively and efficiently from overlapped projections. Inspired by the single-source simultaneous algebraic reconstruction technique (SART), we hereby develop a multisource SART-type reconstruction algorithm, regularized by a sparsity-oriented constraint in the soft-threshold filtering framework, to reconstruct images from overlapped projections. Our numerical simulation results verify the correctness of the proposed algorithm and demonstrate the advantage of image reconstruction from overlapped projections. PMID:20871854
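The single-source building blocks named here, a SART update followed by soft-threshold filtering, can be sketched for a toy system. The 4-pixel matrix and all parameter values are illustrative assumptions, not the multisource algorithm of the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sart_st(W, p, n_iter=100, lam=1.0, tau=1e-4):
    """Single-source SART update followed by soft-threshold filtering.
    W must be nonnegative; lam is the relaxation factor."""
    row_sums = np.maximum(W.sum(axis=1), 1e-12)   # ray-wise weight sums
    col_sums = np.maximum(W.sum(axis=0), 1e-12)   # pixel-wise weight sums
    x = np.zeros(W.shape[1])
    for _ in range(n_iter):
        resid = (p - W @ x) / row_sums            # normalized projection residual
        x = x + lam * (W.T @ resid) / col_sums    # SART back-projection update
        x = soft_threshold(x, tau)                # sparsity-promoting filtering
    return x

# Illustrative 4-ray, 4-pixel system
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
p = W @ np.array([1.0, 2.0, 3.0, 4.0])
x_rec = sart_st(W, p, n_iter=400)
```

In the overlapped-projection setting, each row of W would instead sum weights across several simultaneously active sources, which is precisely what makes the multisource extension harder than this single-source sketch.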
Iterative Image Reconstruction for Limited-Angle CT Using Optimized Initial Image
Guo, Jingyu; Qi, Hongliang; Xu, Yuan; Chen, Zijia; Li, Shulong; Zhou, Linghong
2016-01-01
Limited-angle computed tomography (CT) is important in several clinical applications. Existing iterative reconstruction algorithms cannot reconstruct high-quality images from limited-angle data, producing severe artifacts near edges. Optimal selection of the initial image influences iterative reconstruction performance but has not yet been studied in depth. In this work, we propose to generate an optimized initial image, exploiting image symmetry, followed by total variation (TV) based iterative reconstruction. The reconstruction results on simulated and real data indicate that the proposed method effectively removes the artifacts near edges. PMID:27066107
Sparsity-constrained PET image reconstruction with learned dictionaries.
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging. PMID:27494441
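The iterative expectation maximization baseline mentioned here is the classical ML-EM multiplicative update for Poisson data. Below is a minimal sketch on a toy noiseless system; the matrix and count values are assumptions for illustration, not the paper's DL-MAP algorithm.

```python
import numpy as np

def ml_em(A, y, n_iter=100):
    """ML-EM for Poisson emission data y = A x, with nonnegative system matrix A:
    x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])                   # flat nonnegative initial image
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection, guarded against 0
        x = x / sens * (A.T @ (y / proj))     # EM multiplicative update
    return x

# Toy 3-ray, 2-pixel system with noiseless counts
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = A @ np.array([2.0, 3.0])
x_hat = ml_em(A, y)
```

With noisy counts this iteration eventually amplifies noise, which is the motivation stated in the abstract for adding a MAP prior such as the learned-dictionary term.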
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
Infrared Astronomical Satellite (IRAS) image reconstruction and restoration
NASA Technical Reports Server (NTRS)
Gonsalves, R. A.; Lyons, T. D.; Price, S. D.; Levan, P. D.; Aumann, H. H.
1987-01-01
IRAS sky mapping data is being reconstructed as images, and an entropy-based restoration algorithm is being applied in an attempt to improve spatial resolution in extended sources. Reconstruction requires interpolation of non-uniformly sampled data. Restoration is accomplished with an iterative algorithm which begins with an inverse filter solution and iterates on it with a weighted entropy-based spectral subtraction.
McClymont, Darryl; Teh, Irvin; Whittington, Hannah J.; Grau, Vicente
2015-01-01
Purpose Diffusion MRI requires acquisition of multiple diffusion‐weighted images, resulting in long scan times. Here, we investigate combining compressed sensing and a fast imaging sequence to dramatically reduce acquisition times in cardiac diffusion MRI. Methods Fully sampled and prospectively undersampled diffusion tensor imaging data were acquired in five rat hearts at acceleration factors of between two and six using a fast spin echo (FSE) sequence. Images were reconstructed using a compressed sensing framework, enforcing sparsity by means of decomposition by adaptive dictionaries. A tensor was fit to the reconstructed images and fiber tractography was performed. Results Acceleration factors of up to six were achieved, with a modest increase in root mean square error of mean apparent diffusion coefficient (ADC), fractional anisotropy (FA), and helix angle. At an acceleration factor of six, mean values of ADC and FA were within 2.5% and 5% of the ground truth, respectively. Marginal differences were observed in the fiber tracts. Conclusion We developed a new k‐space sampling strategy for acquiring prospectively undersampled diffusion‐weighted data, and validated a novel compressed sensing reconstruction algorithm based on adaptive dictionaries. The k‐space undersampling and FSE acquisition each reduced acquisition times by up to 6× and 8×, respectively, as compared to fully sampled spin echo imaging. Magn Reson Med 76:248–258, 2016. © 2015 Wiley Periodicals, Inc. PMID:26302363
Image reconstruction in transcranial photoacoustic computed tomography of the brain
NASA Astrophysics Data System (ADS)
Mitsuhashi, Kenji; Wang, Lihong V.; Anastasio, Mark A.
2015-03-01
Photoacoustic computed tomography (PACT) holds great promise for transcranial brain imaging. However, the strong reflection, scattering, attenuation, and mode-conversion of photoacoustic waves in the skull pose serious challenges to establishing the method. The lack of an appropriate model of solid media in conventional PACT imaging models, which are based on the canonical scalar wave equation, causes a significant model mismatch in the presence of the skull and thus results in deteriorated reconstructed images. The goal of this study was to develop an image reconstruction algorithm that accurately models the skull and thereby ameliorates the quality of reconstructed images. The propagation of photoacoustic waves through the skull was modeled by a viscoelastic stress tensor wave equation, which was subsequently discretized by use of a staggered grid fourth-order finite-difference time-domain (FDTD) method. The matched adjoint of the FDTD-based wave propagation operator was derived for implementing a back-projection operator. Systematic computer simulations were conducted to demonstrate the effectiveness of the back-projection operator for reconstructing images in a realistic three-dimensional PACT brain imaging system. The results suggest that the proposed algorithm can successfully reconstruct images from transcranially-measured pressure data and readily be translated to clinical PACT brain imaging applications.
Compressed Sensing Inspired Image Reconstruction from Overlapped Projections
Yang, Lin; Lu, Yang; Wang, Ge
2010-01-01
The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, conventional reconstruction approaches (e.g., filtered backprojection (FBP) algorithms) cannot be directly used because of two problems. First, overlapped projections represent the imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurements carry less information than traditional line integrals. To meet these challenges, we propose a compressive sensing (CS)-based iterative algorithm for reconstruction from overlapped data. This algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). We then demonstrate the feasibility of this algorithm in numerical tests. PMID:20689701
Online reconstruction of 3D magnetic particle imaging data
NASA Astrophysics Data System (ADS)
Knopp, T.; Hofmann, M.
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of superparamagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668
Exponential filtering of singular values improves photoacoustic image reconstruction.
Bhatt, Manish; Gutta, Sreedevi; Yalavarthy, Phaneendra K
2016-09-01
Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values was proposed for carrying out image reconstruction in photoacoustic tomography. The results were compared with the widely popular Tikhonov regularization, time reversal, and state-of-the-art least-squares QR-based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of the data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less biased toward the regularization parameter and did not incur any additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that standard Tikhonov filtering is an approximation to the proposed exponential filtering. PMID:27607501
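The abstract does not give the filter's closed form; the sketch below contrasts the standard Tikhonov filter factors f_i = s_i^2/(s_i^2 + a^2) with one plausible exponential form f_i = 1 - exp(-s_i^2/a^2) inside the same filtered-SVD solve (for small s_i/a the two agree to first order, consistent with Tikhonov being an approximation of the exponential filter). The test matrix, singular-value spectrum, and noise level are invented for illustration.

```python
import numpy as np

def filtered_solve(A, y, alpha, kind="tikhonov"):
    """Regularized solve via filtered SVD: x = V diag(f_i / s_i) U^T y."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if kind == "tikhonov":
        f = s**2 / (s**2 + alpha**2)          # standard Tikhonov filter factors
    else:
        f = 1.0 - np.exp(-s**2 / alpha**2)    # hedged exponential filter form
    return Vt.T @ ((f / s) * (U.T @ y))

rng = np.random.default_rng(1)
n = 50
s_true = 0.9 ** np.arange(n)                  # mildly ill-posed decaying spectrum
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(s_true) @ Q2.T               # synthetic forward operator
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(n)
errs = {}
for kind in ("tikhonov", "exponential"):
    x = filtered_solve(A, y, alpha=0.05, kind=kind)
    errs[kind] = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(errs)
```

Since 1 - exp(-t) >= t/(1 + t) for t >= 0, the exponential filter damps each singular component less than Tikhonov at the same alpha, which is the sense in which it retains more of the solution.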
Reconstruction algorithms for optoacoustic imaging based on fiber optic detectors
NASA Astrophysics Data System (ADS)
Lamela, Horacio; Díaz-Tendero, Gonzalo; Gutiérrez, Rebeca; Gallego, Daniel
2011-06-01
Optoacoustic imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity, and excellent resolution to overcome limitations of the current clinical modalities for detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Some approximate time-domain filtered back-projection reconstruction algorithms have been reported to solve this problem. A wavelet-transform-based filtering implementation can be used to sharpen object boundaries while simultaneously preserving the high contrast of the reconstructed objects. In this paper, several algorithms based on back-projection (BP) techniques are suggested for processing OA images, in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast for a wavelet-based filter.
Reconstructing the shape of an object from its mirror image
NASA Astrophysics Data System (ADS)
Hutt, T.; Simonetti, F.
2010-09-01
An image of an object can be achieved by sending multiple waves toward it and recording the reflections. In order to achieve a complete reconstruction it is usually necessary to send and receive waves from every possible direction [360° for two-dimensional (2D) imaging]. In practice this is often not possible and imaging must be performed with a limited view, which degrades the reconstruction. A proposed solution is to use a strongly scattering planar interface as a mirror to "look behind" the object. The mirror provides additional views that result in an improved reconstruction. We describe this technique and how it is implemented in the context of 2D acoustic imaging. The effect of the mirror on imaging is demonstrated by means of numerical examples that are also used to study the effects of noise. This technique could be used with many imaging methods and wave types, including microwaves, ultrasound, sonar, and seismic waves.
Three-dimensional surface reconstruction from multistatic SAR images.
Rigling, Brian D; Moses, Randolph L
2005-08-01
This paper discusses reconstruction of three-dimensional surfaces from multiple bistatic synthetic aperture radar (SAR) images. Techniques for surface reconstruction from multiple monostatic SAR images already exist, including interferometric processing and stereo SAR. We generalize these methods to obtain algorithms for bistatic interferometric SAR and bistatic stereo SAR. We also propose a framework for predicting the performance of our multistatic stereo SAR algorithm, and, from this framework, we suggest a metric for use in planning strategic deployment of multistatic assets. PMID:16121463
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
FPGA Coprocessor for Accelerated Classification of Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.
2008-01-01
An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as those on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, along with two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, has been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
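The abstract names the classifier (a linear-kernel SVM assigning pixels to surface classes) without detail; below is a hedged sketch of training such a classifier by hinge-loss sub-gradient descent on synthetic two-class data. The data, learning rate, and Pegasos-style update are illustrative assumptions, not the EO-1 flight code.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Pegasos-style sub-gradient descent on the hinge-loss SVM objective."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                              # only the regularizer acts
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(5)
# Two well-separated synthetic "pixel feature" classes
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)  # classification accuracy on the training set
```

At run time, only the decision function `sign(w @ x + b)` is evaluated per pixel, which is why a linear-kernel SVM maps so naturally onto FPGA hardware.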
The speckle image reconstruction of the solar small scale features
NASA Astrophysics Data System (ADS)
Zhong, Libo; Tian, Yu; Rao, Changhui
2014-11-01
The resolution of astronomical objects observed by earth-based telescopes is limited by atmospheric turbulence. Speckle image reconstruction provides access to small-scale solar features near the diffraction limit of the telescope. This paper describes the implementation of the reconstruction of images obtained by the 1-m New Vacuum Solar Telescope at the Fuxian Solar Observatory. The speckle masking method is used to reconstruct the Fourier phases for its better dynamic range and resolution capabilities. Besides the phase reconstruction process, several problems encountered in solar image reconstruction are discussed. The details of the implementation, including flat-fielding, image segmentation, Fried parameter estimation, and noise filter estimation, are described in detail. It is demonstrated that speckle image reconstruction is effective in restoring wide field-of-view images. The quality of the restorations is evaluated by the contrast ratio. When the Fried parameter is 10 cm, the contrast ratio of the sunspot and granulation can be improved from 0.3916 to 0.6845 and from 0.0248 to 0.0756, respectively.
Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging
Sheen, David M.; Hall, Thomas E.
2014-06-09
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation allows this large-scale inverse problem to be solved in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of graphics hardware without compromising the accuracy of the reconstructed images. PMID:22076279
Padhi, Shantanu K.; Howard, John
2013-01-01
Nonlinear microwave imaging relies heavily on an accurate numerical electromagnetic model of the antenna system. The model is used to simulate scattering data that is compared to its measured counterpart in order to reconstruct the image. In this paper an antenna system immersed in water is used to image different canonical objects in order to investigate the implications of modeling errors on the final reconstruction, using a time-domain-based iterative inverse reconstruction algorithm and three-dimensional FDTD modeling. With the test objects immersed in a background of air and tap water, respectively, we have studied the impact of antenna modeling errors and errors in the modeling of the background media, and made a comparison with a two-dimensional version of the algorithm. In conclusion, even small modeling errors in the antennas can significantly alter the reconstructed image. Since the image reconstruction procedure is highly nonlinear, general conclusions are difficult to draw. In our case, this means that with the antenna system immersed in water and using our present FDTD-based electromagnetic model, the imaging results are improved by refraining from modeling the water-wall-air interface and instead using a homogeneous background of water in the model. PMID:23606825
Fuzzy-rule-based image reconstruction for positron emission tomography
NASA Astrophysics Data System (ADS)
Mondal, Partha P.; Rajan, K.
2005-09-01
Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
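The two elementary steps above (edge detection, then smoothing only where no edge is found) can be illustrated with a deliberately simplified stand-in: here the fuzzy edge rule is reduced to a plain gradient-range threshold over a 3x3 window. This threshold rule is an assumption made for illustration; it is not the paper's fuzzy-rule derivatives or its iterative MAP update.

```python
import numpy as np

def edge_aware_smooth(img, edge_thresh=0.5):
    """Smooth only pixels whose 3x3 neighborhood shows no strong intensity
    jump, approximating 'penalize only where no edge is detected'."""
    pad = np.pad(img, 1, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            if win.max() - win.min() < edge_thresh:  # no edge detected here
                out[i, j] = win.mean()               # smooth (penalize) pixel
    return out

# Noisy step edge: the flat regions are smoothed, the edge is left intact
step = np.zeros((8, 8))
step[:, 4:] = 1.0
noisy = step + 0.05 * np.random.default_rng(6).standard_normal((8, 8))
smoothed = edge_aware_smooth(noisy)
```

Windows straddling the step exceed the threshold and are skipped, so the edge survives while noise in the flat regions is averaged away, which is the qualitative behavior the fuzzy potential function is designed to achieve.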
Sparse-Coding-Based Computed Tomography Image Reconstruction
Yoon, Gang-Joon
2013-01-01
Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object based on projection scans of the object from several angles. There are numerous methods to reconstruct the original shape of the target object from scans, but they remain dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches such as the algebraic reconstruction technique (ART), while keeping the recovery only slightly affected by random noise (a small ℓ2-norm error) and by the projection scans (a small ℓ1-norm error), we propose a medical image reconstruction methodology using the properties of sparse coding. Sparse coding is a very powerful matrix factorization method in which each pixel is represented as a linear combination of a small number of basis vectors. PMID:23576898
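Sparse coding represents each pixel or patch vector as a sparse combination of dictionary atoms; one standard way to compute such a code is orthogonal matching pursuit (OMP), sketched below with a random unit-norm dictionary. The dictionary and the greedy solver are illustrative assumptions; the paper's actual factorization may differ.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # re-fit coefficients on the selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(2)
d, n_atoms, k = 64, 256, 3                   # patch size, dictionary size, sparsity
D = rng.standard_normal((d, n_atoms))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
truth = np.zeros(n_atoms)
truth[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
y = D @ truth                                # a synthetic 3-sparse "patch"
code = omp(D, y, k)
print(np.linalg.norm(D @ code - y))          # near-zero representation error
```

With an overcomplete dictionary (256 atoms for 64-dimensional patches), a handful of atoms suffices to represent each patch, which is the redundancy that makes the reconstruction robust to the noise and projection errors mentioned above.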
Bayesian 2D Current Reconstruction from Magnetic Images
NASA Astrophysics Data System (ADS)
Clement, Colin B.; Bierbaum, Matthew K.; Nowack, Katja; Sethna, James P.
We employ a Bayesian image reconstruction scheme to recover 2D currents from magnetic flux imaged with scanning SQUIDs (Superconducting Quantum Interferometric Devices). Magnetic flux imaging is a versatile tool to locally probe currents and magnetic moments, however present reconstruction methods sacrifice resolution due to numerical instability. Using state-of-the-art blind deconvolution techniques we recover the currents, point-spread function and height of the SQUID loop by optimizing the probability of measuring an image. We obtain uncertainties on these quantities by sampling reconstructions. This generative modeling technique could be used to develop calibration protocols for scanning SQUIDs, to diagnose systematic noise in the imaging process, and can be applied to many tools beyond scanning SQUIDs.
Digital holographic method for tomography-image reconstruction
NASA Astrophysics Data System (ADS)
Liu, Cheng; Yan, Changchun; Gao, Shumei
2004-02-01
A digital holographic method for three-dimensional reconstruction of tomography images is demonstrated theoretically and experimentally. In this proposed method, a numerical hologram is first computed by calculating the total diffraction field of all transect images of a detected organ. Then, the numerical hologram is transferred to the usual recording medium to generate a physical hologram. Last, all the transect images are reconstructed in their original position by illuminating the physical hologram with a laser, thereby forming a three-dimensional transparent image of the organ detected. Due to its true third dimension, the reconstructed image using this method is much more vivid and accurate than that of other methods. Potentially, it may have great prospects for application in medical engineering.
Compensation for air voids in photoacoustic computed tomography image reconstruction
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Li, Lei; Wang, Lihong V.; Anastasio, Mark A.
2016-03-01
Most image reconstruction methods in photoacoustic computed tomography (PACT) assume that the acoustic properties of the object and the surrounding medium are homogeneous. This can lead to strong artifacts in the reconstructed images when there are significant variations in sound speed or density. Air voids represent a particular challenge due to the severity of the differences between the acoustic properties of air and water. In whole-body small animal imaging, the presence of air voids in the lungs, stomach, and gastrointestinal system can limit image quality over large regions of the object. Iterative reconstruction methods based on the photoacoustic wave equation can account for these acoustic variations, leading to improved resolution, improved contrast, and a reduction in the number of imaging artifacts. However, the strong acoustic heterogeneities can lead to instability or errors in the numerical wave solver. Here, the impact of air voids on PACT image reconstruction is investigated, and procedures for their compensation are proposed. The contributions of sound speed and density variations to the numerical stability of the wave solver are considered, and a novel approach for mitigating the impact of air voids while reducing the computational burden of image reconstruction is identified. These results are verified by application to an experimental phantom.
Information Propagation in Prior-Image-Based Reconstruction
Stayman, J. Webster; Prince, Jerry L.; Siewerdsen, Jeffrey H.
2016-01-01
Advanced reconstruction methods for computed tomography include sophisticated forward models of the imaging system that capture the pertinent physical processes affecting the signal and noise in projection measurements. However, most do little to integrate prior knowledge of the subject, often relying only on very general notions of local smoothness or edges. In many cases, as in longitudinal surveillance or interventional imaging, a patient has undergone a sequence of studies prior to the current image acquisition that hold a wealth of prior information on patient-specific anatomy. While traditional techniques tend to treat each data acquisition as an isolated event and disregard such valuable patient-specific prior information, some reconstruction methods, such as PICCS[1] and PIR-PLE[2], can incorporate prior images into a reconstruction objective function. Inclusion of such information allows for a dramatic reduction in data fidelity requirements and more robustly accommodates substantial undersampling and exposure reduction, with consequent benefits to imaging speed and reduced radiation dose. While such prior-image-based methods offer tremendous promise, the introduction of prior information in the reconstruction raises significant concern regarding the accurate representation of features in the image and whether those features arise from the current data acquisition or from the prior images. In this work we propose a novel framework to analyze the propagation of information in prior-image-based reconstruction by decomposing the estimation into distinct components supported by the current data acquisition and by the prior image. This decomposition quantifies the contributions from prior and current data as a spatial map and can trace specific features in the image to their source. Such “information source maps” can potentially be used as a check on confidence that a given image feature arises from the current data or from the prior and to more
Saybasili, Haris; Herzka, Daniel A.; Seiberlich, Nicole; A.Griswold, Mark
2014-01-01
The combination of non-Cartesian trajectories with parallel MRI permits unmatched acceleration rates compared to traditional Cartesian MRI during real-time imaging. However, the computationally demanding reconstructions of such imaging techniques, such as k-space domain radial generalized auto-calibrating partially parallel acquisitions (radial GRAPPA) and image domain conjugate gradient sensitivity encoding (CG-SENSE), lead to longer reconstruction times and unacceptable latency for online real-time MRI on conventional computational hardware. Though CG-SENSE has been shown to work with low latency using a general-purpose graphics processing unit (GPU), to the best of our knowledge no such effort has been made for radial GRAPPA. Radial GRAPPA reconstruction, which is robust even with highly undersampled acquisitions, is not iterative, requiring significant computation only during initial calibration while achieving good image quality for low-latency imaging applications. In this work, we present a very fast, low-latency reconstruction framework based on a heterogeneous system using multi-core CPUs and GPUs. We demonstrate an implementation of radial GRAPPA that permits reconstruction times on par with or faster than acquisition of highly accelerated datasets in both cardiac and dynamic musculoskeletal imaging scenarios. Acquisition and reconstruction times are reported. PMID:24690453
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operational room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of the Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms: a single CPU, a single GPU, and a solution based on FPGA technology have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of features compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity
Zhao, Di; Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. The reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in pixel domain. Unfortunately existing methods do not work well given that contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of the contrast changes nor multiple times motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). Fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease sampling ratio or improve the reconstruction quality alternatively. PMID:25371704
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin-layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package, and images were reconstructed with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source consisted of a TLC plate, simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL with quadratic prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
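The MTF estimation step can be illustrated generically: given a line spread function (LSF) extracted from a reconstructed image of the plane source, the MTF is the magnitude of its Fourier transform normalized to unity at zero frequency. The Gaussian LSF and sampling interval below are invented for illustration; they are not GATE/STIR output.

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF as the normalized magnitude of the Fourier transform of the LSF."""
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)   # spatial frequencies (cycles/mm)
    return freqs, spectrum / spectrum[0]      # normalize so MTF(0) = 1

# Synthetic Gaussian LSF with 4 mm FWHM sampled at 0.5 mm (illustrative numbers)
dx, fwhm = 0.5, 4.0
x = np.arange(-32, 32) * dx
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
lsf = np.exp(-x**2 / (2 * sigma**2))
freqs, mtf = mtf_from_lsf(lsf, dx)
print(mtf[0])  # 1.0 by construction
```

Sharper reconstructions (more iterations, lower beta, per the abstract) would yield a narrower LSF and hence an MTF curve that stays higher at larger spatial frequencies.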
Constrained TV-minimization image reconstruction for industrial CT system
NASA Astrophysics Data System (ADS)
Chen, Buxin; Yang, Min; Zhang, Zheng; Bian, Junguo; Han, Xiao; Sidky, Emil; Pan, Xiaochuan
2014-02-01
In this work, we investigate the applicability of the constrained total-variation (TV)-minimization reconstruction method to an industrial CT system. In general, industrial CT systems share the same imaging principles as clinical CT systems, but have different imaging objectives and evaluation metrics. Optimization-based image reconstruction methods have been actively developed to meet practical challenges and extensively tested for clinical CT systems. However, the utility of optimization-based reconstruction methods is task-specific and not necessarily transferable among different tasks. In this work, we adopt constrained TV-minimization programs together with the adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) algorithm for reconstructing images from data of a concrete sample collected using a laboratory industrial CT system developed for non-destructive evaluation. Our results, compared to those reconstructed with an FBP-based algorithm, suggest that the constrained TV-minimization program combined with the ASD-POCS algorithm can yield images with comparable or improved visual quality and achieve equivalent or better imaging objectives than the currently used FBP-based algorithm under dense-sampling data conditions.
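A toy sketch of the alternation ASD-POCS performs — a data-consistency (POCS/Kaczmarz) sweep followed by steepest descent on a smoothed total variation — is given below. The random system matrix, fixed step sizes, and omission of the adaptive step-size control are simplifying assumptions; this is not the authors' implementation.

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    # negative divergence (backward differences; wrapping at borders)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    return -div

def asd_pocs_step(img, A, y, tv_step=0.005, n_tv=3):
    """One outer iteration: Kaczmarz data-consistency sweep, then TV descent."""
    x = img.ravel().copy()
    for a_i, y_i in zip(A, y):                # POCS: project onto each hyperplane
        x += (y_i - a_i @ x) / (a_i @ a_i) * a_i
    x = x.reshape(img.shape)
    for _ in range(n_tv):                     # steepest descent on the TV term
        x -= tv_step * tv_grad(x)
    return x

rng = np.random.default_rng(3)
n = 8
phantom = np.zeros((n, n))
phantom[2:6, 2:6] = 1.0                       # piecewise-constant test object
A = rng.standard_normal((40, n * n))          # underdetermined toy "scanner"
y = A @ phantom.ravel()
x = np.zeros_like(phantom)
for _ in range(50):
    x = asd_pocs_step(x, A, y)
print(np.linalg.norm(A @ x.ravel() - y))      # data residual after iterating
```

The real algorithm adapts both step sizes so the TV descent never violates the data-consistency constraint by more than a tolerance; that control logic is what the fixed steps above stand in for.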
Compressed sensing MR image reconstruction exploiting TGV and wavelet sparsity.
Zhao, Di; Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither estimation of the contrast changes nor repeated motion compensation. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also either decrease the sampling ratio or improve the reconstruction quality. PMID:25371704
Lin, Yu; Liao, Ning-fang; Luo, Yong-dao; Cui, De-qi; Tan, Bo-neng; Wu, Wen-min
2010-08-01
In the present paper, the authors introduce their research on spectral reconstruction for a Fourier transform computed tomography imaging spectrometer by means of algebraic reconstruction technology (ART). A simulation experiment was carried out to demonstrate the algorithm. Spatial and spectral similarities were evaluated using the normalized correlation coefficient. The performance of ART was evaluated with 45 projections, a case in which filtered back-projection does not work well. Actual spectral slices were reconstructed using ART in the last part of this paper. PMID:20939312
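The algebraic reconstruction technique referred to above updates the image one projection equation at a time. A minimal Kaczmarz-style sketch in Python/NumPy (the toy system of row/column sums stands in for real projection data and is purely illustrative):

```python
import numpy as np

def art(A, b, n_sweeps=500, relax=1.0):
    """Kaczmarz-type ART: cycle over the rows of A x = b, projecting the
    current estimate onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    row_norms = (A**2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 2x2 "image" observed through 4 rays (row sums and column sums).
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
b = A @ np.array([1., 2., 3., 4.])
x_rec = art(A, b)
# The reconstruction reproduces all measured projections.
```

For a consistent system the iterates converge to a solution; with few projections (the regime discussed above), ART still returns a data-consistent estimate where FBP degrades.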
High-performance parallel image reconstruction for the New Vacuum Solar Telescope
NASA Astrophysics Data System (ADS)
Li, Xue-Bao; Liu, Zhong; Wang, Feng; Jin, Zhen-Yu; Xiang, Yong-Yuan; Zheng, Yan-Fang
2015-06-01
Many technologies have been developed to help improve the spatial resolution of observational images from ground-based solar telescopes, such as adaptive optics (AO) systems and post-processing reconstruction. As any AO correction is only partial, post-processing reconstruction techniques are indispensable. In the New Vacuum Solar Telescope (NVST), a speckle-masking method is used to achieve the diffraction-limited resolution of the telescope. Although the method is very promising, the computation is quite intensive and the amount of data is tremendous, requiring several months to reconstruct one day of observational data on a high-end computer. To accelerate image reconstruction, we parallelize the program package on a high-performance cluster. We describe parallel implementation details for several reconstruction procedures. The code is written in the C language using the Message Passing Interface (MPI) and is optimized for parallel processing in a multiprocessor environment. The parallel implementation performs well, processing the data about 71 times faster than before. Finally, we analyze the scalability of the code to find possible bottlenecks, and propose several ways to further improve the parallel performance. We conclude that the presented program is capable of executing reconstruction applications in real time at NVST.
Digital Three-dimensional Reconstruction Based On Integral Imaging
Li, Chao; Chen, Qian; Hua, Hong; Mao, Chen; Shao, Ajun
2015-01-01
This paper presents a digital three-dimensional reconstruction method based on a set of small-baseline elemental images captured with a micro-lens array and a CCD sensor. We adopt the ASIFT (affine scale-invariant feature transform) operator as the image registration method. Among the set of captured elemental images, the one located in the middle of the overall image field is used as the reference, and corresponding matching points in each surrounding elemental image are calculated, which enables accurate computation of the depth of object points relative to the reference image frame. Finally, 3D reconstruction is achieved by applying an optimization algorithm to the redundant matching points. Our experimental results demonstrate the excellent accuracy and speed of the proposed algorithm. PMID:26236151
Probe and object function reconstruction in incoherent STEM imaging
Nellist, P.D.; Pennycook, S.J.
1996-09-01
Using the phase-object approximation it is shown how an annular dark-field (ADF) detector in a scanning transmission electron microscope (STEM) leads to an image which can be described by an incoherent model. The point spread function is found to be simply the illuminating probe intensity. An important consequence of this is that there is no phase problem in the imaging process, which allows various image processing methods to be applied directly to the image intensity data. Using an image of GaAs⟨110⟩, the probe intensity profile is reconstructed, confirming the existence of a 1.3 Å probe in a 300 kV STEM. It is shown that simply deconvolving this reconstructed probe from the image data does not improve its interpretability, because the dominant effects of the imaging process arise simply from the restricted resolution of the microscope. However, use of the reconstructed probe in a maximum entropy reconstruction is demonstrated, which allows information beyond the resolution limit to be restored and does allow improved image interpretation.
Cervigram image segmentation based on reconstructive sparse representations
NASA Astrophysics Data System (ADS)
Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris
2010-03-01
We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparse constraints and become more separable. The K-SVD algorithm is employed to find the sparse representations and the corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries, and classification can be achieved by comparing the reconstruction errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.
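The classify-by-reconstruction-error idea above can be sketched with a small orthogonal-matching-pursuit coder over two dictionaries. A hedged Python/NumPy illustration (the canonical-basis dictionaries and function names are hypothetical stand-ins for K-SVD-trained ones):

```python
import numpy as np

def omp(D, y, k=3):
    """Orthogonal matching pursuit: sparse-code y over dictionary D
    (columns are atoms), using at most k atoms."""
    idx, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def classify(y, D_pos, D_neg, k=3):
    """Assign the label whose dictionary reconstructs y with the
    smaller error (the decision rule described in the abstract)."""
    e_pos = np.linalg.norm(y - D_pos @ omp(D_pos, y, k))
    e_neg = np.linalg.norm(y - D_neg @ omp(D_neg, y, k))
    return "AW" if e_pos < e_neg else "background"

# Toy dictionaries: disjoint halves of the canonical basis of R^8.
D_pos = np.eye(8)[:, :4]
D_neg = np.eye(8)[:, 4:]
y = np.array([1.0, 2.0, 0.5, 0, 0, 0, 0, 0])   # lies in the "positive" span
label = classify(y, D_pos, D_neg)               # -> "AW"
```

In practice the two dictionaries would be learned with K-SVD from positive and negative training patches rather than fixed as here.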
Reconstruction of indoor scene from a single image
NASA Astrophysics Data System (ADS)
Wu, Di; Li, Hongyu; Zhang, Lin
2015-03-01
Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover a 3D model of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With phase congruency and the Harris detector, the line segments can be detected exactly, which is a prerequisite. Perspective transform is defined as a reliable method whereby points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image although the available depth information is limited.
Accelerating dual cardiac phase images using undersampled radial phase encoding trajectories.
Letelier, Karis; Urbina, Jesus; Andía, Marcelo; Tejos, Cristián; Irarrazaval, Pablo; Prieto, Claudia; Uribe, Sergio
2016-09-01
A three-dimensional dual-cardiac-phase (3D-DCP) scan has been proposed to acquire two data sets of the whole heart and great vessels during the end-diastolic and end-systolic cardiac phases in a single free-breathing scan. This method has shown accurate assessment of cardiac anatomy and function but is limited by long acquisition times. This work proposes to accelerate the acquisition and reconstruction of 3D-DCP scans by exploiting redundant information in the outer k-space regions of both cardiac phases. This is achieved using a modified radial-phase-encoding trajectory and gridding reconstruction with uniform coil combination. The end-diastolic acquisition trajectory was angularly shifted with respect to the end-systolic phase. Initially, a fully sampled 3D-DCP scan was acquired to determine the optimal percentage of the outer k-space data that can be combined between cardiac phases. Thereafter, prospectively undersampled data were reconstructed based on this percentage. As gold-standard images, the undersampled data were also reconstructed using iterative SENSE. To validate the method, image quality assessments and a cardiac volume analysis were performed. The proposed method was tested in thirteen healthy volunteers (mean age, 30 years). Prospectively undersampled data (R = 4) reconstructed with 50% combination led to high-quality images. There were no significant differences in image quality or in the cardiac volume analysis between our method and iterative SENSE. In addition, the proposed approach reduced the reconstruction time from 40 min to 1 min. In conclusion, the proposed method obtains 3D-DCP scans with an image quality comparable to those reconstructed with iterative SENSE, and within a clinically acceptable reconstruction time. PMID:27067473
Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.
Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H
2015-11-01
Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion, using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo-continuous ASL acquisition at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and validated in normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20 s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness. PMID:26169322
A measurement system and image reconstruction in magnetic induction tomography.
Vauhkonen, M; Hamsch, M; Igney, C H
2008-06-01
Magnetic induction tomography (MIT) is a technique for imaging the internal conductivity distribution of an object. In MIT current-carrying coils are used to induce eddy currents in the object and the induced voltages are sensed with other coils. From these measurements, the internal conductivity distribution of the object can be reconstructed. In this paper, we introduce a 16-channel MIT measurement system that is capable of parallel readout of 16 receiver channels. The parallel measurements are carried out using high-quality audio sampling devices. Furthermore, approaches for reconstructing MIT images developed for the 16-channel MIT system are introduced. We consider low conductivity applications, conductivity less than 5 S m⁻¹, and we use a frequency of 10 MHz. In the image reconstruction, we use the time-harmonic Maxwell's equation for the electric field. This equation is solved with the finite element method using edge elements and the images are reconstructed using a generalized Tikhonov regularization approach. Both difference and static image reconstruction approaches are considered. Results from simulations and real measurements collected with the Philips 16-channel MIT system are shown. PMID:18544825
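The generalized Tikhonov reconstruction used above reduces, after linearization, to a regularized normal-equations solve. A hedged Python/NumPy sketch (the Jacobian `J`, regularization matrix `L`, and weight `alpha` are placeholder values, not from the Philips system):

```python
import numpy as np

def tikhonov_update(J, dV, L, alpha):
    """One linearized generalized-Tikhonov step for J ds ≈ dV:
    ds = argmin ||J s - dV||^2 + alpha ||L s||^2,
    solved via the regularized normal equations."""
    return np.linalg.solve(J.T @ J + alpha * (L.T @ L), J.T @ dV)

# Placeholder sensitivity (Jacobian) and voltage data for 3 parameters.
J = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.4, 1.0],
              [0.2, 0.1, 0.1]])
dV = np.array([0.5, -0.1, 0.3, 0.05])
L = np.eye(3)                       # zeroth-order (identity) regularizer
d_sigma = tikhonov_update(J, dV, L, alpha=0.1)
```

Difference imaging applies this step to a voltage difference between two states; static imaging iterates it with a re-linearized Jacobian.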
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in the image.
An adaptive filtered back-projection for photoacoustic image reconstruction
Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong
2015-01-01
Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing
Total variation superiorization schemes in proton computed tomography image reconstruction
Penfold, S. N.; Schulte, R. W.; Censor, Y.; Rosenfeld, A. B.
2010-01-01
Purpose: Iterative projection reconstruction algorithms are currently the preferred reconstruction method in proton computed tomography (pCT). However, due to inconsistencies in the measured data arising from proton energy straggling and multiple Coulomb scattering, the noise in the reconstructed image increases with successive iterations. In the current work, the authors investigated the use of total variation superiorization (TVS) schemes that can be applied as an algorithmic add-on to perturbation-resilient iterative projection algorithms for pCT image reconstruction. Methods: The block-iterative diagonally relaxed orthogonal projections (DROP) algorithm was used for reconstructing GEANT4 Monte Carlo simulated pCT data sets. Two TVS schemes added on to DROP were investigated: the first carried out the superiorization steps once per cycle and the second once per block. Simplifications of these schemes, involving the elimination of the computationally expensive feasibility proximity checking step of the TVS framework, were also investigated. The modulation transfer function and contrast discrimination function were used to quantify spatial and density resolution, respectively. Results: With both TVS schemes, superior spatial and density resolution was achieved compared to the standard DROP algorithm. Eliminating the feasibility proximity check improved the image quality, in particular image noise, in the once-per-block superiorization, while also halving the image reconstruction time. Overall, the greatest image quality was observed when carrying out the superiorization once per block and eliminating the feasibility proximity check. Conclusions: The low-contrast imaging made possible with TVS holds promise for its incorporation into future pCT studies. PMID:21158301
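Superiorization, as described, interleaves TV-lowering perturbations with the projection algorithm's own iterates. A hedged 1D Python/NumPy sketch, using a plain Kaczmarz sweep in place of DROP and a simple TV-nonascent check in place of the full feasibility-proximity test (all names and step sizes are illustrative):

```python
import numpy as np

def tv1d(x):
    return np.abs(np.diff(x)).sum()

def superiorized_kaczmarz(A, b, n_iters=200, beta0=0.5, decay=0.97):
    """Interleave summable TV-reducing perturbations with projection sweeps."""
    x = np.zeros(A.shape[1])
    row_norms = (A**2).sum(axis=1)
    beta = beta0
    for _ in range(n_iters):
        # TV subgradient: g[i] = sign(x[i]-x[i-1]) - sign(x[i+1]-x[i])
        d = np.sign(np.diff(x))
        g = np.zeros_like(x)
        g[1:] += d
        g[:-1] -= d
        gn = np.linalg.norm(g)
        if gn > 0:
            cand = x - beta * g / gn
            if tv1d(cand) <= tv1d(x):      # keep only TV-nonascending steps
                x = cand
        beta *= decay                       # summable perturbation sizes
        for i in range(A.shape[0]):         # projection (Kaczmarz) sweep
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Underdetermined toy problem with a piecewise-constant ground truth.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 6))
b = A @ np.array([1., 1., 1., 3., 3., 3.])
x = superiorized_kaczmarz(A, b)
```

Because the perturbation sizes are summable, the perturbation-resilient projection algorithm retains its convergence behavior while being steered toward lower-TV solutions.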
Motion compensation for PET image reconstruction using deformable tetrahedral meshes
NASA Astrophysics Data System (ADS)
Manescu, P.; Ladjal, H.; Azencot, J.; Beuve, M.; Shariat, B.
2015-12-01
Respiratory-induced organ motion is a technical challenge to PET imaging. This motion induces displacements and deformation of the organ tissues, which need to be taken into account when reconstructing the spatial radiation activity. Classical image-based methods that describe motion using deformable image registration (DIR) algorithms cannot fully take into account the non-reproducibility of respiratory internal organ motion, nor the tissue volume variations that occur during breathing. In order to overcome these limitations, various biomechanical models of the respiratory system have been developed in the past decade as an alternative to DIR approaches. In this paper, we describe a new method of correcting motion artefacts in PET image reconstruction adapted to motion estimation models such as those based on the finite element method. In contrast with DIR-based approaches, the radiation activity was reconstructed on deforming tetrahedral meshes. For this, we have re-formulated the tomographic reconstruction problem by introducing a time-dependent system matrix calculated using tetrahedral meshes instead of voxelized images. The MLEM algorithm was chosen as the reconstruction method. The simulations performed in this study show that the motion-compensated reconstruction based on tetrahedral deformable meshes has the capability to correct motion artefacts. Results demonstrate that, in the case of complex deformations, when large volume variations occur, the developed tetrahedral-based method is more appropriate than the classical DIR-based one. This method can be used, together with biomechanical models controlled by external surrogates, to correct motion artefacts in PET images, thus reducing the need for additional internal imaging during the acquisition.
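The MLEM algorithm named above is the classical multiplicative fixed point x ← (x / Aᵀ1) · Aᵀ(y / Ax); it applies unchanged whether the columns of the system matrix index voxels or tetrahedral mesh elements. A hedged Python/NumPy sketch with a toy system matrix:

```python
import numpy as np

def mlem(A, y, n_iters=2000):
    """ML-EM for emission tomography. Rows of A: detector bins;
    columns: image (or mesh) basis elements; y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                          # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy system: 4 detector bins viewing 3 elements, noise-free data.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
x_rec = mlem(A, y)                            # converges toward x_true
```

The multiplicative update preserves non-negativity automatically, which is one reason ML-EM is the standard choice for emission data.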
The concept of causality in image reconstruction
Llacer, J.; Veklerov, E.; Nunez, J.
1988-09-01
Causal images in emission tomography are defined as those which could have generated the data by the statistical process that governs the physics of the measurement. The concept of causality was previously applied to deciding when to stop the MLE iterative procedure in PET. The present paper further explores the concept, indicates the difficulty of carrying out a correct hypothesis test for causality, discusses the assumption needed to justify the proposed tests, and outlines a possible methodology for justifying that assumption. The paper also describes several methods that we have found to generate causal images and shows that the set of causal images is rather large. This set includes images judged to be superior to the best maximum likelihood images, but it also includes unacceptable and noisy images. The paper concludes by proposing to use causality as a constraint in optimization problems. 16 refs., 5 figs.
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Reconstructing irregularly sampled images by neural networks
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Yellott, John I., Jr.
1989-01-01
Neural-network-like models of receptor position learning and interpolation function learning are being developed as models of how the human nervous system might handle the problems of keeping track of the receptor positions and interpolating the image between receptors. These models may also be of interest to designers of image processing systems desiring the advantages of a retina-like image sampling array.
Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza
2013-01-01
A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction, yielding a 34-54 times speed-up compared with the C++ implementation. PMID:22392604
NASA Astrophysics Data System (ADS)
Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc
2015-03-01
The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well defined in the case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue, alternate methodologies like model observers have been proposed recently to allow quantification of a usually task-dependent image quality metric [1]. As an alternative, we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
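The alpha-image blending idea can be sketched directly: a per-voxel weight map selects between a sharp (noisy) and a smooth (low-noise) basis reconstruction, e.g. favoring the sharp image near edges. A hedged Python/NumPy illustration (the gradient-based weight is a hypothetical stand-in, not the AIR estimation procedure):

```python
import numpy as np

def edge_alpha(smooth, tau=0.25):
    """Hypothetical weight map: alpha near 1 at strong edges, near 0 in
    flat regions, derived from the smooth image's gradient magnitude."""
    gy, gx = np.gradient(smooth)
    mag = np.hypot(gx, gy)
    return np.clip(mag / (tau * (mag.max() + 1e-12)), 0.0, 1.0)

def alpha_blend(sharp, smooth, alpha):
    """Voxel-wise blend of two basis images with mutually exclusive
    properties (high resolution vs. low noise)."""
    return alpha * sharp + (1.0 - alpha) * smooth

# Toy basis images: a step edge, once textured ("noisy"), once smoothed.
x = np.linspace(0, 1, 64)
edge = (x > 0.5).astype(float)
sharp = np.tile(edge, (64, 1)) + 0.05 * np.cos(37.0 * x)
smooth = np.tile(0.5 * (np.tanh((x - 0.5) * 8) + 1), (64, 1))
alpha = edge_alpha(smooth)
blended = alpha_blend(sharp, smooth, alpha)
```

With alpha = 1 everywhere the blend returns the sharp basis image, with alpha = 0 the smooth one; intermediate maps trade resolution against noise voxel by voxel.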
Accurate Sparse-Projection Image Reconstruction via Nonlocal TV Regularization
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of the projection data causes degradation of imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Suffering from theoretical imperfection, total variation produces blocky effects in smooth regions and blurs edges. To overcome this problem, in this paper, we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information. PMID:24592168
Ultra-Fast Image Reconstruction of Tomosynthesis Mammography Using GPU
Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.
2015-01-01
Digital Breast Tomosynthesis (DBT) is a technology that creates three-dimensional (3D) images of breast tissue. Tomosynthesis mammography detects lesions that are not detectable with other imaging systems. If the image reconstruction time is on the order of seconds, Tomosynthesis systems can be used to perform Tomosynthesis-guided interventional procedures. This research was designed to study an ultra-fast image reconstruction technique for Tomosynthesis mammography systems using a Graphics Processing Unit (GPU). First, projections of Tomosynthesis mammography were simulated. To produce the Tomosynthesis projections, a 3D breast phantom was designed from empirical data, based on MRI data in its natural form. Projections were then created from the 3D breast phantom. The FBP-based image reconstruction algorithm was programmed in C++ in two versions, one using the central processing unit (CPU) and one using the GPU, and the image reconstruction time was measured for both implementations. PMID:26171373
Sparse representation for the ISAR image reconstruction
NASA Astrophysics Data System (ADS)
Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.
2016-05-01
In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than the Nyquist-Shannon sampling theorem requires, which increases the efficiency and decreases the computational cost of radar imaging.
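The convex-optimization recovery mentioned above is typically an l1-regularized least-squares problem. A minimal iterative soft-thresholding (ISTA) solver is shown here only as a generic sketch of that idea, not the authors' ISAR pipeline; the problem sizes and regularization weight are arbitrary:

```python
import numpy as np

def ista(A, b, lam=0.02, iters=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - (A.T @ (A @ x - b)) / L    # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x
```

With a sparse scene and a random measurement matrix, far fewer rows than unknowns suffice to locate the dominant scatterers, which is the point the abstract makes.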
Hofmann, Christian; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc
2014-06-15
Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how to best achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function, which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off because the stronger the regularization, and thus the noise reduction, the greater the loss of spatial resolution and thus of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high-contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast
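The blending step described above is simple once the weights exist; what AIR actually reconstructs, iteratively, are the voxel-specific weights themselves, and that part is not reproduced here. The sketch below shows only the voxel-wise combination, with a hypothetical edge-based weight standing in for the reconstructed alpha-images:

```python
import numpy as np

def alpha_blend(basis_sharp, basis_smooth, alpha):
    """Voxel-wise blend: alpha=1 keeps the sharp basis image, alpha=0 the smooth one."""
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * basis_sharp + (1.0 - alpha) * basis_smooth

def alpha_from_edges(sharp, scale=1.0):
    """Hypothetical stand-in for AIR's reconstructed weights (NOT the paper's
    method): large local gradients -> alpha near 1 (keep resolution),
    flat regions -> alpha near 0 (keep the low-noise basis image)."""
    gy, gx = np.gradient(sharp)
    g = np.hypot(gx, gy)
    return g / (g + scale)
```

The design point is that the blend is linear per voxel, so image quality metrics of the result can be traced back to the (known) metrics of the basis images.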
Superresolution image reconstruction from a sequence of aliased imagery.
Young, S Susan; Driggers, Ronald G
2006-07-20
We present a superresolution image reconstruction from a sequence of aliased imagery. The subpixel shifts (displacements) among the images are unknown due to the uncontrolled natural jitter of the imager. A correlation method is utilized to estimate the subpixel shifts between each low-resolution aliased image and a reference image. An error-energy reduction algorithm is derived to reconstruct the high-resolution alias-free output image. The main feature of this proposed error-energy reduction algorithm is that we treat the spatial samples from low-resolution images that possess unknown and irregular (uncontrolled) subpixel shifts as a set of constraints to populate an oversampled (sampled above the desired output bandwidth) processing array. The estimated subpixel locations of these samples and their values constitute a spatial domain constraint. Furthermore, the bandwidth of the alias-free image (or the sensor-imposed bandwidth) is the criterion used as a spatial frequency domain constraint on the oversampled processing array. The results of testing the proposed algorithm on simulated low-resolution forward-looking infrared (FLIR) images, real-world FLIR images, and visible images are provided. A comparison of the proposed algorithm with a standard interpolation algorithm for processing the simulated low-resolution FLIR images is also provided. PMID:16826246
Filter and slice thickness selection in SPECT image reconstruction
Ivanovic, M.; Weber, D.A.; Wilson, G.A.; O'Mara, R.E.
1985-05-01
The choice of filter and slice thickness in SPECT image reconstruction as a function of activity and of linear and angular sampling was investigated in phantom and patient imaging studies. Reconstructed transverse and longitudinal spatial resolution of the system were measured using a line source in a water-filled phantom. Phantom studies included measurements of the Data Spectrum phantom; clinical studies included tomographic procedures in 40 patients undergoing imaging of the temporomandibular joint. Slices of the phantom and patient images were evaluated for spatial resolution, noise, and image quality. Major findings include: spatial resolution and image quality improve with increasing linear sampling frequency over the range of 4-8 mm/p in the phantom images; the best spatial resolution and image quality in clinical images were observed at a linear sampling frequency of 6 mm/p; the Shepp and Logan filter gives the best spatial resolution for phantom studies at the lowest linear sampling frequency; the smoothed Shepp and Logan filter provides the best quality images without loss of resolution at higher frequencies; and spatial resolution and image quality improve with increased angular sampling frequency in the phantom at 40 c/p but appear to be independent of angular sampling frequency at 400 c/p.
A rapid reconstruction algorithm for three-dimensional scanning images
NASA Astrophysics Data System (ADS)
Xiang, Jiying; Wu, Zhen; Zhang, Ping; Huang, Dexiu
1998-04-01
A `simulated fluorescence' three-dimensional reconstruction algorithm, which is especially suitable for confocal images of partially transparent biological samples, is proposed in this paper. To make the retinal projection of the object reappear and to avoid excessive memory consumption, the original image is rotated and compressed before the processing. A left image and a right image are mixed in different colors to increase the sense of stereo. Details originally hidden in deep layers are well exhibited with the aid of an `auxiliary directional source'. In addition, the time consumption is greatly reduced compared with conventional methods such as `ray tracing'. The operation of the algorithm is illustrated by a group of reconstructed images.
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but they have not been widely used in tomography problems (e.g. CBCT reconstruction). Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. PMID:22325240
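The blockwise matrix-vector products described above can be sketched as follows. For brevity the sketch uses CGLS (a close relative of LSQR; both solve min ||Ax - b||_2 over the same Krylov space) and small dense NumPy blocks rather than sparse on-disk storage, so this is the idea, not the paper's implementation:

```python
import numpy as np

def blockwise_matvec(blocks, x):
    """y = A @ x with A stored as a list of row blocks."""
    return np.concatenate([B @ x for B in blocks])

def blockwise_rmatvec(blocks, y):
    """x = A.T @ y, accumulating one row block at a time (never forming A)."""
    out = np.zeros(blocks[0].shape[1])
    i = 0
    for B in blocks:
        out += B.T @ y[i:i + B.shape[0]]
        i += B.shape[0]
    return out

def cgls(blocks, b, iters=60):
    """Conjugate gradients on the normal equations for min ||Ax - b||_2,
    touching A only through the two blockwise products above."""
    x = np.zeros(blocks[0].shape[1])
    r = b - blockwise_matvec(blocks, x)
    s = blockwise_rmatvec(blocks, r)
    p, gamma = s.copy(), s @ s
    for _ in range(iters):
        q = blockwise_matvec(blocks, p)
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = blockwise_rmatvec(blocks, r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

Because the solver only ever needs A through matvec and rmatvec, each block can live on disk or be regenerated on the fly, which is the memory point the abstract makes.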
Regularized reconstruction of wave fields from refracted images of water
NASA Astrophysics Data System (ADS)
Choudhury, K. Roy; O'Sullivan, F.; Samanta, M.; Shrira, V.; Caulliez, G.
2009-04-01
Refractive imaging of wave fields is often used for observation of short gravity and gravity-capillary waves in wave tanks and in the field. A light box placed under the waves emits light of spatially graduated intensity. The refracted light intensity image recorded overhead can be related to the wave slope field using a system of equations derived from the laws of refraction. Previous authors have proposed a two-stage reconstruction strategy for the recovery of wave slope and height fields: (i) estimation of local slope fields, (ii) global reconstruction of height and slope fields using the local estimates. Our statistical analysis of local slope estimates reveals that estimation error variability increases considerably from the bright to the dark ends of the imaging area, with some concomitant bias. The reconstruction problem behaves like an ill-posed inverse problem in the dark areas of the image. Ill-posedness is addressed by a reconstruction method that imposes Tikhonov regularization of directional wave slopes using penalized least squares. Other refinements proposed include (a) bias correction of local slope estimates, (b) spatially weighted reconstruction using the estimated variability of local slope estimates, and (c) more accurate estimates of reference light profiles from time sequence data. A computationally efficient algorithm that exploits sparsity in the resulting system of equations is employed to evaluate the regularized estimator. Simulation studies show that the refinements can result in substantial improvements in the mean squared error of reconstruction. The algorithm is applied to obtain wave field reconstructions from video recordings. Analysis of various video sequences demonstrates distinct spatial patterns at different wind speed and fetch combinations.
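The spatially weighted, Tikhonov-regularized reconstruction described above reduces, in its simplest form, to a penalized weighted least-squares solve. A small dense sketch with a first-difference roughness penalty follows; the operator choice and function name are illustrative, not the paper's:

```python
import numpy as np

def tikhonov_weighted(A, b, w, lam):
    """Weighted penalized least squares:
    minimize sum_i w_i * (b_i - (Ax)_i)^2 + lam * ||D x||^2,
    where D is the first-difference (roughness) operator."""
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # (n-1) x n differences
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A + lam * (D.T @ D), A.T @ W @ b)
```

Down-weighting the dark, high-variance pixels (small w_i) while lam stabilizes the ill-posed directions is the mechanism the abstract's refinements (a) and (b) rely on.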
Local fingerprint image reconstruction based on gabor filtering
NASA Astrophysics Data System (ADS)
Bakhtiari, Somayeh; Agaian, Sos S.; Jamshidi, Mo
2012-06-01
In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers when the input image has poor quality. This is the result of spurious estimates of frequency and orientation by classical approaches, particularly in scratch regions. In both techniques, the scratch marks are first recognized using a reliability image computed from the gradient images. The first algorithm is based on an inpainting technique, and the second method employs two different kernels for the scratch and non-scratch parts of the image to calculate the gradient images. The simulation results show that both approaches allow the actual information of the image to be preserved while connecting discontinuities correctly by approximating the orientation matrix more faithfully.
NASA Astrophysics Data System (ADS)
Akçakaya, Mehmet; Basha, Tamer A.; Weingärtner, Sebastian; Nezafat, Reza
2013-09-01
We propose an acquisition and reconstruction technique for accelerated free-breathing cardiac MRI acquisitions. For the acquisition, a random undersampling pattern, including a fully-sampled center of k-space, is generated prospectively. The k-space lines specified by this undersampling pattern are acquired with respiratory navigator (NAV) gating, where only the central k-space lines are acquired within the prespecified gating window. For the outer k-space lines, if the NAV signal corresponding to a k-space segment is outside the gating window, the segment is rejected but not re-acquired. The reconstruction approach jointly estimates the underlying image, using a compressed-sensing based approach, and the translational motion parameters for each of the outer k-space segments acquired outside the gating window. The feasibility of the approach is demonstrated in healthy adult subjects using whole-heart coronary MRI with a 3-fold accelerated random undersampling pattern. The proposed acquisition and reconstruction technique is compared to parallel imaging with uniform 3-fold undersampling. The two techniques exhibit similar image quality with a shorter acquisition time for the proposed approach (4:25 ± 0:31 minutes versus 6:52 ± 0:19 minutes).
Relaxed Linearized Algorithms for Faster X-Ray CT Image Reconstruction.
Nien, Hung; Fessler, Jeffrey A
2016-04-01
Statistical image reconstruction (SIR) methods are studied extensively for X-ray computed tomography (CT) due to the potential of acquiring CT scans with reduced X-ray dose while maintaining image quality. However, the longer reconstruction time of SIR methods hinders their use in X-ray CT in practice. To accelerate statistical methods, many optimization techniques have been investigated. Over-relaxation is a common technique to speed up convergence of iterative algorithms. For instance, using a relaxation parameter that is close to two in the alternating direction method of multipliers (ADMM) has been shown to speed up convergence significantly. This paper proposes a relaxed linearized augmented Lagrangian (AL) method that exhibits a theoretically faster convergence rate with over-relaxation and applies the proposed relaxed linearized AL method to X-ray CT image reconstruction problems. Experimental results with both simulated and real CT scan data show that the proposed relaxed algorithm (with ordered-subsets [OS] acceleration) is about twice as fast as the existing unrelaxed fast algorithms, with negligible computation and memory overhead. PMID:26685227
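The over-relaxation idea is easiest to show on a generic ADMM example. Below is standard ADMM for the lasso with the relaxation step x_hat = alpha*x + (1-alpha)*z inserted; it illustrates a relaxation parameter alpha near two, not the paper's linearized AL method for CT, and the problem and parameters are assumptions:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (proximal operator of t*||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam, rho=1.0, alpha=1.8, iters=300):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 with over-relaxation.
    alpha in (0, 2); values near 2 typically speed up convergence."""
    n = A.shape[1]
    Atb = A.T @ b
    M = A.T @ A + rho * np.eye(n)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))
        x_hat = alpha * x + (1.0 - alpha) * z   # over-relaxation step
        z = soft(x_hat + u, lam / rho)
        u = u + x_hat - z
    return z
```

Setting alpha = 1 recovers plain ADMM; the relaxed iterate x_hat mixes the new x-update with the previous z, which is the mechanism behind the roughly two-fold speedups the paper cites.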
Accelerated CMR using zonal, parallel and prior knowledge driven imaging methods
Kozerke, Sebastian; Plein, Sven
2008-01-01
Accelerated imaging is highly relevant for many CMR applications as competing constraints with respect to spatiotemporal resolution and tolerable scan times are frequently posed. Three approaches, all involving data undersampling to increase scan efficiencies, are discussed in this review. Zonal imaging can be considered a niche but nevertheless has found application in coronary imaging and CMR flow measurements. Current work on parallel-transmit systems is expected to revive the interest in zonal imaging techniques. The second and main approach to speeding up CMR sequences has been parallel imaging. A wide range of CMR applications has benefited from parallel imaging with reduction factors of two to three routinely applied for functional assessment, perfusion, viability and coronary imaging. Large coil arrays, as are becoming increasingly available, are expected to support reduction factors greater than three to four in particular in combination with 3D imaging protocols. Despite these prospects, theoretical work has indicated fundamental limits of coil encoding at clinically available magnetic field strengths. In that respect, alternative approaches exploiting prior knowledge about the object being imaged as such or jointly with parallel imaging have attracted considerable attention. Five to eight-fold scan accelerations in cine and dynamic CMR applications have been reported and image quality has been found to be favorable relative to using parallel imaging alone. With all acceleration techniques, careful consideration of the limits and the trade-off between acceleration and occurrence of artifacts that may arise if these limits are breached is required. In parallel imaging the spatially varying noise has to be considered when measuring contrast- and signal-to-noise ratios. Also, temporal fidelity in images reconstructed with prior knowledge driven methods has to be studied carefully. PMID:18534005
A methodology to event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous step. However, the methodology is not linear; it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1993-10-01
This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to reconstruct successfully images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
Super-resolution reconstruction of terahertz images
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Li; Hellicar, Andrew; Guo, Y. Jay
2008-04-01
A prototype terahertz imaging system has been built in CSIRO. This imager uses a backward wave oscillator as the source and a Schottky diode as the detector. It has a bandwidth of 500-700 GHz and a source power of 10 mW. The resolution at 610 GHz is about 0.85 mm. Even though this imaging system is a coherent system, only the signal power is measured at the detector and the phase information of the detected wave is lost. Some initial images of tree leaves, chocolate bars and pinholes have been acquired with this system. In this paper, we report experimental results of an attempt to improve the resolution of this imaging system beyond the diffraction limit (super-resolution). Due to the lack of the phase information needed for applying any coherent super-resolution algorithm, the performance of the incoherent Richardson-Lucy super-resolution algorithm has been evaluated. Experimental results have demonstrated that the Richardson-Lucy algorithm can significantly improve the resolution of these images in some sample areas while producing some artifacts in other areas. These experimental results are analyzed and discussed.
Joint model of motion and anatomy for PET image reconstruction
Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama
2007-12-15
Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement through the introduction of direct electron detectors has significantly enhanced the image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method. PMID:27572732
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and the corresponding variance of the radioactivity map based on an efficient optimal min-max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess the clinical potential.
Parallel Image Reconstruction for New Vacuum Solar Telescope
NASA Astrophysics Data System (ADS)
Li, Xue-Bao; Wang, Feng; Xiang, Yong Yuan; Zheng, Yan Fang; Liu, Ying Bo; Deng, Hui; Ji, Kai Fan
2014-04-01
Many advanced ground-based solar telescopes improve the spatial resolution of observation images using an adaptive optics (AO) system. As any AO correction remains only partial, it is necessary to use post-processing image reconstruction techniques such as speckle masking or shift-and-add (SAA) to reconstruct a high-spatial-resolution image from atmospherically degraded solar images. In the New Vacuum Solar Telescope (NVST), the spatial resolution in solar images is improved by frame selection and SAA. In order to overcome the burden of massive speckle data processing, we investigate the possibility of using the speckle reconstruction program in a real-time application at the telescope site. The code has been written in the C programming language and optimized for parallel processing in a multi-processor environment. We analyze the scalability of the code to identify possible bottlenecks, and we conclude that the presented code is capable of being run in real-time reconstruction applications at NVST and future large aperture solar telescopes if care is taken that the multi-processor environment has low latencies between the computation nodes.
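The shift-and-add (SAA) step mentioned above can be sketched as integer-pixel registration by FFT cross-correlation followed by averaging of the aligned frames. This toy version omits the frame selection and subpixel alignment used at NVST:

```python
import numpy as np

def shift_and_add(frames, ref=0):
    """Align each frame to frames[ref] by the peak of the circular
    cross-correlation (computed via FFT), then average the aligned frames."""
    R = np.fft.fft2(frames[ref])
    acc = np.zeros_like(frames[ref], dtype=float)
    for f in frames:
        # cross-correlation of f with the reference; its peak gives the
        # integer shift that best aligns f to the reference
        xc = np.fft.ifft2(np.fft.fft2(f).conj() * R).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging N aligned short-exposure frames suppresses uncorrelated noise by roughly sqrt(N) while keeping the sharp structure the atmosphere left intact in each frame, which is the rationale for frame selection plus SAA.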
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near-lossless, high-compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8-bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
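The transform-coding idea behind such coders can be sketched with an orthonormal DCT-II on a single 8x8 block: keep only the largest-magnitude coefficients, invert, and report the RMS error. The block size and coefficient-selection rule here are illustrative, not the CRAF/Cassini coder.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    c[0] /= np.sqrt(2.0)
    return c

def compress_block(block, keep):
    """Keep the `keep` largest-magnitude 2-D DCT coefficients of a
    square block; return (reconstruction, rms_error)."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T                    # forward 2-D DCT
    thresh = np.sort(np.abs(coef).ravel())[-keep]
    coef = np.where(np.abs(coef) >= thresh, coef, 0.0)
    rec = C.T @ coef @ C                      # inverse 2-D DCT
    return rec, np.sqrt(np.mean((rec - block) ** 2))
```

A real coder would additionally quantize the surviving coefficients and entropy-code them; the threshold step above only models the energy-compaction part of the pipeline.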
Coronary x-ray angiographic reconstruction and image orientation
Sprague, Kevin; Drangova, Maria; Lehmann, Glen
2006-03-15
We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views. These points are utilized to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using: (1) near-intersection distances of the rays that connect x-ray sources with projected points, (2) distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) virtual phantom, (2) real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms illustrate that the vessel centerlines are reconstructed in 3D with accuracy below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and the hemodynamic impact on myocardial perfusion in patients with coronary artery disease.
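For a pair of views, accuracy measure (1) reduces to the shortest segment between two back-projection rays; the 3D point is taken at its midpoint. A minimal sketch of that geometric core (not the authors' GA-driven implementation):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p_i + t_i * d_i,
    plus the near-intersection distance used as an accuracy measure."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b                     # zero only for parallel rays
    t1 = ((r @ d1) * c - (r @ d2) * b) / denom
    t2 = ((r @ d1) * b - (r @ d2) * a) / denom
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2       # closest points on each ray
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)
```

With more than two views, the analogous least-squares problem over all rays gives the multiview reconstruction evaluated in the paper.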
Image reconstructions from super-sampled data sets with resolution modeling in PET imaging
Li, Yusheng; Matej, Samuel; Metzler, Scott D.
2014-01-01
Purpose: Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. Methods: The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling, to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Results: Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method.
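The MLEM building block that the super-sampling model extends can be sketched generically with a dense system matrix (a toy, not the authors' scanner-specific forward model):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Basic ML-EM iteration: x <- x / (A^T 1) * A^T (y / (A x)).
    A is the (nonnegative) system matrix, y the measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward project, avoid /0
        x = x / sens * (A.T @ (y / proj))     # multiplicative update
    return x
```

In the super-sampling framework, A would be replaced by the composition of the tomographic, downsampling, and shifting matrices, and y by the stacked multi-position acquisitions.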
Nien, Hung; Fessler, Jeffrey A.
2014-01-01
Augmented Lagrangian (AL) methods for solving convex optimization problems with linear constraints are attractive for imaging applications with composite cost functions due to the empirical fast convergence rate under weak conditions. However, for problems such as X-ray computed tomography (CT) image reconstruction, where the inner least-squares problem is challenging and requires iterations, AL methods can be slow. This paper focuses on solving regularized (weighted) least-squares problems using a linearized variant of AL methods that replaces the quadratic AL penalty term in the scaled augmented Lagrangian with its separable quadratic surrogate (SQS) function, leading to a simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM. To further accelerate the proposed algorithm, we use a second-order recursive system analysis to design a deterministic downward continuation approach that avoids tedious parameter tuning and provides fast convergence. Experimental results show that the proposed algorithm significantly accelerates the convergence of X-ray CT image reconstruction with negligible overhead and can reduce OS artifacts when using many subsets. PMID:25248178
Reconstruction techniques for sparse multistatic linear array microwave imaging
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2014-06-01
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging-beam swept-frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multistatic array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in the cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique is described along with the associated Fourier-transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
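The back-projection focusing idea is simply: for each image pixel, sum every element's echo evaluated at that pixel's exact round-trip delay. A monostatic toy sketch (the multistatic case adds separate transmit and receive paths; the echo model here is hypothetical):

```python
import numpy as np

def backproject(elements, pixels, echo, c=1.0):
    """Monostatic back-projection: for each pixel, coherently sum each
    element's echo signal at the pixel's two-way propagation delay.
    `echo(e, t)` returns element e's received signal at time(s) t."""
    image = np.zeros(len(pixels))
    for e in elements:
        d = np.linalg.norm(pixels - e, axis=1)   # element-to-pixel range
        image += echo(e, 2.0 * d / c)            # focus at round-trip delay
    return image
```

Because each pixel uses the exact element positions, this remains valid in the extreme near field where the Fourier-transform reconstruction's plane-wave assumptions break down, at the cost of much more computation.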
Fast texture and structure image reconstruction using the perceptual hash
NASA Astrophysics Data System (ADS)
Voronin, V. V.; Marchuk, V. I.; Frantc, V. A.; Egiazarian, Karen
2013-02-01
This paper focuses on fast reconstruction of image texture and structure. The proposed method consists of several steps. First, textural features are extracted from the input image based on Laws' texture energy. The pixels around damaged image regions are clustered using these features, which allows the correspondence between pixels from different patches to be defined. Second, a cubic spline curve is applied to reconstruct structure and to connect edges and contours in the damaged area. The choice of the current pixel to be recovered is made using the fast marching approach. The Telea method or modifications of the exemplar-based method are then used, depending on the classification of the region in which the to-be-restored pixel is located. To find similar patches quickly, we use a perceptual hash: this strategy yields a data structure containing the hashes of similar patches, reducing the search procedure to a single hash computation per patch. The proposed method is tested on various sample images with different geometrical features and compared with state-of-the-art image inpainting methods; the proposed technique is shown to produce better results in reconstructing small and large missing objects in test images.
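The abstract does not specify which perceptual hash is used; one common choice, sketched here, is the 64-bit average hash, which maps visually similar patches to hashes with small Hamming distance so that candidate patches can be looked up by hash bucket instead of exhaustive comparison.

```python
import numpy as np

def average_hash(patch):
    """64-bit average hash: block-average the patch down to 8x8,
    threshold at the mean, pack the bits into an integer.
    Patch dimensions must be multiples of 8."""
    h, w = patch.shape
    small = patch.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    bits = (small > small.mean()).ravel()
    return sum(int(b) << i for i, b in enumerate(bits))

def hamming(a, b):
    """Bit distance between two hashes; small for similar patches."""
    return bin(a ^ b).count("1")
```

A dictionary keyed by hash (or by hash prefix, to tolerate a few differing bits) then serves as the "data structure containing the hashes of similar patches" described above.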
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research demonstrating how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm that has meaningful parameter specifications and can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images, one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms. PMID:24845059
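Of the five indices, the global inhomogeneity (GI) index has a particularly compact definition: the summed absolute deviation of lung-pixel tidal impedance values from their median, normalised by the total tidal signal. A sketch (assuming a precomputed tidal image and lung mask):

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """GI index: sum of |value - median| over lung pixels, divided by
    the total tidal signal; 0 for perfectly homogeneous ventilation."""
    vals = tidal_image[lung_mask]
    return np.sum(np.abs(vals - np.median(vals))) / np.sum(vals)
```

Because the index is a ratio of impedance-change sums over the same pixels, it is plausible that moderate differences between reconstruction algorithms cancel, consistent with the paper's finding.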
Elasticity reconstructive imaging by means of stimulated echo MRI.
Chenevert, T L; Skovoroda, A R; O'Donnell, M; Emelianov, S Y
1998-03-01
A method is introduced to measure internal mechanical displacement and strain by means of MRI. Such measurements are needed to reconstruct an image of the elastic Young's modulus. A stimulated echo acquisition sequence with additional gradient pulses encodes internal displacements in response to an externally applied differential deformation. The sequence provides an accurate measure of static displacement by limiting the mechanical transitions to the mixing period of the stimulated echo. Elasticity reconstruction involves definition of a region of interest having uniform Young's modulus along its boundary and subsequent solution of the discretized elasticity equilibrium equations. Data acquisition and reconstruction were performed on a urethane rubber phantom of known elastic properties and an ex vivo canine kidney phantom using <2% differential deformation. Regional elastic properties are well represented on Young's modulus images. The long-term objective of this work is to provide a means for remote palpation and elasticity quantitation in deep tissues otherwise inaccessible to manual palpation. PMID:9498605
Skin image reconstruction using Monte Carlo based color generation
NASA Astrophysics Data System (ADS)
Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji
2010-11-01
We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in the nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by RGB camera and spectrophotometer, respectively. The skin image is separated into the color component and texture component. The measured spectral reflectance is used to evaluate scattering and absorption coefficients in each of the nine layers which are necessary for Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance in given conditions for the nine-layered skin tissue model. The new color component is synthesized to the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.
In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla
NASA Astrophysics Data System (ADS)
Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart
2015-03-01
Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions, so sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both the magnitude and phase of the sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high-field MRI.
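The SENSE unfolding step that consumes these sensitivity maps can be sketched for the simplest case, a 1-D profile at reduction factor R = 2 (a textbook toy, not the rotating-SENSE algorithm itself): each aliased sample is a sensitivity-weighted mixture of two pixels half a field of view apart, separated by a per-pixel least-squares solve.

```python
import numpy as np

def sense_unfold(folded, sens):
    """SENSE unfolding at R = 2 on a 1-D profile.
    folded: (n_coils, N/2) aliased coil images.
    sens:   (n_coils, N) coil sensitivity maps."""
    n_coils, half = folded.shape
    full = np.zeros(2 * half, dtype=folded.dtype)
    for i in range(half):
        S = sens[:, [i, i + half]]                 # n_coils x 2 mixing matrix
        sol, *_ = np.linalg.lstsq(S, folded[:, i], rcond=None)
        full[i], full[i + half] = sol              # the two unfolded pixels
    return full
```

The quality of the unfolded image hinges entirely on the accuracy of `sens` at the coil's current angular position, which is why the rotation-dependent sensitivity estimation above matters.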
Advances in imaging technologies for planning breast reconstruction
Mohan, Anita T.
2016-01-01
The role and choice of preoperative imaging for planning in breast reconstruction is still a disputed topic in the reconstructive community, with varying opinion on the necessity, the ideal imaging modality, costs and impact on patient outcomes. Since the advent of perforator flaps, their use in microsurgical breast reconstruction has grown. Perforator-based flaps afford lower donor morbidity by sparing the underlying muscle, provide durable results and superior cosmesis to create a natural-looking new breast, and are preferred in the context of radiation therapy. However, these surgeries are complex, more technically challenging than implant-based reconstruction, and leave little room for error. Imaging in breast reconstruction can assist the surgeon in exploring or confirming flap choices based on donor site characteristics and the presence of suitable perforators. Vascular anatomical studies in the lab have given the surgeon a foundation of knowledge on the location and vascular territories of individual perforators, improving our understanding of flap design and safe flap harvest. The creation of a presurgical map in patients can highlight any abnormal or individual anatomical variance to optimize flap design, intraoperative decision-making and execution of flap harvest with greater predictability and efficiency. This article highlights the role and techniques for preoperative planning using the newer technologies that have been adopted in reconstructive clinical practice: computed tomographic angiography (CTA), magnetic resonance angiography (MRA), laser-assisted indocyanine green fluorescence angiography (LA-ICGFA) and dynamic infrared thermography (DIRT). The primary focus of this paper is on the application of CTA and MRA imaging modalities. PMID:27047790
Acceleration of the universe: a reconstruction of the effective equation of state
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-04-01
The present work is based upon a parametric reconstruction of the effective or total equation of state in a model for the universe with accelerated expansion. The constraints on the model parameters are obtained by maximum likelihood analysis using the supernova distance modulus data, observational Hubble data, baryon acoustic oscillation data and cosmic microwave background shift parameter data. For statistical comparison, the same analysis has also been carried out for the wCDM dark energy model. Different model selection criteria (the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)) give a clear indication that the reconstructed model is well consistent with the wCDM model. Both models (the weff(z) model and the wCDM model) have also been presented in the (q0, j0) parameter space. Tighter constraints on the present values of the dark energy equation of state parameter (wDE(z = 0)) and the cosmological jerk (j0) have been achieved for the reconstructed model.
Cascaded diffractive optical elements for improved multiplane image reconstruction.
Gülses, A Alkan; Jenkins, B Keith
2013-05-20
Computer-generated phase-only diffractive optical elements in a cascaded setup are designed by one deterministic and one stochastic algorithm for multiplane image formation. It is hypothesized that increasing the number of elements as wavefront modulators in the longitudinal dimension would enlarge the available solution space, thus enabling enhanced image reconstruction. Numerical results show that increasing the number of holograms improves quality at the output. Design principles, computational methods, and specific conditions are discussed. PMID:23736247
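The single-element, single-plane baseline that such cascaded designs generalise is the classic Gerchberg-Saxton iteration: alternate between the hologram plane (unit-amplitude, phase-only constraint) and the image plane (target-amplitude constraint). A minimal sketch with an illustrative target:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=200, seed=0):
    """Design a phase-only hologram whose far field (FFT) approximates
    target_amp, by alternating projections between the two constraints."""
    rng = np.random.default_rng(seed)
    field = np.exp(2j * np.pi * rng.random(target_amp.shape))
    for _ in range(n_iter):
        img = np.fft.fft2(field)
        img = target_amp * np.exp(1j * np.angle(img))   # image-plane constraint
        field = np.fft.ifft2(img)
        field = np.exp(1j * np.angle(field))            # phase-only constraint
    return np.angle(field)
```

Cascading several phase elements, as in the paper, adds free-space propagation between planes and enlarges the solution space beyond what this single-plane loop can reach.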
Cortical Surface Reconstruction from High-Resolution MR Brain Images
Osechinskiy, Sergey; Kruggel, Frithjof
2012-01-01
Reconstruction of the cerebral cortex from magnetic resonance (MR) images is an important step in quantitative analysis of the human brain structure, for example, in sulcal morphometry and in studies of cortical thickness. Existing cortical reconstruction approaches are typically optimized for standard resolution (~1 mm) data and are not directly applicable to higher resolution images. A new PDE-based method is presented for automated cortical reconstruction that is computationally efficient and scales well with grid resolution, and thus is particularly suitable for high-resolution MR images with submillimeter voxel size. The method uses a mathematical model of a field in an inhomogeneous dielectric. This field mapping, similarly to a Laplacian mapping, has nice laminar properties in the cortical layer, and helps to identify the unresolved boundaries between cortical banks in narrow sulci. The pial cortical surface is reconstructed by advection along the field gradient as a geometric deformable model constrained by a topology-preserving level set approach. The method's performance is illustrated on ex vivo images with 0.25–0.35 mm isotropic voxels. The method is further evaluated by cross-comparison with results of the FreeSurfer software on standard resolution data sets from the OASIS database featuring pairs of repeated scans for 20 healthy young subjects. PMID:22481909
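The laminar behaviour of such a field map can be illustrated with its simplest special case: a Laplace equation solved by Jacobi iteration between two clamped boundaries (the paper's inhomogeneous-dielectric model reduces to this when the permittivity is constant; masks and grid here are illustrative).

```python
import numpy as np

def laplace_map(inner, outer, n_iter=2000):
    """Jacobi iteration for a Laplacian layer mapping: the potential is
    clamped to 0 on the inner boundary mask and 1 on the outer mask;
    level sets of the harmonic interpolant define laminae in between.
    Uses np.roll, so the grid is periodic except where clamped."""
    u = np.zeros(inner.shape)
    u[outer] = 1.0
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(inner, 0.0, np.where(outer, 1.0, avg))
    return u
```

Advecting a surface along the gradient of `u` from the inner to the outer boundary mimics, in miniature, the pial-surface evolution described above.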
Path method for reconstructing images in fluorescence optical tomography
Kravtsenyuk, Olga V; Lyubimov, Vladimir V; Kalintseva, Natalie A
2006-11-30
A reconstruction method elaborated for the optical diffusion tomography of the internal structure of objects containing absorbing and scattering inhomogeneities is considered. The method is developed for studying objects with fluorescing inhomogeneities and can be used for imaging of distributions of artificial fluorophores whose aggregations indicate the presence of various diseases or pathological deviations. (special issue devoted to multiple radiation scattering in random media)
Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images
T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Fox, Marsha; Hege, E. Keith; Hluck, Laura; Drummond, Jack; Harvey, David
1997-05-01
Speckle imaging techniques have been shown to mitigate atmospheric resolution limits, allowing near-diffraction-limited images to be reconstructed. Few images of extended objects reconstructed by use of these techniques have been published, and most of these results are for relatively bright objects. We present image reconstructions of an orbiting Molniya 3 spacecraft from data collected by use of a 2.3-m ground-based telescope. The apparent brightness of the satellite was 15th visual magnitude. Power-spectrum and bispectrum speckle imaging techniques are used prior to image reconstruction to ameliorate atmospheric blurring. We discuss how these images, although poorly resolved, can be used to provide information on the satellite's functional status. It is shown that our previously published optimal algorithms produce a higher-quality image than do conventional speckle imaging methods.
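The power-spectrum step works because averaging |FFT|² over short exposures discards the random per-frame phases that wash out a long exposure. A minimal numpy demonstration, using a randomly shifted point source as a stand-in for tip-tilt atmosphere (the bispectrum step, which recovers the object phase, is not shown):

```python
import numpy as np

def speckle_power_spectrum(frames):
    """Labeyrie-style estimate: mean of |FFT|^2 over short exposures,
    preserving high-spatial-frequency information that cancels in the
    FFT of the long-exposure (mean) frame."""
    frames = np.asarray(frames, dtype=float)
    return np.mean(np.abs(np.fft.fft2(frames, axes=(-2, -1))) ** 2, axis=0)
```

For a shifted point source, each frame's power spectrum is flat, so the average stays flat (full frequency content), while the long exposure loses power away from DC.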
Analysis operator learning and its application to image reconstruction.
Hawe, Simon; Kleinsteuber, Martin; Diepold, Klaus
2013-06-01
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model severely depends on the right choice of the suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on l(p)-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques. PMID:23412611
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
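The image-topology idea, using flight-control positions to restrict which image pairs are fed to feature matching instead of trying all O(n^2) combinations, can be sketched as follows (the function name and distance threshold are illustrative, not from the paper):

```python
import itertools

def candidate_pairs(positions, max_dist):
    """Keep only image pairs whose camera positions are within max_dist.

    positions: per-image (x, y) coordinates from the flight-control data.
    Returns index pairs to pass on to feature matching.
    """
    pairs = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_dist ** 2:
            pairs.append((i, j))
    return pairs

# Two widely separated flight strips: only intra-strip pairs survive,
# so feature matching runs on 2 pairs instead of all 6.
pos = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(candidate_pairs(pos, 2.0))  # [(0, 1), (2, 3)]
```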
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
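The paired percentage and dB figures quoted above are two views of the same quantity: a fractional MSE reduction r corresponds to a PSNR gain of 10 log10(1/(1 - r)) dB. A quick consistency check of the abstract's numbers:

```python
import math

def mse_reduction_db(r):
    """PSNR gain (dB) implied by reducing MSE to a fraction (1 - r) of baseline."""
    return 10.0 * math.log10(1.0 / (1.0 - r))

print(round(mse_reduction_db(0.3378), 2))  # 1.79 dB (satellite images)
print(round(mse_reduction_db(0.4988), 2))  # 3.0 dB (FBI fingerprints)
print(round(mse_reduction_db(0.4235), 2))  # 2.39 dB (digital photographs)
```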
Bayesian PET image reconstruction incorporating anato-functional joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2009-12-01
We developed a maximum a posteriori (MAP) reconstruction method for positron emission tomography (PET) image reconstruction incorporating magnetic resonance (MR) image information, with the joint entropy between the PET and MR image features serving as the regularization constraint. A non-parametric method was used to estimate the joint probability density of the PET and MR images. Using realistically simulated PET and MR human brain phantoms, the quantitative performance of the proposed algorithm was investigated. Incorporation of the anatomic information via this technique, after parameter optimization, was seen to dramatically improve the noise versus bias tradeoff in every region of interest, compared to the result from using conventional MAP reconstruction. In particular, hot lesions in the FDG PET image, which had no anatomical correspondence in the MR image, also had an improved contrast versus noise tradeoff. Corrections were made to figures 3, 4 and 6, and to the second paragraph of section 3.1 on 13 November 2009. The corrected electronic version is identical to the print version.
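As a toy illustration of why joint entropy serves as a coupling prior: when two images are structurally aligned, their joint histogram concentrates and the joint entropy drops, so penalizing JE favors reconstructions consistent with the anatomy. A simple histogram estimate stands in here for the paper's non-parametric density estimate (all names and parameters are illustrative):

```python
import numpy as np

def joint_entropy(a, b, bins=32):
    """Joint entropy (nats) of two signals from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)                  # stand-in "anatomical" values
aligned = x + 0.1 * rng.standard_normal(10000)  # strongly dependent pair
unrelated = rng.standard_normal(10000)          # independent pair
# The dependent pair has the lower joint entropy, so a JE prior favors it.
print(joint_entropy(x, aligned) < joint_entropy(x, unrelated))  # True
```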
Image reconstruction from Pulsed Fast Neutron Analysis
NASA Astrophysics Data System (ADS)
Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob
1999-06-01
Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic to their elemental composition. By timing the arrival of the emitted radiation to an array of gamma-ray detectors a three-dimensional elemental density map or image of the cargo is created. The process to determine the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing includes also phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition system of coordinates to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.
Comparison of image reconstruction methods for structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk; Fliegel, Karel; Klíma, Miloš
2014-05-01
Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high frequency information is encoded through aliasing into the observed image. By acquiring multiple images with different illumination patterns, the aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise level conditions on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: Signal to Noise Ratio (SNR), Signal to Background Ratio (SBR), circular average of the power spectral density and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine illumination patterned images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space. High noise levels in the raw data can cause inaccuracies in the shifts of the spectral components which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is new: the coded-aperture processing method is used, for the first time, independently of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system is overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In a visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.
Statistical reconstruction algorithms for continuous wave electron spin resonance imaging
NASA Astrophysics Data System (ADS)
Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon
2013-06-01
Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Reconstructing an SR image at a factor of two with best accuracy requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
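The two-step scheme above can be sketched with a nearest-neighbour projection and known shifts (registration is assumed done; the names, grid handling, and wrap-around indexing are illustrative simplifications):

```python
import numpy as np

def project_to_hr(lr_images, shifts, factor=2):
    """Project registered LR images onto an HR grid (nearest-neighbour).

    lr_images: list of (h, w) arrays; shifts: per-image (dy, dx) subpixel
    shifts in LR pixel units relative to the reference image.
    """
    h, w = lr_images[0].shape
    hr = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hr)
    for img, (dy, dx) in zip(lr_images, shifts):
        ys = np.rint((np.arange(h) + dy) * factor).astype(int) % (h * factor)
        xs = np.rint((np.arange(w) + dx) * factor).astype(int) % (w * factor)
        hr[np.ix_(ys, xs)] += img
        count[np.ix_(ys, xs)] += 1
    return hr / np.maximum(count, 1)

# Simulate four LR images by subpixel-shifting and subsampling an HR image,
# then reconstruct: with these four independent shifts, 2x SR is exact.
rng = np.random.default_rng(0)
hr_true = rng.random((16, 16))
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
lr = [hr_true[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
hr_rec = project_to_hr(lr, shifts, factor=2)
print(np.allclose(hr_rec, hr_true))  # True
```

With noiseless data and these four shifts every HR grid cell receives exactly one LR sample, which is why the factor-2 reconstruction is exact, matching the abstract's four-image requirement.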
Forward-Projection Architecture for Fast Iterative Image Reconstruction in X-ray CT.
Kim, Jung Kuk; Fessler, Jeffrey A; Zhang, Zhengya
2012-10-01
Iterative image reconstruction can dramatically improve the image quality in X-ray computed tomography (CT), but the computation involves iterative steps of 3D forward- and back-projection, which impedes routine clinical use. To accelerate forward-projection, we analyze the CT geometry to identify the intrinsic parallelism and data access sequence for a highly parallel hardware architecture. To improve the efficiency of this architecture, we propose a water-filling buffer to remove pipeline stalls, and an out-of-order sectored processing to reduce the off-chip memory access by up to three orders of magnitude. We make a floating-point to fixed-point conversion based on numerical simulations and demonstrate comparable image quality at a much lower implementation cost. As a proof of concept, a 5-stage fully pipelined, 55-way parallel separable-footprint forward-projector is prototyped on a Xilinx Virtex-5 FPGA for a throughput of 925.8 million voxel projections/s at 200 MHz clock frequency, 4.6 times higher than an optimized 16-threaded program running on an 8-core 2.8-GHz CPU. A similar architecture can be applied to back-projection for a complete iterative image reconstruction system. The proposed algorithm and architecture can also be applied to hardware platforms such as graphics processing unit and digital signal processor to achieve significant accelerations. PMID:23087589
Spectral reconstruction by scatter analysis for a linear accelerator photon beam.
Jalbout, Wassim T; Spyrou, Nicholas M
2006-05-01
Pre-existing methods for photon beam spectral reconstruction are briefly reviewed. An alternative reconstruction method by scatter analysis for linear accelerators is introduced. The method consists of irradiating a small plastic phantom at standard 100 cm SSD and inferring primary beam energy spectral information based on the measurement with a standard Farmer chamber of scatter around the phantom at several specific scatter angles: a scatter curve is measured which is indicative of the primary spectrum at hand. A Monte Carlo code is used to simulate the scatter measurement set-up and predict the relative magnitude of scatter measurements for mono-energetic primary beams. Based on mono-energetic primary scatter data, measured scatter curves are analysed and the spectrum unfolded as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The method is applied to an Elekta/SL18 6 MV photon beam. The reconstructed spectrum matches the Monte Carlo calculated spectrum for the same beam within 6.2% (average error when spectra are compared bin by bin). Depth dose values calculated for the reconstructed spectrum agree with physically measured depth dose data to within 1%. Scatter analysis is preliminarily shown to have potential as a practical spectral reconstruction method requiring few measurements under standard 100 cm SSD and feasible in any radiotherapy department using a phantom and a Farmer chamber. PMID:16625037
Holographic particle image velocimetry: analysis using a conjugate reconstruction geometry
NASA Astrophysics Data System (ADS)
Barnhart, D. H.; Halliwell, N. A.; Coupland, J. M.
2000-10-01
Holographic recording techniques have recently been studied as a means to extend two-component, planar particle image velocimetry (PIV) techniques for three-component, whole-field velocity measurements. In a similar manner to two-component PIV, three-component, holographic PIV (HPIV) uses correlation-based techniques to extract particle displacement fields from double-exposure holograms. Since a holographic image contains information concerning both the phase and the amplitude of the scattered field it is possible to correlate either the intensity or the complex amplitude. In previous work we have shown that optical methods to compute the autocorrelation of the complex amplitude are inherently more tolerant to aberrations introduced in the reconstruction process, Coupland, Halliwell, Proc. Roy. Soc. 453 (1960) (1997) 1066. In this paper we introduce a new method of holographic recording and reconstruction that allows a constant image shift to be introduced to the particle image displacement. The technique, which we call conjugate reconstruction, resolves directional ambiguity and extends the dynamic range of HPIV. The theory of this method is examined in detail and a relationship between the image and object displacement is derived. Experimental verification of the theory is presented.
Toward 5D image reconstruction for optical interferometry
NASA Astrophysics Data System (ADS)
Baron, Fabien; Kloppenborg, Brian; Monnier, John
2012-07-01
We report on our progress toward flexible image reconstruction software for optical interferometry capable of "5D imaging" of stellar surfaces. 5D imaging is here defined as the capability to image one or several stars directly in three dimensions, with both the time and wavelength dependencies taken into account during the reconstruction process. Our algorithm makes use of the Healpix (Gorski et al., 2005) sphere partition scheme to tessellate the stellar surface, the 3D Open Graphics Language (OpenGL) to model the spheroid geometry, and the Open Compute Language (OpenCL) framework for all other computations, including the Roche gravitational potential equation. We use the Monte Carlo Markov Chain software SQUEEZE to solve the image reconstruction problem on the surfaces of these stars. Finally, the Compressed Sensing and Bayesian Evidence paradigms are employed to determine the best regularization for spotted stars.
Atmospheric isoplanatism and astronomical image reconstruction on Mauna Kea
Cowie, L.L.; Songaila, A.
1988-07-01
Atmospheric isoplanatism for visual wavelength image-reconstruction applications was measured on Mauna Kea in Hawaii. For most nights the correlation of the transform functions is substantially wider than the long-exposure transform function at separations up to 30 arcsec. Theoretical analysis shows that this is reasonable if the mean Fried parameter is approximately 30 cm at 5500 Å. Reconstructed image quality may be described by a Gaussian with a FWHM of λ/s_0. Under average conditions, s_0(30 arcsec) exceeds 55 cm at 7000 Å. The results show that visual image quality in the 0.1-0.2 arcsec range is obtainable over much of the sky with large ground-based telescopes on this site.
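The quoted figures can be checked directly: taking the reconstructed FWHM as λ/s_0, a value of s_0 = 55 cm at 7000 Å gives about 0.26 arcsec, an upper bound consistent with the 0.1-0.2 arcsec range when s_0 is larger (the helper name is ours):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206265

def fwhm_arcsec(wavelength_m, s0_m):
    """Reconstructed-image FWHM ~ lambda / s_0, converted to arcseconds."""
    return wavelength_m / s0_m * ARCSEC_PER_RAD

# 7000 Angstrom = 7000e-10 m; s_0 = 0.55 m
print(round(fwhm_arcsec(7000e-10, 0.55), 2))  # 0.26
```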
Colored three-dimensional reconstruction of vehicular thermal infrared images
NASA Astrophysics Data System (ADS)
Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi
2015-06-01
Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.
PET Image Reconstruction Using Information Theoretic Anatomical Priors
Somayajula, Sangeetha; Panagiotou, Christos; Rangarajan, Anand; Li, Quanzheng; Arridge, Simon R.
2011-01-01
We describe a nonparametric framework for incorporating information from co-registered anatomical images into positron emission tomographic (PET) image reconstruction through priors based on information theoretic similarity measures. We compare and evaluate the use of mutual information (MI) and joint entropy (JE) between feature vectors extracted from the anatomical and PET images as priors in PET reconstruction. Scale-space theory provides a framework for the analysis of images at different levels of detail, and we use this approach to define feature vectors that emphasize prominent boundaries in the anatomical and functional images, and attach less importance to detail and noise that is less likely to be correlated in the two images. Through simulations that model the best case scenario of perfect agreement between the anatomical and functional images, and a more realistic situation with a real magnetic resonance image and a PET phantom that has partial volumes and a smooth variation of intensities, we evaluate the performance of MI and JE based priors in comparison to a Gaussian quadratic prior, which does not use any anatomical information. We also apply this method to clinical brain scan data using F18 Fallypride, a tracer that binds to dopamine receptors and therefore localizes mainly in the striatum. We present an efficient method of computing these priors and their derivatives based on fast Fourier transforms that reduce the complexity of their convolution-like expressions. Our results indicate that while sensitive to initialization and choice of hyperparameters, information theoretic priors can reconstruct images with higher contrast and superior quantitation than quadratic priors. PMID:20851790
Image reconstruction techniques applied to nuclear mass models
NASA Astrophysics Data System (ADS)
Morales, Irving O.; Isacker, P. Van; Velazquez, V.; Barea, J.; Mendoza-Temis, J.; Vieyra, J. C. López; Hirsch, J. G.; Frank, A.
2010-02-01
A new procedure is presented that combines well-known nuclear models with image reconstruction techniques. A color-coded image is built by taking the differences between measured masses and the predictions given by the different theoretical models. This image is viewed as part of a larger array in the (N,Z) plane, where unknown nuclear masses are hidden, covered by a “mask.” We apply a suitably adapted deconvolution algorithm, used in astronomical observations, to “open the window” and see the rest of the pattern. We show that it is possible to significantly improve mass predictions in regions not too far from measured nuclear masses.
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
The SRT reconstruction algorithm for semiquantification in PET imaging
Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.
2015-10-15
Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanner. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present an LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3-D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the nonnegative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which
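The restoration step, recovering high-resolution LOR intensities from collimator-encoded measurements through a modeled transfer matrix, can be illustrated with non-negative least squares on a toy linear system (the dimensions and random transfer matrix are illustrative stand-ins, not the paper's system model):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_fine, n_meas = 8, 16                # fine LORs; collimator-encoded readings
A = rng.random((n_meas, n_fine))      # modeled transfer matrix (stand-in)
x_true = rng.random(n_fine)           # true fine-LOR intensities
b = A @ x_true                        # noiseless encoded measurements

x_rec, residual = nnls(A, b)          # non-negative least-squares restoration
print(np.allclose(x_rec, x_true, atol=1e-6))  # True for this noiseless case
```

With noiseless, overdetermined data the restoration is exact; with real count statistics the non-negativity constraint (or the EM alternative the abstract mentions) keeps the recovered intensities physical.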
Accuracy of quantitative reconstructions in SPECT/CT imaging
NASA Astrophysics Data System (ADS)
Shcherbinin, S.; Celler, A.; Belhocine, T.; van der Werf, R.; Driedger, A.
2008-09-01
The goal of this study was to determine the quantitative accuracy of our OSEM-APDI reconstruction method based on SPECT/CT imaging for Tc-99m, In-111, I-123, and I-131 isotopes. Phantom studies were performed on a SPECT/low-dose multislice CT system (Infinia-Hawkeye-4 slice, GE Healthcare) using clinical acquisition protocols. Two radioactive sources were centrally and peripherally placed inside an anthropomorphic thorax phantom filled with non-radioactive water. Corrections for attenuation, scatter, collimator blurring and collimator septal penetration were applied, and their contribution to the overall accuracy of the reconstruction was evaluated. Reconstruction with the most comprehensive set of corrections resulted in activity estimation with error levels of 3-5% for all the isotopes.
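The OSEM method used above is an ordered-subsets acceleration of the classic MLEM update. As a hedged illustration of that core iteration (a toy dense-matrix system, not the authors' OSEM-APDI implementation):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM (MLEM) reconstruction for emission tomography.

    A : (n_bins, n_voxels) system matrix (toy, dense)
    y : (n_bins,) measured counts
    """
    x = np.ones(A.shape[1])                   # uniform positive start
    sens = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel detection sensitivity
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / sens          # multiplicative EM update
    return x
```

OSEM applies the same update using only a subset of projection bins per sub-iteration, which accelerates convergence without changing the fixed point of the iteration.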
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become quite popular in recent years. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data; that is, the 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. We then explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
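The "refocusing" operation described above is commonly implemented as a shift-and-sum over sub-aperture images. The sketch below is a minimal integer-shift illustration; the array layout and the `shift` parameter (which selects the synthetic focal plane) are assumptions, not the authors' real-domain method:

```python
import numpy as np

def refocus(lightfield, shift):
    """Synthetic refocusing by shift-and-sum.

    lightfield : array (U, V, H, W) of sub-aperture images
    shift      : per-aperture pixel shift selecting the focal plane
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view proportionally to its offset from the center
            du = int(round(shift * (u - (U - 1) / 2)))
            dv = int(round(shift * (v - (V - 1) / 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Objects at the depth matched by `shift` add up coherently and appear sharp, while objects at other depths are averaged over misaligned positions and blur out.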
Comparison of power spectra for tomosynthesis projections and reconstructed images
Engstrom, Emma; Reiser, Ingrid; Nishikawa, Robert
2009-01-01
Burgess et al. [Med. Phys. 28, 419–437 (2001)] showed that the power spectrum of mammographic breast background follows a power law and that lesion detectability is affected by the power-law exponent β which measures the amount of structure in the background. Following the study of Burgess et al., the authors measured and compared the power-law exponent of mammographic backgrounds in tomosynthesis projections and reconstructed slices to investigate the effect of tomosynthesis imaging on background structure. Our data set consisted of 55 patient cases. For each case, regions of interest (ROIs) were extracted from both projection images and reconstructed slices. The periodogram of each ROI was computed by taking the squared modulus of the Fourier transform of the ROI. The power-law exponent was determined for each periodogram and averaged across all ROIs extracted from all projections or reconstructed slices for each patient data set. For the projections, the mean β averaged across the 55 cases was 3.06 (standard deviation of 0.21), while it was 2.87 (0.24) for the corresponding reconstructions. The difference in β for a given patient between the projection ROIs and the reconstructed ROIs averaged across the 55 cases was 0.194, which was statistically significant (p<0.001). The 95% CI for the difference between the mean value of β for the projections and reconstructions was [0.170, 0.218]. The results are consistent with the observation that the amount of breast structure in the tomosynthesis slice is reduced compared to projection mammography and that this may lead to improved lesion detectability. PMID:19544793
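The power-law exponent measurement described above (a periodogram followed by a log-log fit) can be sketched as follows; this is a simplified illustration that omits the windowing and radial-averaging choices a careful study would make:

```python
import numpy as np

def powerlaw_beta(roi):
    """Estimate the power-law exponent beta of a square background ROI by a
    log-log linear fit to its 2D periodogram, P(f) ~ f**(-beta)."""
    n = roi.shape[0]
    P = np.abs(np.fft.fft2(roi - roi.mean())) ** 2   # periodogram
    fr = np.fft.fftfreq(n)
    fy, fx = np.meshgrid(fr, fr, indexing="ij")
    f = np.hypot(fx, fy).ravel()
    p = P.ravel()
    keep = (f > 0) & (p > 0)                         # drop DC and empty bins
    slope, _ = np.polyfit(np.log(f[keep]), np.log(p[keep]), 1)
    return -slope
```

A quick sanity check is to synthesize a field whose amplitude spectrum falls as f^(-beta/2) and verify that roughly that beta is recovered.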
Kim, D; Kang, S; Kim, T; Suh, T; Kim, S
2014-06-01
Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve image quality. Methods: The projection data were acquired, assuming the cone-beam computed tomography geometry of a linear accelerator, by Monte Carlo simulation and an in-house 4D digital phantom generation program. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, combined with a total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than the existing method, filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it may enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (MSIP).
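A common way to combine SART with total-variation minimization, as in the abstract above, is to alternate a SART data-consistency update with a TV descent step. The following 1D toy sketch (my own illustration, not the authors' implementation) shows the structure:

```python
import numpy as np

def sart_tv(A, y, n_iter=50, lam=0.01, relax=1.0):
    """Alternating SART + total-variation scheme (1D toy sketch).

    A : (m, n) system matrix for an n-pixel 1D image
    y : (m,) measured projections
    """
    row_sum = np.maximum(A.sum(axis=1), 1e-12)
    col_sum = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # SART: relaxed, row/column-normalized backprojection of the residual
        x += relax * (A.T @ ((y - A @ x) / row_sum)) / col_sum
        # TV: one subgradient descent step on sum(|x[i+1] - x[i]|)
        g = np.sign(np.diff(x))
        x[:-1] += lam * g
        x[1:] -= lam * g
        x = np.maximum(x, 0)        # non-negativity constraint
    return x
```

The TV step pulls neighboring pixels toward each other, which suppresses the streaking that undersampled data cause while preserving sharp edges.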
Performance validation of phase diversity image reconstruction techniques
NASA Astrophysics Data System (ADS)
Hirzberger, J.; Feller, A.; Riethmüller, T. L.; Gandorfer, A.; Solanki, S. K.
2011-05-01
We present a performance study of a phase diversity (PD) image reconstruction algorithm based on artificial solar images obtained from MHD simulations and on seeing-free data obtained with the SuFI instrument on the Sunrise balloon-borne observatory. The artificial data were altered by applying different levels of degradation with synthesised wavefront errors and noise. The PD algorithm was modified by changing the number of fitted polynomials, the shape of the pupil and the applied noise filter. The obtained reconstructions are evaluated by means of the resulting rms intensity contrast and by the conspicuousness of appearing artifacts. The results show that PD is a robust method which consistently recovers the initial unaffected image contents. The efficiency of the reconstruction is, however, strongly dependent on the number of fitting polynomials used and the noise level of the images. If the maximum number of fitted polynomials is higher than 21, artifacts have to be accepted, and for noise levels higher than 10⁻³, the commonly used noise filtering techniques are not able to avoid amplification of spurious structures.
NASA Astrophysics Data System (ADS)
Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali
2015-01-01
In order to improve the quality of 3D X-ray tomography reconstruction for Non-Destructive Testing (NDT), we investigate hierarchical Bayesian methods in this paper. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in iterative reconstruction algorithms. In hierarchical Bayesian methods, not only is the volume estimated thanks to the prior model of the volume, but so are the hyperparameters of this prior. This additional complexity in the reconstruction methods, when applied to large volumes (from 512³ to 8192³ voxels), results in an increased computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper rely on algorithmic acceleration by Variational Bayesian Approximation (VBA) [1] and on hardware acceleration thanks to projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward, or projection) and Ht (adjoint, or back-projection) implemented on multi-GPU hardware [2] have been used in this study. The different methods are evaluated on the synthetic "Shepp-Logan" volume in terms of reconstruction quality and time. We used several simple regularizations of order 1 and order 2. Other prior models also exist [5]. Sometimes, for a discrete image, segmentation and reconstruction can be done at the same time, allowing reconstruction from fewer projections.
Force reconstruction using the sum of weighted accelerations technique -- Max-Flat procedure
Carne, T.G.; Mayes, R.L.; Bateman, V.I.
1993-12-31
Force reconstruction is a procedure in which the externally applied force is inferred from measured structural response rather than directly measured. In a recently developed technique, the response acceleration time-histories are multiplied by scalar weights and summed to produce the reconstructed force. This reconstruction is called the Sum of Weighted Accelerations Technique (SWAT). One step in the application of this technique is the calculation of the appropriate scalar weights. In this paper a new method of estimating the weights, using measured frequency response function data, is developed and contrasted with the traditional SWAT method of inverting the mode-shape matrix. The technique uses frequency response function data, but is not based on deconvolution. An application that will be discussed as part of this paper is the impact into a rigid barrier of a weapon system with an energy-absorbing nose. The nose had been designed to absorb the energy of impact and to mitigate the shock to the interior components.
An efficient simultaneous reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Soria, Julio
2009-10-01
To date, Tomo-PIV has involved the use of the multiplicative algebraic reconstruction technique (MART), where the intensity of each 3D voxel is iteratively corrected to satisfy one recorded projection, or pixel intensity, at a time. This results in reconstruction times of multiple hours for each velocity field and requires considerable computer memory in order to store the associated weighting coefficients and intensity values for each point in the volume. In this paper, a rapid and less memory-intensive reconstruction algorithm is presented based on a multiplicative line-of-sight (MLOS) estimation that determines possible particle locations in the volume, followed by simultaneous iterative correction. Reconstructions of simulated images are presented for two simultaneous algorithms (SART and SMART) as well as the now standard MART algorithm, which indicate that the same accuracy as MART can be achieved 5.5 times faster, or 77 times faster with 15 times less memory if the processing and storage of the weighting matrix is considered. Application of MLOS-SMART and MART to a turbulent boundary layer at Re_θ = 2200 using a 4-camera Tomo-PIV system with a volume of 1,000 × 1,000 × 160 voxels is discussed. Results indicate improvements in reconstruction speed of 15 times that of MART with precalculated weighting matrix, or 65 times if calculation of the weighting matrix is considered. Furthermore, the memory needed to store a large weighting matrix and volume intensity is reduced by almost 40 times in this case.
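The MART iteration described above corrects voxel intensities multiplicatively, one recorded projection at a time. A minimal dense-matrix sketch (toy geometry, not a Tomo-PIV code):

```python
import numpy as np

def mart(W, p, n_iter=20, mu=1.0):
    """Multiplicative ART for tomographic reconstruction (sketch).

    W : (n_rays, n_voxels) weighting matrix
    p : (n_rays,) recorded projections (pixel intensities)
    mu: relaxation exponent
    """
    E = np.ones(W.shape[1])                  # voxel intensities, positive start
    for _ in range(n_iter):
        for i in range(W.shape[0]):          # one ray at a time
            proj = W[i] @ E
            if proj > 0 and p[i] > 0:
                # multiplicative correction, weighted by ray coverage
                E *= (p[i] / proj) ** (mu * W[i])
            elif p[i] == 0:
                E[W[i] > 0] = 0.0            # zero rays zero out their voxels
    return E
```

Because the update is multiplicative, voxels never go negative, and rays with zero intensity immediately eliminate all voxels they cross, which is what makes MART effective for sparse particle fields.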
A dual oxygenation and fluorescence imaging platform for reconstructive surgery
NASA Astrophysics Data System (ADS)
Ashitate, Yoshitomo; Nguyen, John N.; Venugopal, Vivek; Stockdale, Alan; Neacsu, Florin; Kettenring, Frank; Lee, Bernard T.; Frangioni, John V.; Gioux, Sylvain
2013-03-01
There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity, and increased healthcare costs. Because near-infrared (NIR) optical imaging is safe, noncontact, inexpensive, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. These capabilities are well illustrated through the clinical translation of fluorescence imaging during oncologic surgery. In this work, we introduce a novel imaging platform that combines two complementary NIR optical modalities: oxygenation imaging and fluorescence imaging. We validated this platform during facial reconstructive surgery on large animals approaching the size of humans. We demonstrate that NIR fluorescence imaging provides identification of perforator arteries, assesses arterial perfusion, and can detect thrombosis, while oxygenation imaging permits the passive monitoring of tissue vital status, as well as the detection and origin of vascular compromise simultaneously. Together, the two methods provide a comprehensive approach to identifying problems and intervening in real time during surgery before irreparable damage occurs. Taken together, this novel platform provides fully integrated and clinically friendly endogenous and exogenous NIR optical imaging for improved image-guided intervention during surgery.
Reconstruction of hyperspectral image using matting model for classification
NASA Astrophysics Data System (ADS)
Xie, Weiying; Li, Yunsong; Ge, Chiru
2016-05-01
Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or contain large amounts of noise and are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model assumes that each spectral band of an HSI can be decomposed into three components, i.e., an alpha channel, a spectral foreground, and a spectral background. First, one spectral band of the HSI with more refined information than most other bands is selected and treated as the alpha channel of the HSI, which is used to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on a first-order neighborhood system weighted classifier, are applied to the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
Image reconstruction algorithms for DOIS: a diffractive optic image spectrometer
NASA Astrophysics Data System (ADS)
Lyons, Denise M.; Whitcomb, Kevin J.
1996-06-01
The diffractive optic imaging spectrometer, DOIS, is a compact, economical, rugged, programmable, multi-spectral imager. The design implements a conventional CCD camera and emerging diffractive optical element (DOE) technology in an elegant configuration, adding spectroscopy capabilities to current imaging systems (Lyons 1995). This paper reports on the visible prototype DOIS that was designed, fabricated and characterized. Algorithms are presented for simulation and post-detection processing with digital image processing techniques. This improves the spectral resolution by removing unwanted blurred components from the spectral images. DOIS is a practical image spectrometer that can be built to operate at ultraviolet, visible or infrared wavelengths for applications in surveillance, remote sensing, law enforcement, environmental monitoring, laser communications, and laser counter intelligence.
Cloud based toolbox for image analysis, processing and reconstruction tasks.
Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A
2015-01-01
This chapter describes a novel way of carrying out image analysis, reconstruction and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox allows users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together in workflows, allowing the creation of even more complex algorithms that can be re-run on different data sets, shared with others, or further adjusted. The functions provided are in the areas of cellular imaging, advanced X-ray image analysis, computed tomography, and 3D medical imaging and visualisation. The service is currently available on the website www.cloudimaging.net.au. PMID:25381109
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET
NASA Astrophysics Data System (ADS)
Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.
2016-05-01
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.
Region of interest motion compensation for PET image reconstruction.
Qiao, Feng; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2007-05-21
A motion-incorporated reconstruction (MIR) method for gated PET imaging has recently been developed by several authors to correct for respiratory motion artifacts in PET imaging. This method however relies on a motion map derived from images (4D PET or 4D CT) of the entire field of view (FOV). In this study we present a region of interest (ROI)-based extension to this method, whereby only the motion map of a user-defined ROI is required and motion incorporation during image reconstruction is solely performed within the ROI. A phantom study and an NCAT computer simulation study were performed to test the feasibility of this method. The phantom study showed that the ROI-based MIR produced results that are within 1.26% of those obtained by the full image-based MIR approach when using the same accurate motion information. The NCAT phantom study on the other hand, further verified that motion of features of interest in an image can be estimated more efficiently and potentially more accurately using the ROI-based approach. A reduction of motion estimation time from 450 s to 30 and 73 s was achieved for two different ROIs respectively. In addition, the ROI-based approach showed a reduction in registration error of 43% for one ROI, which effectively reduced quantification bias by 44% and 32% using mean and maximum voxel values, respectively. PMID:17473344
Research on image matching method of big data image of three-dimensional reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong
2015-12-01
Image matching is the main step in three-dimensional reconstruction. With the development of computer processing technology, finding the images to be matched within large image data sets acquired in different formats, at different scales, and from different locations places new demands on image matching. To enable three-dimensional reconstruction based on matching images from big data image sets, this paper puts forward a new and effective matching method based on a visual bag-of-words model. The main steps are building the bag-of-words model and image matching. First, we extract SIFT feature points from the images in the database and cluster the feature points to generate the bag-of-words model. We then establish inverted files based on the bag of words; the inverted files list, for each visual word, all images containing that word. We match images only against those filed under the same words to improve matching efficiency. Finally, we build the three-dimensional model from the matched images. Experimental results indicate that this method improves matching efficiency and is suitable for the requirements of large-data reconstruction.
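The inverted-file idea described above (each visual word points to the images containing it) can be sketched as follows; a plain nearest-centroid assignment stands in here for the full SIFT-plus-clustering pipeline, and all names are illustrative:

```python
import numpy as np
from collections import defaultdict

def build_inverted_index(descriptors_per_image, vocab):
    """Map each visual word to the set of images containing it.

    descriptors_per_image : list of (n_i, d) descriptor arrays
    vocab                 : (k, d) array of visual-word centers
    """
    index = defaultdict(set)
    for img_id, desc in enumerate(descriptors_per_image):
        # assign each descriptor to its nearest visual word
        d2 = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
        for w in np.argmin(d2, axis=1):
            index[int(w)].add(img_id)
    return index

def candidate_matches(query_desc, vocab, index):
    """Return images sharing at least one visual word with the query."""
    d2 = ((query_desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = {int(w) for w in np.argmin(d2, axis=1)}
    hits = set()
    for w in words:
        hits |= index.get(w, set())
    return hits
```

Only the candidates returned by the inverted file need detailed pairwise matching, which is what makes the approach scale to large image collections.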
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
3D scene reconstruction from multi-aperture images
NASA Astrophysics Data System (ADS)
Mao, Miao; Qin, Kaihuai
2014-04-01
With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
Image reconstruction and optimization using a terahertz scanned imaging system
NASA Astrophysics Data System (ADS)
Yıldırım, İhsan Ozan; Özkan, Vedat A.; Idikut, Fırat; Takan, Taylan; Şahin, Asaf B.; Altan, Hakan
2014-10-01
Due to the limited number of array detection architectures in the millimeter-wave to terahertz region of the electromagnetic spectrum, imaging schemes with scan architectures are typically employed. In these configurations the interplay between the frequencies used to illuminate the scene and the optics used plays an important role in the quality of the formed image. Using a multiplied Schottky-diode based terahertz transceiver operating at 340 GHz in a stand-off detection scheme, the effect on image quality of a metal target was assessed based on the scanning speed of the galvanometer mirrors as well as the optical system that was constructed. Background effects such as leakage on the receiver were minimized by conditioning the signal at the output of the transceiver. Then, the image of the target was simulated based on known parameters of the optical system, and the measured images were compared to the simulation. Using an image quality index based on a χ² measure, the simulated and measured images were found to be in good agreement, with χ² = 0.14. The measurements shown here will aid in the future development of larger stand-off imaging systems that work in the terahertz frequency range.
The gridding method for image reconstruction by Fourier transformation
Schomberg, H.; Timmer, J.
1995-09-01
This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function ŵ and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
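The three gridding steps above can be sketched in 1D. A Gaussian stands in for the paper's window function, and for simplicity the test feeds samples that already lie on the grid (the real method's value is in handling nonuniform samples with the same machinery):

```python
import numpy as np

def gridding_reconstruct(ks, vals, n, sigma=1.0):
    """1D gridding sketch: spread Fourier samples onto a Cartesian grid with
    a Gaussian window (g_hat = w_hat * f_hat), inverse-FFT, then divide by
    the window's spatial response w (roll-off correction)."""
    k = np.arange(n)

    def kernel(center):
        d = (k - center + n / 2) % n - n / 2       # circular distance in k
        return np.exp(-d**2 / (2 * sigma**2))

    grid = np.zeros(n, dtype=complex)
    for kf, v in zip(ks, vals):
        grid += v * kernel(kf)                     # g_hat = w_hat * f_hat
    g = np.fft.ifft(grid)                          # g = n * w * f (spatial)
    w = np.fft.ifft(kernel(0.0))                   # spatial response of window
    return g / (n * w)
```

Because convolution in the Fourier domain is multiplication in the spatial domain, the final division by w undoes the window exactly, up to numerical error where w is small near the edges of the field of view.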
Neutron source reconstruction from pinhole imaging at National Ignition Facility.
Volegov, P; Danly, C R; Fittinghoff, D N; Grim, G P; Guler, N; Izumi, N; Ma, T; Merrill, F E; Warrick, A L; Wilde, C H; Wilson, D C
2014-02-01
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the ignition stage of inertial confinement fusion (ICF) implosions at NIF. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, single-sided tapers in gold. These apertures, which have triangular cross sections, produce distortions in the image, and the extended nature of the pinhole results in a non-stationary or spatially varying point spread function across the pinhole field of view. In this work, we have used iterative Maximum Likelihood techniques to remove the non-stationary distortions introduced by the aperture to reconstruct the underlying neutron source distributions. We present the detailed algorithms used for these reconstructions, the stopping criteria used and reconstructed sources from data collected at NIF with a discussion of the neutron imaging performance in light of other diagnostics. PMID:24593362
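The iterative maximum-likelihood deconvolution mentioned above has the Richardson-Lucy multiplicative form. The sketch below assumes a stationary PSF for simplicity, whereas the NIF aperture produces a spatially varying point spread function that replaces these convolutions with a full system matrix:

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=200):
    """Richardson-Lucy maximum-likelihood deconvolution (stationary-PSF
    sketch). d: observed image; psf: normalized PSF with peak at [0, 0]."""
    H = np.fft.fft2(psf)
    conv = lambda img, otf: np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
    x = np.full(d.shape, max(d.mean(), 1e-12))     # flat positive start
    for _ in range(n_iter):
        blur = np.maximum(conv(x, H), 1e-12)       # forward model
        # adjoint (correlation) of the ratio image, multiplicative update
        x = np.maximum(x * conv(d / blur, np.conj(H)), 0.0)
    return x
```

With a normalized PSF the update conserves total counts at every iteration, which is one reason this family of algorithms is popular for photon-limited source reconstruction.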
Model-based reconstructive elasticity imaging of deep venous thrombosis.
Aglyamov, Salavat; Skovoroda, Andrei R; Rubin, Jonathan M; O'Donnell, Matthew; Emelianov, Stanislav Y
2004-05-01
Deep venous thrombosis (DVT) and its sequela, pulmonary embolism, is a significant clinical problem. Once detected, DVT treatment is based on the age of the clot. There are no good noninvasive methods, however, to determine clot age. Previously, we demonstrated that imaging internal mechanical strains can identify and possibly age thrombus in a deep vein. In this study the deformation geometry for DVT elasticity imaging and its effect on Young's modulus estimates is addressed. A model-based reconstruction method is presented to estimate elasticity in which the clot-containing vessel is modeled as a layered cylinder. Compared to an unconstrained approach in reconstructive elasticity imaging, the proposed model-based approach has several advantages: only one component of the strain tensor is used; the minimization procedure is very fast; the method is highly efficient because an analytic solution of the forward elastic problem is used; and the method is not very sensitive to the details of the external load pattern--a characteristic that is important for free-hand, external, surface-applied deformation. The approach was tested theoretically using a numerical model, and experimentally on both tissue-like phantoms and an animal model of DVT. Results suggest that elasticity reconstruction may prove to be a practical adjunct to triplex scanning to detect, diagnose, and stage DVT. PMID:15217230
Neutron source reconstruction from pinhole imaging at National Ignition Facility
NASA Astrophysics Data System (ADS)
Volegov, P.; Danly, C. R.; Fittinghoff, D. N.; Grim, G. P.; Guler, N.; Izumi, N.; Ma, T.; Merrill, F. E.; Warrick, A. L.; Wilde, C. H.; Wilson, D. C.
2014-02-01
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the ignition stage of inertial confinement fusion (ICF) implosions at NIF. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, single-sided tapers in gold. These apertures, which have triangular cross sections, produce distortions in the image, and the extended nature of the pinhole results in a non-stationary or spatially varying point spread function across the pinhole field of view. In this work, we have used iterative Maximum Likelihood techniques to remove the non-stationary distortions introduced by the aperture to reconstruct the underlying neutron source distributions. We present the detailed algorithms used for these reconstructions, the stopping criteria used and reconstructed sources from data collected at NIF with a discussion of the neutron imaging performance in light of other diagnostics.
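For a stationary PSF, the iterative Maximum Likelihood technique described above reduces to the classic ML-EM (Richardson-Lucy) multiplicative update. A minimal 1-D sketch with a hypothetical signal and kernel follows; the NIF reconstructions additionally model the spatially varying (non-stationary) PSF of the tapered aperture, which this sketch omits:

```python
# Richardson-Lucy / ML-EM deconvolution, 1-D, stationary PSF.
# Toy signal and kernel (assumptions, not NIF data).

def convolve(x, h):
    """Convolve x with an odd-length kernel h, truncated at the borders."""
    r = len(h) // 2
    out = []
    for j in range(len(x)):
        s = 0.0
        for k, hk in enumerate(h):
            i = j + k - r
            if 0 <= i < len(x):
                s += x[i] * hk
        out.append(s)
    return out

def richardson_lucy(y, h, n_iter=50):
    """Iterate x <- x * (h_adj * (y / (h * x))); preserves flux and positivity."""
    x = [sum(y) / len(y)] * len(y)      # flat, positive initial estimate
    h_adj = h[::-1]                     # adjoint of convolution = correlation
    for _ in range(n_iter):
        blur = convolve(x, h)
        ratio = [yi / b if b > 0 else 0.0 for yi, b in zip(y, blur)]
        x = [xi * ci for xi, ci in zip(x, convolve(ratio, h_adj))]
    return x
```

Each update keeps the estimate non-negative and, for a normalized kernel, conserves the total measured flux, which is why ML-EM-style iterations suit photon- and neutron-counting data.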
Variational Reconstruction of Left Cardiac Structure from CMR Images
Wan, Min; Huang, Wei; Zhang, Jun-Mei; Zhao, Xiaodan; Tan, Ru San; Wan, Xiaofeng; Zhong, Liang
2015-01-01
Cardiovascular Disease (CVD), accounting for 17% of overall deaths in the USA, is the leading cause of death worldwide. Advances in medical imaging techniques make the quantitative assessment of both the anatomy and function of the heart possible. Cardiac modeling is an indispensable prerequisite for quantitative analysis. In this study, a novel method is proposed to reconstruct the left cardiac structure from multi-plane cardiac magnetic resonance (CMR) images and contours. Routine CMR examination was performed to acquire both long-axis and short-axis images. Trained technologists delineated the endocardial contours. Multiple sets of two-dimensional contours were projected into the three-dimensional patient-based coordinate system and registered to each other. A variational surface reconstruction algorithm based on Delaunay triangulation and graph cuts was applied to the union of the registered point sets. The resulting triangulated surfaces were further post-processed. Quantitative evaluation of our method was performed by computing the overlap ratio between the reconstructed model and the manually delineated long-axis contours, which validates the method. We envisage that this method could be used by radiographers and cardiologists to diagnose and assess cardiac function in patients with diverse heart diseases. PMID:26689551
Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.
2009-11-15
Purpose: The authors develop a practical, iterative algorithm for image reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus, the application of image regularity serves two purposes: (1) reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
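The alternation the abstract describes, projecting onto convex constraint sets while descending the image TpV, can be illustrated in the p=1 (total-variation) case with a toy 1-D sketch. The signal, sampling pattern, and step size below are hypothetical, and the data constraint is enforced exactly rather than via the paper's projection-data tolerance ball:

```python
# POCS + TV-descent sketch (p = 1 case of TpV), 1-D toy problem.
# Alternates: data agreement on sampled entries, positivity, TV gradient step.

def tv_grad(x, eps=1e-8):
    """Gradient of the smoothed total variation of a 1-D signal."""
    g = [0.0] * len(x)
    for i in range(len(x) - 1):
        d = x[i + 1] - x[i]
        w = d / (d * d + eps) ** 0.5    # smooth surrogate for sign(d)
        g[i] -= w
        g[i + 1] += w
    return g

def pocs_tv(y, idx, n, n_iter=200, step=0.05):
    """Recover an n-sample signal from values y observed at indices idx."""
    x = [0.0] * n
    for _ in range(n_iter):
        for j, i in enumerate(idx):     # projection onto the data constraint
            x[i] = y[j]
        x = [max(v, 0.0) for v in x]    # projection onto positivity
        g = tv_grad(x)
        x = [v - step * gv for v, gv in zip(x, g)]
    for j, i in enumerate(idx):         # final data enforcement
        x[i] = y[j]
    return x
```

On a piecewise-constant signal the TV descent fills unsampled entries with plateau values rather than smooth interpolants, which is the behaviour that favours sparse-view reconstruction.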
Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images
NASA Astrophysics Data System (ADS)
Kruschwitz, Jennifer D. T.
Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.
Reconstruction of an AFM image based on estimation of the tip shape
NASA Astrophysics Data System (ADS)
Yuan, Shuai; Luan, Fangjun; Song, Xiaoyu; Liu, Lianqing; Liu, Jifei
2013-10-01
From the viewpoint of mathematical morphology, an atomic force microscopy (AFM) image contains the distortion effect of the tip convolution on a real sample surface. If tip shape can be characterized accurately, mathematical deconvolution can be applied to reduce the distortion to obtain more precise AFM images. AFM image reconstruction has practical significance in nanoscale observation and manipulation technology. Among recent tip modeling algorithms, the blind tip evaluation algorithm based on mathematical morphology is widely used. However, it takes considerable computing time, and the noise threshold is hard to optimize. To tackle these problems, a new blind modeling method is proposed in this paper to accelerate the computation of the algorithm and realize the optimum threshold estimation to build a precise tip model. The simulation verifies the efficiency of the new algorithm by comparing the computing time with the original one. The calculated tip shape is also validated by comparison with the SEM image of the tip. Finally, the reconstruction of a carbon nanotube image based on the precise tip model illustrates the feasibility and validity of the proposed algorithm.
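The tip-convolution model the authors start from is grey-scale mathematical morphology: the AFM image is the sample surface dilated by the tip, and eroding the image with the (estimated) tip yields an upper bound on the true surface whose re-dilation reproduces the image exactly. A minimal 1-D sketch with a hypothetical tip profile follows; the paper's contribution, blind estimation of the tip itself, is not reproduced here:

```python
# Grey-scale morphology model of AFM imaging and tip deconvolution.
# Hypothetical 1-D sample and tip (apex at the centre of the tip array).

def dilate(s, tip):
    """Grey-scale dilation (the imaging model), borders truncated."""
    r = len(tip) // 2
    return [max(s[i + k - r] + tip[k]
                for k in range(len(tip)) if 0 <= i + k - r < len(s))
            for i in range(len(s))]

def erode(img, tip):
    """Grey-scale erosion (the reconstruction step), borders truncated."""
    r = len(tip) // 2
    return [min(img[i + k - r] - tip[k]
                for k in range(len(tip)) if 0 <= i + k - r < len(img))
            for i in range(len(img))]
```

The eroded estimate never undercuts the true surface, and dilating it with the same tip gives back the recorded image, which is the standard consistency check for morphological tip deconvolution.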
Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms
NASA Astrophysics Data System (ADS)
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-04-01
Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms for computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem: l1 norms on the data and regularization terms address both the reconstruction of sharp edges and robustness to measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause reconstructions with the l2 norm to fail. Results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
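The robustness of the l1 data norm to electrode errors can be seen in a one-parameter toy problem: the l2 fit of a constant to a set of readings is their mean, which one failed electrode drags far away, whereas iteratively reweighted least squares (IRLS) minimization of the l1 misfit converges to the median. This sketch is illustrative only and is not the PDIPM solver used in the paper:

```python
# l1 vs l2 data fitting with one outlier reading (a failed electrode).
# IRLS for the l1 misfit; toy data, not EIT measurements.

def l1_fit_constant(y, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for min_c sum_i |y_i - c|."""
    c = sum(y) / len(y)                 # start from the l2 solution (the mean)
    for _ in range(n_iter):
        w = [1.0 / (abs(yi - c) + eps) for yi in y]   # downweight large residuals
        c = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return c
```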
Reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa
2013-08-01
Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display, and it is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system requires only a sequence of images taken with cameras whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing, and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. Firstly, image sequences are acquired by the camera moving freely around the object. Secondly, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm, and an initial matching is made for the first two images of the sequence. For each subsequent image, processed together with the previous one, the points of interest corresponding to those in the previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired by using a non-local cost aggregation method for stereo matching, and a point cloud sequence is then obtained from the scene depths and the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3
He, Xin; Cheng, Lishui; Fessler, Jeffrey A.
2011-01-01
In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values on both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defect, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the
Task-based optimization of image reconstruction in breast CT
NASA Astrophysics Data System (ADS)
Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2014-03-01
We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
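The HO figure of merit used above is the Hotelling SNR, defined by SNR^2 = ds^T K^{-1} ds, where ds is the mean signal difference between the two hypotheses and K is the image covariance. A two-pixel, closed-form sketch with toy numbers (not the paper's DBT data) shows the computation:

```python
# Hotelling observer SNR for a two-pixel image: SNR^2 = ds^T K^{-1} ds.
# Toy signal difference and covariance; real use involves large image covariances.

def hotelling_snr(ds, cov):
    """Closed-form HO SNR for a 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    # w = K^{-1} ds is the Hotelling template
    w0 = (d * ds[0] - b * ds[1]) / det
    w1 = (-c * ds[0] + a * ds[1]) / det
    return (ds[0] * w0 + ds[1] * w1) ** 0.5
```

Doubling the noise covariance divides SNR^2 by two, which is the mechanism by which reconstruction choices (view count, Hanning width) enter the detectability metric.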
Super-resolution Reconstruction for Tongue MR Images
Woo, Jonghye; Bai, Ying; Roy, Snehashis; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Magnetic resonance (MR) images of the tongue have been used in both clinical medicine and scientific research to reveal tongue structure and motion. In order to see different features of the tongue and its relation to the vocal tract it is beneficial to acquire three orthogonal image stacks—e.g., axial, sagittal and coronal volumes. In order to maintain both low noise and high visual detail, each set of images is typically acquired with in-plane resolution that is much better than the through-plane resolution. As a result, any one data set, by itself, is not ideal for automatic volumetric analyses such as segmentation and registration or even for visualization when oblique slices are required. This paper presents a method of super-resolution reconstruction of the tongue that generates an isotropic image volume using the three orthogonal image stacks. The method uses preprocessing steps that include intensity matching and registration and a data combination approach carried out by Markov random field optimization. The performance of the proposed method was demonstrated on five clinical datasets, yielding superior results when compared with conventional reconstruction methods. PMID:27239084
A biological phantom for evaluation of CT image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.
2014-03-01
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
Depth-based selective image reconstruction using spatiotemporal image analysis
NASA Astrophysics Data System (ADS)
Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu
1999-03-01
In industrial plants, a remote monitoring system that removes the need for physical tour inspection is often considered desirable. However, the image sequence obtained from the mobile inspection robot is hard to interpret because objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique, which removes the needless objects from the foreground and recovers the occluded background electronically. Our algorithm is based on spatiotemporal analysis that enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real image sequence acquired by the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.
The feasibility of images reconstructed with the method of sieves
Veklerov, E.; Llacer, J.
1989-04-01
The concept of sieves has been applied with the Maximum Likelihood Estimator (MLE) to image reconstruction. While it makes it possible to recover smooth images consistent with the data, the degree of smoothness it provides is arbitrary. It is shown that the concept of feasibility is able to resolve this arbitrariness. By varying the values of the parameters determining the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered.
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.
2009-01-01
Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.
Iterative Self-Dual Reconstruction on Radar Image Recovery
Martins, Charles; Medeiros, Fatima; Ushizima, Daniela; Bezerra, Francisco; Marques, Regis; Mascarenhas, Nelson
2010-05-21
Imaging systems as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subjected to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter to multiplicative noise. We describe a multiscale scheme that preserves sharp edges while it smooths homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.
NASA Astrophysics Data System (ADS)
Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2015-05-01
We report a high-speed parallel phase-shifting digital holography system using a special-purpose computer for image reconstruction. Parallel phase-shifting digital holography is a technique capable of single-shot phase-shifting interferometry: it records the information of the multiple phase-shifted holograms required for phase-shifting interferometry in a single shot by using space-division multiplexing. The technique requires an image-reconstruction process for a huge number of recorded holograms; in particular, the light-propagation calculations based on the fast Fourier transform make it time-consuming to obtain a motion picture of a dynamic, fast-moving object. We therefore designed a special-purpose computer to accelerate the image-reconstruction process of parallel phase-shifting digital holography. The computer is built around a VC707 evaluation kit (Xilinx Inc.), a field-programmable gate array board. We also recorded holograms consisting of 128 × 128 pixels at a frame rate of 180,000 frames per second with the constructed parallel phase-shifting digital holography system. By applying the developed computer to the recorded holograms, we confirmed that it accelerates the image-reconstruction process by a factor of ~50 relative to a CPU.
Li, Ang; Zhang, Quan; Culver, Joseph P; Miller, Eric L; Boas, David A
2004-02-01
We present an algorithm to reconstruct chromophore concentration images directly rather than following the traditional two-step process of reconstructing wavelength-dependent absorption coefficient images and then calculating chromophore concentration images. This procedure incorporates prior spectral information into the image reconstruction, resulting in a dramatic improvement in the image contrast-to-noise ratio of better than 100%. We demonstrate this improvement with simulations and a dynamic blood phantom experiment. PMID:14759043
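The prior spectral information at stake is Beer's law: the absorption coefficient at each wavelength is a weighted sum of chromophore concentrations via known extinction coefficients. The conventional two-step route the authors improve upon inverts this linear system after the absorption images have been reconstructed; a two-wavelength, two-chromophore unmixing sketch (with hypothetical extinction values, not tabulated coefficients) looks like this:

```python
# Beer's law spectral unmixing: mu_a(lambda_i) = sum_k ext[i][k] * c[k].
# Two wavelengths, two chromophores; extinction values are made up.

def unmix(mu_a, ext):
    """Solve the 2x2 linear system for the chromophore concentrations c."""
    (a, b), (c, d) = ext                # ext[i][k]: extinction at wavelength i
    det = a * d - b * c
    return [(d * mu_a[0] - b * mu_a[1]) / det,
            (-c * mu_a[0] + a * mu_a[1]) / det]
```

The paper's point is that folding this inversion into the reconstruction itself, rather than applying it pixel-wise afterwards, substantially improves contrast-to-noise.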
A 32-Channel Head Coil Array with Circularly Symmetric Geometry for Accelerated Human Brain Imaging.
Chu, Ying-Hua; Hsu, Yi-Cheng; Keil, Boris; Kuo, Wen-Jui; Lin, Fa-Hsuan
2016-01-01
The goal of this study is to optimize a 32-channel head coil array for accelerated 3T human brain proton MRI using either a Cartesian or a radial k-space trajectory. Coils had curved trapezoidal shapes and were arranged in a circular symmetry (CS) geometry. Coils were optimally overlapped to reduce mutual inductance. Low-noise pre-amplifiers were used to further decouple between coils. The SNR and noise amplification in accelerated imaging were compared to results from a head coil array with a soccer-ball (SB) geometry. The maximal SNR in the CS array was about 120% (1070 vs. 892) and 62% (303 vs. 488) of the SB array at the periphery and the center of the FOV on a transverse plane, respectively. In one-dimensional 4-fold acceleration, the CS array has higher averaged SNR than the SB array across the whole FOV. Compared to the SB array, the CS array has a smaller g-factor at head periphery in all accelerated acquisitions. Reconstructed images using a radial k-space trajectory show that the CS array has a smaller error than the SB array in 2- to 5-fold accelerations. PMID:26909652
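The g-factor compared above quantifies noise amplification in parallel-imaging reconstruction: for a coil sensitivity matrix S (rows are coils, columns are aliased pixels), the g-factor at pixel i is sqrt of [(S^T S)^{-1}]_ii [S^T S]_ii under unit, uncorrelated coil noise. A real-valued two-coil, two-pixel sketch with hypothetical sensitivities:

```python
# SENSE-style g-factor for a 2-coil, 2-pixel unfolding problem.
# Real-valued sensitivities (assumed); actual coil maps are complex.

def g_factor(S):
    """g_i = sqrt(((S^T S)^{-1})_ii * (S^T S)_ii) for a 2x2 real S."""
    a = S[0][0] ** 2 + S[1][0] ** 2             # (S^T S)_00
    b = S[0][0] * S[0][1] + S[1][0] * S[1][1]   # (S^T S)_01
    d = S[0][1] ** 2 + S[1][1] ** 2             # (S^T S)_11
    det = a * d - b * b
    return [((d / det) * a) ** 0.5, ((a / det) * d) ** 0.5]
```

Orthogonal coil sensitivities give g = 1 (no noise penalty), while nearly parallel sensitivities drive g well above 1, which is the quantity the CS and SB geometries are being compared on.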
Isotope specific resolution recovery image reconstruction in high resolution PET imaging
Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib
2014-05-15
Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution
Nonlinear dual reconstruction of SPECT activity and attenuation images.
Liu, Huafeng; Guo, Min; Hu, Zhenghui; Shi, Pengcheng; Hu, Hongjie
2014-01-01
In single photon emission computed tomography (SPECT), accurate attenuation maps are needed to perform essential attenuation compensation for high quality radioactivity estimation. Formulating the SPECT activity and attenuation reconstruction tasks as coupled signal estimation and system parameter identification problems, where the activity distribution and the attenuation parameter are treated as random variables with known prior statistics, we present a nonlinear dual reconstruction scheme based on the unscented Kalman filtering (UKF) principles. In this effort, the dynamic changes of the organ radioactivity distribution are described through state space evolution equations, while the photon-counting SPECT projection data are measured through the observation equations. Activity distribution is then estimated with sub-optimal fixed attenuation parameters, followed by attenuation map reconstruction given these activity estimates. Such coupled estimation processes are iteratively repeated as necessary until convergence. The results obtained from Monte Carlo simulated data, physical phantom, and real SPECT scans demonstrate the improved performance of the proposed method both from visual inspection of the images and a quantitative evaluation, compared to the widely used EM-ML algorithms. The dual estimation framework has the potential to be useful for estimating the attenuation map from emission data only and thus benefit the radioactivity reconstruction. PMID:25225796
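The coupled activity/attenuation estimation described above can be illustrated with a deliberately small alternating scheme: a least-squares activity update with attenuation held fixed, then a 1-D attenuation update with activity held fixed, iterated until convergence. This is a hedged toy, not the paper's UKF machinery: the scalar `mu`, the path lengths `d`, and the matrix `P` are invented stand-ins for the attenuation map and the SPECT system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for SPECT: projection i sees the activity x
# through a path of length d[i] attenuated by a single coefficient mu,
#   y_i = exp(-mu * d_i) * (P @ x)_i
P = rng.uniform(0.5, 1.5, size=(12, 4))
d = rng.uniform(0.5, 2.0, size=12)
x_true, mu_true = np.array([1.0, 2.0, 0.5, 1.5]), 0.3
y = np.exp(-mu_true * d) * (P @ x_true)

def residual(x, mu):
    return np.linalg.norm(y - np.exp(-mu * d) * (P @ x))

# Dual (alternating) estimation: activity update by linear least squares with
# attenuation fixed, then attenuation update by a 1-D search with activity fixed.
x_est, mu_est = np.ones(4), 0.0
history = [residual(x_est, mu_est)]
for _ in range(30):
    A = np.exp(-mu_est * d)[:, None] * P
    x_est, *_ = np.linalg.lstsq(A, y, rcond=None)
    cands = np.append(np.linspace(0.0, 1.0, 2001), mu_est)  # keep current value
    mu_est = cands[np.argmin([residual(x_est, m) for m in cands])]
    history.append(residual(x_est, mu_est))

print(f"final residual {history[-1]:.2e}, mu estimate {mu_est:.3f}")
```

Because each block update minimizes the data misfit over its own variables (the candidate list always retains the current `mu_est`), the residual is non-increasing across iterations, mirroring the "repeat as necessary until convergence" loop of the abstract.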
Hyperspectral image feature extraction accelerated by GPU
NASA Astrophysics Data System (ADS)
Qu, HaiCheng; Zhang, Ye; Lin, Zhouhan; Chen, Hao
2012-10-01
PCA (principal component analysis) is the most basic dimension-reduction method for high-dimensional data, and plays a significant role in hyperspectral data compression, decorrelation, denoising and feature extraction. With the development of imaging technology, the number of spectral bands in a hyperspectral image keeps growing, and the data cube has become ever larger in recent years. As a consequence, dimension reduction is increasingly time-consuming. Fortunately, GPU-based high-performance computing has opened up a novel approach to hyperspectral data processing. This paper concerns the two main processes in hyperspectral image feature extraction: (1) calculation of the transformation matrix; (2) transformation along the spectral dimension. These two processes are computationally intensive and data-intensive, respectively. By introducing GPU parallel computing technology, an algorithm combining PCA transformation based on eigenvalue decomposition (EVD) with feature-matching identification is implemented, with the aim of exploring the characteristics of GPU parallel computing and the prospects of GPU applications in hyperspectral image processing by analysing thread invocation and algorithm speedup. The experimental results show that the algorithm reaches a 12x speedup overall, with certain steps reaching speedups of up to 270x.
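The two processes the paper names, computing the transformation matrix by EVD and applying it along the spectral dimension, can be sketched on the CPU with NumPy; a GPU version would replace the large matrix products with parallel kernels. The cube dimensions and the rank-3 generative model below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hyperspectral cube: 32x32 pixels, 40 bands, generated from 3 latent
# spectral components so that PCA can compress it essentially losslessly.
h, w, bands, k = 32, 32, 40, 3
endmembers = rng.random((k, bands))
abundances = rng.random((h * w, k))
cube = (abundances @ endmembers).reshape(h, w, bands)

# Process 1: transformation matrix from eigenvalue decomposition (EVD) of the
# band-by-band covariance matrix (computationally intensive part).
X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenpairs in ascending order
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:k]]                  # top-k principal axes

# Process 2: transformation along the spectral dimension (data-intensive part,
# the one a GPU parallelizes); here plain matrix products.
scores = Xc @ W                            # (pixels, k) reduced features
X_rec = scores @ W.T + X.mean(axis=0)      # back-projection to full spectra

err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"relative reconstruction error with {k} components: {err:.2e}")
```

Since the synthetic cube has rank 3, the top three principal components reconstruct it to machine precision, which makes the dimension-reduction step easy to verify.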
Image Reconstruction and Discrimination at Low Light Levels
NASA Astrophysics Data System (ADS)
Zerom, Petros
Quantum imaging is a recent and promising branch of quantum optics that exploits the quantum nature of light. Overcoming the limitations imposed by classical light sources in optical imaging techniques, or surpassing the classical boundaries of image formation, is one of the key motivations in quantum imaging. In this thesis, I describe certain aspects of both quantum and thermal ghost imaging, and I also study image discrimination with high fidelity at low light levels. First, I present a theoretical and experimental study of entangled-photon compressive ghost imaging. In quantum ghost imaging using entangled photon pairs, the brightness of readily available sources is rather weak. The usual technique of image acquisition in this imaging modality is to raster-scan a single-pixel, single-photon-sensitive detector in one arm of a ghost imaging setup. In most imaging modalities, the number of measurements required to fully resolve an object is set by the measurement's Nyquist limit. In the first part of the thesis, I propose a ghost imaging (GI) configuration that uses bucket detectors (as opposed to a raster-scanning detector) in both arms of the GI setup. High-resolution image reconstruction using only 27% of the measurement's Nyquist limit with compressed sensing algorithms is presented. The second part of my thesis deals with thermal ghost imaging. Unlike quantum GI, thermal GI uses bright, spatially correlated classical sources of radiation. Usually high-contrast speckle patterns are used as sources of the correlated beams of radiation. I study the effect of the field statistics of the illuminating source on the quality of ghost images. I show theoretically and experimentally that a thermal GI setup can produce high-quality images even when low-contrast (intensity-averaged) speckle patterns are used as an illuminating source, as long as the collected signal is mainly caused by the random fluctuation of the incident speckle field, as
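The sub-Nyquist reconstruction idea can be sketched with a generic l1 solver. The toy below uses ISTA (iterative soft thresholding) on a sparse 1-D scene, with random rows standing in for speckle patterns summed by a bucket detector; this is a standard compressed-sensing baseline, not the thesis's specific algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy compressive ghost imaging: the scene x is a sparse 64-pixel image, each
# measurement is one illumination pattern (a row of A) integrated by a bucket
# detector, y = A @ x, using far fewer patterns than pixels.
n, m, s = 64, 24, 4
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.uniform(1.0, 2.0, s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random "speckle" patterns
y = A @ x_true

# ISTA for the l1-regularized recovery
#   min_x 0.5 * ||y - A x||^2 + lam * ||x||_1
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.abs(x).sum()

x = np.zeros(n)
obj = [objective(x)]
for _ in range(500):
    x = soft(x + (A.T @ (y - A @ x)) / L, lam / L)
    obj.append(objective(x))

print(f"objective {obj[0]:.3f} -> {obj[-1]:.4f}")
```

With the step size set to 1/L, each ISTA iteration is a majorize-minimize step, so the lasso objective decreases monotonically even well below the Nyquist sampling rate.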
Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone.
Cole, J M; Wood, J C; Lopes, N C; Poder, K; Abel, R L; Alatabi, S; Bryant, J S J; Jin, A; Kneip, S; Mecseki, K; Symes, D R; Mangles, S P D; Najmudin, Z
2015-01-01
A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications. PMID:26283308
Application of DIRI dynamic infrared imaging in reconstructive surgery
NASA Astrophysics Data System (ADS)
Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier
2006-04-01
We have developed the BioScanIR System based on a QWIP (quantum well infrared photodetector). Data collected by this sensor are processed using DIRI (dynamic infrared imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissues and organs associated with blood perfusion in diseases such as cancer, vascular disease and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor-flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include fast Fourier transform (FFT), motion artifact correction, spectral analysis and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing and peripheral vascular disease management.
Electromagnetic testing and image reconstruction with flexible scanning tablets
NASA Astrophysics Data System (ADS)
Nishimura, Yoshihiro; Kanev, Kamen; Sasamoto, Akira; Suzuki, Takayuki; Inokawa, Hiroshi
2009-03-01
Eddy current testing (ECT) and electromagnetic acoustic testing (EMAT) employ electromagnetic methods to induce an eddy current and detect flaws on or within a sample without directly contacting it. ECT produces Lissajous curves, and EMAT yields a time series of signal data, both of which can be directly displayed on nondestructive testing (NDT) equipment screens. Since the interpretation of such output is difficult for untrained persons, images need to be properly reconstructed and visualized. This could be carried out by single-probe 2D/3D scanners with imaging capabilities or with array probes, but such equipment is often too large or heavy for ordinary on-site use. In this study, we introduce a flexible scanning tablet for on-site NDT and imaging of detected flaws. The flexible scanning tablet consists of a thin film or paper with a digitally encoded coordinate system, applicable to flat and curved surfaces, that enables probe positions to be tracked by a specialized optical reader. We also discuss how ECT and EMAT probe coordinates and measurement data can be simultaneously acquired and used for further image reconstruction and visualization.
Improved proton computed tomography by dual modality image reconstruction
Hansen, David C.; Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild
2014-03-15
Purpose: Proton computed tomography (CT) is a promising imaging modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm in water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan, with a constrained nonlinear conjugate gradient algorithm minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the spatial frequency at MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the frequency at MTF = 0.5 dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of a prior image in proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360
Spectral-overlap approach to multiframe superresolution image reconstruction.
Cohen, Edward; Picard, Richard H; Crabtree, Peter N
2016-05-20
Various techniques and algorithms have been developed to improve the resolution of sensor-aliased imagery captured with multiple subpixel-displaced frames on an undersampled pixelated image plane. These dealiasing algorithms are typically known as multiframe superresolution (SR), or geometric SR to emphasize the role of the focal-plane array. Multiple low-resolution (LR) aliased frames of the same scene are captured and allocated to a common high-resolution (HR) reconstruction grid, leading to the possibility of an alias-free reconstruction, as long as the HR sampling rate is above the Nyquist rate. Allocating LR-frame irradiances to HR frames requires the use of appropriate weights. Here we present a novel spectral-domain approach to exactly calculating weights based on spatial overlap areas, which we call the spectral-overlap (SO) method. We emphasize that the SO method is not a spectral approach but rather an approach to calculating spatial weights that uses spectral decompositions to exploit the array properties of the HR and LR pixels. The method is capable of dealing with arbitrary aliasing factors and interframe motions consisting of in-plane translations and rotations. We calculate example reconstructed HR images (the inverse problem) from synthetic aliased images for integer and for fractional aliasing factors. We show the utility of the SO-generated overlap-area weights in both noniterative and iterative reconstructions with known or unknown aliasing factor. We show how the overlap weights can be used to generate the Green's function (pixel response function) for noniterative dealiasing. In addition, we show how the overlap-area weights can be used to generate synthetic aliased images (the forward problem). We compare the SO approach to the spatial-domain geometric approach of O'Rourke and find virtually identical high accuracy but with significant enhancements in speed for SO. We also compare the SO weights to interpolated weights and find that
Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.
2013-01-01
We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has slightly faster or approximately comparable time performance than FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with. PMID:23531763
Image reconstruction for the ClearPET™ Neuro
NASA Astrophysics Data System (ADS)
Weber, Simone; Morel, Christian; Simon, Luc; Krieguer, Magalie; Rey, Martin; Gundlich, Brigitte; Khodaverdi, Maryam
2006-12-01
ClearPET™ is a family of small-animal PET scanners which are currently under development within the Crystal Clear Collaboration (CERN). All scanners are based on the same detector block design using individual LSO and LuYAP crystals in phoswich configuration, coupled to multi-anode photomultiplier tubes. One of the scanners, the ClearPET™ Neuro, is designed for applications in neuroscience. Four detector blocks, each with 64 2×2×10 mm LSO and LuYAP crystals arranged in line, form a module. Twenty modules are arranged in a ring with a diameter of 13.8 cm and an axial size of 11.2 cm. An insensitive region at the border of the detector heads results in gaps between the detectors axially and tangentially. The detectors rotate through 360° in step-and-shoot mode during data acquisition. Every second module is shifted axially to compensate partly for the gaps between the detector blocks in a module. This unconventional scanner geometry requires dedicated image reconstruction procedures. During acquisition, single events are stored with a time mark in a dedicated list mode format, and coincidences are associated off-line by software. After sorting the data into 3D sinograms, image reconstruction is performed using the Ordered Subset Maximum A Posteriori One-Step Late (OSMAPOSL) iterative algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Due to the non-conventional scanner design, careful estimation of the sensitivity matrix is needed to obtain artifact-free images from the ClearPET™ Neuro.
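Without subsets or a prior, the OSMAPOSL algorithm mentioned above reduces to the classic ML-EM multiplicative update, which is easy to sketch. The toy system matrix below is invented; the real scanner's gap handling and sensitivity-matrix estimation are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy emission tomography: a nonnegative system matrix A maps activity x to
# expected counts y. The ML-EM update is
#   x <- x / (A^T 1) * A^T (y / (A x))
A = rng.uniform(0.1, 1.0, size=(20, 8))
x_true = rng.uniform(0.5, 2.0, size=8)
y = A @ x_true                                  # noiseless expected counts

def loglik(x):                                  # Poisson log-likelihood (up to a constant)
    ax = A @ x
    return float(np.sum(y * np.log(ax) - ax))

sens = A.T @ np.ones(A.shape[0])                # sensitivity image A^T 1
x = np.ones(8)
ll = [loglik(x)]
for _ in range(200):
    x = x / sens * (A.T @ (y / (A @ x)))        # multiplicative ML-EM step
    ll.append(loglik(x))

print(f"log-likelihood {ll[0]:.3f} -> {ll[-1]:.3f}")
```

The multiplicative form keeps the activity nonnegative and, by the EM construction, never decreases the Poisson log-likelihood, which is the property the test checks.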
Noll, Douglas C.; Fessler, Jeffrey A.
2014-01-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
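The ingredients named above, a Lipschitz-constant majorizer, momentum acceleration, and adaptive restart, can be sketched on a plain least-squares problem. This is a generic FISTA-with-function-restart toy on invented data, not BARISTA's matrix majorizers or the SENSE system model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Smooth toy problem standing in for reconstruction: min_x 0.5 * ||A x - b||^2,
# solved by a Lipschitz-majorizer MM step (gradient descent with step 1/L)
# plus Nesterov momentum and adaptive function-value restart.
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

x = np.zeros(10)
z, t = x.copy(), 1.0
obj = [f(x)]
for _ in range(300):
    x_new = z - grad(z) / L                     # majorize-minimize step at z
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = x_new + (t - 1.0) / t_new * (x_new - x) # momentum extrapolation
    if f(x_new) > f(x):                         # adaptive restart: kill momentum
        z, t_new = x_new, 1.0
    x, t = x_new, t_new
    obj.append(f(x))

print(f"objective {obj[0]:.3e} -> {obj[-1]:.3e}")
```

On this small strongly convex problem the restarted momentum scheme converges essentially to machine precision within a few hundred iterations, while plain 1/L gradient descent would still be far from the solution.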
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Modeling and image reconstruction in spectrally resolved bioluminescence tomography
NASA Astrophysics Data System (ADS)
Dehghani, Hamid; Pogue, Brian W.; Davis, Scott C.; Patterson, Michael S.
2007-02-01
Recent interest in modeling and reconstruction algorithms for bioluminescence tomography (BLT) has increased and led to the general consensus that non-spectrally resolved, intensity-based BLT is a non-unique problem. However, the light emitted from, for example, firefly luciferase is widely distributed over the band of wavelengths from 500 nm to 650 nm and above, with the dominant fraction emitted from tissue being above 550 nm. This paper demonstrates the development of an algorithm for multi-wavelength 3D spectrally resolved BLT image reconstruction in a mouse model. It is shown that using single-view data, bioluminescence sources up to 15 mm deep can be successfully recovered, given correct information about the underlying tissue absorption and scatter.
Scattering robust 3D reconstruction via polarized transient imaging.
Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai
2016-09-01
Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
Incremental volume reconstruction and rendering for 3-D ultrasound imaging
NASA Astrophysics Data System (ADS)
Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry
1992-09-01
In this paper, we present approaches toward interactive visualization of real-time input, applied to 3-D visualization of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices whose location and orientation have 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
Image and Data-analysis Tools For Paleoclimatic Reconstructions
NASA Astrophysics Data System (ADS)
Pozzi, M.
We propose here a directory of tools and computing resources chosen to address the problems that arise in paleoclimatic reconstructions. In particular, the following points are discussed: 1) Numerical analysis of paleo-data (fossil abundances, species analyses, isotopic signals, chemical-physical parameters, biological data): a) statistical analyses (univariate, diversity, rarefaction, correlation, ANOVA, F and T tests, Chi^2); b) multidimensional analyses (principal components, correspondence, cluster analysis, seriation, discriminant, autocorrelation, spectral analysis); c) neural analyses (backpropagation nets, Kohonen feature maps, Hopfield nets, genetic algorithms). 2) Graphical analysis (visualization tools) of paleo-data (quantitative and qualitative fossil abundances, species analyses, isotopic signals, chemical-physical parameters): a) 2-D data analyses (graphs, histograms, ternary plots, survivorship curves); b) 3-D data analyses (direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, generation of tetrahedral grids). 3) Quantitative and qualitative digital image analysis (macro- and microfossil image analysis, scanning electron microscope and optical polarized microscope image capture and analysis, morphometric data analysis, 3-D reconstructions): a) 2D image analysis (correction of image defects, enhancement of image detail, converting texture and directionality to grey-scale or colour differences, visual enhancement using pseudo-colour, pseudo-3D, thresholding of image features, binary image processing, measurements, stereological measurements, measuring features on a white background); b) 3D image analysis (basic stereological procedures; two-dimensional structures: area fraction from the point count, volume fraction from the point count; three-dimensional structures: surface area and the line intercept count; three-dimensional microstructures: line length and the
Spectral image reconstruction by a tunable LED illumination
NASA Astrophysics Data System (ADS)
Lin, Meng-Chieh; Tsai, Chen-Wei; Tien, Chung-Hao
2013-09-01
Spectral reflectance estimation of an object via low-dimensional snapshot requires both image acquisition and a post-acquisition numerical estimation analysis. In this study, we set up a system incorporating a homemade cluster of LEDs with spectral modulation for scene illumination, and a multi-channel CCD to acquire multichannel images by means of a fully digital process. Principal component analysis (PCA) and pseudoinverse transformation were used to reconstruct the spectral reflectance within a constrained training set, such as the Munsell and Macbeth Color Checker sets. The average spectral reflectance RMS error over 34 patches of a standard color checker was 0.234. The purpose is to investigate the use of the system in conjunction with imaging analysis for industrial or medical inspection, quickly and with acceptable accuracy, and the approach was preliminarily validated.
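The PCA-plus-pseudoinverse estimation pipeline described above can be sketched as follows. The training spectra, the channel count, and the response matrix `S` are invented stand-ins for the LED/CCD system, chosen so the test reflectance lies in the training subspace and recovery can be checked exactly.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy spectral-reflectance estimation: training reflectances (31 bands) lie in
# a low-dimensional subspace; a few-channel acquisition y = S @ r is inverted
# via PCA + pseudoinverse.
bands, k, n_train = 31, 3, 50
basis_true = rng.random((k, bands))
train = rng.random((n_train, k)) @ basis_true   # training reflectances

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean)
B = Vt[:k].T                                    # PCA basis, shape (bands, k)

S = rng.random((6, bands))                      # 6 spectral channels (illustrative)
r_test = rng.random(k) @ basis_true             # unseen reflectance to recover
y = S @ r_test                                  # measured channel values

# Pseudoinverse estimate of the PCA coefficients, then back to reflectance.
coef = np.linalg.pinv(S @ B) @ (y - S @ mean)
r_est = mean + B @ coef

rms = np.sqrt(np.mean((r_est - r_test) ** 2))
print(f"reflectance RMS error: {rms:.2e}")
```

Because the 6-channel matrix `S @ B` has full column rank over the 3-dimensional PCA subspace, the 31-band reflectance is recovered exactly here; real color-checker data would of course leave a residual like the paper's 0.234 RMS.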
Image reconstruction of FT-IR microspectrometric data
NASA Astrophysics Data System (ADS)
Lasch, Peter; Lewis, E. Neil; Kidder, Linda H.; Naumann, Dieter
2000-03-01
FT-IR microspectrometry, particularly in combination with digital imaging techniques, shows great promise for in-vivo and ex-vivo medical diagnosis. This assessment is based on the knowledge that the method delivers information on the chemical structure and composition of a sample, and on the fact that any disease is linked to changes in the molecular and structural composition of cells and tissues. Typically, these changes are highly specific for a given tissue structure and are therefore potentially detectable by FT-IR microspectrometry. In this paper we present several approaches for the representation of mid-infrared microspectroscopic data acquired with high spatial resolution using an MCT focal plane array detector. The applicability of image reassembling methodologies such as functional group analysis, image reconstruction based on factor analysis, and artificial neural network analysis to the IR data is discussed.
A generalized Fourier penalty in prior-image-based reconstruction for cross-platform imaging
NASA Astrophysics Data System (ADS)
Pourmorteza, A.; Siewerdsen, J. H.; Stayman, J. W.
2016-03-01
Sequential CT studies present an excellent opportunity to apply prior-image-based reconstruction (PIBR) methods that leverage high-fidelity prior imaging studies to improve image quality and/or reduce x-ray exposure in subsequent studies. One major obstacle in using PIBR is that the initial and subsequent studies are often performed on different scanners (e.g. diagnostic CT followed by CBCT for interventional guidance); this results in mismatch in attenuation values due to hardware and software differences. While improved artifact correction techniques can potentially mitigate such differences, the correction is often incomplete. Here, we present an alternate strategy where the PIBR itself is used to mitigate these differences. We define a new penalty for the previously introduced PIBR called Reconstruction of Difference (RoD). RoD differs from many other PIBRs in that it reconstructs only changes in the anatomy (vs. reconstructing the current anatomy). Direct regularization of the difference image in RoD provides an opportunity to selectively penalize spatial frequencies of the difference image (e.g. low frequency differences associated with attenuation offsets and shading artifacts) without interfering with the variations in unchanged background image. We leverage this flexibility and introduce a novel regularization strategy using a generalized Fourier penalty within the RoD framework and develop the modified reconstruction algorithm. We evaluate the performance of the new approach in both simulation studies and in physical CBCT test-bench data. We find that generalized Fourier penalty can be highly effective in reducing low-frequency x-ray artifacts through selective suppression of spatial frequencies in the reconstructed difference image.
Local Surface Reconstruction from MER images using Stereo Workstation
NASA Astrophysics Data System (ADS)
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by Pancam or Navcam on systems such as the NASA Mars Exploration Rover (MER) mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt algorithm (LMA)-based bundle adjustment. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks used in the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the adaptive least squares correlation (ALSC) matching algorithm, so that the initial tiepoints can be further refined to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion for assessing the quality of reconstruction is its density (or completeness), which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL
A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging
NASA Astrophysics Data System (ADS)
Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.
2015-03-01
Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introduce noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.
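The combined-prior idea can be sketched in a few lines of NumPy. This is an illustrative 1D stand-in, not the authors' method: a hand-rolled Haar transform with plain soft-thresholding replaces the LMM shrinkage rule, a subgradient step replaces the TV proximal operator, and the sampling mask and thresholds are hypothetical.

```python
import numpy as np

def haar2(x):
    # one-level 1D Haar analysis: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar2(a, d):
    # inverse of haar2
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def combined_prior_recon(y, mask, lam_w=0.02, lam_tv=0.02, n_iter=100):
    """ISTA-style recovery from undersampled Fourier samples y = mask * FFT(x).
    Each iteration takes a data-consistency gradient step, then shrinks
    Haar-wavelet details (sparsity prior) and takes a TV subgradient step."""
    x = np.real(np.fft.ifft(y))
    for _ in range(n_iter):
        # gradient step for 0.5*||mask*FFT(x) - y||^2 with step 1/N
        r = mask * np.fft.fft(x) - y
        x = x - np.real(np.fft.ifft(mask * r))
        # wavelet-domain shrinkage (stand-in for the LMM shrinkage rule)
        a, d = haar2(x)
        x = ihaar2(a, soft(d, lam_w))
        # TV subgradient step on first differences
        s = np.sign(np.diff(x))
        g = np.zeros_like(x)
        g[:-1] -= s
        g[1:] += s
        x = x - lam_tv * g
    return x
```

On a piecewise-constant test signal this combination suppresses the ringing left by zero-filled reconstruction.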
[Image reconstruction of conductivity on magnetoacoustic tomography with magnetic induction].
Li, Jingyu; Yin, Tao; Liu, Zhipeng; Xu, Guohui
2010-04-01
The electrical characteristics of tissue, such as impedance and conductivity, change when pathological changes occur. These changes in electrical characteristics usually precede changes in tissue density, and the difference in conductivity between normal and pathological tissue is pronounced. Magneto-acoustic tomography with magnetic induction is based on the theory of magnetically induced eddy currents and the principles of vibration generation and acoustic transmission, and is used to obtain the boundary of the pathological tissue. Pathological changes can thus be inspected by imaging the electrical characteristics non-invasively. In this study, a two-layer concentric spherical model is established to simulate malignant tumor tissue surrounded by normal tissue; the mutual relations of the magneto-acoustic coupling effect and the coupling equations in the magnetic field are used to derive algorithms for reconstructing the conductivity. A simulation study is conducted to test the proposed model and validate the performance of the reconstruction algorithms. The results indicate that the signal processing method used in this paper can image the conductivity boundaries of the sample in the scanning cross section. The computer simulation results validate the feasibility of applying magneto-acoustic tomography with magnetic induction to malignant tumor imaging. PMID:20481330
An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.
Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan
2015-11-01
The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary regions (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed. PMID:26259219
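The second prior, a piecewise-constant time-attenuation model per voxel, can be illustrated with a brute-force single-breakpoint fit. This is a hedged sketch: the paper embeds the model inside the iterative reconstruction rather than fitting it per voxel in isolation.

```python
import numpy as np

def fit_step(curve):
    """Fit a one-jump piecewise-constant model to a voxel's time-attenuation
    curve: one constant level before the fluid front arrives, another after.
    Returns (change_time, level_before, level_after, sse)."""
    n = len(curve)
    best = None
    for t in range(1, n):            # candidate arrival time of the fluid front
        a = curve[:t].mean()         # attenuation before the front
        b = curve[t:].mean()         # attenuation after the front
        sse = ((curve[:t] - a) ** 2).sum() + ((curve[t:] - b) ** 2).sum()
        if best is None or sse < best[3]:
            best = (t, a, b, sse)
    return best
```

For a noiseless step the fit recovers the arrival time and both attenuation levels exactly.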
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic 3D watertight surface of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on generation of 3D surfaces from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) parameter, with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to be highly dependent upon the clustering radius and angle threshold values. The results obtained from this study give the reader valuable insight into the effects of different control parameters on the reconstructed surface quality. PMID:27386376
Image reconstruction by the speckle-masking method.
Weigelt, G; Wirnitzer, B
1983-07-01
Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography. PMID:19718124
Fast Multigrid Techniques in Total Variation-Based Image Reconstruction
NASA Technical Reports Server (NTRS)
Oman, Mary Ellen
1996-01-01
Existing multigrid techniques are used to effect an efficient method for reconstructing an image from noisy, blurred data. Total Variation minimization yields a nonlinear integro-differential equation which, when discretized using cell-centered finite differences, yields a full matrix equation. A fixed point iteration is applied with the intermediate matrix equations solved via a preconditioned conjugate gradient method which utilizes multi-level quadrature (due to Brandt and Lubrecht) to apply the integral operator and a multigrid scheme (due to Ewing and Shen) to invert the differential operator. With effective preconditioning, the method presented seems to require O(n) operations. Numerical results are given for a two-dimensional example.
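The fixed point iteration described above can be sketched for the 1D denoising case. This is an illustrative lagged-diffusivity iteration with dense linear algebra standing in for the paper's multigrid and multi-level quadrature solvers; `alpha` and `beta` are illustrative parameters.

```python
import numpy as np

def tv_denoise_fixed_point(z, alpha=1.0, beta=1e-3, n_iter=30):
    """Lagged-diffusivity fixed-point iteration for 1D TV denoising:
    at each step solve the linear system (I + alpha * D^T W(u) D) u = z,
    where D is the forward-difference operator and W holds weights
    1/sqrt((Du)_i^2 + beta) frozen ('lagged') from the previous iterate."""
    n = z.size
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n forward differences
    u = z.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ u) ** 2 + beta)
        A = np.eye(n) + alpha * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, z)
    return u
```

Constant inputs pass through unchanged, and noisy inputs come out with reduced total variation, which is the qualitative behavior the fixed point iteration is designed to deliver.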
Cardiac-state-driven CT image reconstruction algorithm for cardiac imaging
NASA Astrophysics Data System (ADS)
Cesmeli, Erdogan; Edic, Peter M.; Iatrou, Maria; Hsieh, Jiang; Gupta, Rajiv; Pfoh, Armin H.
2002-05-01
Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval, with the implicit assumption that the duration of events within an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of a percentage or time delay from the beginning or end of RR intervals. The projection data were then tagged with the cardiac phase and processed using temporal weights that are functions of their cardiac phases. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and an 8-slice CT scanner (GE LightSpeed Plus), with heart rates of 40 to 80 bpm. The quality of reconstruction is assessed by the visualization of the major arteries, e.g. RCA, LAD, LC, in the reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.
Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging
Yu, Xingjian; Chen, Shuhang; Hu, Zhenghui; Liu, Meng; Chen, Yunmei; Shi, Pengcheng; Liu, Huafeng
2015-01-01
In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data acquired over intervals ranging in duration from 10 seconds to minutes, according to some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition; moreover, they cannot provide separate information about the structure and about the variation of shape and rate of metabolism, which plays a major role in improving the visualization of contrast for some diagnostic requirements. This paper presents a novel low rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component, while variations between successive frames are abstracted into a sparse component. The resulting nuclear norm and l1 norm related minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets. PMID:26540274
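The low-rank-plus-sparse split at the heart of SLCR can be illustrated with a crude alternating-shrinkage scheme on a frame matrix. This is illustrative only: the paper works from sinogram data and uses the linearized alternating direction method, not this direct decomposition, and the thresholds here are hypothetical.

```python
import numpy as np

def svt(X, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # soft thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_plus_sparse(M, tau_l=1.0, tau_s=0.1, n_iter=50):
    """Alternating-shrinkage split of a frame matrix M (pixels x frames) into a
    low-rank background L (nuclear-norm shrinkage) and a sparse change
    component S (l1 shrinkage), in the spirit of SLCR's signal model."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau_l)    # background absorbs what S does not explain
        S = soft(M - L, tau_s)   # sparse part absorbs frame-to-frame changes
    return L, S
```

On a rank-one background with a single large spike, the spike lands in S while L stays essentially rank one.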
3D Reconstruction of Human Motion from Monocular Image Sequences.
Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo
2016-08-01
This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
Exogenous specific fluorescence marker location reconstruction using surface fluorescence imaging
NASA Astrophysics Data System (ADS)
Avital, Garashi; Gannot, Israel; Chernomordik, Victor V.; Gannot, Gallya; Gandjbakhche, Amir H.
2003-07-01
Diseased tissue may be specifically marked by an exogenous fluorescent marker and then, following laser activation of the marker, optically and non-invasively detected through fluorescence imaging. Interaction of a fluorophore, conjugated to an appropriate antibody, with the antigen expressed by the diseased tissue, can indicate the presence of a specific disease. Using an optical detection system and a reconstruction algorithm, we were able to determine the fluorophore's position in the tissue. We present 3D reconstructions of the location of a fluorescent marker, FITC, in the tongues of mice. One group of BALB/c mice was injected with squamous cell carcinoma (SqCC) cell line to the tongue, while another group served as the control. After tumor development, the mice's tongues were injected with FITC conjugated to anti-CD3 and anti-CD19 antibodies. An Argon laser excited the marker at 488 nm while a high precision fluorescent camera collected the emitted fluorescence. Measurements were performed with the fluorescent marker embedded at various simulated depths. The simulation was performed using agarose-based gel slabs applied to the tongue as tissue-like phantoms. A biopsy was taken from every mouse after the procedure and the excised tissue was histologically evaluated. We reconstruct the fluorescent marker's location in 3D using an algorithm based on the random walk theory.
List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
NASA Astrophysics Data System (ADS)
Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.
2014-03-01
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple data threads per core with each thread having a 512-bit single instruction, multiple data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with Xeon Phi co-processor card and top of the range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging
Image reconstruction with uncertainty quantification in photoacoustic tomography.
Tick, Jenni; Pulkkinen, Aki; Tarvainen, Tanja
2016-04-01
Photoacoustic tomography is a hybrid imaging method that combines optical contrast and ultrasound resolution. The goal of photoacoustic tomography is to resolve an initial pressure distribution from detected ultrasound waves generated within an object due to illumination by a short light pulse. In this work, a Bayesian approach to photoacoustic tomography is described. The solution of the inverse problem is derived and computation of the point estimates for image reconstruction and uncertainty quantification is described. The approach is investigated with simulations in different detector geometries, including a limited-view setup, and with different detector properties such as ideal point-like detectors, finite-size detectors, and detectors with a finite bandwidth. The results show that the Bayesian approach can be used to provide accurate estimates of the initial pressure distribution, as well as information about the uncertainty of the estimates. PMID:27106341
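For the linear-Gaussian special case, the Bayesian point estimate and its uncertainty have a closed form, which the following sketch illustrates. This is an illustrative stand-in: the paper's forward operator, noise model and prior are more elaborate than the isotropic Gaussians assumed here.

```python
import numpy as np

def gaussian_posterior(A, y, noise_var, prior_var):
    """Posterior mean and covariance for the linear model y = A x + e with
    e ~ N(0, noise_var * I) and prior x ~ N(0, prior_var * I). The mean is the
    point estimate of the initial pressure; the covariance diagonal
    quantifies the uncertainty of each estimated component."""
    n = A.shape[1]
    prec = A.T @ A / noise_var + np.eye(n) / prior_var   # posterior precision
    cov = np.linalg.inv(prec)                            # posterior covariance
    mean = cov @ A.T @ y / noise_var                     # posterior mean
    return mean, cov
```

With an identity forward operator and a very wide prior, the posterior mean reduces to the data, as expected.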
Incomplete-data CT image reconstructions in industrial applications
NASA Astrophysics Data System (ADS)
Tam, K. C.; Eberhard, J. W.; Mitchell, K. W.
1990-06-01
In industrial X-ray computerized tomography (CT), the objects to be inspected are usually very attenuating to X-rays, and their shape may not permit complete scanning at all view angles; incomplete-data imaging situations usually result. Image reconstruction from incomplete data can be achieved through an iterative transform algorithm, which utilizes a priori information about the object to compensate for the missing data. The results of validating the iterative transform algorithm on experimental data from a cross section of a high-pressure turbine blade made of Ni-based superalloy are reported. From the data set, two kinds of incomplete-data situations are simulated: incomplete projections and limited-angle scanning. The results indicate that substantial improvements, both visually and in wall thickness measurements, were brought about in all cases through the use of the iterative transform algorithm.
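The iterative transform idea, alternately enforcing the measured data and the a priori object information, can be sketched in 1D with a Gerchberg-Papoulis-style loop. This is illustrative: the actual algorithm operates on Radon-domain projection data rather than a 1D FFT, and the masks here are hypothetical.

```python
import numpy as np

def iterative_transform(y, known, support, n_iter=200):
    """Gerchberg-Papoulis-style iterative transform sketch: alternately enforce
    the measured Fourier samples (surrogate for the available projection data)
    and the a priori spatial support of the object, compensating for the
    missing measurements. `known` and `support` are boolean/0-1 masks."""
    X = np.where(known, y, 0.0 + 0.0j)
    for _ in range(n_iter):
        x = np.fft.ifft(X).real
        x = x * support              # a priori: object vanishes outside support
        X = np.fft.fft(x)
        X[known] = y[known]          # data consistency on measured samples
    return np.fft.ifft(X).real * support
```

When the object's support is tight enough relative to the number of missing samples, the iteration recovers most of the lost information that simple zero-filling leaves as artifacts.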
Reconstruction of mechanically recorded sound by image processing
Fadeyev, Vitaliy; Haber, Carl
2003-03-26
Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with no or minimal contact, by measuring the groove shape using precision metrology methods and digital image processing. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Various aspects of this approach are discussed. A feasibility test is reported which used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record. Comparisons are presented with stylus playback of the record and with a digitally re-mastered version of the original magnetic recording. A more extensive implementation of this approach, with dedicated hardware and software, is considered.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Bayesian Super-Resolved Surface Reconstruction From Multiple Images
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Cheesman, P.; Maluf, D. A.; Morris, R. D.; Swanson, Keith (Technical Monitor)
1999-01-01
Bayesian inference has been used successfully for many problems where the aim is to infer the parameters of a model of interest. In this paper we formulate the three dimensional reconstruction problem as the problem of inferring the parameters of a surface model from image data, and show how Bayesian methods can be used to estimate the parameters of this model given the image data. Thus we recover the three dimensional description of the scene. This approach also gives great flexibility. We can specify the geometrical properties of the model to suit our purpose, and can also use different models for how the surface reflects the light incident upon it. In common with other Bayesian inference problems, the estimation methodology requires that we can simulate the data that would have been recorded for any values of the model parameters. In this application this means that if we have image data we must be able to render the surface model. However it also means that we can infer the parameters of a model whose resolution can be chosen irrespective of the resolution of the images, and may be super-resolved. We present results of the inference of surface models from simulated aerial photographs for the case of super-resolution, where many surface elements project into a single pixel in the low-resolution images.
A High Precision Terahertz Wave Image Reconstruction Algorithm
Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang
2016-01-01
With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269
High-quality image reconstruction method for ptychography with partially coherent illumination
NASA Astrophysics Data System (ADS)
Yu, Wei; Wang, Shouyu; Veetil, Suhas; Gao, Shumei; Liu, Cheng; Zhu, Jianqiang
2016-06-01
The influence of partial coherence on the image reconstruction in ptychography is analyzed, and a simple method is proposed to reconstruct a clear image for the weakly scattering object with partially coherent illumination. It is demonstrated numerically and experimentally that by illuminating a weakly scattering object with a divergent radiation beam, and doing the reconstruction only from the bright-field diffraction data, the mathematical ambiguity and corresponding reconstruction errors related to the partial coherency can be remarkably suppressed, thus clear reconstructed images can be generated even under seriously incoherent illumination.
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, for which the following approaches are adopted: establish a lookup table for the Gaussian probability matrix to avoid repetitive probability calculations over all pixels, employ the blocking detection method on each block of pixels to further decrease the complexity, and change the structure of the lookup table from 3D to 1D with a simpler data type to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method to decide the threshold value automatically. Our algorithm has been tested on segmentation of flames and faces in a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
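The lookup-table trick is easy to illustrate: for 8-bit images the Gaussian densities need only be evaluated once per grey level rather than once per pixel. This is a minimal sketch of that one idea; the blocking detection and Otsu thresholding steps described above are omitted, and the mixture parameters are hypothetical.

```python
import numpy as np

def build_gmm_lut(means, variances, weights):
    """Precompute a 256-entry lookup table of GMM class responsibilities so
    Gaussian densities are evaluated once per grey level, not per pixel."""
    g = np.arange(256, dtype=float)[:, None]              # grey levels 0..255
    mu = np.asarray(means)[None, :]
    var = np.asarray(variances)[None, :]
    w = np.asarray(weights)[None, :]
    p = w * np.exp(-0.5 * (g - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return p / p.sum(axis=1, keepdims=True)               # 256 x K responsibilities

def segment(image, lut):
    """Label each pixel with its most probable component via table lookup."""
    return lut[image].argmax(axis=-1)
```

Segmentation then reduces to an array indexing operation followed by an argmax, independent of image size in the density-evaluation cost.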
NASA Astrophysics Data System (ADS)
Liu, Baodong; Wang, Ge; Ritman, Erik L.; Cao, Guohua; Lu, Jianping; Zhou, Otto; Zeng, Li; Yu, Hengyong
2011-10-01
A multisource x-ray interior imaging system with limited-angle scanning is investigated to study the possibility of building an ultrafast micro-CT for dynamic small animal imaging, and two methods are employed to perform interior reconstruction from a limited number of projections collected by the multisource interior x-ray system. The first is total variation minimization with the steepest descent search (TVM-SD) and the second is total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and demonstrate the feasibility and application of the proposed system configuration. The image reconstruction results show that both reconstruction methods can significantly improve the image quality and that TDM-STF is slightly superior to TVM-SD. Finally, quantitative image analysis shows that it is possible to make an ultrafast micro-CT using a multisource interior x-ray system scheme combined with state-of-the-art interior tomography.
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
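The regularization method with cross-validated parameter selection can be sketched as ridge-regularized least squares. This is illustrative: the paper solves a constrained quadratic program over step-wedge attenuation data, not this generic regression, and the fold count and lambda grid are hypothetical.

```python
import numpy as np

def ridge_fit(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def select_lambda(A, b, lambdas, k=5):
    """Pick the regularization parameter by k-fold cross validation, mirroring
    the paper's use of CV to stabilise an ill-conditioned estimate."""
    idx = np.arange(len(b))
    folds = np.array_split(idx, k)
    best, best_err = None, np.inf
    for lam in lambdas:
        err = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)       # fit on all but the held-out fold
            x = ridge_fit(A[train], b[train], lam)
            err += np.sum((A[f] @ x - b[f]) ** 2)
        if err < best_err:
            best, best_err = lam, err
    return best
```

The CV loop scores each candidate lambda by its held-out prediction error, which guards against both under- and over-regularization.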
3D Reconstruction of virtual colon structures from colonoscopy images.
Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C
2014-01-01
This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230
Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I
2015-08-01
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of the current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048(2) and 4096(2) at interactive and semi interactive frame-rates using an NVIDIA GeForce 970 GTX device. PMID:26737231
A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction
Konkle, Justin J.; Goodwill, Patrick W.; Hensley, Daniel W.; Orendorff, Ryan D.; Lustig, Michael; Conolly, Steven M.
2015-01-01
Magnetic particle imaging (MPI) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative MPI across rat-sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the MPI signal is lost during direct-feedthrough signal filtering, MPI reconstruction algorithms must recover this zero-frequency value. Prior x-space MPI recovery techniques were limited to 1D approaches, which could introduce artifacts when reconstructing a 3D image. In this paper, we formulate x-space reconstruction as a 3D convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications. PMID:26495839
WE-G-18C-08: Real Time Tumor Imaging Using a Novel Dynamic Keyhole MRI Reconstruction Technique
Lee, D; Pollock, S; Whelan, B; Keall, P; Greer, P; Kim, T
2014-06-15
Purpose: To test the hypothesis that the novel Dynamic Keyhole MRI reconstruction technique can accelerate image acquisition whilst maintaining high image quality for lung cancer patients. Methods: 18 MRI datasets from 5 lung cancer patients were acquired using a 3T MRI scanner. These datasets were retrospectively reconstructed using (A) the novel Dynamic Keyhole technique, (B) the conventional keyhole technique and (C) the conventional zero-filling technique. Keyhole techniques in MRI are those in which previously acquired k-space data are used to supplement undersampled data obtained in real time. The novel Dynamic Keyhole technique utilizes a previously acquired library of k-space datasets in conjunction with central k-space datasets acquired in real time. A simultaneously acquired respiratory signal is utilized to sort, match and combine the two k-space streams with respect to respiratory displacement. Reconstruction performance was quantified by (1) comparing the keyhole size (which corresponds to imaging speed) required to achieve the same image quality, and (2) keeping the keyhole size constant across the three reconstruction methods and comparing the resulting image quality to the ground-truth image. Results: (1) The Dynamic Keyhole method required a mean keyhole size 48% smaller than the conventional keyhole technique and 60% smaller than the zero-filling technique to achieve the same image quality. This directly corresponds to faster imaging. (2) When a constant keyhole size was utilized, the Dynamic Keyhole technique resulted in the smallest difference from the ground truth in the tumor region. Conclusion: The Dynamic Keyhole technique is simple and adaptable for clinical applications requiring real-time imaging and tumor monitoring, such as MRI-guided radiotherapy. Based on the results from this study, the Dynamic Keyhole method could increase the imaging frequency by a factor of five compared with full k-space acquisition.
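The k-space combination at the heart of the keyhole idea can be sketched in a few lines. This is an illustrative 1D simplification under assumed array shapes, not the authors' implementation: the trusted real-time central band replaces the center of a library frame matched by respiratory displacement.

```python
import numpy as np

def dynamic_keyhole(center_kspace, library_kspace, keyhole_width):
    """Combine a real-time central k-space band with library periphery.

    center_kspace  : 1D complex array from the real-time acquisition
                     (only the central `keyhole_width` samples are trusted)
    library_kspace : 1D complex array of the same length, taken from a
                     prior frame matched by respiratory displacement
    """
    n = len(library_kspace)
    combined = library_kspace.copy()
    lo = (n - keyhole_width) // 2
    combined[lo:lo + keyhole_width] = center_kspace[lo:lo + keyhole_width]
    # Inverse FFT back to image space (k-space assumed fftshifted)
    return np.fft.ifft(np.fft.ifftshift(combined))

# Sanity check: if the library frame matches the current frame exactly,
# the keyhole reconstruction reproduces the true profile
truth = np.fft.fftshift(np.fft.fft(np.hanning(64)))
img = dynamic_keyhole(truth, truth, keyhole_width=16)
```

In practice the library would hold many frames indexed by respiratory displacement, and the periphery would come from the best-matching one.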
Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Jiang, Xiao-Guo
2012-09-01
To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuations of the X-ray in the attenuators of different thicknesses and thus provide the transmission data. As is known, the spectrum estimation from transmission data is an ill-conditioned problem. The method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness feature of the spectrum function. In this work, various filter materials are put to use as the attenuator, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influences of the scattering photons within different intervals of emergence angle on the X-ray spectrum reconstruction are also analyzed.
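The transmission model behind such spectrum unfolding can be illustrated with a small non-negative iterative solver. The multiplicative (EM-style) update below is a stand-in for the paper's iterative-perturbation method, and all names and values are illustrative assumptions:

```python
import numpy as np

def unfold_spectrum(T, mu, d, iters=2000):
    """Recover a discrete spectrum s from transmission measurements
    T[i] = sum_j A[i, j] * s[j], where A[i, j] = exp(-mu[j] * d[i]) for
    attenuator thickness d[i] and attenuation coefficient mu[j].
    The multiplicative update keeps s non-negative at every step,
    which regularizes this ill-conditioned unfolding problem."""
    A = np.exp(-np.outer(d, mu))
    s = np.full(mu.shape, T[0] / len(mu))  # flat initial guess
    for _ in range(iters):
        s *= A.T @ (T / (A @ s)) / A.sum(axis=0)
    return s

# Toy problem: three spectral bins, noise-free transmission data
mu = np.array([0.1, 0.5, 1.0])
s_true = np.array([0.2, 0.5, 0.3])
d = np.linspace(0.0, 10.0, 20)
T = np.exp(-np.outer(d, mu)) @ s_true
s_est = unfold_spectrum(T, mu, d)
```

Real spectra have far more energy bins than independent attenuation measurements, which is why the paper additionally enforces smoothness of the spectrum function.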
Pesavento, J B; Morgan, D; Bermingham, R; Zamora, D; Chromy, B; Segelke, B; Coleman, M; Xing, L; Cheng, H; Bench, G; Hoeprich, P
2007-06-07
Nanolipoprotein particles (NLPs) are small 10-20 nm diameter assemblies of apolipoproteins and lipids. At Lawrence Livermore National Laboratory (LLNL), they have constructed multiple variants of these assemblies. NLPs have been generated from a variety of lipoproteins, including apolipoprotein A-I, apolipophorin III, apolipoprotein E4 22K, and MSP1T2 (Nanodisc, Inc.). Lipids used included DMPC (the bulk of the bilayer material), DMPE (in various amounts), and DPPC. NLPs were made in either the absence or presence of the detergent cholate. They have collected electron microscopy data as part of the characterization component of this research. Although purified by size exclusion chromatography (SEC), samples are somewhat heterogeneous when analyzed at the nanoscale by negative-stain cryo-EM. Images reveal a broad range of shape heterogeneity, suggesting variability in conformational flexibility; in fact, modeling studies point to dynamics of inter-helical loop regions within apolipoproteins as a possible source of the observed variation in NLP size. Initial attempts at three-dimensional reconstructions have proven challenging due to this size and shape disparity. They are pursuing a strategy of computational size exclusion to group particles into subpopulations based on average particle diameter. They show here results from their ongoing efforts at statistically and computationally subdividing NLP populations to achieve greater homogeneity and then generate 3D reconstructions.
Electron Trajectory Reconstruction for Advanced Compton Imaging of Gamma Rays
NASA Astrophysics Data System (ADS)
Plimley, Brian Christopher
Gamma-ray imaging is useful for detecting, characterizing, and localizing sources in a variety of fields, including nuclear physics, security, nuclear accident response, nuclear medicine, and astronomy. Compton imaging in particular provides sensitivity to weak sources and good angular resolution in a large field of view. However, the photon origin in a single event sequence is normally only limited to the surface of a cone. If the initial direction of the Compton-scattered electron can be measured, the cone can be reduced to a cone segment with width depending on the uncertainty in the direction measurement, providing a corresponding increase in imaging sensitivity. Measurement of the electron's initial direction in an efficient detection material requires very fine position resolution due to the electron's short range and tortuous path. A thick (650 μm), fully depleted charge-coupled device (CCD) developed for infrared astronomy has 10.5 μm position resolution in two dimensions, enabling the initial trajectory measurement of electrons of energy as low as 100 keV. This is the first time the initial trajectories of electrons of such low energies have been measured in a solid material. In this work, the CCD's efficacy as a gamma-ray detector is demonstrated experimentally, using a reconstruction algorithm to measure the initial electron direction from the CCD track image. In addition, models of fast electron interaction physics, charge transport and readout were used to generate modeled tracks with known initial direction. These modeled tracks allowed the development and refinement of the reconstruction algorithm. The angular sensitivity of the reconstruction algorithm is evaluated extensively with models for tracks below 480 keV, showing a FWHM as low as 20° in the pixel plane, and 30° RMS sensitivity to the magnitude of the out-of-plane angle. The measurement of the trajectories of electrons with energies as low as 100 keV has the potential to make electron
Three-dimensional image reconstruction for PET by multi-slice rebinning and axial image filtering.
Lewitt, R M; Muehllehner, G; Karp, J S
1994-03-01
A fast method is described for reconstructing volume images from three-dimensional (3D) coincidence data in positron emission tomography (PET). The reconstruction method makes use of all coincidence data acquired by high-sensitivity PET systems that do not have inter-slice absorbers (septa) to restrict the axial acceptance angle. The reconstruction method requires only a small amount of storage and computation, making it well suited for dynamic and whole-body studies. The method consists of three steps: (i) rebinning of coincidence data into a stack of 2D sinograms; (ii) slice-by-slice reconstruction of the sinogram associated with each slice to produce a preliminary 3D image having strong blurring in the axial (z) direction, but with different blurring at different z positions; and (iii) spatially variant filtering of the 3D image in the axial direction (i.e. 1D filtering in z for each x-y column) to produce the final image. The first step involves a new form of the rebinning operation in which multiple sinograms are incremented for each oblique coincidence line (multi-slice rebinning). The axial filtering step is formulated and implemented using the singular value decomposition (SVD). The method has been applied successfully to simulated data and to measured data for different kinds of phantom (multiple point sources, multiple discs, a cylinder with cold spheres, and a 3D brain phantom). PMID:15551583
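Step (iii), the spatially variant axial filtering via the SVD, can be illustrated with a toy deblurring of a single axial column. The z-dependent blur model and all names here are assumptions for illustration, not the authors' operator:

```python
import numpy as np

def svd_axial_filter(blurred_column, blur_matrix, rcond=1e-8):
    """Deblur one axial column (x-y fixed, varying z) with a truncated-SVD
    pseudo-inverse. blur_matrix[i, j] is the weight with which true slice j
    contributes to preliminary slice i; because it varies with z, a single
    convolution kernel cannot undo it, but the SVD provides a regularized,
    spatially variant inverse. Raising rcond truncates more singular values
    and so suppresses noise amplification on measured data."""
    U, s, Vt = np.linalg.svd(blur_matrix)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ blurred_column))

# Toy example: a z-dependent blur applied to a point source at slice 10
n = 32
z = np.arange(n)
B = np.exp(-np.abs(z[:, None] - z[None, :]) / (1.0 + 0.1 * z[None, :]))
B /= B.sum(axis=0, keepdims=True)   # each true slice spreads unit weight
x_true = np.zeros(n)
x_true[10] = 1.0
x_rec = svd_axial_filter(B @ x_true, B)
```

Noise-free and well conditioned, the pseudo-inverse recovers the point source; with real data a larger `rcond` trades axial resolution for noise control.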
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
Image improvement and three-dimensional reconstruction using holographic image processing
NASA Technical Reports Server (NTRS)
Stroke, G. W.; Halioua, M.; Thon, F.; Willasch, D. H.
1977-01-01
Holographic computing principles make possible image improvement and synthesis in many cases of current scientific and engineering interest. Examples are given for the improvement of resolution in electron microscopy and 3-D reconstruction in electron microscopy and X-ray crystallography, following an analysis of optical versus digital computing in such applications.
Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images
NASA Astrophysics Data System (ADS)
Deng, Yuanbo; Chu, Daping
2016-03-01
The present study evaluates the effect of the filling factor of a mask applied to a phase-only hologram on the corresponding reconstructed image. A square aperture with varying filling factor is applied to the phase-only hologram of the target image, and the average cross-section intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the image. The Lena image is used as the target, and the quality of the reconstructed image is assessed with the RMSE and SSIM metrics. The results show that the PSF of the image agrees with the PSF of the Fourier transform of the mask, and that as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be used in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.
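The profile-deconvolution step described above can be sketched as a regularized (Wiener-style) 1D Fourier deconvolution. The boxcar target and 3-tap blur below are made-up stand-ins for the paper's Lena cross-section profiles:

```python
import numpy as np

def estimate_psf(reconstructed_profile, target_profile, eps=1e-6):
    """Estimate a 1D PSF by Fourier-domain deconvolution of the average
    cross-section intensity profiles. `eps` regularizes the division
    where the target spectrum is near zero (Wiener-style)."""
    R = np.fft.fft(reconstructed_profile)
    T = np.fft.fft(target_profile)
    psf = np.fft.ifft(R * np.conj(T) / (np.abs(T) ** 2 + eps))
    return np.real(np.fft.fftshift(psf))  # center the PSF for display

# Toy check: blur a profile with a known kernel, then recover that kernel
n = 64
target = np.zeros(n)
target[24:40] = 1.0                      # boxcar "target" profile
kernel = np.zeros(n)
kernel[:3] = [0.25, 0.5, 0.25]           # known 3-tap blur
blurred = np.real(np.fft.ifft(np.fft.fft(target) * np.fft.fft(kernel)))
psf = estimate_psf(blurred, target)
```

The recovered PSF peaks at the blur kernel's center; frequencies where the boxcar spectrum vanishes are suppressed by the regularizer, which slightly distorts the estimate, just as a finite mask would in the optical experiment.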
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
NASA Astrophysics Data System (ADS)
Andreyev, Andriy
2016-03-01
There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along their corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, demonstrating the feasibility of the proposed approach.
NASA Astrophysics Data System (ADS)
Liang, Zhiting; Guan, Yong; Liu, Gang; Bian, Rui; Zhang, Xiaobo; Xiong, Ying; Tian, Yangchao
2013-09-01
Nano-CT is an important technique for analyzing the internal structure of nanomaterials and biological cells. However, the maximum rotation angle of the sample stage is limited by the sample space, and in some cases the scan time needed to acquire enough projections is excessively long. It is therefore difficult to acquire high-quality nano-CT images from limited-angle or few-view projections using conventional Fourier reconstruction methods. In this paper, we utilized total variation (TV) iterative reconstruction to carry out numerical and nano-CT image reconstruction with limited-angle and few-view data. The results indicate that better-quality images were achieved.
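A minimal sketch of TV-regularized iterative reconstruction is shown below, using randomly undersampled Fourier coefficients as a stand-in for few-view projection data (the paper's projector would replace the FFT; the phantom, parameters, and names are all illustrative assumptions):

```python
import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic total variation of a 2D image."""
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # Adjoint of the periodic forward-difference operator
    return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

def tv_reconstruct(mask, data, shape, lam=0.05, step=0.5, iters=300):
    """Gradient descent on 0.5 * ||M F x - b||^2 + lam * TV(x), where
    M is the sampling mask and F an orthonormal 2D Fourier transform."""
    x = np.zeros(shape)
    for _ in range(iters):
        resid = mask * (np.fft.fft2(x, norm="ortho") - data)
        grad_fid = np.real(np.fft.ifft2(resid, norm="ortho"))
        x -= step * (grad_fid + lam * tv_grad(x))
    return x

# Piecewise-constant phantom measured at roughly half of its Fourier
# coefficients; TV fills in the missing data with a blocky completion
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
truth[12:20, 12:20] = 2.0
rng = np.random.default_rng(0)
mask = rng.random(truth.shape) < 0.5
mask[0, 0] = True  # always keep the DC coefficient
data = mask * np.fft.fft2(truth, norm="ortho")
recon = tv_reconstruct(mask, data, truth.shape)
```

For piecewise-constant objects the TV prior suppresses the aliasing artifacts that a plain zero-filled inverse transform would exhibit.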
An adaptive total variation image reconstruction method for speckles through disordered media
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei
2013-09-01
Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with image reconstruction methods. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory and statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, greatly boosting the SNR of the restored image. They also indicate that, compared with the image formed directly by a 'clean' system, the reconstructed results can overcome the diffraction limit of the 'clean' system, which is conducive to the observation of cells, protein molecules in biological tissues, and other structures at the micro/nano scale.
Restoration of singularities in reconstructed phase of crystal image in electron holography.
Li, Wei; Tanji, Takayoshi
2014-12-01
Off-axis electron holography can be used to measure the inner potential of a specimen from its reconstructed phase image and is thus a powerful technique for materials scientists. However, abrupt reversals of contrast from white to black may sometimes occur in a digitally reconstructed phase image, which results in inaccurate information. Such phase distortion is mainly due to the digital reconstruction process and weak electron wave amplitude in some areas of the specimen. Therefore, digital image processing can be applied to the reconstruction and restoration of phase images. In this paper, fringe reconnection processing is applied to phase image restoration of a crystal structure image. The disconnection and wrong connection of interference fringes in the hologram that directly cause a 2π phase jump imperfection are correctly reconnected. Experimental results show that the phase distortion is significantly reduced after the processing. The quality of the reconstructed phase image was improved by the removal of imperfections in the final phase. PMID:25272997
Image reconstruction for PET/CT scanners: past achievements and future challenges
Tong, Shan; Alessio, Adam M; Kinahan, Paul E
2011-01-01
PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831
A nonlinear image reconstruction technique for ECT using a combined neural network approach
NASA Astrophysics Data System (ADS)
Marashdeh, Q.; Warsito, W.; Fan, L.-S.; Teixeira, F. L.
2006-08-01
A combined multilayer feed-forward neural network (MLFF-NN) and analogue Hopfield network is developed for nonlinear image reconstruction of electrical capacitance tomography (ECT). The (nonlinear) forward problem in ECT is solved using the MLFF-NN trained with a set of capacitance data from measurements based on a back-propagation training algorithm with regularization. The inverse problem is solved using an analogue Hopfield network based on a neural-network multi-criteria optimization image reconstruction technique (HN-MOIRT). The nonlinear image reconstruction based on this combined MLFF-NN + HN-MOIRT approach is tested on measured capacitance data not used in training to reconstruct the permittivity distribution. The performance of the technique is compared against commonly used linear Landweber and semi-linear image reconstruction techniques, showing superiority in terms of both stability and quality of reconstructed images.
NASA Astrophysics Data System (ADS)
Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.
1995-05-01
This study reports on results of our efforts to improve the efficiency and stability of previously developed reconstruction algorithms for optical diffusion tomography. The previous studies, which applied regularization and a priori constraints to iterative methods (the POCS, CGD, and SART algorithms), showed that in most cases good-quality reconstructions of simply structured media were achievable using a perturbation model. The CGD method, the most efficient of the three algorithms, was however in some instances unable to produce good-quality images because of the difficulty of applying range constraints, which can cause divergence. In this study, a scheme is proposed to detect this divergence; when it is detected, the gradient vector is reset and the CGD reconstruction is restarted using the previous reconstruction as the initial value. In addition, a rescaling technique, which rescales the weight matrix to make it more uniform and less ill-conditioned, is used to suppress numerical errors and accelerate convergence. Two criteria, rescaling the maximum of each column to 1 and rescaling the average of each column to 1, were applied and compared with results without rescaling. The results show that, with properly imposed constraints, good-quality images can be obtained using the CGD method. Convergence is much slower when constraints are imposed, but still comparable to the POCS and SART algorithms. The rescaling technique produces more stable and more accurate reconstructions, and speeds up the reconstruction significantly for all three algorithms.
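The column-rescaling idea can be sketched directly; the matrix below is an arbitrary illustration (not data from the study), and mapping a solution of the rescaled system back only requires dividing by the stored scale factors:

```python
import numpy as np

def rescale_columns(W, mode="max"):
    """Rescale each column of the weight matrix W so that its maximum
    (mode="max") or mean (mode="mean") equals 1, matching the two criteria
    compared in the study. Returns the rescaled matrix and the scale
    factors: since W x = (W / s) (s * x), a solution y of the rescaled
    system corresponds to x = y / s in the original units."""
    scale = W.max(axis=0) if mode == "max" else W.mean(axis=0)
    scale = np.where(scale == 0, 1.0, scale)  # leave all-zero columns alone
    return W / scale, scale

# Columns with a 50x scale disparity: rescaling improves conditioning
W = np.array([[2.0, 100.0],
              [1.0,  50.0],
              [0.0,  25.0]])
Ws, s = rescale_columns(W)
```

Making the columns comparable in magnitude reduces the condition number of the system, which is exactly the numerical-stability benefit reported for all three algorithms.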
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
Chen, G; Pan, X; Stayman, J; Samei, E
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical
A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction
Peng, Xi; Liu, Jianbo; Yang, Dingcheng
2014-01-01
Nonconvex optimization has been shown to need substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under a dictionary learning model with data fidelity enforced on the partial measurements. By incorporating an iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) efficiently minimizes the approximated lp-norm penalty. Specifically, the algorithms converge after a relatively small number of iterations under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and offers advantages over current state-of-the-art reconstruction approaches in terms of higher PSNR and lower HFEN values. PMID:25431583
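The reweighting idea, in which a weighted l1 penalty mimics the nonconvex lp penalty, can be sketched on a toy separable problem. The closed-form soft-thresholding and all parameter values below are illustrative assumptions, not the WTBMDU algorithm itself:

```python
import numpy as np

def lp_weights(x, p=0.5, eps=1e-3):
    """Weights making sum(w * |x|) approximate sum(|x|^p): the standard
    reweighted-l1 surrogate uses w_i = (|x_i| + eps)^(p - 1) evaluated
    at the previous iterate, so small entries get large penalties."""
    return (np.abs(x) + eps) ** (p - 1.0)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_shrinkage(b, p=0.5, lam=0.2, iters=10):
    """Approximately solve min_x 0.5*||x - b||^2 + lam * sum(|x_i|^p)
    by alternating weight updates with weighted soft-thresholding."""
    x = b.copy()
    for _ in range(iters):
        x = soft_threshold(b, lam * lp_weights(x, p))
    return x

b = np.array([5.0, 0.05, -3.0])
x = reweighted_shrinkage(b)
```

Compared with a fixed l1 threshold, the lp-style weights shrink large coefficients only slightly while annihilating small ones, which is the sparsity-promoting behavior the paper exploits inside its Bregman iterations.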
A Novel Linear Accelerator For Image Guided Radiation Therapy
Ding Xiaodong; Boucher, Salime
2011-06-01
RadiaBeam is developing a novel linear accelerator which produces both kilovoltage (~100 keV) X-rays for imaging and megavoltage (6 to 20 MeV) X-rays for therapy. We call this system the DEXITron: Dual Energy X-ray source for Imaging and Therapy. The Dexitron is enabled by an innovation in the electromagnetic design of the linac, which allows the output energy to be rapidly switched from high energy to low energy. In brief, the method involves switching the phase of the radiofrequency (RF) power by 180 degrees at some point in the linac such that, after that point, the linac decelerates the beam, rather than accelerating it. The Dexitron will have comparable cost to other linacs, and avoids the problems associated with current IGRT equipment.
Investigation of optimization-based reconstruction with an image-total-variation constraint in PET
NASA Astrophysics Data System (ADS)
Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan
2016-08-01
Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years of potential utility for CT imaging applications. In this work, we investigate tailoring the optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was explored also with respect to different data conditions and activity up-takes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.
Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali
2015-01-13
In order to improve the quality of 3D X-ray tomography reconstruction for Non-Destructive Testing (NDT), we investigate hierarchical Bayesian methods in this paper. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only the volume is estimated through the prior model of the volume, but also the hyperparameters of this prior. This additional complexity in the reconstruction methods, when applied to large volumes (from 512³ to 8192³ voxels), results in an increased computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper rely on algorithm acceleration by Variational Bayesian Approximation (VBA) [1] and on hardware acceleration with projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward, or projection) and Hᵗ (adjoint, or back-projection) implemented on multi-GPU [2] have been used in this study. Different methods are evaluated on the synthetic 'Shepp-Logan' volume in terms of reconstruction quality and time. We used several simple regularizations of order 1 and order 2. Other prior models also exist [5]. For a discrete image, segmentation and reconstruction can sometimes be performed simultaneously, in which case the reconstruction can be done with fewer projections.
Improved Liver Lesion Conspicuity With Iterative Reconstruction in Computed Tomography Imaging.
Jensen, Kristin; Andersen, Hilde Kjernlie; Tingberg, Anders; Reisse, Claudius; Fosse, Erik; Martinsen, Anne Catrine T
2016-01-01
Studies on iterative reconstruction techniques on computed tomographic (CT) scanners show reduced noise and changed image texture. The purpose of this study was to address the possibility of dose reduction and improved conspicuity of lesions in a liver phantom for different iterative reconstruction algorithms. An anthropomorphic upper-abdomen phantom, specially designed for receiver operating characteristic (ROC) analysis, was scanned with 2 different CT models from the same vendor, the GE CT750 HD and the GE Lightspeed VCT. Images were obtained at 3 dose levels, 5, 10, and 15 mGy, and reconstructed with filtered back projection (FBP) and 2 different iterative reconstruction algorithms: adaptive statistical iterative reconstruction and Veo. Overall, 5 interpreters evaluated the images and ROC analysis was performed. The standard deviation and the contrast-to-noise ratio were measured. Veo image reconstruction resulted in larger areas under the curve than adaptive statistical iterative reconstruction and FBP image reconstruction at given dose levels. For the CT750 HD, iterative reconstruction at the 10 mGy dose level resulted in areas under the curve larger than or similar to FBP at the 15 mGy dose level (0.88-0.95 vs 0.90). This was not shown for the Lightspeed VCT (0.83-0.85 vs 0.92). The results in this study indicate that the possibility for radiation dose reduction using iterative reconstruction techniques depends on both the reconstruction technique and the CT scanner model used. PMID:26790606
Eck, Brendan L.; Fahmi, Rachid; Miao, Jun; Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun; Wilson, David L.
2015-10-15
Purpose: The aims of this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low-dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at a fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, P_C. The Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than in FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit
Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging
Virador, Patrick R.G.
2000-04-01
The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully-3D, tomographic reconstruction using a septa-less, stationary (i.e., no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier-method-based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points, instead of rebinning the events into predefined crystal-face LORs, which is the only other method proposed thus far to handle DOI information. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width, evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins and (b) employ a semi-iterative procedure in order to estimate the unsampled data
Modulus reconstruction from prostate ultrasound images using finite element modeling
NASA Astrophysics Data System (ADS)
Yan, Zhennan; Zhang, Shaoting; Alam, S. Kaisar; Metaxas, Dimitris N.; Garra, Brian S.; Feleppa, Ernest J.
2012-03-01
In medical diagnosis, elastography is becoming increasingly useful. However, most methods assume a planar compression applied to tissue surfaces and measure the resulting deformation. The stress distribution is relatively uniform close to the surface when using a large, flat compressor, but it diverges gradually with tissue depth. In prostate elastography, the transrectal probes used for scanning and compression are generally cylindrical side-fire or rounded end-fire probes, and the force is applied through the rectal wall. This makes it very difficult to detect cancer in the prostate, since the rounded contact surfaces exaggerate the non-uniformity of the applied stress, especially for the distal, anterior prostate. We have developed a preliminary 2D Finite Element Model (FEM) to simulate prostate deformation in elastography. The model includes a homogeneous prostate with a stiffer tumor in the proximal, posterior region of the gland. A force is applied to the rectal wall to deform the prostate, and strain and stress distributions can be computed from the resultant displacements. We then take the displacements as boundary conditions and reconstruct the modulus distribution (the inverse problem) using a linear perturbation method. The FEM simulation shows that strain and the strain contrast (of the lesion) decrease very rapidly with increasing depth and lateral distance. Therefore, lesions would not be clearly visible if located far away from the probe. However, the reconstructed modulus image can better depict a relatively stiff lesion wherever the lesion is located.
MIT image reconstruction based on edge-preserving regularization.
Casanova, R; Silva, A; Borges, A R
2004-02-01
Tikhonov regularization has been widely used in electrical tomography to deal with the ill-posedness of the inverse problem. However, due to the fact that discontinuities are strongly penalized, this approach tends to produce blurred images. Recently, a lot of interest has been devoted to methods with edge-preserving properties, such as those related to total variation, wavelets and half-quadratic regularization. In the present work, the performance of an edge-preserving regularization method, called ARTUR, is evaluated in the context of magnetic induction tomography (MIT). ARTUR is a deterministic method based on half-quadratic regularization, where complementary a priori information may be introduced in the reconstruction algorithm by the use of a nonnegativity constraint. The method is first tested using an MIT analytical model that generates projection data given the position, the radius and the magnetic permeability of a single nonconductive cylindrical object. It is shown that even in the presence of strong Gaussian additive noise, it is still able to recover the main features of the object. Secondly, reconstructions based on real data for different configurations of conductive nonmagnetic cylindrical objects are presented and some of their parameters estimated. PMID:15005316
Current profile reconstruction using electron temperature imaging diagnostics
NASA Astrophysics Data System (ADS)
Tritz, K.; Stutman, D.; Delgado-Aparicio, L. F.; Finkenthal, M.; Pacella, D.; Kaita, R.; Stratton, B.; Sabbagh, S.
2004-10-01
Flux surface shape information can be used to constrain the current profile for reconstruction of the plasma equilibrium. One method of inferring flux surface shape relies on plasma x-ray emission; however, deviations from the flux surfaces due to impurity and density asymmetries complicate the interpretation. Electron isotherm surfaces should correspond well to the plasma flux surfaces, and equilibrium constraint modeling using this isotherm information constrains the current profile. The KFIT code is used to assess the profile uncertainty and to optimize the number, location and SNR required for the Te detectors. As Te imaging detectors we consider tangentially viewing, vertically spaced, linear gas electron multiplier arrays operated in pulse height analysis (PHA) mode and multifoil soft x-ray arrays. Isoflux coordinate sets provided by Te measurements offer a strong constraint on the equilibrium reconstruction in both a stacked horizontal array configuration and a crossed horizontal and vertical beam system, with q0 determined to within ±4%. The required SNR can be provided with either PHA or multicolor diagnostic techniques, though the multicolor system requires ~4× better statistics for comparable final errors.
Accelerated nanoscale magnetic resonance imaging through phase multiplexing
Moores, B. A.; Eichler, A.; Takahashi, H.; Navaretti, P.; Degen, C. L.; Tao, Y.
2015-05-25
We report a method for accelerated nanoscale nuclear magnetic resonance imaging by detecting several signals in parallel. Our technique relies on phase multiplexing, where the signals from different nuclear spin ensembles are encoded in the phase of an ultrasensitive magnetic detector. We demonstrate this technique by simultaneously acquiring statistically polarized spin signals from two different nuclear species (¹H, ¹⁹F) and from up to six spatial locations in a nanowire test sample using a magnetic resonance force microscope. We obtain one-dimensional imaging resolution better than 5 nm, and subnanometer positional accuracy.
3D imaging reconstruction and impacted third molars: case reports
Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea
2012-01-01
There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third-molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between third molars and the mandibular canal using Dental CT Scan, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide valuable assistance in the most complicated cases. PMID:23386934
Image reconstruction from few views by l0-norm optimization
NASA Astrophysics Data System (ADS)
Sun, Yu-Li; Tao, Jin-Xu
2014-07-01
In the medical computed tomography (CT) field, total variation (TV), which is the l1-norm of the discrete gradient transform (DGT), is widely used as a regularizer based on compressive sensing (CS) theory. To overcome the TV model's disadvantageous tendency to uniformly penalize the image gradient and over-smooth low-contrast structures, an iterative algorithm based on l0-norm optimization of the DGT is proposed. To address the challenges introduced by the l0-norm DGT, the algorithm uses a pseudo-inverse transform of the DGT and adapts an iterative hard thresholding (IHT) algorithm, whose convergence and efficiency have been theoretically proven. Simulations support our conclusions and indicate that the proposed algorithm can markedly improve the reconstruction quality.
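The generic IHT update that such algorithms adapt is simple to state: a gradient step on the data-fit term followed by a hard threshold that keeps the k largest-magnitude entries. The following is the basic sparse-recovery form (a sketch under a toy random system, not the paper's DGT-domain variant):

```python
import numpy as np

def iht(A, y, k, n_iter=500, step=None):
    """Iterative hard thresholding for y = A @ x with k-sparse x:
    gradient step on ||y - A x||^2, then project onto the k-sparse set."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # small enough for stability
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))       # descend the data-fit term
        x[np.argsort(np.abs(x))[:-k]] = 0.0      # hard threshold: keep k largest
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 150)) / 10.0       # underdetermined system
x_true = np.zeros(150)
x_true[[3, 50, 120]] = [3.0, -2.0, 1.5]
x_hat = iht(A, A @ x_true, k=3)                  # recovers the sparse signal
```

In the paper's setting the sparsity is enforced on the gradient transform of the image rather than on the image itself, which is where the pseudo-inverse of the DGT enters.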
Terrain reconstruction from Chang'e-3 PCAM images
NASA Astrophysics Data System (ADS)
Wang, Wen-Rui; Ren, Xin; Wang, Fen-Fei; Liu, Jian-Jun; Li, Chun-Lai
2015-07-01
The existing terrain models that describe the local lunar surface have limited resolution and accuracy, which can hardly meet the needs of rover navigation, positioning and geological analysis. China launched the lunar probe Chang'e-3 in December 2013. Chang'e-3 comprised a lander and a lunar rover called “Yutu” (Jade Rabbit). A set of panoramic cameras was installed on the rover mast. After panoramic images of the four explored sites were acquired, terrain models of the local lunar surface with a resolution of 0.02 m were reconstructed. Compared with other data sources, the models derived from Chang'e-3 data were clear and accurate enough to be used to plan the route of Yutu. Supported by the National Natural Science Foundation of China.
Applications of laser wakefield accelerators for biomedical imaging
NASA Astrophysics Data System (ADS)
Najmudin, Zulfikar
2014-10-01
Laser-wakefield accelerators driven by high-intensity short-pulse lasers are a proven compact source of high-energy electron beams, with GeV-scale energy gains demonstrated in centimetres of plasma. One of the main proposed applications for these accelerators is to drive synchrotron light sources, in particular for x-ray applications. It has also been shown that the same plasma accelerator can act as a wiggler, capable of producing high-brightness, spatially coherent hard x-ray beams. In this latest work, we demonstrate the application of these unique light sources for biological and medical applications. The experiments were performed with the Astra Gemini laser at the Rutherford Appleton Laboratory in the UK. Gemini produces laser pulses with energy exceeding 10 J in pulse lengths down to 40 fs. A long-focal-length parabola (f/20) is used to focus the laser down to a spot of size approximately 25 μm (FWHM) into a gas cell of variable length. Electrons are accelerated to energies up to 1 GeV, and a bright beam of x-rays is observed simultaneously with the accelerated beam. The length of the gas cell was optimised to produce high-contrast x-ray images of radiographed test objects. This source was then used for imaging a number of interesting medical and biological samples. Full tomographic imaging of a human trabecular bone sample was performed with resolution easily surpassing the ~100 μm level required for CT applications. Phase-contrast imaging of human prostate and mouse neonates at the micron level was also demonstrated. These studies indicate the usefulness of these sources in research and clinical applications. They also show that full 3D imaging can be achieved with this source in a fraction of the time it would take with a corresponding x-ray tube. The JAI is funded by STFC Grant ST/J002062/1.
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
NASA Astrophysics Data System (ADS)
Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) than existing CPU-based systems.
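A key property of a quadratic potential, as used in the MAP algorithm above, is that it keeps the estimation problem linear. The following is a minimal 1D sketch of that idea (our own simplification for the denoising case, not the authors' GPU deconvolution code): penalize first differences and solve the resulting normal equations directly.

```python
import numpy as np

def map_quadratic(y, beta):
    """MAP estimate with a quadratic smoothness potential:
    minimize ||y - x||^2 + beta * ||D x||^2, D = first-difference operator.
    Because the prior is quadratic, the solution is linear: (I + beta D'D) x = y."""
    n = len(y)
    D = (np.eye(n, k=1) - np.eye(n))[:-1]      # forward differences
    return np.linalg.solve(np.eye(n) + beta * D.T @ D, y)

# Smooth signal plus noise: the quadratic prior suppresses the noise.
n = 64
clean = np.sin(2 * np.pi * np.arange(n) / n)
rng = np.random.default_rng(4)
y = clean + 0.3 * rng.standard_normal(n)
x_hat = map_quadratic(y, beta=5.0)
assert np.linalg.norm(x_hat - clean) < np.linalg.norm(y - clean)
```

In the deconvolution setting, the identity is replaced by a blur operator H and the same normal-equations structure holds; the GPU implementation instead applies such updates iteratively on large 3D volumes.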
The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction
Llacer, J.; Veklerov, E.
1987-01-01
Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge towards strongly deteriorated versions of the original source image. The image deterioration is caused by an excessive attempt by the algorithm to match the projection data with high counts. We can modulate this effect. We compared reconstructions by filtered backprojection and by the MLE algorithm against a source image to show that the MLE images can have noise similar to the filtered backprojection images in regions of high activity, and very low noise, comparable to the source image, in regions of low activity, if the iterative procedure is stopped at an appropriate point.
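The MLE iteration referred to here is the classic multiplicative ML-EM update for Poisson data. A minimal sketch on a hypothetical toy system matrix (not PET-scale data) is:

```python
import numpy as np

def ml_em(A, y, n_iter=500):
    """ML-EM for Poisson data y ~ Poisson(A x): the classic multiplicative
    update x <- (x / s) * A.T @ (y / (A x)), with sensitivity s = A.T @ 1."""
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image
    x = np.ones(A.shape[1])                     # uniform positive start
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured over predicted counts
        x = x / sens * (A.T @ ratio)            # multiplicative EM update
    return x

rng = np.random.default_rng(5)
A = rng.random((20, 5))                         # nonnegative system matrix
x_true = np.array([1.0, 2.0, 3.0, 0.5, 1.5])
y = A @ x_true                                  # noiseless projections
x_hat = ml_em(A, y)
```

Because each iteration pushes the predicted counts toward the measured data, running it too long on noisy data fits the noise; the "stop at an appropriate point" observation above corresponds to truncating `n_iter` before that happens.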
Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.
2016-03-01
A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of signal-processing steps: beamforming, followed by envelope detection, ending with log compression. Yet the image will be defocused when PA signals are the input, owing to the incorrect delay function. Our approach is to reverse the order of the image-processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as the input, we first recover the US post-beamformed RF data by applying log decompression and convolving an acoustic impulse response to restore carrier-frequency information. Then, the US post-beamformed RF data are utilized as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focus depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation, and was experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, caused by information loss during envelope detection and convolution of the RF information.
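The first step of the pipeline reversal, log decompression, can be sketched as follows. Note that the exact compression mapping and the 60 dB dynamic range here are assumptions for illustration; clinical scanners differ and may not disclose theirs.

```python
import numpy as np

def log_compress(env, dr_db=60.0):
    """Display log-compression of a normalized envelope onto [0, 1].
    Mapping and dynamic range are assumed, not taken from any scanner."""
    env = np.clip(env / env.max(), 10.0 ** (-dr_db / 20.0), 1.0)
    return 1.0 + 20.0 * np.log10(env) / dr_db

def log_decompress(bmode, dr_db=60.0):
    """Invert the log compression: recover the linear envelope from B-mode."""
    return 10.0 ** (dr_db * (bmode - 1.0) / 20.0)

# Round trip: decompression recovers the normalized envelope exactly
# as long as no sample fell below the displayed dynamic range.
env = np.abs(np.random.default_rng(3).standard_normal(256)) + 0.1
recovered = log_decompress(log_compress(env))
assert np.allclose(recovered, env / env.max())
```

The recovered envelope is then convolved with an assumed acoustic impulse response to re-introduce the carrier, yielding a stand-in for the post-beamformed RF data that the PA rebeamformer consumes.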
Research on super-resolution image reconstruction based on an improved POCS algorithm
NASA Astrophysics Data System (ADS)
Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng
2015-07-01
Super-resolution image reconstruction (SRIR) can improve the resolution of a blurred image, addressing insufficient spatial resolution, excessive noise, and low image quality. First, we introduce the image degradation model to show that the essence of the super-resolution reconstruction process is a mathematically ill-posed inverse problem. Second, we analyze the causes of blurring in the optical imaging process: light diffraction and small-angle scattering are the main sources. We propose an image point-spread-function estimation method and an improved projection onto convex sets (POCS) algorithm; their effectiveness is indicated by analyzing the time-domain and frequency-domain changes during the reconstruction process, showing that the improved POCS algorithm, based on prior knowledge, restores and approaches the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method with the super-resolution algorithm makes it evident that the improved POCS algorithm can restrain noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by an improved POCS algorithm is shown to be an effective method, with important significance and broad application prospects, for example in CT medical image processing and in SRCT analysis of the microstructure evolution mechanism of ceramic sintering.
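The POCS iteration itself is easy to sketch: alternate projections onto convex constraint sets until their intersection is reached. The toy 1D version below (our illustration, not the paper's improved algorithm) alternates between a data-consistency set and a band-limited set to fill in missing samples:

```python
import numpy as np

def pocs_inpaint(y, known, keep, n_iter=500):
    """POCS sketch: alternate projections onto (1) the data-consistency set
    {x : x[known] = y[known]} and (2) the set of signals whose DFT is
    supported on the lowest `keep` frequencies. Both sets are convex, so
    the iterates converge toward a point in their intersection."""
    n = len(y)
    x = np.zeros(n)
    for _ in range(n_iter):
        x[known] = y[known]                    # project onto consistency set
        X = np.fft.fft(x)
        X[keep:n - keep + 1] = 0               # project onto band-limited set
        x = np.fft.ifft(X).real
    return x

n = 128
t = np.arange(n)
truth = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
rng = np.random.default_rng(2)
known = rng.random(n) < 0.5                    # only half the samples observed
x_hat = pocs_inpaint(truth, known, keep=8)     # recovers the band-limited truth
```

In the super-resolution setting the constraint sets instead encode consistency with the observed low-resolution frames through the estimated point spread function, plus prior-knowledge sets such as amplitude bounds.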
Lutzweiler, Christian; Razansky, Daniel
2013-01-01
This paper comprehensively reviews the emerging topic of optoacoustic imaging from the image reconstruction and quantification perspective. Optoacoustic imaging combines highly attractive features, including rich contrast and high versatility in sensing diverse biological targets, excellent spatial resolution not compromised by light scattering, and a relatively low cost of implementation. Yet living objects present a complex target for optoacoustic imaging due to the presence of a highly heterogeneous tissue background in the form of strong spatial variations of scattering and absorption. Extracting quantified information on the actual distribution of tissue chromophores and other biomarkers therefore constitutes a challenging problem. Image quantification is further compromised by some frequently used approximate inversion formulae. In this review, the currently available optoacoustic image reconstruction and quantification approaches are assessed, including back-projection and model-based inversion algorithms, sparse signal representation, wavelet-based approaches, methods for reduction of acoustic artifacts, as well as multi-spectral methods for visualization of tissue biomarkers. The applicability of the different methodologies is further analyzed in the context of real-life performance in small-animal and clinical in-vivo imaging scenarios. PMID:23736854
Segmentation of colon tissue sample images using multiple graphics accelerators.
Szénási, Sándor
2014-08-01
Nowadays, medical images are increasingly processed using digital imagery and custom software solutions. The distributed algorithm presented in this paper is used to detect special tissue parts, the nuclei, on haematoxylin and eosin stained colon tissue sample images. The main aim of this work is the development of a new data-parallel region growing algorithm that can be implemented even in an environment using multiple video accelerators. This new method has three levels of parallelism: (a) the parallel region growing itself, (b) starting multiple region-growing processes on the device, and (c) using more than one accelerator. We use the split-and-merge technique based on our already existing data-parallel cell nuclei segmentation algorithm, extended with a fast, backtracking-based, non-overlapping cell filter method. This extension does not cause significant degradation of accuracy; the results are practically the same as those of the original sequential region growing method. However, as expected, using more devices usually means that less time is needed to process the tissue image; in the case of a configuration with one central processing unit and two graphics cards, the average speed-up is about 4-6×. The implemented algorithm has the additional advantage of efficiently processing very large images with high memory requirements. PMID:24893331
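The sequential region growing that such a data-parallel method builds on can be sketched as follows (a generic 4-connected version; the fixed intensity-tolerance rule is our simplification, not the paper's nuclei-specific criterion):

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol):
    """Seeded region growing: starting from `seed`, accept 4-connected
    neighbors whose intensity is within `tol` of the seed value."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    q = deque([seed])
    ref = img[seed]
    while q:
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc] \
                    and abs(img[nr, nc] - ref) <= tol:
                grown[nr, nc] = True
                q.append((nr, nc))
    return grown

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                  # one bright "nucleus"
mask = region_grow(img, seed=(10, 10), tol=0.5)
assert mask.sum() == 64                # exactly the 8x8 bright block
```

The paper's contribution is to run many such growths concurrently on the GPU and merge the results, rather than processing one frontier pixel at a time as above.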
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxin; Wang, Huaning; He, Fei; Chen, Bo
An extreme ultraviolet (EUV) camera was mounted on top of the instrumental module of the Chinese Chang'E-3 (CE-3) lunar lander, which landed on the lunar surface on December 14, 2013. The EUV camera was powered on on December 21, 2013, and after tests of the camera the global plasmaspheric 30.4 nm emission was imaged from the Moon for the first time on December 25, 2013. The plasmasphere region, the plasmapause, and the Earth's shadow can be clearly seen in the images. During this period, geomagnetic activity was extremely quiet, so the plasmasphere was diffuse and no plume structure was observed. By applying the reconstruction algorithm of He et al. [2011] to the images, the equatorial plasmapause positions and plasmasphere density distributions were reconstructed. The reconstructed plasmapause positions are consistent with in situ observations by the THEMIS and DMSP satellites. In the future, more plasmaspheric images under different solar wind, interplanetary magnetic field and geomagnetic conditions will be acquired, and the plasmaspheric dynamics and their role in inner-magnetospheric coupling will then be investigated.
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard
2000-04-01
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte-Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diameter lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.
Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform
Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos
2013-01-01
Purpose: To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods: Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and to compare it to that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio (SNR), and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results: It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion: An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028
Does thorax EIT image analysis depend on the image reconstruction method?
NASA Astrophysics Data System (ADS)
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether the analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from the routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC); and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters computed on images generated with GREITT are comparable with those from filtered back-projection and GREITC.
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-01-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and the current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step, which can be insufficient for capturing complex anatomical motion and introduces detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model in the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and the reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration
Preoperative digital mammography imaging in conservative mastectomy and immediate reconstruction
Angrigiani, Claudio; Hammond, Dennis; Nava, Maurizio; Gonzalez, Eduardo; Rostagno, Roman; Gercovich, Gustavo
2016-01-01
Background Digital mammography clearly distinguishes gland tissue density from the overlying non-glandular breast tissue coverage, which corresponds to the tissue between the skin and the Cooper's ligaments surrounding the gland (i.e., dermis and subcutaneous fat). Preoperative digital imaging can determine the thickness of this breast tissue coverage, thus facilitating planning of the most adequate surgical techniques and reconstructive procedures for each case. Methods This retrospective study analyzed 352 digital mammograms from 176 patients with different breast volumes who underwent preoperative conservative mastectomies. The breast tissue coverage thickness and its relationship with breast volume were evaluated. Results The breast tissue coverage thickness ranged from 0.233 to 4.423 cm, with a mean value of 1.952 cm. A comparison of tissue coverage and breast volume revealed no direct relationship between these factors. Conclusions Preoperative planning should not depend only on breast volume. Flap evaluations based on preoperative imaging measurements might be helpful when planning a conservative mastectomy. Accordingly, we propose a breast tissue coverage classification (BTCC). PMID:26855903
Bayer patterned high dynamic range image reconstruction using adaptive weighting function
NASA Astrophysics Data System (ADS)
Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi
2014-12-01
It is difficult to acquire a desired high dynamic range (HDR) image directly from a camera because of the limited dynamic range of most image sensors. Therefore, a post-process called HDR image reconstruction is generally used, which reconstructs an HDR image from a set of differently exposed images. However, conventional HDR image reconstruction methods suffer from noise and ghost artifacts. Input images taken with a short exposure time contain considerable noise in dark regions, which increases the noise in the corresponding dark regions of the reconstructed HDR image, and because the input images are acquired at different times, they contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method that reduces the impact of noise and prevents ghost artifacts. To reduce the influence of noise, the weighting function, which determines the contribution of a given input image to the reconstructed HDR image, is designed to adapt to the exposure time and to local motion. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences in luminance and chrominance values between input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency of hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer-patterned HDR images while being robust against ghost artifacts and noise.
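The core of any weighted HDR reconstruction is a per-pixel weighted average of exposure-normalized frames; with a linear (e.g., Bayer-raw) response, each frame divided by its exposure time estimates scene radiance. A minimal sketch using a generic hat-shaped weight as a stand-in for the paper's exposure- and motion-adaptive weighting function (the weighting scheme and thresholds here are illustrative assumptions):

```python
import numpy as np

def hat_weight(img, lo=0.05, hi=0.95):
    """Down-weight under- and over-exposed pixels (values assumed in [0, 1])."""
    return np.clip(np.minimum(img - lo, hi - img) / (0.5 * (hi - lo)), 0.0, 1.0)

def merge_hdr(images, exposure_times, weights):
    """Weighted HDR radiance estimate from differently exposed frames.

    Each frame is divided by its exposure time (valid for a linear camera
    response) and combined with per-pixel weights.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t, w in zip(images, exposure_times, weights):
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)

# Two synthetic exposures (1x and 2x) of the same radiance 0.2:
L = np.full((4, 4), 0.2)
imgs = [L * 1.0, L * 2.0]
ws = [hat_weight(i) for i in imgs]
out = merge_hdr(imgs, [1.0, 2.0], ws)
```

The adaptive scheme in the abstract goes further by making the weights depend on exposure time and local motion, and by excluding ghosting regions based on luminance and chrominance differences between frames; structurally, those effects enter through the same per-pixel weight term.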
Exact Reconstruction for Near-Field Three-Dimensional Planar Millimeter-Wave Holographic Imaging
NASA Astrophysics Data System (ADS)
Qiao, Lingbo; Wang, Yingxin; Zhao, Ziran; Chen, Zhiqiang
2015-12-01
In this paper, an exact reconstruction formula is presented for near-field three-dimensional (3D) planar millimeter-wave (MMW) holographic imaging. The proposed formula is derived from scalar diffraction theory, with the round-trip imaging process treated as equivalent to a unidirectional optical field propagation. By compensating for the propagation loss of the source in the near-field imaging configuration, the inconsistency in the range domain of the reconstructed 3D images is avoided. The proposed reconstruction formula also gives a phase correction for the reconstructed complex-valued reflectivity of the target, so that the range coordinate can be exactly determined. Simulations and laboratory imaging experiments demonstrate the effectiveness of the proposed reconstruction formula.
Microwave Imaging for Breast Cancer Detection: Advances in Three-Dimensional Image Reconstruction
Golnabi, Amir H.; Meaney, Paul M.; Epstein, Neil R.; Paulsen, Keith D.
2013-01-01
Microwave imaging is based on differences in the electrical properties (permittivity and conductivity) of materials. Microwave imaging for biomedical applications is particularly interesting, mainly because the range of dielectric properties of different tissues can provide important functional information about their health. Under the assumption that a 3D scattering problem can be reasonably represented by a simplified 2D model, one can take advantage of the simplicity and lower computational cost of 2D models to characterize the 3D phenomenon. Nonetheless, by eliminating excessive model simplifications, 3D microwave imaging potentially provides more valuable information than 2D techniques, and as a result more accurate dielectric property maps may be obtained. In this paper, we present some advances we have made in three-dimensional image reconstruction and show results from a 3D breast phantom experiment using our clinical microwave imaging system at Dartmouth-Hitchcock Medical Center (DHMC), NH. PMID:22255641
Model-based super-resolution reconstruction techniques for underwater imaging
NASA Astrophysics Data System (ADS)
Chen, Yuzhang; Yang, Bofei; Xia, Min; Li, Wei; Yang, Kecheng; Zhang, Xiaohui
2012-01-01
The visibility of underwater imaging has been of long-standing interest to investigators working in many civilian and military areas, such as oceanographic environments. Image restoration techniques can help to enhance image quality; however, the resolution is still limited. Image super-resolution reconstruction (SRR) techniques are promising approaches for improving resolution beyond the limit of the hardware; furthermore, with prior knowledge of the imaging system, such as the point spread function and the diffraction limit, the performance of super-resolution reconstruction can be further enhanced, which can also extend the imaging range. To improve the resolution to the best possible level, an imaging model based on beam propagation is established and applied to image super-resolution reconstruction for an underwater range-gated pulsed laser imaging system. Experimental results show that the proposed approaches can effectively enhance the resolution and quality of underwater imaging.
STEREOSCOPIC POLAR PLUME RECONSTRUCTIONS FROM STEREO/SECCHI IMAGES
Feng, L.; Inhester, B.; Solanki, S. K.; Wilhelm, K.; Wiegelmann, T.; Podlipnik, B.; Howard, R. A.; Plunkett, S. P.; Wuelser, J. P.; Gan, W. Q.
2009-07-20
We present stereoscopic reconstructions of the location and inclination of polar plumes of two data sets based on the two simultaneously recorded images taken by the EUVI telescopes in the SECCHI instrument package onboard the Solar TErrestrial RElations Observatory spacecraft. The 10 plumes investigated show a superradial expansion in the coronal hole in three dimensions (3D) which is consistent with the two-dimensional results. Their deviations from the local meridian planes are rather small, with an average of 6.47°. By comparing the reconstructed plumes with a dipole field with its axis along the solar rotation axis, it is found that plumes are inclined more horizontally than the dipole field. The lower the latitude is, the larger is the deviation from the dipole field. The relationship between plumes and bright points has been investigated and they are not always associated. For the first data set, based on the 3D height of plumes and the electron density derived from the SUMER/SOHO Si VIII line pair, we found that electron densities along the plumes decrease with height above the solar surface. The temperature obtained from the density scale height is 1.6-1.8 times larger than the temperature obtained from Mg IX line ratios. We attribute this discrepancy to a deviation between the electron and the ion temperatures. Finally, we have found that the outflow speeds studied in the O VI line in the plumes, corrected by the angle between the line of sight and the plume orientation, are quite small, with a maximum of 10 km s⁻¹. It is unlikely that plumes are a dominant contributor to the fast solar wind.
NASA Astrophysics Data System (ADS)
Müller, K.; Maier, A. K.; Schwemmer, C.; Lauritsch, G.; De Buck, S.; Wielandts, J.-Y.; Hornegger, J.; Fahrig, R.
2014-06-01
The acquisition of data for cardiac imaging using a C-arm computed tomography system requires several seconds and multiple heartbeats. Hence, incorporation of motion correction in the reconstruction step may improve the resulting image quality. Cardiac motion can be estimated by deformable three-dimensional (3D)/3D registration performed on initial 3D images of different heart phases. This motion information can be used for a motion-compensated reconstruction allowing the use of all acquired data for image reconstruction. However, the result of the registration procedure, and hence the estimated deformations, are influenced by the quality of the initial 3D images. In this paper, the sensitivity of the 3D/3D registration step to the image quality of the initial images is studied. Different reconstruction algorithms are evaluated for a recently proposed cardiac C-arm CT acquisition protocol. The initial 3D images are all based on retrospective electrocardiogram (ECG)-gated data. ECG-gating of data from a single C-arm rotation provides only a few projections per heart phase for image reconstruction. This view sparsity leads to prominent streak artefacts and a poor signal-to-noise ratio. Five different initial image reconstructions are evaluated: (1) cone beam filtered-backprojection (FDK), (2) cone beam filtered-backprojection and an additional bilateral filter (FFDK), (3) removal of the shadow of dense objects (catheter, pacing electrode, etc.) before reconstruction with a cone beam filtered-backprojection (cathFDK), (4) removal of the shadow of dense objects before reconstruction with a cone beam filtered-backprojection and a bilateral filter (cathFFDK). The last method (5) is an iterative few-view reconstruction (FV), the prior image constrained compressed sensing combined with the improved total variation algorithm. All reconstructions are investigated with respect to the final motion-compensated reconstruction quality. The algorithms were tested on a mathematical
Reconstruction of Human Monte Carlo Geometry from Segmented Images
NASA Astrophysics Data System (ADS)
Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican
2014-06-01
Human computational phantoms have been used extensively for scientific experimental analysis and simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and comprising about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors, and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were taken into account during merging, which produces fewer cells per organ. At the same time, an index-based sorting algorithm was put forward to increase the merging speed. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images, and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organ. The constructed geometry, which largely retains the original shape of the organs, can easily be described in the input files of different Monte Carlo codes such as MCNP. Its universal applicability was testified and its high performance was experimentally verified.
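The voxel-mergence idea above, collapsing adjacent same-label voxels into larger cells so the Monte Carlo geometry needs far fewer entries, can be illustrated in one dimension by run-length merging of a row of organ labels (a simplified sketch of the principle, not the paper's surface-aware 3D cuboid algorithm):

```python
def merge_runs(row_labels):
    """Merge consecutive voxels with the same organ label into runs.

    Instead of one geometry cell per voxel, emit (label, start, length)
    runs; in 3D the analogous step merges runs into cuboids, shrinking
    the Monte Carlo geometry description.
    """
    runs = []
    start = 0
    for i in range(1, len(row_labels) + 1):
        if i == len(row_labels) or row_labels[i] != row_labels[start]:
            runs.append((row_labels[start], start, i - start))
            start = i
    return runs

print(merge_runs([1, 1, 1, 2, 2, 0, 1]))
# -> [(1, 0, 3), (2, 3, 2), (0, 5, 1), (1, 6, 1)]
```

With 28.8 billion voxels, this kind of exhaustive merging is what makes the phantom representable in an MCNP-style input file at all.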
SU-E-I-73: Clinical Evaluation of CT Image Reconstructed Using Interior Tomography
Zhang, J; Ge, G; Winkler, M; Cong, W; Wang, G
2014-06-01
Purpose: Radiation dose reduction has been a long-standing challenge in CT imaging of obese patients. Recent advances in interior tomography (reconstruction of an interior region of interest (ROI) from line integrals associated only with paths through the ROI) promise significant radiation dose reduction without compromising image quality. This study investigates the application of this technique in CT imaging by evaluating the quality of images reconstructed from patient data. Methods: Projection data were obtained directly from patients who had CT examinations on a dual-source CT scanner (DSCT). The two detectors of the DSCT acquired projection data simultaneously: one detector provided projection data for the full field of view (FOV, 50 cm), while the other detector provided truncated projection data for a FOV of 26 cm. Full-FOV CT images were reconstructed using both filtered back-projection (FBP) and an iterative algorithm, while an interior tomography algorithm was implemented to reconstruct ROI images. For comparison, FBP was also used to reconstruct ROI images. Reconstructed CT images were evaluated by radiologists and compared with images from the CT scanner. Results: The reconstructed ROI image was in excellent agreement with the truth inside the ROI, obtained from the CT scanner images, and the detailed features in the ROI were quantitatively accurate. The radiologists' evaluation showed that CT images reconstructed with interior tomography met diagnostic requirements. Radiation dose may be reduced by up to 50% using interior tomography, depending on patient size. Conclusion: This study shows that interior tomography can be readily employed in CT imaging for radiation dose reduction. It may be especially useful in imaging obese patients, whose subcutaneous tissue is less clinically relevant but may significantly increase radiation dose.
Yu, Shu-Hai; Dong, Lei; Liu, Xin-Yue; Lin, Xu-Dong; Meng, Hao-Ran; Zhong, Xing
2016-08-20
To confirm the effect of uplink atmospheric turbulence on Fourier telescopy (FT), we designed a far-field imaging system utilizing a T-type laser transmitting configuration built from commercially available hardware, except for a green imaging laser. The horizontal light transmission distance for both uplink and downlink was ∼300 m. For both the transmitted and received beams, the height above the ground was below 1 m. The imaging laser's pointing accuracy was ∼9.3 μrad. A novel image reconstruction approach was proposed, yielding significantly improved quality and Strehl ratio of the reconstructed images. From the reconstruction results, we observed that the tip/tilt aberration is tolerated by the FT system even for Changchun's atmospheric coherence length parameter (r0) below 3 cm. The resolution of the reconstructed images was ∼0.615 μrad. PMID:27556991
Image reconstruction from phased-array data based on multichannel blind deconvolution.
She, Huajun; Chen, Rong-Rong; Liang, Dong; Chang, Yuchou; Ying, Leslie
2015-11-01
In this paper we consider image reconstruction from fully sampled multichannel phased-array MRI data without knowledge of the coil sensitivities. To overcome the non-uniformity of the conventional sum-of-squares reconstruction, a new framework based on multichannel blind deconvolution (MBD) is developed for joint estimation of the image function and the sensitivity functions in the image domain. The proposed approach addresses the non-uniqueness of the MBD problem by exploiting the smoothness of both functions in the image domain through regularization. Results from simulation, phantom and in vivo experiments demonstrate that the reconstructions from the proposed algorithm are more uniform than those from existing methods. PMID:26119418
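The sum-of-squares baseline that the MBD approach improves on is simple to state: each pixel of the combined magnitude image is the root of the summed squared coil magnitudes. Because this implicitly weights the object by the unknown coil sensitivities, spatial sensitivity variation appears as shading. A minimal sketch (the sensitivity profiles are hypothetical):

```python
import numpy as np

def sos_combine(coil_images):
    """Root-sum-of-squares combination of phased-array coil images.

    Combined magnitude = sqrt(sum_c |image_c|^2). With spatially varying
    coil sensitivities, a uniform object yields a non-uniform result,
    which is the shading artifact MBD-based joint estimation addresses.
    """
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# A perfectly flat object seen by two coils with different (hypothetical)
# sensitivity profiles along one spatial axis:
x = np.linspace(0.0, 1.0, 5)
sens = np.stack([1.0 - 0.5 * x, 0.5 + 0.5 * x])
combined = sos_combine(sens * 1.0)  # object intensity = 1 everywhere
```

Here the output varies across the field of view even though the object is constant, illustrating why jointly estimating the image and sensitivity functions (with smoothness regularization) can recover a more uniform image.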
NASA Astrophysics Data System (ADS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-05-01
In magnetic resonance imaging (MRI), signal sampling along a radial k-space trajectory is preferred in certain applications due to distinct advantages such as robustness to motion, and radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of images reconstructed with these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation-maximization (EM) method, in which the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared with the conjugate-gradient-based reconstruction method. PMID:23588215
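For context, the classical EM algorithm that the paper remodels has, in its emission-tomography form, a simple multiplicative update with the monotonic-convergence property mentioned above. The sketch below shows that real, nonnegative form on a toy system; it is emphatically not the paper's complex-valued MRI remodeling, only the starting point (system sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonnegative linear system standing in for projection data:
A = rng.random((40, 10))
x_true = rng.random(10) + 0.1
b = A @ x_true

def ml_em(A, b, iters=2000):
    """Classical ML-EM multiplicative update: x <- x * A^T(b / Ax) / A^T 1.

    Each iteration keeps x nonnegative and monotonically decreases the
    Kullback-Leibler divergence between b and Ax.
    """
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])
    for _ in range(iters):
        x *= (A.T @ (b / (A @ x))) / norm
    return x

x_rec = ml_em(A, b)
```

The key obstacle the paper addresses is that this update assumes nonnegative real quantities, whereas MR images are complex-valued; hence the remodeling of EM for MRI.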
WE-G-18A-05: Cone-Beam CT Reconstruction with Deformed Prior Image
Zhang, H; Huang, J; Ma, J; Chen, W; Ouyang, L; Wang, J
2014-06-15
Purpose: A prior image can be incorporated into the image reconstruction process to improve the quality of on-treatment cone-beam CT (CBCT) reconstructed from sparse-view or low-dose projections. However, the deformation between the prior image and the on-treatment CBCT is not considered in current prior-image-based reconstructions (e.g., prior image constrained compressed sensing (PICCS)). The purpose of this work is to develop a deformed-prior-image-based reconstruction (DPIR) strategy to address the mismatch between the prior image and the target image. Methods: The deformed prior image is obtained by a projection-based registration approach. Specifically, the deformation vector field (DVF) used to deform the prior image is estimated by matching the forward projection of the prior image to the measured on-treatment projections. The deformed prior image is then used as the prior image in the standard PICCS algorithm. Simulation studies on the XCAT phantom were conducted to evaluate the performance of the projection-based registration procedure and the proposed DPIR strategy. Results: The deformed prior image matches the geometry of the on-treatment CBCT more closely than the original prior image. Using the deformed prior image, the quality of the image reconstructed by DPIR from few-view projection data is greatly improved compared with the standard PICCS algorithm: the relative image reconstruction error is reduced from 17.57% with the original PICCS to 11.13% with the proposed DPIR. Conclusion: The proposed DPIR approach can solve the mismatch between the prior image and the target image, overcoming a limitation of the original PICCS algorithm for CBCT reconstruction from sparse-view or low-dose projections.
Kang, Daehun; Sung, Yul-Wan; Kang, Chang-Ki
2015-01-01
This study evaluated the proposed consecutive multishot echo planar imaging (cmsEPI) combined with a parallel imaging technique in terms of signal-to-noise ratio (SNR) and acceleration for functional imaging. We developed a cmsEPI sequence using consecutively acquired multishot EPI segments and variable flip angles to minimize the delay between segments and to maximize the SNR, respectively. We also combined cmsEPI with the generalized autocalibrating partially parallel acquisitions (GRAPPA) method. Temporal SNRs were measured at different acceleration factors and numbers of segments to evaluate functional sensitivity. We also examined the geometric distortions that inherently occur with EPI sequences. At practical acceleration factors of R = 2 or R = 3, the proposed technique improved the temporal SNR by up to 18% in a phantom test and by 8.2% on average in an in vivo experiment, compared with cmsEPI without parallel imaging. The data collection time decreased in inverse proportion to the acceleration factor as well. The improved temporal SNR resulted in better statistical power when evaluated on the functional response of the brain. In this study, we demonstrated that combining cmsEPI with the parallel imaging technique can provide improved functional sensitivity for functional imaging studies, compensating for the lower SNR of cmsEPI. PMID:26413518
A nonlinear fuzzy assisted image reconstruction algorithm for electrical capacitance tomography.
Deabes, W A; Abdelrahman, M A
2010-01-01
A nonlinear method based on a fuzzy inference system (FIS) to improve the images obtained from electrical capacitance tomography (ECT) is proposed. Estimation of the molten metal characteristics in the lost foam casting (LFC) process is a novel application in the area of process tomography. The convergence rate of iterative image reconstruction techniques depends on the accuracy of the first image. The possibility of the existence of metal in the first image is computed by the proposed fuzzy system. This first image is passed to an iterative image reconstruction technique to obtain more precise images and to speed up the convergence rate. The proposed technique is able to detect the position of the metal on the periphery of the imaging area using just eight capacitive sensors. The final results demonstrate the advantage of using the FIS over the iterative back-projection image reconstruction technique. PMID:19900672
Texture-preserving Bayesian image reconstruction for low-dose CT
NASA Astrophysics Data System (ADS)
Zhang, Hao; Han, Hao; Hu, Yifan; Liu, Yan; Ma, Jianhua; Li, Lihong; Moore, William; Liang, Zhengrong
2016-03-01
Markov random field (MRF) models have been widely used in Bayesian image reconstruction to reconstruct piecewise-smooth images in the presence of noise, such as in low-dose X-ray computed tomography (LdCT). While an MRF can preserve edge sharpness via an edge-preserving potential function, its regional smoothing may sacrifice tissue image textures, which have been recognized as useful imaging biomarkers; it thus compromises clinical tasks such as differentiating malignant from benign lesions, e.g., lung nodules or colon polyps. This study aims to shift the edge-preserving regional noise-smoothing paradigm to a texture-preserving framework for LdCT image reconstruction while retaining the advantage of the MRF neighborhood system for edge preservation. Specifically, we adapted the MRF model to incorporate the image textures of lung, bone, fat, muscle, etc., from a previous full-dose CT scan as a priori knowledge for texture-preserving Bayesian reconstruction of current LdCT images. To show the feasibility of the proposed reconstruction framework, experiments using clinical patient scans (with lung nodule or colon polyp) were conducted. The experimental outcomes showed a noticeable gain from the a priori knowledge for LdCT image reconstruction with the well-known Haralick texture measures. Thus, it is conjectured that texture-preserving LdCT reconstruction has advantages over the edge-preserving regional smoothing paradigm for texture-specific clinical applications.
Sparsity-regularized image reconstruction of decomposed K-edge data in spectral CT
NASA Astrophysics Data System (ADS)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Schirra, Carsten O.
2014-05-01
The development of spectral computed tomography (CT) using binned photon-counting detectors has garnered great interest in recent years and has enabled selective imaging of K-edge materials. A practical challenge in CT image reconstruction of K-edge materials is the mitigation of image artifacts that arise from reduced-view and/or noisy decomposed sinogram data. In this note, we describe and investigate sparsity-regularized penalized weighted least squares-based image reconstruction algorithms for reconstructing K-edge images from few-view decomposed K-edge sinogram data. To exploit the inherent sparseness of typical K-edge images, we investigate use of a total variation (TV) penalty and a weighted sum of a TV penalty and an ℓ1-norm with a wavelet sparsifying transform. Computer-simulation and experimental phantom studies are conducted to quantitatively demonstrate the effectiveness of the proposed reconstruction algorithms.
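A penalized weighted least-squares objective with a TV penalty, as investigated above, has the generic form 0.5·(Ax−b)ᵀW(Ax−b) + β·TV(x). A minimal 1-D gradient-descent sketch with a smoothed TV term, applied here to denoising (A = identity); the operator, weights, and parameters are illustrative stand-ins, and the paper's algorithms and 2-D penalties (including the wavelet ℓ1 term) are more sophisticated:

```python
import numpy as np

def pwls_tv(A, b, w, beta, iters=2000, eps=1e-4):
    """Minimize 0.5*(Ax-b)^T W (Ax-b) + beta*sum_i sqrt((x_{i+1}-x_i)^2 + eps)
    by plain gradient descent (smoothed 1-D total variation penalty)."""
    n = A.shape[1]
    x = np.zeros(n)
    # Step size from a Lipschitz bound on the objective's gradient.
    L = np.linalg.norm(A, 2) ** 2 * w.max() + 4 * beta / np.sqrt(eps)
    for _ in range(iters):
        grad = A.T @ (w * (A @ x - b))          # weighted data-fit gradient
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)            # gradient of smoothed TV
        tv_grad = np.zeros(n)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        x -= (grad + beta * tv_grad) / L
    return x

# Recover a piecewise-constant signal from noisy measurements:
rng = np.random.default_rng(2)
x_true = np.concatenate([np.zeros(10), np.ones(10)])
b = x_true + 0.05 * rng.standard_normal(20)
x_rec = pwls_tv(np.eye(20), b, np.ones(20), beta=0.05)
```

The TV term favors exactly the kind of piecewise-constant structure typical of decomposed K-edge images, which is why it suppresses few-view streaks without blurring material boundaries; in practice, faster solvers (e.g., proximal or ADMM-type methods) replace plain gradient descent.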
NASA Astrophysics Data System (ADS)
Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.
2016-03-01
The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
NASA Astrophysics Data System (ADS)
Edmonds, C. S.; Gratus, J.; Hock, K. M.; Machida, S.; Muratori, B. D.; Torromé, R. G.; Wolski, A.
2014-05-01
In high chromaticity circular accelerators, rapid decoherence of the betatron motion of a particle beam can make the measurement of lattice and bunch values, such as Courant-Snyder parameters and betatron amplitude, difficult. A method for reconstructing the momentum distribution of a beam from beam position measurements is presented. Further analysis of the same beam position monitor data allows estimates to be made of the Courant-Snyder parameters and the amplitude of coherent betatron oscillation of the beam. The methods are tested through application to data taken on the linear nonscaling fixed field alternating gradient accelerator, EMMA.
Reconstruction of sectional images in frequency-domain based photoacoustic imaging.
Zhu, Banghe; Sevick-Muraca, Eva M
2011-11-01
Photoacoustic (PA) imaging is based upon the generation of an ultrasound pulse arising from subsurface tissue absorption due to pulsed laser excitation, and the measurement of its surface time-of-arrival. Expensive and bulky pulsed lasers with high peak fluence pose practical limitations for applications of PA imaging in medicine and biology. These limitations may be overcome with frequency-domain PA measurements, which employ modulated rather than pulsed light to generate the acoustic wave. In this contribution, we model the single-modulation-frequency PA pressures on the measurement plane through the diffraction approximation and then employ a convolution approach to reconstruct the sectional image slices. The results demonstrate that the proposed method, with appropriate data post-processing, is capable of recovering sectional images while suppressing the defocused noise arising from the other sections. PMID:22109207
Regularization design and control of change admission in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Stayman, J. Webster
2014-03-01
Nearly all reconstruction methods are controlled through various parameter selections. Traditionally, such parameters are used to specify a particular noise and resolution trade-off in the reconstructed image volumes. The introduction of reconstruction methods that incorporate prior image information has demonstrated dramatic improvements in dose utilization and image quality, but has complicated the selection of reconstruction parameters, including those associated with balancing information used from prior images with that of the measurement data. While a noise-resolution trade-off still exists, other potentially detrimental effects are possible with poor prior image parameter values, including the possible introduction of false features and the failure to incorporate sufficient prior information to gain any improvements. Traditional parameter selection methods such as heuristics based on similar imaging scenarios are subject to error and suboptimal solutions, while exhaustive searches can involve a large number of time-consuming iterative reconstructions. We propose a novel approach that prospectively determines optimal prior image regularization strength to accurately admit specific anatomical changes without performing full iterative reconstructions. This approach leverages analytical approximations to the implicitly defined prior image-based reconstruction solution and predictive metrics used to estimate imaging performance. The proposed method is investigated in phantom experiments and the shift-variance and data-dependence of optimal prior strength is explored. Optimal regularization based on the predictive approach is shown to agree well with traditional exhaustive reconstruction searches, while yielding substantial reductions in computation time. This suggests great potential of the proposed methodology in allowing for prospective patient-, data-, and change-specific customization of prior-image penalty strength to ensure accurate reconstruction of specific anatomical changes.
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A.; Mcolash, Scott M.
2006-08-01
Tomographic images reconstructed from cone beam projection data with a slice thickness larger than the nominal detector row width (namely thick images) are of practical importance in clinical CT imaging, such as neuro- and trauma applications, as well as applications for treatment planning in image-guided radiation therapy. To balance image quality and computational efficiency, a cone beam filtered backprojection (CB-FBP) algorithm that reconstructs a thick image by tracking adaptively up-sampled cone beam projections of virtual reconstruction planes is proposed in this paper. Theoretically, a thick image is a weighted summation of a number of images with slice thickness corresponding to the nominal detector row width (namely thin images), and each thin image corresponds to a virtual reconstruction plane. To maximize computational efficiency, the weighted summation has to be carried out in the projection domain. However, it has been experimentally found that, to obtain a thick image with reconstruction accuracy comparable to that of a thin image, the CB-FBP reconstruction algorithm has to be applied by tracking adaptively up-sampled cone beam projection data, which is the novelty of the proposed algorithm. The tracking process is carried out by making use of the cone beam projection data corresponding to the involved virtual reconstruction planes only, while the adaptive up-sampling process is implemented by interpolation along the z-direction at an adequate up-sampling rate. By using a helical body phantom, the performance of the proposed cone beam reconstruction algorithm, particularly its capability of suppressing artifacts, is experimentally evaluated and verified.
Sampling conditions for gradient-magnitude sparsity based image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-03-01
Image reconstruction from sparse-view data in 2D fan-beam CT is investigated by constrained, total-variation minimization. This optimization problem exploits possible sparsity in the gradient magnitude image (GMI). The investigation is performed in simulation under ideal, noiseless data conditions in order to reveal a possible link between GMI sparsity and the necessary number of projection views for reconstructing an accurate image. Results are shown for two, quite different phantoms of similar GMI sparsity.
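The gradient magnitude image and its sparsity, the quantity linked above to the necessary number of projection views, can be computed directly. A small numpy sketch (the forward differences with replicated boundaries and the threshold are assumptions for illustration):

```python
import numpy as np

def gradient_magnitude_image(x):
    """Isotropic gradient magnitude via forward differences (replicated edge)."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    return np.sqrt(dx * dx + dy * dy)

def gmi_sparsity(x, tol=1e-12):
    """Number of non-negligible entries in the gradient magnitude image."""
    return int(np.count_nonzero(gradient_magnitude_image(x) > tol))
```

For a piecewise-constant phantom the GMI is nonzero only on region boundaries, so its sparsity is far smaller than the pixel count, which is what makes sparse-view TV reconstruction plausible.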
Reconstruction of images from Gabor zone plate gamma-ray holography
NASA Astrophysics Data System (ADS)
Unwin, Clare E.; Rew, G. A. A.; Perks, J. R.; Beynon, T. D.; Scott, Malcolm C.
1999-09-01
Zone plate holography is a way of obtaining 3D images from a single exposure. Unlike conventional holography, coherent radiation sources are not required. Gamma-ray zone plate holography can be used to image gamma rays emitted by radiopharmaceuticals used in nuclear medicine. This work concerns the computer-based reconstruction of gamma-ray holograms. Reconstruction algorithms including correlation and Wiener filtering are described. The images obtained using the different methods are compared.
Hardware acceleration of image recognition through a visual cortex model
NASA Astrophysics Data System (ADS)
Rice, Kenneth L.; Taha, Tarek M.; Vutsinas, Christopher N.
2008-09-01
Recent findings in neuroscience have led to the development of several new models describing the processes in the neocortex. These models excel at cognitive applications such as image analysis and movement control. This paper presents a hardware architecture to speed up image content recognition through a recently proposed model of the visual cortex. The system is based on a set of parallel computation nodes implemented in an FPGA. The design was optimized for hardware by reducing the data storage requirements, and removing the need for multiplies and divides. The reconfigurable logic hardware implementation running at 121 MHz provided a speedup of 148 times over a 2 GHz AMD Opteron processor. The results indicate the feasibility of specialized hardware to accelerate larger biological scale implementations of the model.
Applying a PC accelerator board for medical imaging.
Gray, J; Grenzow, F; Siedband, M
1990-01-01
An AT-compatible computer was used to expand X-ray images that had been compressed and stored on optical data cards. Initially, execution time for expansion of a single X-ray image was 25 min. The requirements were for an expansion time of under 10 s and costs of under $1000 for computing hardware. This meant a computational speed increase of over 150 times was needed. Tests showed that incorporating an 80287 coprocessor would only give a speed increase of five times. The DSP32-PC-160 floating-point accelerator board was selected as a cost-effective solution to the need for more computing power. This board provided adequate processor speed, onboard memory, and data bus width; floating-point math precision; and a high-level language compiler for code development. PMID:18238350
In vivo microelectrode track reconstruction using magnetic resonance imaging
Fung, S.H.; Burstein, D.; Born, R.T.
2010-01-01
To obtain more precise anatomical information about cortical sites of microelectrode recording and microstimulation experiments in alert animals, we have developed a non-invasive, magnetic resonance imaging (MRI) technique for reconstructing microelectrode tracks. We made microelectrode penetrations in the brains of anesthetized rats and marked sites along them by depositing metal, presumably iron, with anodic monophasic or biphasic current from the tip of a stainless steel microelectrode. The metal deposits were clearly visible in the living animal as approximately 200 μm wide hypointense punctate marks using gradient echo sequences in a 4.7T MRI scanner. We confirmed the MRI findings by comparing them directly to the postmortem histology in which the iron in the deposits could be rendered visible with a Prussian blue reaction. MRI-visible marks could be created using currents as low as 1 μA (anodic) for 5 s, and they remained stable in the brains of living rats for up to nine months. We were able to make marks using either direct current or biphasic current pulses. Biphasic pulses caused less tissue damage and were similar to those used by many laboratories for functional microstimulation studies in the brains of alert monkeys. PMID:9667395
NASA Astrophysics Data System (ADS)
Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.
2016-02-01
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with a full width at half maximum (FWHM) accuracy better than 0.12 mm relative to the respective electron sources incident on the target. The estimated jaw positions agreed to within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed to within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect. PMID:26758232
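The MLEM update underlying such source-reconstruction approaches has a generic multiplicative form. A minimal sketch for a linear Poisson model y ~ Poisson(Ax); the tiny system matrix is purely illustrative, and the paper's ray-tracing forward model is not reproduced here:

```python
import numpy as np

def mlem(A, y, iters=100):
    """ML-EM: x <- x / (A^T 1) * A^T (y / (A x)), for nonnegative A and y."""
    x = np.ones(A.shape[1])          # flat initial estimate
    sens = A.sum(axis=0)             # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x                 # forward project current estimate
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x *= (A.T @ ratio) / sens    # multiplicative correction
    return x
```

With consistent, noise-free data the iterates converge to the exact nonnegative solution, and positivity is preserved automatically by the multiplicative update.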
3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine
NASA Astrophysics Data System (ADS)
Hamamoto, Kazuhiko; Sato, Motoyoshi
3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small 3D imaging system is needed in clinical veterinary medicine, for example, for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized at lower cost than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.
NASA Astrophysics Data System (ADS)
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-08-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
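The standard Patlak analysis mentioned above reduces to a linear fit of Ct/Cp against the time-normalized integral of the input function, whose slope is the influx rate Ki. A minimal post-reconstruction sketch with synthetic, noise-free curves (all values and the exponential input function are illustrative):

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Patlak graphical analysis: Ct/Cp = Ki * (integral of Cp)/Cp + V."""
    # cumulative trapezoidal integral of the plasma input function Cp
    icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = icp / cp                  # "Patlak time"
    y = ct / cp                   # normalized tissue activity
    ki, v = np.polyfit(x, y, 1)   # slope = influx rate Ki, intercept = V
    return float(ki), float(v)
```

For an irreversible-uptake model the Patlak plot is exactly linear at late times, so the fit recovers the simulated Ki and V.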
Xu, Qiaofeng; Sidky, Emil Y.; Pan, Xiaochuan; Stampanoni, Marco; Modregger, Peter; Anastasio, Mark A.
2012-01-01
Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen. PMID:22565698
Sidky, Emil Y; Anastasio, Mark A; Pan, Xiaochuan
2010-05-10
Propagation-based X-ray phase-contrast tomography (PCT) seeks to reconstruct information regarding the complex-valued refractive index distribution of an object. In many applications, a boundary-enhanced image is sought that reveals the locations of discontinuities in the real-valued component of the refractive index distribution. We investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT that exploit the fact that a boundary-enhanced PCT image, or its gradient, is often sparse. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the ℓ1-norm or TV-norm of the image, subject to data consistency constraints. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data. PMID:20588896
Nielsen, Tim; Brendel, Bernhard; Ziegler, Ronny; Beek, Michiel van; Uhlemann, Falk; Bontus, Claas; Koehler, Thomas
2009-04-01
Diffuse optical tomography (DOT) is a potential new imaging modality to detect or monitor breast lesions. Recently, Philips developed a new DOT system capable of transmission and fluorescence imaging, in which the investigated breast hangs freely in the measurement cup containing scattering fluid. We present a fast and robust image reconstruction algorithm that is used for the transmission measurements. The algorithm is based on the Rytov approximation. We show that this algorithm can be used over a wide range of tissue optical properties if the reconstruction is adapted to each patient. We use estimates of the breast shape and average tissue optical properties to initialize the reconstruction, which improves the image quality significantly. We demonstrate the capability of the measurement system and reconstruction to image breast lesions with clinical examples.
Kang, Yan; Yao, Yin-Ping; Kang, Zhi-Hua; Ma, Lin; Zhang, Tong-Yi
2015-06-01
We present different signal reconstruction techniques for the implementation of compressive ghost imaging (CGI). The different techniques are validated on data collected from a ghost imaging experimental system with pseudothermal light. Experimental results show that the technique based on total variation minimization gives high-quality reconstruction of the imaged object with less time consumption. The different performances among these reconstruction techniques and their parameter settings are also analyzed. The conclusions thus offer valuable information to promote the implementation of CGI in real applications. PMID:26367039
Mapping iterative medical imaging algorithm on cell accelerator.
Xu, Meilian; Thulasiraman, Parimala
2011-01-01
Algebraic reconstruction techniques require about half the number of projections as Fourier backprojection methods, which makes these methods safer in terms of required radiation dose. Algebraic reconstruction technique (ART) and its variant OS-SART (ordered subset simultaneous ART) are techniques that provide faster convergence with comparatively good image quality. However, the prohibitively long processing time of these techniques prevents their adoption in commercial CT machines. Parallel computing is one solution to this problem. With the advent of heterogeneous multicore architectures that exploit data parallel applications, medical imaging algorithms such as OS-SART can be studied to produce increased performance. In this paper, we map OS-SART on the cell broadband engine (Cell BE). We effectively use the architectural features of the Cell BE to provide an efficient mapping. The Cell BE consists of one PowerPC processor element (PPE) and eight SIMD coprocessors known as synergistic processor elements (SPEs). The limited memory storage on each of the SPEs makes the mapping challenging. Therefore, we present optimization techniques to efficiently map the algorithm on the Cell BE for improved performance over the CPU version. We compare the performance of our proposed algorithm on the Cell BE to that of the Sun Fire X4600, a shared memory machine. The Cell BE is five times faster than an AMD Opteron dual-core processor. The speedup of the algorithm on the Cell BE increases with the number of SPEs. We also experiment with various parameters, such as the number of subsets, number of processing elements, and number of DMA transfers between main memory and local memory, that impact the performance of the algorithm. PMID:21922018
Feature-based face representations and image reconstruction from behavioral and neural data.
Nestor, Adrian; Plaut, David C; Behrmann, Marlene
2016-01-12
The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach. PMID:26711997
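The reverse-correlation step described above can be caricatured as a response-weighted average of centered stimuli. A minimal numpy sketch (the linear synthetic recovery test is illustrative of the principle only, not the fMRI pipeline):

```python
import numpy as np

def reverse_correlation(stimuli, responses):
    """Classification-image style feature: response-weighted average of stimuli.

    stimuli:   (n_trials, n_dims) array of stimulus vectors
    responses: (n_trials,) array of scalar responses
    """
    r = responses - responses.mean()          # center the responses
    s = stimuli - stimuli.mean(axis=0)        # center the stimuli
    return (r[:, None] * s).mean(axis=0)      # sample covariance with response
```

When stimuli are white noise and responses depend linearly on a hidden template, the derived feature is proportional to that template up to sampling noise.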
Mariappan, Leo; He, Bin
2013-03-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is a technique proposed to reconstruct the conductivity distribution in biological tissue at ultrasound imaging resolution. A magnetic pulse is used to generate eddy currents in the object, which, in the presence of a static magnetic field, induce Lorentz-force-based acoustic waves in the medium. These time-resolved acoustic waves are collected with ultrasound transducers and, in the present work, are used to reconstruct the current source that gives rise to the MAT-MI acoustic signal using vector imaging point spread functions. The reconstructed source is then used to estimate the conductivity distribution of the object. Computer simulations and phantom experiments are performed to demonstrate conductivity reconstruction through vector source imaging in a circular scanning geometry with a limited-bandwidth, finite-size piston transducer. The results demonstrate that the MAT-MI approach is capable of conductivity reconstruction in a physical setting. PMID:23322761
Three-dimensional reconstruction of laser-imploded targets from simulated pinhole images.
Xu, Peng; Bai, Yonglin; Bai, Xiaohong; Liu, Baiyu; Ouyang, Xian; Wang, Bo; Yang, Wenzheng; Gou, Yongsheng; Zhu, Bingli; Qin, Junjun
2012-11-10
This paper proposes an integral method for computing a more accurate weighting matrix, which makes a substantial contribution to image reconstruction in inertial confinement fusion research. Standard algebraic reconstruction techniques with a positivity constraint are utilized. The final normalized mean-square error between the simulated and reconstructed projection images is 0.000365%, a nearly accurate result that underscores the importance of the weighting matrix. However, the corresponding error between the simulated and reconstructed phantoms is 2.35%, indicating that improved accuracy in the projection images does not necessarily translate into improved accuracy of the reconstructed phantom. The proposed method can reconstruct a simulated laser-imploded target consisting of 100×100×100 voxels. PMID:23142895
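The reconstruction step named above, algebraic reconstruction with a positivity constraint, can be sketched as a Kaczmarz-style iteration on a toy system. The matrix, relaxation factor, and iteration count below are illustrative assumptions, not the paper's actual weighting matrix:

```python
import numpy as np

def art_positive(A, b, n_iter=200, relax=1.0):
    """Kaczmarz-style algebraic reconstruction with a positivity constraint.

    A : (m, n) weighting matrix (row i holds the projection weights of ray i)
    b : (m,)   measured projection values
    """
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            # Relaxed projection of x onto the hyperplane of ray i.
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            np.maximum(x, 0.0, out=x)  # enforce positivity after each update
    return x

# Toy example: 2 pixels, 2 rays, noise-free data, so ART recovers the phantom.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
phantom = np.array([2.0, 3.0])
recon = art_positive(A, A @ phantom)
```

The accuracy of `A` directly bounds the accuracy of `recon`, which is why the quality of the weighting matrix matters so much in the paper's setting.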
Three-dimensional reconstruction of light microscopy image sections: present and future.
Wang, Yuzhen; Xu, Rui; Luo, Gaoxing; Wu, Jun
2015-03-01
Three-dimensional (3D) image reconstruction technologies can reveal previously hidden microstructures in human tissue. However, the lack of ideal, non-destructive cross-sectional imaging techniques is still a problem. Despite some drawbacks, histological sectioning remains one of the most powerful methods for accurate high-resolution representation of tissue structures. Computer technologies can produce 3D representations of interesting human tissue and organs that have been serial-sectioned, dyed or stained, imaged, and segmented for 3D visualization. 3D reconstruction also has great potential in the fields of tissue engineering and 3D printing. This article outlines the most common methods for 3D tissue section reconstruction. We describe the most important academic concepts in this field, and provide critical explanations and comparisons. We also note key steps in the reconstruction procedures, and highlight recent progress in the development of new reconstruction methods. PMID:24952302
Liu, Kai; Tian, Jie; Qin, Chenghu; Yang, Xin; Zhu, Shouping; Han, Dong; Wu, Ping
2011-04-01
Generally, the performance of tomographic bioluminescence imaging depends on several factors, such as the regularization parameters and the initial guess of the source distribution. In this paper, a global-inexact-Newton based reconstruction method, regularized by a dynamic sparse term, is presented for tomographic reconstruction. The proposed method provides higher imaging reliability and efficiency. In vivo mouse experimental reconstructions were performed to validate the proposed method. Comparisons of the proposed method with other methods demonstrate its applicability over an entire region. Moreover, its reliable performance over a wide range of regularization parameters and initial unknown values was also investigated. Based on the in vivo experiment and a mouse atlas, the tolerance for optical property mismatch was evaluated with optical overestimation and underestimation. Additionally, the reconstruction efficiency was investigated with different sizes of mouse grids. We show that this method is reliable for tomographic bioluminescence imaging in practical mouse experimental applications. PMID:21529085
Image Alignment for Tomography Reconstruction from Synchrotron X-Ray Microscopic Images
Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai
2014-01-01
A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the "projected feature points" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx. PMID:24416264
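The second alignment step above can be sketched numerically: writing each locus as x(θ) = a·sin θ + b·cos θ + c turns the sine fit into a linear least-squares problem, and the residuals give the per-angle shifts needed to compensate for holder vibration. The amplitude, offset, angle grid, and noise level below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def fit_sine_locus(theta, x):
    """Fit x(theta) = a*sin(theta) + b*cos(theta) + c by linear least squares."""
    M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(M, x, rcond=None)
    return coef, M @ coef  # fitted parameters and the ideal (vibration-free) locus

# A simulated feature locus: an ideal sine wave plus random holder vibration.
theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
vibration = np.random.default_rng(0).normal(0.0, 2.0, theta.size)
observed = 40.0 * np.sin(theta + 0.3) + 256.0 + vibration
coef, fitted = fit_sine_locus(theta, observed)
shifts = observed - fitted  # per-angle alignment corrections
```

Shifting each projection image by its entry in `shifts` restores the feature to the smooth sinusoidal trajectory expected for rigid rotation.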
Current profile reconstruction using X-ray imaging on the PEGASUS toroidal experiment
NASA Astrophysics Data System (ADS)
Tritz, Kevin Lee
Internal plasma profiles, specifically the current profile, are necessary to accurately characterize the plasma equilibrium and perform detailed stability analyses of magnetically confined toroidal plasmas. External magnetic measurements alone are not sufficient to properly constrain the current profile for an equilibrium reconstruction. This work confirms the insensitivity of the profiles to external magnetics and demonstrates the successful incorporation of tangential X-ray imaging into a modified equilibrium code for current profile reconstruction in highly shaped, low aspect-ratio plasmas. An equilibrium reconstruction code was developed that used two dimensional X-ray images to constrain a flexible spline parameterization of the plasma profiles. Image constraint modeling was performed with this code, demonstrating that the profiles were well constrained, with less than 10% deviation of the reconstructed central safety factor, if the image measurement noise was held below 2% for emissivity constraints, and below 1% for intensity constraints. Two tangential soft X-ray pinhole camera imaging systems, a transmissive and reflective phosphor design, were built and operated on the PEGASUS toroidal experiment. Intensity image contours from these systems were used to constrain equilibrium reconstructions of the plasma discharge. The shapes and values of the q profiles determined by these reconstructions correspond well with the presence of coherent MHD activity observed in the plasmas. A comparison of the X-ray intensity-constrained equilibria with the external-magnetics-only reconstructions showed good agreement between most gross plasma parameters, but large variation between the reconstructed profiles. A next generation X-ray imaging system was designed to provide higher sensitivity, a more compact form factor, and multiple time point capability. The increased sensitivity will allow the variance of the experimental reconstructed profiles to achieve the level
Effects of aberrations on image reconstruction of data from hybrid intensity interferometers
NASA Astrophysics Data System (ADS)
Murray-Krezan, Jeremy; Crabtree, Peter N.
2012-06-01
Intensity interferometry (II) holds tremendous potential for remote sensing of space objects. We investigate the properties of a hybrid intensity interferometer concept in which information from an II is fused with information from a traditional imaging telescope. Although an intensity interferometer is not itself an imager, its measurements can be used to reconstruct an image. In previous work we investigated the effects of poor SNR on this image-formation process. In this work, we go beyond the obviously deleterious effects of SNR to investigate reconstructed image quality as a function of the chosen support constraint, and the resultant image-quality issues. The benefits of fusing assumed perfect-yet-partial a priori information with traditional intensity interferometry measurements are explored and shown to result in increased sensitivity and improved reconstructed-image quality.
Regularized Fully 5D Reconstruction of Cardiac Gated Dynamic SPECT Images.
Niu, Xiaofeng; Yang, Yongyi; Jin, Mingwu; Wernick, Miles N; King, Michael A
2010-01-01
In our recent work, we proposed an image reconstruction procedure aimed to unify gated imaging and dynamic imaging in nuclear cardiac imaging. With this procedure the goal is to obtain an image sequence from a single acquisition which shows simultaneously both cardiac motion and tracer distribution change over the course of imaging. In this work, we further develop and demonstrate this procedure for fully 5D (3D space plus time plus gate) reconstruction in gated, dynamic cardiac SPECT imaging, where the challenge is even greater without the use of multiple fast camera rotations. For 5D reconstruction, we develop and compare two iterative algorithms: one is based on the modified block sequential regularized EM (BSREM-II) algorithm, and the other is based on the one-step late (OSL) algorithm. In our experiments, we simulated gated cardiac imaging with the NURBS-based cardiac-torso (NCAT) phantom and Tc99m-Teboroxime as the imaging agent, where acquisition with the equivalent of only three full camera rotations was used during the course of a 12-minute postinjection period. We conducted a thorough evaluation of the reconstruction results using a number of quantitative measures. Our results demonstrate that the 5D reconstruction procedure can yield gated dynamic images which show quantitative information for both perfusion defect detection and cardiac motion. PMID:24049191
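The one-step-late (OSL) algorithm named above keeps the multiplicative EM form by evaluating the prior's gradient at the current estimate. A minimal sketch with a simple circular 1-D quadratic smoothness prior follows; the system matrix, prior, and hyperparameters are illustrative assumptions, not the paper's 5D model:

```python
import numpy as np

def osl_em(A, y, beta=0.1, n_iter=20):
    """Green's one-step-late (OSL) MAP-EM update with a quadratic prior.

    A : (m, n) system matrix, y : (m,) measured counts. The prior gradient is
    evaluated at the current estimate (the "one step late" trick), which keeps
    the familiar multiplicative EM form.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image (backprojection of ones)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        # Gradient of a circular 1-D nearest-neighbour smoothness penalty.
        grad = 2.0 * x - np.roll(x, 1) - np.roll(x, -1)
        x = x * (A.T @ ratio) / np.maximum(sens + beta * grad, 1e-12)
    return x

# With beta = 0 this reduces to plain ML-EM; beta > 0 adds smoothing.
A = np.eye(4)
y = np.array([1.0, 2.0, 3.0, 4.0])
x_ml = osl_em(A, y, beta=0.0)
x_map = osl_em(A, y, beta=0.1)
```

In the 5D setting the same update runs over the full space-time-gate volume with a regularizer coupling neighbouring time frames and gates rather than this toy 1-D penalty.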
Novel reconstruction scheme for cardiac volume imaging with MSCT providing cone correction
NASA Astrophysics Data System (ADS)
Bruder, Herbert; Stierstorfer, Karl; Ohnesorge, Bernd; Schaller, Stefan; Flohr, Thomas
2002-05-01
We present a novel reconstruction scheme for ca