NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of the radiative transfer equation. The accurate analytical form of this solution provides an efficient formalism for fast computation of the forward model.
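The hybrid idea above, a 2D Fourier inversion over transverse coordinates combined with a small 1D matrix inversion per spatial frequency, can be illustrated with a toy shift-invariant model. This is a minimal sketch of the general structure, not the patented method: the depth kernels K, the grid sizes, and the diagonal loading that keeps the per-frequency systems well conditioned are all invented.

```python
import numpy as np

# Sketch: hybrid 2D Fourier + per-frequency 1D matrix inversion (illustrative).
# Assume a transversely shift-invariant forward operator, so a 2D FFT
# block-diagonalizes it into one small nz x nz system per spatial frequency.
nx, ny, nz = 32, 32, 8
rng = np.random.default_rng(0)

# Invented depth kernels K(qx, qy): one nz x nz matrix per transverse frequency;
# the added nz*I keeps every per-frequency system invertible.
K = rng.standard_normal((nx, ny, nz, nz)) + nz * np.eye(nz)

x_true = rng.random((nx, ny, nz))            # unknown volume
X = np.fft.fft2(x_true, axes=(0, 1))         # forward: 2D FFT over transverse axes
Y = np.einsum('xyij,xyj->xyi', K, X)         # then one small matvec per frequency

# Inverse model: per-frequency 1D matrix inversion, then inverse 2D FFT.
X_rec = np.stack([np.linalg.solve(K[i, j], Y[i, j])
                  for i in range(nx) for j in range(ny)]).reshape(nx, ny, nz)
x_rec = np.fft.ifft2(X_rec, axes=(0, 1)).real

print(np.allclose(x_rec, x_true))            # exact up to round-off
```

The speed advantage is that the 3D inversion decomposes into nx*ny independent nz x nz solves instead of one dense (nx*ny*nz)-sized system.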
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression for the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of images reconstructed with the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of images reconstructed with the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed with the FGI algorithm decreases slowly, while the PSNR of images reconstructed with the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
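A minimal sketch of the pseudo-inverse reconstruction step described above, assuming a preset measurement matrix built from the first rows of a DFT matrix; the image size and measurement count are invented. With fewer measurements than pixels the pseudo-inverse returns the minimum-norm solution, so a residual error remains, mirroring the PSNR behavior discussed in the abstract.

```python
import numpy as np

# Pseudo-inverse ghost imaging sketch with a DFT measurement matrix.
n = 16 * 16                                   # number of object pixels
m = 128                                       # number of sampling measurements
rng = np.random.default_rng(1)

F = np.fft.fft(np.eye(n))                     # full n x n DFT matrix
A = F[:m, :]                                  # preset (deterministic) rows
x = rng.random(n)                             # flattened object image

y = A @ x                                     # bucket measurements
x_hat = np.linalg.pinv(A) @ y                 # pseudo-inverse reconstruction
print(np.linalg.norm(x_hat.real - x) / np.linalg.norm(x))
```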
Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion
NASA Astrophysics Data System (ADS)
Rao, Jing; Ratassepp, Madis; Fan, Zheng
2017-07-01
Guided wave tomography is a promising tool to accurately determine the remaining wall thickness of corrosion damage, which is among the major concerns for many industries. The Full Waveform Inversion (FWI) algorithm is an attractive guided wave tomography method that uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures by using simulations as well as experiments. It was shown that this algorithm can obtain a resolution of around 0.7 wavelengths for defects with smooth depth variations from the acoustic modeling data, and about 1.5-2 wavelengths from the elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, compared to conventional algorithms based on the Born approximation.
NASA Astrophysics Data System (ADS)
David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias
2012-03-01
Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system, an inverse geometry fluoroscopy system with a 9,000-focal-spot x-ray source and a small photon-counting detector. 90 fluoroscopic projections or "superviews" spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
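The ML-TR update used above can be sketched as follows. This is a simplified simultaneous version, with the 15 ordered subsets omitted and a toy system matrix standing in for the scanner geometry; only the update formula is the point.

```python
import numpy as np

# Minimal ML-TR sketch for transmission tomography (Nuyts-style surrogate):
#   mu_j <- mu_j + sum_i l_ij (yhat_i - y_i) / sum_i l_ij (sum_k l_ik) yhat_i
rng = np.random.default_rng(2)
n_rays, n_pix = 200, 64
L = rng.random((n_rays, n_pix)) * 0.05        # intersection lengths l_ij (toy)
mu_true = rng.random(n_pix) * 0.2             # true attenuation map
b = 1e4 * np.ones(n_rays)                     # flat-field (blank scan) counts
y = rng.poisson(b * np.exp(-L @ mu_true))     # measured transmission counts

mu = np.zeros(n_pix)
ray_len = L.sum(axis=1)                       # sum_k l_ik, used in the curvature
for _ in range(50):
    y_hat = b * np.exp(-L @ mu)               # predicted counts
    grad = L.T @ (y_hat - y)                  # gradient of the log-likelihood
    curv = L.T @ (ray_len * y_hat)            # surrogate curvature
    mu = np.maximum(mu + grad / curv, 0.0)    # additive ML-TR step, nonnegative
print(np.linalg.norm(mu - mu_true) / np.linalg.norm(mu_true))
```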
NASA Astrophysics Data System (ADS)
Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun
2014-04-01
We present an algorithm for inverting controlled-source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods parameterize the medium into a large number of fixed-thickness layers and reconstruct only the conductivities (e.g. Occam's inversion), which does not enable the recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces among the inversion parameters can significantly improve the inverse results: the algorithm can not only reconstruct the sharp interfaces between layers, but also obtain conductivities close to the true values.
Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography
Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier
2015-01-01
This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of the optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated at each resolution and used to define the cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
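A minimal sketch of the coarse-to-fine idea, assuming a generic linear forward model and plain Landweber iterations in place of the paper's photoacoustic fluence solver; restriction is a block sum over columns and prolongation is linear interpolation.

```python
import numpy as np
from scipy.ndimage import zoom

def solve_level(A, y, x0, iters=100):
    """Plain Landweber iterations at one resolution."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for _ in range(iters):
        x -= step * A.T @ (A @ x - y)
    return x

rng = np.random.default_rng(3)
n_fine = 64
x_true = np.repeat(rng.random(n_fine // 8), 8)       # piecewise-constant target
A_fine = rng.standard_normal((96, n_fine))
y = A_fine @ x_true

# Coarse solve: merge every 4 columns, so 4 fine pixels share one unknown.
A_coarse = A_fine.reshape(96, n_fine // 4, 4).sum(axis=2)
x_coarse = solve_level(A_coarse, y, np.zeros(n_fine // 4))

# Prolongate the coarse estimate and warm-start the (more expensive) fine solve.
x_init = zoom(x_coarse, 4, order=1)
x_fine = solve_level(A_fine, y, x_init, iters=30)
print(np.linalg.norm(x_fine - x_true) / np.linalg.norm(x_true))
```

The fine-level solve starts close to the solution, which is where the reduction in iteration count comes from.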
Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.
NASA Astrophysics Data System (ADS)
Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.
2016-12-01
Image reconstruction problems arising in electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transformation (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the sub-surface image at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
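The sparsity-promoting inversion can be sketched with an iterative soft-thresholding loop over DCT coefficients. Note this is the plain single-step IST (ISTA), a simplified stand-in for the interior-point and two-step IST solvers evaluated in the paper, with toy matrices in place of the ERT sensitivity.

```python
import numpy as np
from scipy.fft import dct, idct

# ISTA with DCT-domain sparsity on a toy underdetermined linear problem.
rng = np.random.default_rng(4)
n, m = 128, 60
u_true = np.zeros(n); u_true[[3, 17, 40]] = [5.0, -3.0, 2.0]  # sparse DCT coeffs
x_true = idct(u_true, norm='ortho')                           # model parameters
A = rng.standard_normal((m, n))
y = A @ x_true

lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
u = np.zeros(n)
for _ in range(500):
    x = idct(u, norm='ortho')
    g = dct(A.T @ (A @ x - y), norm='ortho')      # gradient in the DCT domain
    u = u - step * g
    u = np.sign(u) * np.maximum(np.abs(u) - lam * step, 0.0)  # soft threshold
print(np.linalg.norm(idct(u, norm='ortho') - x_true) / np.linalg.norm(x_true))
```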
Acoustic Inversion in Optoacoustic Tomography: A Review
Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel
2013-01-01
Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
NASA Astrophysics Data System (ADS)
Wiskin, James; Klock, John; Iuanow, Elaine; Borup, Dave T.; Terry, Robin; Malik, Bilal H.; Lenox, Mark
2017-03-01
There has been a great deal of research into ultrasound tomography for breast imaging over the past 35 years. Few successful attempts have been made to reconstruct high-resolution images using transmission ultrasound. To this end, advances have been made in 2D and 3D algorithms that utilize either time of arrival or full wave data to reconstruct images with high spatial and contrast resolution suitable for clinical interpretation. The highest resolution and quantitative accuracy result from inverse scattering applied to full wave data in 3D. However, this has been prohibitively computationally expensive, meaning that full inverse scattering ultrasound tomography has not been considered clinically viable. Here we show the results of applying a nonlinear inverse scattering algorithm to 3D data in a clinically useful time frame. This method yields Quantitative Transmission (QT) ultrasound images with high spatial and contrast resolution. We reconstruct sound speeds for various 2D and 3D phantoms and verify these values with independent measurements. The data are fully 3D as is the reconstruction algorithm, with no 2D approximations. We show that 2D reconstruction algorithms can introduce artifacts into the QT breast image which are avoided by using a full 3D algorithm and data. We show high resolution gross and microscopic anatomic correlations comparing cadaveric breast QT images with MRI to establish imaging capability and accuracy. Finally, we show reconstructions of data from volunteers, as well as an objective visual grading analysis to confirm clinical imaging capability and accuracy.
Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms
2004-08-06
wavelet transforms, whereas the term "evolved" pertains only to the altered wavelet coefficients used during the inverse transform process. ... In other words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, x(t) = Σ_k Σ_n d_{k,n} ψ_{k,n}(t). ... reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus
Imaging of voids by means of a physical-optics-based shape-reconstruction algorithm.
Liseno, Angelo; Pierri, Rocco
2004-06-01
We analyze the performance of a shape-reconstruction algorithm for the retrieval of voids from the electromagnetic scattered field. The algorithm exploits the physical optics (PO) approximation to obtain a linear unknown-data relationship and performs inversions by means of the singular value decomposition approach. In the case of voids, in addition to a geometrical-optics reflection, the presence of the lateral-wave phenomenon must be considered. We analyze the effect of lateral waves on the reconstructions. For the purpose of shape reconstruction, the PO algorithm can be regarded as one that assumes the electric and magnetic fields on the illuminated side to be constant in amplitude and linear in phase with respect to frequency. We therefore analyze how much the lateral-wave phenomenon impairs this assumption, and we show inversions for both a single circular void and two circular voids, for different values of the background permittivity.
A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.
Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing
2007-01-01
Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising approaches to noninvasive molecular-based imaging. Many reconstruction approaches utilize iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and the reconstruction speed is remarkably increased.
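The offline/online split can be illustrated with the second-order (Schulz) iteration for the generalized inverse, which converges quadratically and matches the "second-order iterative format" mentioned above; the sensitivity matrix here is a random stand-in for the diffusion model.

```python
import numpy as np

# Offline: approximate the generalized inverse by the second-order iteration
#   X_{k+1} = X_k (2I - A X_k), starting from X_0 = A^T / ||A||_2^2.
# Online: reconstruction is then a single matrix-vector product.
rng = np.random.default_rng(5)
m, n = 80, 120
A = rng.standard_normal((m, n))               # toy sensitivity matrix

X = A.T / np.linalg.norm(A, 2) ** 2           # safe initialization
for _ in range(40):                           # offline preiteration
    X = X @ (2.0 * np.eye(m) - A @ X)

y = A @ rng.random(n)                         # simulated measurement
x_hat = X @ y                                 # online: one matvec
print(np.linalg.norm(A @ x_hat - y))          # ~0 for consistent data
```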
Reconstructing surface wave profiles from reflected acoustic pulses using multiple receivers.
Walstead, Sean P; Deane, Grant B
2014-08-01
Surface wave shapes are determined by analyzing underwater reflected acoustic signals collected at multiple receivers. The transmitted signals are of nominal frequency 300 kHz and are reflected off surface gravity waves that are paddle-generated in a wave tank. An inverse processing algorithm reconstructs 50 surface wave shapes over a length span of 2.10 m. The inverse scheme uses a broadband forward scattering model based on Kirchhoff's diffraction formula to determine wave shapes. The surface reconstruction algorithm is self-starting in that source and receiver geometry and initial estimates of wave shape are determined from the same acoustic signals used in the inverse processing. A high speed camera provides ground-truth measurements of the surface wave field for comparison with the acoustically derived surface waves. Within Fresnel zone regions the statistical confidence of the inversely optimized surface profile exceeds that of the camera profile. Reconstructed surfaces are accurate to a resolution of about a quarter-wavelength of the acoustic pulse only within Fresnel zones associated with each source and receiver pair. Multiple isolated Fresnel zones from multiple receivers extend the spatial extent of accurate surface reconstruction while overlapping Fresnel zones increase confidence in the optimized profiles there.
Zhou, C.; Liu, L.; Lane, J.W.
2001-01-01
A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to find the optimal smoothness criterion for applying Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.
Abrishami, V; Bilbao-Castro, J R; Vargas, J; Marabini, R; Carazo, J M; Sorzano, C O S
2015-10-01
We describe a fast and accurate method for the reconstruction of macromolecular complexes from a set of projections. Direct Fourier inversion (in which the Fourier Slice Theorem plays a central role) is one solution to this inverse problem. Unfortunately, in the single-particle field the set of projections provides a non-equidistantly sampled version of the macromolecule's Fourier transform, and therefore a direct Fourier inversion may not be an optimal solution. In this paper, we introduce a gridding-based direct Fourier method for three-dimensional reconstruction that uses a weighting technique to compute a uniformly sampled Fourier transform. Moreover, the contrast transfer function of the microscope, which is a limiting factor in pursuing a high-resolution reconstruction, is corrected by the algorithm. Parallelization of this algorithm, both across threads and across multiple CPUs, makes the process of three-dimensional reconstruction even faster. The experimental results show that our proposed gridding-based direct Fourier reconstruction is slightly more accurate than similar existing methods and presents lower computational complexity both in terms of time and memory, thereby allowing its use on larger volumes. The algorithm is fully implemented in the open-source Xmipp package and is downloadable from http://xmipp.cnb.csic.es. Copyright © 2015 Elsevier B.V. All rights reserved.
Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method
Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter
2017-01-01
An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques, the resulting algorithm is well suited for large-scale problems. Furthermore, the reconstruction of the magnetization state within a permanent magnet, as well as an optimal design application, is demonstrated. PMID:28098851
Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications
2005-04-01
Coefficient sets describing inverse transforms and matched forward/inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.
Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li
2004-02-01
An inverse algorithm with zeroth-order Tikhonov regularization has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave and that of the refracted shear wave to the total transmitted wave into bone when calculating the absorbed power field, and then to reconstruct the temperature distribution in muscle and bone regions from a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results also demonstrate an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
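A generic sketch of zeroth-order Tikhonov regularization via SVD filter factors, showing how the regularization parameter trades noise amplification against bias; the forward matrix, its conditioning, and the noise level are invented for illustration.

```python
import numpy as np

# Zeroth-order Tikhonov via the SVD:
#   x_lambda = sum_i  s_i / (s_i^2 + lambda) * (u_i^T b) * v_i
rng = np.random.default_rng(6)
A = rng.standard_normal((40, 40)) @ np.diag(0.7 ** np.arange(40))  # ill-conditioned
x_true = rng.random(40)
b = A @ x_true + 1e-3 * rng.standard_normal(40)   # noisy "sensor" data

U, s, Vt = np.linalg.svd(A, full_matrices=False)
for lam in [0.0, 1e-6, 1e-3, 1e-1]:
    f = s / (s ** 2 + lam)                        # Tikhonov filter factors
    x = Vt.T @ (f * (U.T @ b))
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"lambda={lam:g}  rel.err={err:.3f}")
```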
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters from serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an ill-posed inverse problem because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization is applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within a sensing region. So far, ECT has been primarily used to image non-conductive media only, since if the conductivity of the imaged object is high, the capacitance measuring circuit will be almost short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward models, especially the level set algorithm, which can find the boundary of the metal.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.
Borbély, Bence J; Szolgay, Péter
2017-01-17
Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive (and therefore offline) solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.
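A minimal sketch of the core step, computing a joint rotation from two segment orientations and extracting Euler-type joint angles with SciPy; the quaternion convention and the Euler sequence here are stand-ins, not the OpenSim model's actual coordinate definitions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Joint angles from segment orientations, as delivered by inertial sensors:
# the joint rotation is the child orientation expressed in the parent frame.
def joint_angles(q_parent, q_child, seq="ZXY"):
    r_parent = R.from_quat(q_parent)          # world -> parent
    r_child = R.from_quat(q_child)            # world -> child
    r_joint = r_parent.inv() * r_child        # child relative to parent
    return r_joint.as_euler(seq, degrees=True)

# Example: elbow-like joint, child rotated 30 deg about the parent x-axis.
q_upper = R.from_euler("Z", 10, degrees=True).as_quat()
q_fore = (R.from_euler("Z", 10, degrees=True)
          * R.from_euler("X", 30, degrees=True)).as_quat()
print(joint_angles(q_upper, q_fore))          # approx [0, 30, 0]
```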
Inverse Beta Decay Reconstruction in the Double Chooz Monte Carlo
NASA Astrophysics Data System (ADS)
Norrick, Anne
2010-02-01
The Double Chooz Experiment will search for neutrino oscillations using the "Inverse Beta-Decay" (IBD) interactions of electron antineutrinos from a nuclear reactor in Chooz, France. The experiment needs to isolate IBD events by detecting and reconstructing the positions and deposited energies of the outgoing positron and neutron. Methods for isolating this process will be described. In addition, results of simulation studies of two different reconstruction algorithms will be presented and their performances compared.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-10
We propose a numerical solution of the reconstruction problem for a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstruction results for vector fields.
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
Tomographic inversion of satellite photometry
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Hays, P. B.; Abreu, V. J.
1984-01-01
An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compressive sampling of image signals. In traditional imaging procedures using compressed sensing theory, not only can the storage space be reduced, but the demand on detector resolution can also be greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and better preserves edge information. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, verifying the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has clear advantages: at low measurement rates it can quickly and accurately recover the target image.
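A sketch of TV-regularized compressive reconstruction. For brevity it uses gradient descent on a smoothed TV term rather than the augmented-Lagrangian/alternating-direction solver described above, and all problem sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 32, 400                                 # image is n x n, m measurements
x_true = np.zeros((n, n)); x_true[8:20, 10:24] = 1.0   # piecewise-constant scene
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ x_true.ravel()

def tv_grad(x, eps=1e-3):
    """Gradient of the smoothed TV term, i.e. -div(w * grad x)."""
    dx = np.diff(x, axis=0, append=x[-1:, :])  # forward differences,
    dy = np.diff(x, axis=1, append=x[:, -1:])  # Neumann boundary
    w = 1.0 / np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = w * dx, w * dy
    g = -gx - gy
    g[1:, :] += gx[:-1, :]
    g[:, 1:] += gy[:, :-1]
    return g

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros((n, n))
for _ in range(300):
    resid = A @ x.ravel() - y
    x -= step * ((A.T @ resid).reshape(n, n) + lam * tv_grad(x))
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```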
Quantitative imaging technique using the layer-stripping algorithm
NASA Astrophysics Data System (ADS)
Beilina, L.
2017-07-01
We present the layer-stripping algorithm for the solution of the hyperbolic coefficient inverse problem (CIP). Our numerical examples show quantitative reconstruction of small tumor-like inclusions in two dimensions.
NASA Astrophysics Data System (ADS)
Volkov, D.
2017-12-01
We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without Wiener filtering and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
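The projected Landweber core of such a pipeline can be sketched as below; the entropy-aware threshold model is replaced by a fixed threshold and the quantization stage is omitted, so this is a simplified stand-in rather than QBCS-EPL itself.

```python
import numpy as np
import pywt

# Iterative projected Landweber with wavelet-domain soft thresholding.
rng = np.random.default_rng(8)
n, m = 32, 512
x_true = np.zeros((n, n)); x_true[10:22, 6:26] = 0.8
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ x_true.ravel()

def soft_wavelet(x, thr, wavelet="haar", level=2):
    """Soft-threshold the detail coefficients, keep the approximation."""
    coeffs = pywt.wavedec2(x, wavelet, level=level)
    out = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft")
                               for d in detail) for detail in coeffs[1:]]
    return pywt.waverec2(out, wavelet)

x = np.zeros((n, n))
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(200):
    x = x + step * (A.T @ (y - A @ x.ravel())).reshape(n, n)  # Landweber step
    x = soft_wavelet(x, thr=0.01)                             # denoise
    x = np.clip(x, 0.0, 1.0)                                  # projection
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```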
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired in a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifact. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
NASA Astrophysics Data System (ADS)
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior-point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
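The l1-regularized least-squares subproblem over DCT coefficients can be sketched with a generic convex solver standing in for the paper's primal-dual interior-point implementation; the Jacobian, data, and regularization weight are invented.

```python
import numpy as np
import cvxpy as cp
from scipy.fft import idct

# CS inversion step: l1-regularized least squares over DCT coefficients.
rng = np.random.default_rng(9)
m, n = 40, 100
J = rng.standard_normal((m, n))               # stand-in sensitivity (Jacobian)
Psi = idct(np.eye(n), norm="ortho", axis=0)   # DCT synthesis basis (columns)
u_true = np.zeros(n); u_true[[0, 3, 9]] = [2.0, -1.0, 0.5]
d = J @ (Psi @ u_true)                        # observed response

u = cp.Variable(n)
lam = 0.05
prob = cp.Problem(cp.Minimize(cp.sum_squares(J @ Psi @ u - d)
                              + lam * cp.norm(u, 1)))
prob.solve()
m_update = Psi @ u.value                      # sparse model update
print(np.linalg.norm(m_update - Psi @ u_true) / np.linalg.norm(Psi @ u_true))
```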
NASA Astrophysics Data System (ADS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-11-01
In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emissions, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT based BP neural network algorithm is developed to predict the temperature distribution on the refined grid in accordance with the temperature distribution estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving robustness and RA.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
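A toy sketch of the reconstruction-classification alternation: scikit-learn's Gaussian mixture fit plays the expectation-maximization role, and a variable mean/variance zeroth-order Tikhonov solve plays the reconstruction role. The linear forward model is invented; the real DOT problem is nonlinear.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
n, m = 100, 60
x_true = np.where(rng.random(n) < 0.3, 2.0, 0.5)   # two "tissue" classes
A = rng.standard_normal((m, n))                    # toy linear forward model
y = A @ x_true + 0.05 * rng.standard_normal(m)

x = np.linalg.lstsq(A, y, rcond=1e-2)[0]           # initial estimate
for _ in range(10):
    # Classification step: fit a two-class mixture to current pixel values.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    k = gmm.predict(x.reshape(-1, 1))
    mu = gmm.means_.ravel()[k]                     # per-pixel prior mean
    var = gmm.covariances_.ravel()[k]              # per-pixel prior variance
    W = np.diag(1.0 / var)
    # Reconstruction step: zeroth-order Tikhonov with variable mean/variance.
    x = np.linalg.solve(A.T @ A + W, A.T @ y + W @ mu)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```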
Self-prior strategy for organ reconstruction in fluorescence molecular tomography
Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen
2017-01-01
The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum with the assumption that the regions with higher fluorescence concentration have larger energy intensity, then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtains the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
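As a concrete illustration of the alternating scheme described above, the following minimal Python sketch (hypothetical function and variable names; scikit-image's radon/iradon are used, with filtered back-projection standing in for the algebraic reconstruction step, and only integer detector offsets considered) alternates between reconstructing with the current shift estimates and re-estimating each projection's offset against the re-projected image:

    import numpy as np
    from skimage.transform import radon, iradon

    def joint_align_reconstruct(sino, theta, n_outer=10, max_shift=4):
        # sino: (n_det, n_angles) sinogram with unknown integer detector offsets
        shifts = np.zeros(len(theta), dtype=int)
        recon = None
        for _ in range(n_outer):
            # 1) correct the data with the current shift estimates, then reconstruct
            corrected = np.stack([np.roll(sino[:, i], -shifts[i])
                                  for i in range(len(theta))], axis=1)
            recon = iradon(corrected, theta=theta)
            # 2) re-project and update each projection's offset by exhaustive search
            pred = radon(recon, theta=theta)
            for i in range(len(theta)):
                errs = [np.sum((np.roll(sino[:, i], -s) - pred[:, i]) ** 2)
                        for s in range(-max_shift, max_shift + 1)]
                shifts[i] = np.argmin(errs) - max_shift
        return recon, shifts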
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula of the BI-MART with the scaling parameter as a time-step of numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also the Runge-Kutta methods of lower-orders applied for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
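A minimal numerical sketch of this correspondence (assuming a dense system matrix A, strictly positive data b, and hypothetical variable names): the BI-MART update below is exactly a geometric (multiplicative) Euler step, with gamma playing the role of the time step of the continuous-time system.

    import numpy as np

    def bi_mart(A, b, blocks, gamma=0.5, n_iter=50):
        # blocks: list of row-index arrays; the iteration switches between them,
        # mirroring the switched vector field of the hybrid dynamical system
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for idx in blocks:
                ratio = b[idx] / np.maximum(A[idx] @ x, 1e-12)
                # geometric Euler step: the multiplicative update keeps x positive
                x *= np.exp(gamma * (A[idx].T @ np.log(ratio)))
        return x

For example, blocks = np.array_split(np.arange(A.shape[0]), 4) reproduces a four-block BI-MART; shrinking gamma corresponds to a finer time discretization of the differential equation.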
QR-decomposition based SENSE reconstruction using parallel architecture.
Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad
2018-04-01
Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has a great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies the inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single- and multicore CPU implementations using OpenMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to the multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images.
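The core linear-algebra step can be sketched in a few lines (a simplified stand-in for the paper's GPU implementation; in practice the solve is applied per group of aliased pixels with a small coils-by-unfolding encoding matrix):

    import numpy as np

    def sense_unfold_qr(E, y):
        # E: (n_coils, R) encoding matrix built from coil sensitivities
        # y: aliased pixel values across coils; least-squares unfolding via QR
        Q, R = np.linalg.qr(E)            # reduced QR of the rectangular matrix
        return np.linalg.solve(R, Q.conj().T @ y)

QR avoids forming E^H E explicitly, which improves numerical conditioning over the normal-equations route.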
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms
2004-08-01
inverse transform process. The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
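A projected-gradient stand-in (not the authors' KKT-derived updates) illustrates the non-linear model: the data y are Poisson with mean lam = b * exp(-K x) and x is kept non-negative by projection. Variable names are hypothetical.

    import numpy as np

    def reconstruct_extinction(K, y, b=1.0, step=1e-3, n_iter=5000):
        x = np.zeros(K.shape[1])
        for _ in range(n_iter):
            lam = b * np.exp(-(K @ x))
            grad = -K.T @ (lam - y)   # gradient of the Poisson neg. log-likelihood
            x = np.maximum(x - step * grad, 0.0)   # non-negativity projection
        return x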
Ma, Qingyu; He, Bin
2007-08-21
A theoretical study on the magnetoacoustic signal generation with magnetic induction and its applications to electrical conductivity reconstruction is conducted. An object with a concentric cylindrical geometry is located in a static magnetic field and a pulsed magnetic field. Driven by Lorentz force generated by the static magnetic field, the magnetically induced eddy current produces acoustic vibration and the propagated sound wave is received by a transducer around the object to reconstruct the corresponding electrical conductivity distribution of the object. A theory on the magnetoacoustic waveform generation for a circular symmetric model is provided as a forward problem. The explicit formulae and quantitative algorithm for the electrical conductivity reconstruction are then presented as an inverse problem. Computer simulations were conducted to test the proposed theory and assess the performance of the inverse algorithms for a multi-layer cylindrical model. The present simulation results confirm the validity of the proposed theory and suggest the feasibility of reconstructing electrical conductivity distribution based on the proposed theory on the magnetoacoustic signal generation with magnetic induction.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
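A toy sketch of the optimization structure (a small linear forward operator G stands in for the TD-RTE solver, and a quadratic smoothness penalty stands in for the GGMRF term; all names are illustrative) using SciPy's SLSQP implementation of SQP:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    G = rng.standard_normal((80, 20))            # stand-in forward model
    mu_true = np.abs(rng.standard_normal(20))
    signal = G @ mu_true + 0.01 * rng.standard_normal(80)

    def objective(mu, beta=1e-2):
        r = G @ mu - signal
        # in the paper the gradient of this objective comes from an adjoint solve
        return 0.5 * r @ r + beta * np.sum(np.diff(mu) ** 2)

    res = minimize(objective, x0=np.zeros(20), method='SLSQP',
                   bounds=[(0.0, None)] * 20)    # optical parameters >= 0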
Direct integration of the inverse Radon equation for X-ray computed tomography.
Libin, E E; Chakhlov, S V; Trinca, D
2016-11-22
A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional X-ray tomography is formulated. In this approach, the Fourier transformation is not used, which makes it possible to create practical computing algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Inverse problem for multispecies ferromagneticlike mean-field models in phase space with many states
NASA Astrophysics Data System (ADS)
Fedele, Micaela; Vernia, Cecilia
2017-10-01
In this paper we solve the inverse problem for the Curie-Weiss model and its multispecies version when multiple thermodynamic states are present, as in the low temperature phase where the phase space is clustered. The inverse problem consists of reconstructing the model parameters starting from configuration data generated according to the distribution of the model. We demonstrate that, without taking into account the presence of many states, the application of the inversion procedure produces very poor inference results. To overcome this problem, we use a clustering algorithm. When the system has two symmetric states of positive and negative magnetizations, the parameter reconstruction can also be obtained with smaller computational effort simply by flipping the sign of the magnetizations from positive to negative (or vice versa). The parameter reconstruction fails when the system undergoes a phase transition: in that case we give the correct inversion formulas for the Curie-Weiss model and we show that they can be used to measure how close the system gets to being critical.
A multistage selective weighting method for improved microwave breast tomography.
Shahzad, Atif; O'Halloran, Martin; Jones, Edward; Glavin, Martin
2016-12-01
Microwave tomography has shown potential to successfully reconstruct the dielectric properties of the human breast, thereby providing an alternative to other imaging modalities used in breast imaging applications. Considering the costly forward solution and complex iterative algorithms, computational complexity becomes a major bottleneck in practical applications of microwave tomography. In addition, the natural tendency of microwave inversion algorithms to reward high contrast breast tissue boundaries, such as the skin-adipose interface, usually leads to a very slow reconstruction of the internal tissue structure of the human breast. This paper presents a multistage selective weighting method to improve the reconstruction quality of breast dielectric properties and minimize the computational cost of microwave breast tomography. In the proposed two-stage approach, the skin layer is approximated using scaled microwave measurements in the first pass of the inversion algorithm; a numerical skin model is then constructed based on the estimated skin layer and the assumed dielectric properties of the skin tissue. In the second stage of the algorithm, the skin model is used as a priori information to reconstruct the internal tissue structure of the breast using a set of temporal scaling functions. The proposed method is evaluated on anatomically accurate MRI-derived breast phantoms and a comparison with the standard single-stage technique is presented.
Calculating tissue shear modulus and pressure by 2D log-elastographic methods
NASA Astrophysics Data System (ADS)
McLaughlin, Joyce R.; Zhang, Ning; Manduca, Armando
2010-08-01
Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown, and whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇·u, which is present in the plane strain model, cannot be measured; it is unreliably computed from measured data and can be shown to be an order-one quantity in units of kPa. So here we present a 2D log-elastographic inverse algorithm that (1) simultaneously reconstructs the shear modulus, μ, and p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ, (2) controls potential exponential growth in the numerical error, and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the log-elastographic algorithm in Lin et al (2009 Inverse Problems 25), which uses one displacement component, is derived assuming that the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D log-elastographic algorithm is tested on 2D synthetic data and 2D in vivo data from the Mayo Clinic. We also exhibit examples to show that the 2D log-elastographic algorithm improves the quality of the recovered images as compared to the log-elastographic and direct inversion algorithms.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
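The MBIR cost structure the abstract refers to can be written generically as x* = argmin (y - Ax)^T W (y - Ax)/2 + beta R(x); a minimal gradient-descent sketch (with a 1D quadratic roughness penalty standing in for the actual prior, and hypothetical names) is:

    import numpy as np

    def mbir(A, y, W, beta=0.1, step=1e-3, n_iter=500):
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = -A.T @ (W @ (y - A @ x))                           # data-fit term
            grad += beta * np.convolve(x, [-1, 2, -1], mode='same')   # prior term
            x -= step * grad
        return x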
Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores
NASA Astrophysics Data System (ADS)
Döring, Michael; Leuenberger, Markus C.
2018-06-01
Greenland past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N or δ40Ar or δ15Nexcess) extracted from ancient air in Greenland ice cores, using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data, that we use as true values (targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent from ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which also led to a robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard distribution function. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data are in an envelope between 3.0 to 6.3 permeg for δ15N and 0.23 to 0.51 K for temperature (2σ, respectively). In addition to Holocene temperature reconstructions, the fitting approach can also be used for glacial temperature reconstructions. This is shown by fitting the North Greenland Ice Core Project (NGRIP) δ15N data for two Dansgaard-Oeschger events using the presented approach, leading to results comparable to other studies.
Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H
2017-06-01
We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with N-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography (ECT) is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise in its solution. An anisotropic regional regularization algorithm for ECT is constructed using a novel approach called spectral transformation. The spectral transformation function is derived and applied to the weighted gradient magnitude of the Laplacian sensitivity as a regularization term. With the optimal regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.
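The regional idea can be illustrated with a linearized step in which each pixel receives its own penalty weight (a minimal sketch under that assumption; w would be derived from the spectrally transformed sensitivity, which is not reproduced here):

    import numpy as np

    def regional_tikhonov(J, c, w):
        # solves g* = argmin ||J g - c||^2 + ||diag(w) g||^2
        return np.linalg.solve(J.T @ J + np.diag(w ** 2), J.T @ c)

A uniform w recovers ordinary Tikhonov regularization; a spatially varying w is what makes the regularizer regional.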
Mathematics of Computed Tomography
NASA Astrophysics Data System (ADS)
Hawkins, William Grant
A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms and the effects of attenuation for emission computed tomography are presented. The algebraic iterative methods, their importance and limitations are reviewed. Analytic solutions of the 2D problem (the convolution and frequency-filtering methods based on linear shift-invariant theory, and the solution of the circular harmonic decomposition by integral transform theory) are reviewed. The relation between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions is demonstrated. The discussion and review are extended to the 3D problem: convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials is developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. A lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation in the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method can effectively perform quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
NASA Astrophysics Data System (ADS)
Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min
2018-04-01
The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several key issues, including the choice of weighting coefficients, the inversion range, and the optimal inversion method between two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.
Computed inverse resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
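Of the three deconvolution approaches listed in both abstracts, inverse filtering with a truncated filter is the simplest to sketch. The snippet below (assuming a normalized field map and B0 along z; names are illustrative) divides the field map by the unit dipole kernel in k-space, truncating the kernel near its zero cone:

    import numpy as np

    def tkd_susceptibility(fieldmap, voxel_size=(1.0, 1.0, 1.0), thresh=0.2):
        nz, ny, nx = fieldmap.shape
        kz = np.fft.fftfreq(nz, voxel_size[0])
        ky = np.fft.fftfreq(ny, voxel_size[1])
        kx = np.fft.fftfreq(nx, voxel_size[2])
        KZ, KY, KX = np.meshgrid(kz, ky, kx, indexing='ij')
        k2 = KZ ** 2 + KY ** 2 + KX ** 2
        k2[0, 0, 0] = np.inf                  # avoid division by zero at DC
        D = 1.0 / 3.0 - KZ ** 2 / k2          # unit dipole kernel, B0 along z
        Dt = np.where(np.abs(D) < thresh, thresh * np.sign(D), D)
        Dt[Dt == 0] = thresh                  # guard: sign(0) == 0
        return np.real(np.fft.ifftn(np.fft.fftn(fieldmap) / Dt))

The TV-iteration solver favored in the papers replaces this hard truncation with a regularized iterative inversion, trading speed for the noise reduction and edge preservation noted above.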
NASA Astrophysics Data System (ADS)
Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.
2016-12-01
With millions of receivers leading to terabyte-scale data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (of course partially) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and distribution of the computational load over the spectral bands. This work will allow us to implement and compare various 3D reconstruction approaches in a large-scale framework.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
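The sparse deconvolution model itself is min 0.5||y - H f||^2 + lam ||f||_1 over the force history f. The sketch below solves it with ISTA, a much simpler (and slower) solver than the paper's PDIPM, purely to make the model concrete; names are illustrative:

    import numpy as np

    def ista_deconvolution(H, y, lam=0.1, n_iter=500):
        L = np.linalg.norm(H, 2) ** 2        # Lipschitz constant of the gradient
        f = np.zeros(H.shape[1])
        for _ in range(n_iter):
            g = f - (H.T @ (H @ f - y)) / L  # gradient step on the data term
            f = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return f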
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
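The encoding idea can be sketched with linear per-source forward operators F_s standing in for the wave-equation solver (an obvious simplification; names are illustrative). Each iteration draws a random Rademacher encoding vector, forms one "supershot", and takes a stochastic gradient step, so the per-iteration cost is independent of the number of sources:

    import numpy as np

    def wise_sgd(F_list, d_list, n_model, step=1e-3, n_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        c = np.zeros(n_model)
        for _ in range(n_iter):
            w = rng.choice([-1.0, 1.0], size=len(F_list))      # encoding vector
            F_enc = sum(wi * Fi for wi, Fi in zip(w, F_list))  # encoded operator
            d_enc = sum(wi * di for wi, di in zip(w, d_list))  # encoded data
            c -= step * F_enc.T @ (F_enc @ c - d_enc)          # stochastic step
        return c

In expectation the encoded gradient equals the full multi-source gradient, which is what justifies the stochastic optimization formulation.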
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
method for base-station antenna radiation patterns. IEEE Antennas Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD...algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine...patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another
Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media
NASA Astrophysics Data System (ADS)
Shin, Jungkyun; Shin, Changsoo; Calandra, Henri
2016-06-01
Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide azimuth survey, the long-wavelength component of the P-wave and S-wave velocity models is reconstructed and the proposed modeling and inversion algorithm are verified. A cluster of 80 CPU cores is used for this study.
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image at twice the resolution requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
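Step (ii) of the algorithm amounts to scattered-data interpolation of all shifted LR pixels onto the HR grid. A minimal sketch with SciPy (linear interpolation as one of the comparable choices mentioned above; names are illustrative, and pixels outside the convex hull of the samples come back as NaN):

    import numpy as np
    from scipy.interpolate import griddata

    def project_to_sr_grid(lr_images, shifts, factor=2):
        # shifts: estimated (dy, dx) of each LR image relative to the reference
        h, w = lr_images[0].shape
        pts, vals = [], []
        for img, (dy, dx) in zip(lr_images, shifts):
            yy, xx = np.mgrid[0:h, 0:w]
            pts.append(np.column_stack([(yy + dy).ravel() * factor,
                                        (xx + dx).ravel() * factor]))
            vals.append(img.ravel())
        yy_hr, xx_hr = np.mgrid[0:h * factor, 0:w * factor]
        return griddata(np.vstack(pts), np.concatenate(vals),
                        (yy_hr, xx_hr), method='linear')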
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ2), and GCV (which does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
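For a denoising-type operator f, the SURE risk can be estimated without forming the Jacobian by probing its trace with a random vector (a standard Monte-Carlo trick consistent with this setting, shown here for the plain denoising case only; the Predicted/Projected variants additionally weight the terms by the forward operator):

    import numpy as np

    def mc_sure(f, y, sigma2, eps=1e-3, seed=0):
        # y: noisy data (any shape); f: reconstruction/denoising operator
        rng = np.random.default_rng(seed)
        b = rng.standard_normal(y.shape)
        div = np.sum(b * (f(y + eps * b) - f(y))) / eps   # tr(J) ~ E[b^T J b]
        n = y.size
        return np.sum((f(y) - y) ** 2) / n - sigma2 + 2 * sigma2 * div / n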
MO-DE-207A-06: ECG-Gated CT Reconstruction for a C-Arm Inverse Geometry X-Ray System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slagowski, JM; Dunkerley, DAP
2016-06-15
Purpose: To obtain ECG-gated CT images from truncated projection data acquired with a C-arm based inverse geometry fluoroscopy system, for the purpose of cardiac chamber mapping in interventional procedures. Methods: Scanning-beam digital x-ray (SBDX) is an inverse geometry fluoroscopy system with a scanned multisource x-ray tube and a photon-counting detector mounted to a C-arm. In the proposed method, SBDX short-scan rotational acquisition is performed followed by inverse geometry CT (IGCT) reconstruction and segmentation of contrast-enhanced objects. The prior image constrained compressed sensing (PICCS) framework was adapted for IGCT reconstruction to mitigate artifacts arising from data truncation and angular undersampling due to cardiac gating. The performance of the reconstruction algorithm was evaluated in numerical simulations of truncated and non-truncated thorax phantoms containing a dynamic ellipsoid to represent a moving cardiac chamber. The eccentricity of the ellipsoid was varied at frequencies from 1–1.5 Hz. Projection data were retrospectively sorted into 13 cardiac phases. Each phase was reconstructed using IGCT-PICCS, with a nongated gridded FBP (gFBP) prior image. Surface accuracy was determined using Dice similarity coefficient and a histogram of the point distances between the segmented surface and ground truth surface. Results: The gated IGCT-PICCS algorithm improved surface accuracy and reduced streaking and truncation artifacts when compared to nongated gFBP. For the non-truncated thorax with 1.25 Hz motion, 99% of segmented surface points were within 0.3 mm of the 15 mm diameter ground truth ellipse, versus 1.0 mm for gFBP. For the truncated thorax phantom with a 40 mm diameter ellipse, IGCT-PICCS surface accuracy measured 0.3 mm versus 7.8 mm for gFBP. Dice similarity coefficient was 0.99–1.00 (IGCT-PICCS) versus 0.63–0.75 (gFBP) for intensity-based segmentation thresholds ranging from 25–75% maximum contrast. Conclusions: The PICCS algorithm was successfully applied to reconstruct truncated IGCT projection data with angular undersampling resulting from simulated cardiac gating. Research supported by the National Heart, Lung, and Blood Institute of the NIH under award number R01HL084022. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations, which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional contrast based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into original-size blocks using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is both visually and quantitatively compared with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
MR fingerprinting reconstruction with Kalman filter.
Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping
2017-09-01
Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and the computational cost associated with dictionary construction, storage and matching. In this paper, we describe a reconstruction method based on the Kalman filter for MRF, which avoids the use of a dictionary to obtain continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady state free-precession (IR-bSSFP) MRF sequence was derived to predict signal evolution, and the acquired signal was entered to update the prediction. The algorithm can gradually estimate the accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable for MRF reconstruction, eliminating the need for a predefined dictionary and obtaining continuous MR parameter estimates, in contrast to the dictionary matching algorithm.
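The recursion underlying the method is the standard predict/update cycle; a generic linear sketch is given below (in the MRF setting the propagator F would come from a linearized Bloch simulation of the IR-bSSFP sequence, which is not reproduced here):

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the newly acquired sample z
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

Iterating this step over the acquired signal time points is what lets the parameter estimates converge gradually, with no dictionary involved.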
Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.
Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti
2006-02-01
Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted l1 prior and a total variation (TV) prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
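As a toy illustration of this kind of MAP estimation, the sketch below solves a 1-D analogue of the objective, min ||Ax - y||^2 + alpha*||x||_1 + beta*TV(x) subject to x >= 0, by projected subgradient descent. It is a deliberately simplified stand-in: the paper works with 3-D TV and a cluster-parallelized optimizer, and the step size and iteration count here are arbitrary.

```python
import numpy as np

def map_estimate(A, y, alpha, beta, n_iter=500, step=1e-3):
    """Projected subgradient sketch of a MAP estimate with weighted l1,
    total variation, and positivity priors (1-D toy version)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - y)                  # data-fit term
        grad += alpha * np.sign(x)                    # l1 prior subgradient
        s = np.sign(np.diff(x, append=x[-1]))
        grad += beta * (np.append(0.0, s[:-1]) - s)   # 1-D TV subgradient
        x = np.maximum(x - step * grad, 0.0)          # positivity projection
    return x
```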
NASA Astrophysics Data System (ADS)
Jiang, Y.; Xing, H. L.
2016-12-01
Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess a very high convergence speed and a good capacity to escape local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very few studies exist that address this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require the approximation of a Green's function. The method interacts directly with a CPU-parallelized finite difference forward modelling engine and updates the model parameters under GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: Micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation
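Since GSA may be unfamiliar, here is a compact numpy sketch of the algorithm for minimizing a black-box misfit such as a waveform-matching residual. All names, defaults, and the bound handling are illustrative; the paper's implementation couples the misfit to a parallelized finite difference forward solver.

```python
import numpy as np

def gsa_minimize(fitness, bounds, n_agents=30, n_iter=200, G0=100.0, alpha=20.0):
    """Gravitational search algorithm sketch: agents attract each other
    with 'gravity' proportional to fitness-derived masses, so the swarm
    drifts toward low-misfit regions of the search box."""
    lo, hi = map(np.asarray, bounds)
    rng = np.random.default_rng(0)
    X = rng.uniform(lo, hi, size=(n_agents, lo.size))  # e.g. strike, dip, rake, x, y, z
    V = np.zeros_like(X)
    for t in range(n_iter):
        f = np.array([fitness(x) for x in X])
        G = G0 * np.exp(-alpha * t / n_iter)              # decaying gravitational constant
        m = (f - f.max()) / (f.min() - f.max() + 1e-12)   # best agent gets mass 1
        M = m / (m.sum() + 1e-12)
        acc = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = (rng.random(n_agents) * G * M / dist) @ diff  # randomized attraction
        V = rng.random(X.shape) * V + acc                 # stochastic velocity update
        X = np.clip(X + V, lo, hi)                        # keep agents inside the bounds
    return X[np.argmin([fitness(x) for x in X])]
```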
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution; such is the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
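The core loop can be written down in a few lines. The sketch below is a generic gradient iteration with a filtering step inserted after each update; the median filter used here is only a placeholder, since the paper designs its filter around the specific structure of Φ.

```python
import numpy as np
from scipy.ndimage import median_filter

def filtered_gradient(Phi, y, n_iter=100, size=3):
    """Gradient descent on ||y - Phi x||^2 with a filtering step after
    each iteration, in the spirit of the filtered CSI reconstruction."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # safe step from the spectral norm
    x = Phi.T @ y                                # back-projected starting point
    for _ in range(n_iter):
        x = x + step * (Phi.T @ (y - Phi @ x))   # gradient step on the residual
        x = median_filter(x, size=size)          # filtering step (placeholder choice)
    return x
```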
Fluorescence molecular imaging based on the adjoint radiative transport equation
NASA Astrophysics Data System (ADS)
Asllanaj, Fatmir; Addoum, Ahmad; Rodolphe Roche, Jean
2018-07-01
A new reconstruction algorithm for fluorescence diffuse optical tomography of biological tissues is proposed. The radiative transport equation in the frequency domain is used to model light propagation. The adjoint method studied in this work provides an efficient way of solving the inverse problem. The methodology is applied to a 2D tissue-like phantom subjected to a collimated laser beam. Indocyanine Green is used as the fluorophore. Reconstructed images of the spatial fluorophore absorption distribution are assessed taking into account the residual fluorescence in the medium. We show that illuminating the tissue surface from a collimated centered direction near the inclusion gives a better reconstruction quality. Two closely positioned inclusions can be accurately localized. Additionally, their fluorophore absorption coefficients can be quantified. However, the algorithm fails to reconstruct smaller or deeper inclusions. This is due to light attenuation in the medium. Reconstructions with noisy data are also achieved with a reasonable accuracy.
Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong
2017-01-01
Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates how it improves the quality of CT images reconstructed from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
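As a point of reference for the filter itself, here is how an NLM denoising pass can be applied to a noisy 2-D slice with scikit-image; the random test image, the 1.15*sigma filtering strength, and the patch sizes are illustrative choices, and the LDCT-specific variants reviewed in the paper adapt the weighting to CT noise statistics rather than assuming pure Gaussian noise.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Toy stand-in for an FBP-reconstructed low-dose slice.
noisy = np.random.default_rng(0).normal(0.5, 0.1, (128, 128))

sigma = np.mean(estimate_sigma(noisy))     # estimate the noise level
denoised = denoise_nl_means(noisy, h=1.15 * sigma, fast_mode=True,
                            patch_size=5, patch_distance=6)
```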
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
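To make the two ingredients concrete, the sketch below combines an SVD filter (truncating small singular values) with single-parameter Tikhonov filter factors for a generic kernel K and data vector b. This is a simplification for illustration only; I2DUPEN uses locally adapted multi-parameter regularization on a tensor-product kernel.

```python
import numpy as np

def tikhonov_svd(K, b, lam, truncate=None):
    """Tikhonov solution of K x = b via the SVD, with an optional SVD
    filter that discards singular values below a relative threshold."""
    U, sv, Vt = np.linalg.svd(K, full_matrices=False)
    if truncate is not None:
        keep = sv > truncate * sv[0]           # SVD filtering step
        U, sv, Vt = U[:, keep], sv[keep], Vt[keep]
    # Tikhonov filter factors sv/(sv^2 + lam) replace the naive 1/sv inverse.
    coeffs = (U.T @ b) * sv / (sv**2 + lam)
    return Vt.T @ coeffs
```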
Inverse transport problems in quantitative PAT for molecular imaging
NASA Astrophysics Data System (ADS)
Ren, Kui; Zhang, Rongting; Zhong, Yimin
2015-12-01
Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstructions. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
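The building blocks named here, soft thresholding plus cheap linear solves, are easiest to see in a plain ISTA iteration for min 0.5*||Ax - y||^2 + lam*||x||_1, sketched below. The paper's variable-splitting solver is faster because its linear subproblem diagonalizes under the FFT, but the proximal step is the same.

```python
import numpy as np

def soft_threshold(x, t):
    """Closed-form proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=100):
    """Iterative shrinkage-thresholding sketch for l1-regularized
    least squares: gradient step on the data term, then shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```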
Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.
Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho
2017-10-01
This paper describes a fast forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers: on the same PC and for the same EM model of a numerical breast phantom, the CST solver takes ~45 min, while the FFS takes ~1 s of effective simulation time.
Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng
2016-12-21
In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm are a better match than those using the previous one; the correlation coefficient is 98.49% for the sphere scanning geometry and 94.96% for the cylinder scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in the sphere scanning geometry and 86.35% in the cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can markedly improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the position of the transducer and the scanning geometry on imaging. It may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
3D and 4D magnetic susceptibility tomography based on complex MR images
Chen, Zikuan; Calhoun, Vince D
2014-11-11
Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
Layer Stripping Solutions of Inverse Seismic Problems.
1985-03-21
problems--more so than has generally been recognized. The subject of this thesis is the theoretical development of the layer-stripping methodology, and... medium varies sharply at each interface, which would be expected to cause difficulties for the algorithm, since it was designed for a smoothly varying... methodology was applied in a novel way. The inverse problem considered in this chapter was that of reconstructing a layered medium from measurement of its
Systolic Algorithms for Imaging from Space
1989-07-31
on a keystone or trapezoidal grid [Arikan & Munson, 1987]. The image reconstruction algorithm then simply applies an inverse 2-D FFT to the stored... rithm composed of groups of point targets, and we determined the effects of windowing and incorporation of a Jacobian weighting factor [Arikan... the impulse response of the desired filter [Arikan & Munson, 1989]. The necessary filtering is then accomplished through the physical mechanism of the
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm is developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction. Much work has been performed aiming at the optimization of the cone-beam back-projection. As the cone-beam back-projection process becomes faster, data transportation accounts for a much larger proportion of the reconstruction time than before. This paper focuses on minimizing the total time of reconstruction with the S-BPF algorithm by hiding the data transportation among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and GPU, (2) an innovative strategy is applied to obtain the DBP image so as to hide the transport time effectively, and (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse of the finite Hilbert transform on the GPU. Our main contribution is a smart implementation of the S-BPF algorithm in which the GPU calculates continuously and data transportation adds no time cost: a 512³ volume is reconstructed in less than 0.7 second on a single Tesla-based K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlap behavior.
Choice of reconstructed tissue properties affects interpretation of lung EIT images.
Grychtol, Bartłomiej; Adler, Andy
2014-06-01
Electrical impedance tomography (EIT) estimates an image of change in electrical properties within a body from stimulations and measurements at surface electrodes. There is significant interest in EIT as a tool to monitor and guide ventilation therapy in mechanically ventilated patients. In lung EIT, the EIT inverse problem is commonly linearized and only changes in electrical properties are reconstructed. Early algorithms reconstructed changes in resistivity, while most recent work using the finite element method reconstructs conductivity. Recently, we demonstrated that EIT images of ventilation can be misleading if the electrical contrasts within the thorax are not taken into account during the image reconstruction process. In this paper, we explore the effect of the choice of the reconstructed electrical properties (resistivity or conductivity) on the resulting EIT images. We show in simulation and experimental data that EIT images reconstructed with the same algorithm but with different parametrizations lead to large and clinically significant differences in the resulting images, which persist even after attempts to eliminate the impact of the parameter choice by recovering volume changes from the EIT images. Since there is no consensus among the most popular reconstruction algorithms and devices regarding the parametrization, this finding has implications for potential clinical use of EIT. We propose a program of research to develop reconstruction techniques that account for both the relationship between air volume and electrical properties of the lung and artefacts introduced by the linearization.
Eo, Taejoon; Jun, Yohan; Kim, Taeseong; Jang, Jinseong; Lee, Ho-Joon; Hwang, Dosik
2018-04-06
To demonstrate accurate MR image reconstruction from undersampled k-space data using cross-domain convolutional neural networks (CNNs). METHODS: Cross-domain CNNs consist of 3 components: (1) a deep CNN operating on the k-space (KCNN), (2) a deep CNN operating on an image domain (ICNN), and (3) interleaved data consistency operations. These components are alternately applied, and each CNN is trained to minimize the loss between the reconstructed and corresponding fully sampled k-spaces. The final reconstructed image is obtained by forward-propagating the undersampled k-space data through the entire network. The performances of K-net (KCNN with inverse Fourier transform), I-net (ICNN with interleaved data consistency), and various combinations of the 2 different networks were tested. The test results indicated that K-net and I-net have different advantages/disadvantages in terms of tissue-structure restoration. Consequently, the combination of K-net and I-net is superior to single-domain CNNs. Three MR data sets, the T2 fluid-attenuated inversion recovery (T2 FLAIR) set from the Alzheimer's Disease Neuroimaging Initiative and 2 data sets acquired at our local institute (T2 FLAIR and T1 weighted), were used to evaluate the performance of 7 conventional reconstruction algorithms and the proposed cross-domain CNNs, which hereafter is referred to as KIKI-net. KIKI-net outperforms conventional algorithms with mean improvements of 2.29 dB in peak SNR and 0.031 in structure similarity. KIKI-net exhibits superior performance over state-of-the-art conventional algorithms in terms of restoring tissue structures and removing aliasing artifacts. The results demonstrate that KIKI-net is applicable up to a reduction factor of 3 to 4 based on variable-density Cartesian undersampling. © 2018 International Society for Magnetic Resonance in Medicine.
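The data consistency operation interleaved between the networks is simple to state: wherever k-space was actually sampled, overwrite the network's prediction with the measured values. A minimal numpy sketch, with a boolean sampling mask assumed for illustration:

```python
import numpy as np

def data_consistency(x_image, k_measured, mask):
    """Enforce consistency with acquired data: transform the network
    output to k-space, restore the measured samples where mask is True,
    and transform back to the image domain."""
    k_pred = np.fft.fft2(x_image)
    k_dc = np.where(mask, k_measured, k_pred)  # trust measured data where available
    return np.fft.ifft2(k_dc)
```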
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an ill-posed inverse problem described by the Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data. Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
Uniqueness and reconstruction in magnetic resonance-electrical impedance tomography (MR-EIT).
Ider, Y Ziya; Onart, Serkan; Lionheart, William R B
2003-05-01
Magnetic resonance-electrical impedance tomography (MR-EIT) was first proposed in 1992. Since then various reconstruction algorithms have been suggested and applied. These algorithms use peripheral voltage measurements and internal current density measurements in different combinations. In this study the problem of MR-EIT is treated as a hyperbolic system of first-order partial differential equations, and three numerical methods are proposed for its solution. This approach is not utilized in any of the algorithms proposed earlier. The numerical solution methods are integration along equipotential surfaces (method of characteristics), integration on a Cartesian grid, and inversion of a system matrix derived by a finite difference formulation. It is shown that if some uniqueness conditions are satisfied, then using at least two injected current patterns, resistivity can be reconstructed apart from a multiplicative constant. This constant can then be identified using a single voltage measurement. The methods proposed are direct, non-iterative, and valid and feasible for 3D reconstructions. They can also be used to easily obtain slice and field-of-view images from a 3D object. 2D simulations are made to illustrate the performance of the algorithms.
Ramírez-Nava, Gerardo J; Santos-Cuevas, Clara L; Chairez, Isaac; Aranda-Lara, Liliana
2017-12-01
The aim of this study was to characterize the in vivo volumetric distribution of three folate-based biosensors by different imaging modalities (X-ray, fluorescence, Cerenkov luminescence, and radioisotopic imaging) through the development of a tridimensional image reconstruction algorithm. The preclinical and multimodal Xtreme imaging system, with a Multimodal Animal Rotation System (MARS), was used to acquire bidimensional images, which were processed to obtain the tridimensional reconstruction. Images of mice at different times (biosensor distribution) were simultaneously obtained from the four imaging modalities. The filtered back projection and inverse Radon transformation were used as the main image-processing techniques. The algorithm developed in Matlab was able to calculate the volumetric profiles of 99mTc-Folate-Bombesin (radioisotopic image), 177Lu-Folate-Bombesin (Cerenkov image), and FolateRSense™ 680 (fluorescence image) in tumors and kidneys of mice, and no significant differences were detected in the volumetric quantifications among measurement techniques. The tridimensional image reconstruction algorithm can be easily extrapolated to different 2D acquisition-type images. This flexibility of the algorithm developed in this study is a remarkable advantage in comparison to similar reconstruction methods.
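Filtered back projection via the inverse Radon transform is available off the shelf; the scikit-image snippet below reconstructs a toy 2-D section from simulated rotational projections, standing in for the MARS-style acquisition described above (the phantom, angle count, and filter choice are illustrative).

```python
import numpy as np
from skimage.transform import radon, iradon

image = np.zeros((128, 128))
image[40:60, 50:80] = 1.0                             # toy 2-D section
angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # rotation angles (degrees)
sinogram = radon(image, theta=angles)                 # simulated projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back projection
```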
Infrared super-resolution imaging based on compressed sensing
NASA Astrophysics Data System (ADS)
Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei
2014-03-01
The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. Reconstruction assumes that the relative positions of the infrared objects in the low-resolution image sequences remain fixed, and image restoration amounts to the inverse operation of an ill-posed problem with no fixed rules. This limits the super-resolution reconstruction capability, the application domain of the algorithm, and the stability of the reconstruction. To this end, we propose a super-resolution reconstruction method based on compressed sensing in this paper. In the method, we select a Toeplitz matrix as the measurement matrix and realize it by the phase mask method. We investigated the complementary matching pursuit algorithm and selected it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we use an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. Image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have a wide application prospect.
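The sensing-and-recovery pipeline is easy to prototype. The sketch below uses rows of cyclic shifts as a stand-in for the Toeplitz/phase-mask measurement matrix and scikit-learn's orthogonal matching pursuit as a stand-in for the complementary matching pursuit recovery named in the abstract; all sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                         # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse scene

c = rng.normal(size=n)                       # generating row of the sensing matrix
A = np.array([np.roll(c, i) for i in range(m)])  # shift-structured (Toeplitz-like) rows
y = A @ x                                    # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
x_hat = omp.coef_                            # recovered sparse signal
```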
SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring
NASA Astrophysics Data System (ADS)
Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.
2015-12-01
We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative new features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
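The Richardson-Lucy iteration mentioned above has a compact multiplicative form for a linear forward model with Poisson data. The sketch below shows the bare iteration for a generic forward matrix A; the algorithm described above additionally applies a pseudo-diffusion regularization between iterations and operates on the SSULI/SSUSI limb-scan geometry rather than a generic matrix.

```python
import numpy as np

def richardson_lucy(y, A, n_iter=50, eps=1e-12):
    """Bare Richardson-Lucy iteration for Poisson data y ≈ A x:
    multiplicative updates that keep the estimate nonnegative."""
    x = np.full(A.shape[1], y.mean())    # flat positive starting volume
    ones = np.ones_like(y)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)        # measured over predicted counts
        x *= (A.T @ ratio) / (A.T @ ones + eps)
    return x
```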
Accurate reconstruction of the thermal conductivity depth profile in case hardened steel
NASA Astrophysics Data System (ADS)
Celorrio, Ricardo; Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Mandelis, Andreas
2010-04-01
The problem of retrieving a nonhomogeneous thermal conductivity profile from photothermal radiometry data is addressed from the perspective of a stabilized least square fitting algorithm. We have implemented an inversion method with several improvements: (a) a renormalization of the experimental data which removes not only the instrumental factor, but the constants affecting the amplitude and the phase as well, (b) the introduction of a frequency weighting factor in order to balance the contribution of high and low frequencies in the inversion algorithm, (c) the simultaneous fitting of amplitude and phase data, balanced according to their experimental noises, (d) a modified Tikhonov regularization procedure has been introduced to stabilize the inversion, and (e) the Morozov discrepancy principle has been used to stop the iterative process automatically, according to the experimental noise, to avoid "overfitting" of the experimental data. We have tested this improved method by fitting theoretical data generated from a known conductivity profile. Finally, we have applied our method to real data obtained in a hardened stainless steel plate. The reconstructed in-depth thermal conductivity profile exhibits low dispersion, even at the deepest locations, and is in good anticorrelation with the hardness indentation test.
Yi, Huangjian; Chen, Duofang; Li, Wei; Zhu, Shouping; Wang, Xiaorui; Liang, Jimin; Tian, Jie
2013-05-01
Fluorescence molecular tomography (FMT) is an important optical imaging technique. The major challenge for FMT reconstruction methods is the ill-posed and underdetermined nature of the inverse problem. In past years, various regularization methods have been employed for fluorescence target reconstruction. A comparative study between reconstruction algorithms based on the l1-norm and the l2-norm for two imaging models of FMT is presented. The first imaging model, adopted by most researchers, features a fluorescent target of small size that mimics small tissue with fluorescent substance, as in the early detection of a tumor. The second model is the reconstruction of the distribution of the fluorescent substance in organs, which is essential to drug pharmacokinetics. Apart from numerical experiments, in vivo experiments were conducted on a dual-modality FMT/micro-computed tomography imaging system. The experimental results indicated that l1-norm regularization is more suitable for reconstructing the small fluorescent target, while l2-norm regularization performs better for the reconstruction of the distribution of fluorescent substance.
Inverse imaging of the breast with a material classification technique.
Manry, C W; Broschat, S L
1998-03-01
In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.
Rybicki, F J; Hrovat, M I; Patz, S
2000-09-01
We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x,y) = g_y y cos(q_x x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution used a finite sum to approximate an infinite summation and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also lends proof of the system function for the PERL transform under specific conditions.
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
2007-05-01
An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.
Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E
2006-08-01
A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on 2× undersampled phantom and numerical-model data sets. When applied to the 2× undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (≤3.9%) relative to the reference images obtained from the fully sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 × 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.
Predicting ozone profile shape from satellite UV spectra
NASA Astrophysics Data System (ADS)
Xu, Jian; Loyola, Diego; Romahn, Fabian; Doicu, Adrian
2017-04-01
Identifying the ozone profile shape is a critical yet challenging task for the accurate reconstruction of vertical distributions of atmospheric ozone, which are relevant to climate change and air quality. Motivated by the need for an approach that reliably and efficiently estimates vertical information on ozone, and inspired by the success of machine learning techniques, this work proposes a new algorithm for deriving ozone profile shapes from ultraviolet (UV) absorption spectra recorded by satellite instruments, e.g. the GOME series and the future Sentinel missions. The proposed algorithm formulates this particular inverse problem in a classification framework rather than a conventional inversion one and places an emphasis on effectively characterizing the various profile shapes with machine learning techniques. Furthermore, a comparison is performed between the ozone profiles estimated from real GOME-2 data by our algorithm and by the classical retrieval algorithm (Optimal Estimation Method).
Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.
Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens
2005-05-01
Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
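The FOCUSS half of this family is a reweighted minimum-norm recursion, sketched below for a generic lead-field L and sensor vector y; the weighting rule and the small ridge term lam are illustrative, and the full SSLOFO/Shrinking schemes described in these abstracts add the sLORETA standardization and a shrinking source space on top of it.

```python
import numpy as np

def focuss(L, y, n_iter=10, lam=1e-6):
    """FOCUSS-style recursion: repeatedly solve a weighted minimum-norm
    problem, rebuilding the weights from the previous estimate so the
    solution concentrates onto a few active sources."""
    w = np.ones(L.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        G = L @ W @ L.T + lam * np.eye(L.shape[0])
        x = W @ L.T @ np.linalg.solve(G, y)   # weighted minimum-norm estimate
        w = np.abs(x)                         # re-weight toward the current support
    return x
```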
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, periodicity no longer holds for the scattered field; thus classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens a way to design an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, that is, locating the support of the perturbation by a simple method. This step reduces the inverse problem in an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem. The perturbation is then approximated by a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
NASA Astrophysics Data System (ADS)
Fujii, M.
2017-07-01
Two variations of a depth-selective back-projection filter for functional near-infrared spectroscopy (fNIRS) systems are introduced. The filter comprises a depth-selective algorithm that solves inverse problems for an optically diffusive multilayer medium. In this study, simultaneous signal reconstruction of both superficial and deep tissue from fNIRS experiments on the human forehead using a prototype CW-NIRS system is demonstrated.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging using inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out by using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT). It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms
NASA Astrophysics Data System (ADS)
Webber, Richard L.; Hemler, Paul F.; Lavery, John E.
2000-04-01
This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
NASA Astrophysics Data System (ADS)
Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.
2017-05-01
Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation, due mainly to friction between the defect's faces. We have solved the inverse problem consisting of determining the heat source distribution produced at cracks under amplitude-modulated ultrasound excitation, which is an ill-posed inverse problem. As a consequence, the minimization of the residual is unstable. We have stabilized the algorithm by introducing a penalty term based on the Total Variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that their particular upper contours can be retrieved for shallow heat sources. The overall shape of open and homogeneous semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, but the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen. The results are in good qualitative agreement with the results of liquid penetrant testing.
Obtaining sparse distributions in 2D inverse problems.
Reci, A; Sederman, A J; Gladden, L F
2017-08-01
The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxation-relaxation, T1-T2, and diffusion-relaxation, D-T2, correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal to noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter and intra particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization are able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements, assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map, and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows us to prove local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT, based on the Neumann series and a Newton-Raphson algorithm.
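The Neumann series is the basic building block of the proposed iteration. As a generic reminder of how it inverts an operator close to the identity (a toy linear system here, not the authors' SPECT operator), consider:

```python
import numpy as np

def neumann_solve(A, b, n_terms=50):
    """Approximate A^{-1} b via the Neumann series sum_k (I - A)^k b,
    valid when the spectral radius of (I - A) is below 1."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(n_terms):
        x += term
        term = term - A @ term      # term <- (I - A) term
    return x

rng = np.random.default_rng(1)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # close to identity
b = rng.standard_normal(4)
print(np.allclose(neumann_solve(A, b), np.linalg.solve(A, b)))
```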
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: to further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto-optimal plans obtained through inverse planning. In the authors' previous work, a methodology was proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in the authors' previous work, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose-volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve the target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction-type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: one where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high-gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around two-fifths of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction-type and gradient-descent algorithms produce similar modulated arcs, the latter perhaps being preferred because it is more easily implemented in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. Optimization of geometry-based arcs is faster than the usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function which defines electron density with respect to height using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models the wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results both for quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better compared to standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in determination of the reflection height (true height) of signals and critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
ODTbrain: a Python library for full-view, dense diffraction tomography.
Müller, Paul; Schürmann, Mirjam; Guck, Jochen
2015-11-04
Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. A dense, full-view tomographic data set is a set of images of a cell acquired for multiple rotational positions, densely distributed from 0 to 360 degrees. The reconstruction is commonly realized by projection tomography, which is based on the inversion of the Radon transform. The reconstruction quality of projection tomography is greatly improved when first-order scattering, which becomes relevant when the imaging wavelength is comparable to the characteristic object size, is taken into account. This advanced reconstruction technique is called diffraction tomography. While many implementations of projection tomography are available today, there is so far no publicly available implementation of diffraction tomography. We present a Python library that implements the backpropagation algorithm for diffraction tomography in 3D. By establishing benchmarks based on finite-difference time-domain (FDTD) simulations, we showcase the superiority of the backpropagation algorithm over the backprojection algorithm. Furthermore, we discuss how measurement parameters influence the reconstructed refractive index distribution, and we give insights into the applicability of diffraction tomography to biological cells. The present software library contains a robust implementation of the backpropagation algorithm. The algorithm is ideally suited for application to biological cells. Furthermore, the implementation is a drop-in replacement for the classical backprojection algorithm and is made available to the large user community of the Python programming language.
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high-resolution anatomical information (measured from MR or CT), taking anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures classify voxels based on intensity values alone, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplemented with spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom, creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise levels in the gray matter (GM) and white matter (WM) regions in terms of the noise-versus-bias tradeoff. When noise increased to a medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform, with smaller isolated structures, compared to the WM region. In the high-noise simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulations and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
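For orientation, the intensity-only JE measure that the WJE prior extends can be estimated from a joint histogram of the two images. A minimal sketch (histogram-based rather than the differentiable Parzen-window estimate typically used inside MAP reconstruction; the image data below are synthetic stand-ins):

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    """Joint entropy (in nats) of two equally sized images, estimated
    from a normalized 2D intensity histogram."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]                      # drop empty bins; 0*log(0) -> 0
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
anatomy = rng.random((64, 64))
pet_like = 0.8 * anatomy + 0.2 * rng.random((64, 64))   # correlated pair
print(joint_entropy(anatomy, pet_like))                  # lower JE
print(joint_entropy(anatomy, rng.random((64, 64))))      # higher JE
```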
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). The clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Reconstructions of similar or better accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
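One common route to automated regularization of a linear inversion is generalized cross-validation (GCV), which picks the Tikhonov parameter without user input. The sketch below is a generic SVD-based implementation on a stand-in smoothing operator, not the specific PPTR depth-profiling kernel:

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Pick the Tikhonov parameter minimizing the GCV score
    ||A x_lam - b||^2 / trace(I - A A_lam^+)^2, evaluated via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
        score = np.linalg.norm((1 - f) * beta) ** 2 / (len(b) - f.sum()) ** 2
        if best is None or score < best[0]:
            best = (score, lam, Vt.T @ (f * beta / s))
    return best[1], best[2]

# Toy ill-posed problem: a Gaussian smoothing (blurring) forward operator.
n = 80
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
x_true = np.zeros(n); x_true[30:40] = 1.0
rng = np.random.default_rng(2)
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam, x_rec = tikhonov_gcv(A, b, np.logspace(-4, 1, 50))
print("chosen lambda:", lam)
```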
Empirical investigation into depth-resolution of Magnetotelluric data
NASA Astrophysics Data System (ADS)
Piana Agostinetti, N.; Ogaya, X.
2017-12-01
We investigate the depth-resolution of MT data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated Magnetotelluric (MT) station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions about error statistics have been proved to strongly bias the results obtained in geophysical inversions. In particular, the number of resolved layers at depth strongly depends on the error statistics. In this study, we applied a trans-dimensional McMC algorithm for reconstructing the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics associated with the MT data. Following a Hierarchical Bayes approach, we also invert for the hyper-parameters associated with each error statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a number of resolved layers larger than expected from the retrieved lithostratigraphy. Moreover, inversion of synthetic resistivity data obtained from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics with substantial correlation length.
Greedy algorithms for diffuse optical tomography reconstruction
NASA Astrophysics Data System (ADS)
Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.
2018-03-01
Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons as they diffuse through the cross section of tissue. Conventional DOT imaging methods iteratively invoke a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within the compressive sensing framework; various greedy algorithms, such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, namely the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms conventional DOT imaging methods in terms of computational efficiency. The main advantage of this study is that the forward diffusion equation solver need not be repeatedly solved.
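As a flavor of the greedy family studied here, a minimal orthogonal matching pursuit can be written in a few lines of NumPy (a generic sparse-recovery sketch with a random toy matrix, not the DOT sensitivity matrix):

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 200))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(200); x_true[[10, 50, 120]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.nonzero(omp(A, y, n_nonzero=3))[0])   # recovered support
```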
NASA Astrophysics Data System (ADS)
Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang
2017-11-01
Acoustical source reconstruction is a typical inverse problem, whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. To enlarge the frequency range of reconstruction and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical scenarios of acoustical measurements, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is then illustrated with an industrial case.
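FISTA itself is a generic accelerated proximal-gradient method for l1-regularized least squares. A minimal sketch (the propagation-based basis and sequential-measurement machinery of the paper are not reproduced; the operator and data below are synthetic assumptions):

```python
import numpy as np

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5||Ax - y||^2 + lam*||x||_1: a gradient step on
    the smooth term, soft-thresholding for the l1 term, and a Nesterov
    momentum extrapolation between iterates."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - y)
        x_new = z - g / L
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0.0)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 120))
x_true = np.zeros(120); x_true[[7, 33, 90]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(50)
print(np.nonzero(np.round(fista(A, y, lam=0.5), 2))[0])   # approximate support
```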
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which makes them attractive for real-time image processing.
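The DHT underlying these convolver banks is closely related to the DFT: since cas(t) = cos(t) + sin(t), the (2D) DHT of a real array equals the real part minus the imaginary part of its (2D) DFT, and the transform is its own inverse up to a factor of MN. A small sketch:

```python
import numpy as np

def dht2(x):
    """2D discrete Hartley transform via the FFT:
    DHT(x) = Re(FFT2(x)) - Im(FFT2(x)) for real input."""
    X = np.fft.fft2(x)
    return X.real - X.imag

rng = np.random.default_rng(5)
img = rng.random((8, 8))
H = dht2(img)
# The DHT is (up to 1/(M*N)) its own inverse:
print(np.allclose(dht2(H) / img.size, img))
```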
Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics
NASA Astrophysics Data System (ADS)
Soldovieri, F.
2009-04-01
Ground Penetrating Radar (GPR) is one of the most practical and user-friendly instruments for detecting buried remains and performing diagnostics of archaeological structures, with the aim of detecting hidden objects (defects, voids, constructive typology, etc.). In fact, the GPR technique allows measurements over large areas to be performed very quickly thanks to portable instrumentation. Despite the widespread exploitation of GPR as a data acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by the end-users. This difficulty is exacerbated when no a priori information is available, as arises for example in the case of historical heritage, for which knowledge of the constructive modalities and materials of the structure may be completely missing. A possible answer to the above difficulties resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than the ones usually adopted in the classic radar approach. By exploiting the microwave tomographic approach, it is possible to gain accurate and reliable "images" of the investigated structure in order to detect, localize and possibly determine the extent and geometrical features of the embedded objects. In this framework, the adoption of simplified models of electromagnetic scattering appears very convenient for practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, thus allowing the investigation of domains that are large in terms of the probing wavelength in quasi real-time, also in the 3D case, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and in the investigated scenario. From a theoretical point of view, linear models offer further advantages, such as: the absence of false solutions (an issue that arises in nonlinear inverse problems); the exploitation of well-known regularization tools for achieving a stable solution of the problem; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and reconstruction results of a linear inversion algorithm based on the Born approximation in realistic applications in archaeology and cultural heritage diagnostics. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation is used to solve the forward problem, that is, the determination of the scattered field from a known object under the hypothesis of a weak scatterer, i.e., an object whose dielectric permittivity is slightly different from that of the host medium and whose extent is small in terms of the probing wavelength. Conversely, for the inverse scattering problem, the above hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], the adoption of a Born model inversion scheme allows the object to be detected, localized and its geometry determined even in the case of non-weak scattering objects.
[1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005.
[2] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007.
[3] R. Solimene, F. Soldovieri, G. Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007.
[4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008.
[5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008.
2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Brossier, R.; Virieux, J.; Operto, S.
2008-12-01
Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, i.e. the low- and high-frequency matrices, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
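The transform cascade in step (1) can be illustrated with standard libraries. This is a conceptual sketch of a two-level DWT followed by a DCT on the coarse band, using Haar wavelets as an arbitrary choice; the Minimize-Matrix-Size and FMS coding stages are not reproduced:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

rng = np.random.default_rng(6)
img = rng.random((64, 64))

# Analysis (cf. step 1): two-level DWT, then a DCT on the coarse band.
cA1, det1 = pywt.dwt2(img, "haar")        # level 1: approximation + details
cA2, det2 = pywt.dwt2(cA1, "haar")        # level 2 on the approximation
dc_matrix = dctn(cA2, norm="ortho")       # low-frequency "DC-Matrix"

# ... quantization/coding of dc_matrix and the detail bands would go here ...

# Synthesis: invert the DCT, then the two DWT levels.
cA2_rec = idctn(dc_matrix, norm="ortho")
cA1_rec = pywt.idwt2((cA2_rec, det2), "haar")
img_rec = pywt.idwt2((cA1_rec, det1), "haar")
print(np.max(np.abs(img_rec - img)))      # ~1e-15: the transforms are lossless
```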
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
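The described pipeline maps naturally onto a few lines of Pillow (the decimation factor, JPEG quality and the generic sharpening filter below are illustrative assumptions, not the patent's specific edge-sharpening technique):

```python
import io
from PIL import Image, ImageFilter

def transmit(img, factor=2, quality=60):
    """Decimate, JPEG-compress, 'transmit', decompress, interpolate back,
    then sharpen edges -- the pipeline described above."""
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)   # compressed payload
    print("payload bytes:", buf.tell())
    buf.seek(0)
    received = Image.open(buf)                        # inverse of JPEG
    restored = received.resize((w, h), Image.LANCZOS) # interpolate back
    return restored.filter(ImageFilter.SHARPEN)       # enhance contours

# Usage (any RGB image file):
# out = transmit(Image.open("input.png").convert("RGB"))
```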
Numerical Aspects of Cone Beam Contour Reconstruction
NASA Astrophysics Data System (ADS)
Louis, Alfred K.
2017-12-01
We describe a method for directly calculating the contours of a function from cone beam data. The algorithm is based on a new inversion formula for the gradient of a function presented in Louis (Inverse Probl 32(11):115005, 2016. http://stacks.iop.org/0266-5611/32/i=11/a=115005). The Radon transform of the gradient is found by using a Grangeat-type formula, reducing the inversion problem to the inversion of the Radon transform. In that way the influence of the scanning curve, vital for all exact inversion formulas for complete data, is avoided. Numerical results are presented for the circular scanning geometry, which fulfills neither the Tuy-Kirillov condition nor the much weaker condition given by the author in Louis (Inverse Probl 32(11):115005, 2016. http://stacks.iop.org/0266-5611/32/i=11/a=115005).
Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing
2014-04-01
Owing to the high degree of scattering of light through tissues, the ill-posedness of the fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve details and reduce noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located at different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.
General phase regularized reconstruction using phase cycling.
Ong, Frank; Cheng, Joseph Y; Lustig, Michael
2018-07-01
To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduced artifacts compared to reconstructions without phase cycling, and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS, were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
System for uncollimated digital radiography
Wang, Han; Hall, James M.; McCarrick, James F.; Tang, Vincent
2015-08-11
The inversion algorithm based on the maximum entropy method (MEM) removes unwanted effects in high energy imaging resulting from an uncollimated source interacting with a finitely thick scintillator. The algorithm takes as input the image from the thick scintillator (TS) and the radiography setup geometry. The algorithm then outputs a restored image which appears as if taken with an infinitesimally thin scintillator (ITS). Inversion is accomplished by numerically generating a probabilistic model relating the ITS image to the TS image and then inverting this model on the TS image through MEM. This reconstruction technique can reduce the exposure time or the required source intensity without undesirable object blurring on the image by allowing the use of both thicker scintillators with higher efficiencies and closer source-to-detector distances to maximize incident radiation flux. The technique is applicable in radiographic applications including fast neutron, high-energy gamma and x-ray radiography using thick scintillators.
Magneto-acousto-electrical Measurement Based Electrical Conductivity Reconstruction for Tissues.
Zhou, Yan; Ma, Qingyu; Guo, Gepu; Tu, Juan; Zhang, Dong
2018-05-01
Based on the interaction of ultrasonic excitation and magnetoelectrical induction, magneto-acousto-electrical (MAE) technology was demonstrated to have the capability of differentiating conductivity variations along the acoustic transmission. By applying the characteristics of the MAE voltage, a simplified algorithm of MAE measurement based conductivity reconstruction was developed. With the analyses of acoustic vibration, ultrasound propagation, Hall effect, and magnetoelectrical induction, theoretical and experimental studies of MAE measurement and conductivity reconstruction were performed. The formula of MAE voltage was derived and simplified for the transducer with strong directivity. MAE voltage was simulated for a three-layer gel phantom and the conductivity distribution was reconstructed using the modified Wiener inverse filter and Hilbert transform, which was also verified by experimental measurements. The experimental results are basically consistent with the simulations, and demonstrate that the wave packets of MAE voltage are generated at tissue interfaces with the amplitudes and vibration polarities representing the values and directions of conductivity variations. With the proposed algorithm, the amplitude and polarity of conductivity gradient can be restored and the conductivity distribution can also be reconstructed accurately. The favorable results demonstrate the feasibility of accurate conductivity reconstruction with improved spatial resolution using MAE measurement for tissues with conductivity variations, especially suitable for nondispersive tissues with abrupt conductivity changes. This study demonstrates that the MAE measurement based conductivity reconstruction algorithm can be applied as a new strategy for nondestructive real-time monitoring of conductivity variations in biomedical engineering.
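The modified Wiener inverse filter at the heart of the reconstruction can be illustrated with a generic 1D frequency-domain deconvolution (the pulse shape, SNR constant and impulse "interfaces" below are synthetic assumptions, not the authors' calibration):

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Frequency-domain Wiener inverse filter:
    X = conj(H) Y / (|H|^2 + 1/SNR), a stabilized inverse of y = h * x."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(X))

# Impulses of opposite polarity at two "interfaces", blurred by a pulse.
x = np.zeros(256); x[80], x[160] = 1.0, -0.6
t = np.arange(32)
pulse = np.sin(2 * np.pi * t / 8) * np.exp(-((t - 16) / 6.0) ** 2)
y = np.convolve(x, pulse)[:len(x)]
x_hat = wiener_deconvolve(y, pulse)
print(np.argsort(np.abs(x_hat))[-2:])   # strongest peaks, near 80 and 160
```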
Restoration algorithms for imaging through atmospheric turbulence
2017-02-18
the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed... with w_i(ξ) = G_σ(|F(v_i)(ξ)|^p) / Σ_{j=1}^{M} G_σ(|F(v_j)(ξ)|^p), where F denotes the Fourier transform (ξ are the frequencies) and G_σ is a Gaussian filter of... a combination of SIFT [26] and ORSA [14] algorithms) in order to remove affine transformations (translations, rotations and homothety). The authors...
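The weighting formula above can be exercised directly: each frame's Fourier coefficients are weighted by a smoothed power of their own magnitude before averaging. A minimal sketch (the SIFT/ORSA registration step is omitted; the frame data, p and σ are arbitrary assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucky_fourier_average(frames, p=2.0, sigma=3.0):
    """Fuse frames by weighting each Fourier coefficient with a smoothed
    power of its magnitude, w_i = G_sigma(|F(v_i)|^p) / sum_j G_sigma(|F(v_j)|^p),
    then inverse-transforming the weighted average."""
    specs = [np.fft.fft2(f) for f in frames]
    mags = [gaussian_filter(np.abs(s) ** p, sigma) for s in specs]
    total = np.sum(mags, axis=0)
    avg = np.sum([w * s for w, s in zip(mags, specs)], axis=0) / total
    return np.real(np.fft.ifft2(avg))

rng = np.random.default_rng(7)
clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0
frames = [clean + 0.3 * rng.standard_normal(clean.shape) for _ in range(16)]
print(lucky_fourier_average(frames).shape)
```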
Application and performance of an ML-EM algorithm in NEXT
NASA Astrophysics Data System (ADS)
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
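The generic ML-EM update for a Poisson measurement model y ≈ Poisson(Ax) is the multiplicative rule x ← x · A^T(y / Ax) / A^T 1. A toy NumPy sketch (random system matrix, not the NEXT detector response):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum Likelihood Expectation Maximization for y ~ Poisson(Ax):
    x <- x / (A^T 1) * A^T (y / (A x)). Requires nonnegative A and x."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by 0
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(8)
A = rng.random((40, 25))                      # toy system matrix
x_true = rng.random(25) * 10
y = rng.poisson(A @ x_true).astype(float)
print(np.corrcoef(x_true, mlem(A, y))[0, 1])  # close to 1
```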
NASA Astrophysics Data System (ADS)
Jiang, Peng; Peng, Lihui; Xiao, Deyun
2007-06-01
This paper presents a regularization method for electrical capacitance tomography (ECT) image reconstruction that uses different window functions as regularizers. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to make the solution stable by suppressing the noise in the measurements. The window functions, such as the Hanning window, the cosine window and so on, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of image reconstruction using different window functions as regularization.
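The idea of window-function regularization is spectral filtering: expand the minimum-norm solution in the SVD basis and damp the components associated with small singular values using window weights. A generic sketch (the truncation length r and the rank-indexed Hanning/cosine weights below are simple illustrative choices, not the paper's ECT-specific windows):

```python
import numpy as np

def window_filtered_solve(A, b, r, window="hanning"):
    """Spectral filtering: keep the first r SVD components of the
    minimum-norm solution and taper them with a window, so that small
    singular values (which amplify noise) are suppressed."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    w = np.zeros_like(s)
    idx = np.arange(r)
    if window == "hanning":
        w[:r] = 0.5 * (1 + np.cos(np.pi * idx / r))
    elif window == "cosine":
        w[:r] = np.cos(0.5 * np.pi * idx / r)
    else:
        w[:r] = 1.0                      # plain truncated SVD
    return Vt.T @ (w * (U.T @ b) / s)

# Severely ill-conditioned toy problem.
n = 30
A = np.vander(np.linspace(0, 1, n), n)
rng = np.random.default_rng(9)
b = A @ rng.standard_normal(n) + 1e-8 * rng.standard_normal(n)
x_win = window_filtered_solve(A, b, r=12)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_win), np.linalg.norm(x_ls))  # filtered norm is far smaller
```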
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover, such as cities, deserts and vegetation, or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults which do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillion et al (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
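A stripped-down version of the pipeline: cluster synthetic hypocenters with plain K-means, then take each cluster's smallest principal component as the fault-plane normal (the paper's planarity-aware modifications of K-means/EM and the isolated-event filtering are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(10)

def plane_points(normal, offset, n=200):
    """Random points on a plane with given normal and offset, plus
    location noise mimicking hypocenter uncertainty."""
    normal = normal / np.linalg.norm(normal)
    pts = rng.uniform(-5, 5, (n, 3))
    pts -= np.outer(pts @ normal - offset, normal)   # project onto plane
    return pts + 0.1 * rng.standard_normal((n, 3))

hypo = np.vstack([plane_points(np.array([1.0, 0.2, 0.3]), 0.0),
                  plane_points(np.array([0.1, 1.0, 0.5]), 2.0)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hypo)
for k in range(2):
    cluster = hypo[labels == k] - hypo[labels == k].mean(axis=0)
    # Smallest principal component = normal of the best-fit plane.
    _, _, Vt = np.linalg.svd(cluster, full_matrices=False)
    print("cluster", k, "normal ~", np.round(Vt[-1], 2))
```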
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
NASA Astrophysics Data System (ADS)
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was reconstructed very accurately as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
Sun, Deyong; Hu, Chuanmin; Qiu, Zhongfeng; Wang, Shengqiang
2015-06-01
A new scheme has been proposed by Lee et al. (2014) to reconstruct hyperspectral (400-700 nm, 5 nm resolution) remote sensing reflectance (Rrs(λ), sr-1) of representative global waters using measurements at 15 spectral bands. This study tested its applicability to optically complex turbid inland waters in China, where Rrs(λ) are typically much higher than those used in Lee et al. (2014). Strong interdependence of Rrs(λ) between neighboring bands (≤ 10 nm interval) was confirmed, with Pearson correlation coefficient (PCC) mostly above 0.98. The scheme of Lee et al. (2014) for Rrs(λ) reconstruction with its original global parameterization worked well with this data set, while a new parameterization showed improvement in reducing uncertainties in the reconstructed Rrs(λ). Mean absolute error (MAERrs(λi)) in the reconstructed Rrs(λ) was mostly < 0.0002 sr-1 between 400 and 700 nm, and mean relative error (MRERrs(λi)) was < 1% when the comparison was made between reconstructed and measured Rrs(λ) spectra. When Rrs(λ) at the MODIS bands were used to reconstruct the hyperspectral Rrs(λ), MAERrs(λi) was < 0.001 sr-1 and MRERrs(λi) was < 3%. When Rrs(λ) at the MERIS bands were used, MAERrs(λi) in the reconstructed hyperspectral Rrs(λ) was < 0.0004 sr-1 and MRERrs(λi) was < 1%. These results have significant implications for inversion algorithms to retrieve concentrations of phytoplankton pigments (e.g., chlorophyll-a or Chla, and phycocyanin or PC) and total suspended materials (TSM), as well as the absorption coefficient of colored dissolved organic matter (CDOM), as some of these algorithms were developed from in situ Rrs(λ) data using spectral bands that may not exist on satellite sensors.
Perriñez, Phillip R.; Kennedy, Francis E.; Van Houten, Elijah E. W.; Weaver, John B.; Paulsen, Keith D.
2010-01-01
Magnetic Resonance Poroelastography (MRPE) is introduced as an alternative to single-phase model-based elastographic reconstruction methods. A three-dimensional (3D) finite element poroelastic inversion algorithm was developed to recover the mechanical properties of fluid-saturated tissues. The performance of this algorithm was assessed through a variety of numerical experiments, using synthetic data to probe its stability and sensitivity to the relevant model parameters. Preliminary results suggest the algorithm is robust in the presence of noise and capable of producing accurate assessments of the underlying mechanical properties in simulated phantoms. Further, a 3D time-harmonic motion field was recorded for a poroelastic phantom containing a single cylindrical inclusion and used to assess the feasibility of MRPE image reconstruction from experimental data. The elastograms obtained from the proposed poroelastic algorithm demonstrate significant improvement over linearly elastic MRE images generated using the same data. In addition, MRPE offers the opportunity to estimate the time-harmonic pressure field resulting from tissue excitation, highlighting the potential for its application in the diagnosis and monitoring of disease processes associated with changes in interstitial pressure. PMID:20199912
Pulsed excitation terahertz tomography - multiparametric approach
NASA Astrophysics Data System (ADS)
Lopato, Przemyslaw
2018-04-01
This article deals with pulsed excitation terahertz computed tomography (THz CT). In contrast to x-ray CT, where just a single value (pixel) is obtained, in pulsed THz CT a time signal is acquired for each position. The recorded waveform can be parametrized: many features carrying various information about the examined structure can be calculated. Based on this, a multiparametric reconstruction algorithm is proposed: an inverse Radon transform based reconstruction is applied for each parameter, and the results are then fused. The performance of the proposed imaging scheme was experimentally verified using dielectric phantoms.
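A toy version of the multiparametric scheme using scikit-image (the two "parameters" are synthetic stand-ins for waveform features, and the fusion rule, a normalized average, is an arbitrary assumption):

```python
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((64, 64)); phantom[20:40, 25:45] = 1.0
theta = np.linspace(0.0, 180.0, 60, endpoint=False)

# Stand-ins for two waveform-derived parameter sinograms (e.g. peak
# amplitude and a delay-related feature).
sino_a = radon(phantom, theta=theta)
sino_b = radon(0.5 * phantom, theta=theta)

# Per-parameter filtered backprojection, then a simple fusion (average).
rec_a = iradon(sino_a, theta=theta, filter_name="ramp")
rec_b = iradon(sino_b, theta=theta, filter_name="ramp")
fused = 0.5 * (rec_a / rec_a.max() + rec_b / rec_b.max())
print(fused.shape)
```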
Accelerated gradient based diffuse optical tomographic image reconstruction.
Biswas, Samir Kumar; Rajan, K; Vasu, R M
2011-01-01
We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information obtained from the forward solution of the diffusion equation. This approach reduces the computation time manyfold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. Mean squared error and execution time are used as metrics for comparing the reconstruction results. We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
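The Broyden step that replaces repeated Jacobian evaluation is a rank-1 secant update. A generic sketch on a toy nonlinear map (not the DOT forward model):

```python
import numpy as np

def broyden_update(J, dx, df):
    """Broyden's rank-1 update: correct J so that J_new @ dx = df,
    avoiding a full Jacobian recomputation."""
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)

def F(x):
    """Toy nonlinear forward map."""
    return np.array([x[0] ** 2 + x[1], np.sin(x[1]) + x[0]])

x0 = np.array([1.0, 0.5])
J = np.array([[2 * x0[0], 1.0],            # exact Jacobian at x0
              [1.0, np.cos(x0[1])]])
x1 = x0 + np.array([0.1, -0.05])
J = broyden_update(J, x1 - x0, F(x1) - F(x0))   # cheap rank-1 refresh
print(np.round(J, 3))
```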
3D frequency-domain ultrasound waveform tomography breast imaging
NASA Astrophysics Data System (ADS)
Sandhu, Gursharan Yash; West, Erik; Li, Cuiping; Roy, Olivier; Duric, Neb
2017-03-01
Frequency-domain ultrasound waveform tomography is a promising method for the visualization and characterization of breast disease. It has previously been shown to accurately reconstruct the sound speed distributions of breasts of varying densities. The reconstructed images show detailed morphological and quantitative information that can help differentiate different types of breast disease, including benign and malignant lesions. The attenuation properties of an ex vivo phantom have also been assessed. However, the reconstruction algorithms assumed a 2D geometry while the actual data acquisition process was not two-dimensional. Although clinically useful sound speed images can be reconstructed assuming this mismatched geometry, artifacts from the reconstruction process exist within the reconstructed images. This is especially true for registration across different modalities and when the 2D assumption is violated, for example when a patient's breast is rapidly sloping. It is also true for attenuation imaging, where energy lost or gained out of plane gets transformed into artifacts within the image space. In this paper, we briefly review ultrasound waveform tomography techniques, give motivation for pursuing the 3D method, discuss the 3D reconstruction algorithm, present the results of 3D forward modeling, show the mismatch induced by violation of the 3D modeling assumptions via numerical simulations, and present a 3D inversion of a numerical phantom.
Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper proposes a new method for solving the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the active intensity of the sources is highly concentrated. We take this prior knowledge as a prerequisite for developing the EEG inverse solution, assuming no other characteristics of the solution, to realize a general 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA's low-resolution method, which emphasizes 'localization', and FOCUSS's high-resolution method, which emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix which takes reference from the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is first to obtain an initial estimate of the solution, then to construct a new estimate using information from the previous one, and to repeat this process until the last two estimates remain unchanged.
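The weighted minimum-norm mechanics described here can be sketched as a FOCUSS-style iteration, where a weighting matrix built from the previous estimate progressively focuses the solution (a generic underdetermined toy system stands in for the EEG leadfield; the LORETA smoothness operator and competition mechanism are not reproduced):

```python
import numpy as np

def focuss(A, y, n_iter=20, eps=1e-8):
    """FOCUSS-style iteration: repeatedly solve the weighted minimum-norm
    problem x = W A^T (A W A^T)^{-1} y with W built from the previous
    solution, which progressively sparsifies the estimate."""
    x = np.linalg.pinv(A) @ y                 # minimum-norm initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)          # weights from previous solution
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T + 1e-10 * np.eye(A.shape[0]), y)
    return x

rng = np.random.default_rng(12)
A = rng.standard_normal((10, 40))             # underdetermined "leadfield"
x_true = np.zeros(40); x_true[[5, 22]] = [1.0, -0.8]
y = A @ x_true
print(np.nonzero(np.abs(focuss(A, y)) > 1e-3)[0])   # focused support
```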
An improved pulse sequence and inversion algorithm of T2 spectrum
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu
2017-03-01
The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods for data acquisition and inversion need to be improved to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence for collecting transversal relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies across different windows, depending on prior knowledge or customer requirements. We use entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard small singular values which cause inversion instability. A hybrid algorithm combining iterative TSVD with a simultaneous iterative reconstruction technique is implemented to achieve global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields.
NASA Astrophysics Data System (ADS)
Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia
2016-06-01
A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation-induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represent a major limitation of this technique, especially in the presence of target motion. The purpose of this study is to investigate two different 4D PET motion compensation strategies for recovering the whole count statistics, towards improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in the sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison relied on quantification of PET activity and ion range differences, which typically yielded similar results for the two strategies. The study demonstrated that treatment verification of moving targets can be accomplished by relying on the whole-count-statistics image quality obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.
On the Accuracy of Language Trees
Pompei, Simone; Loreto, Vittorio; Tria, Francesca
2011-01-01
Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective, the reconstruction of language trees is an example of an inverse problem: starting from present-day, incomplete and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees, we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores, we quantify the relative performances of the distance-based algorithms considered. Furthermore, we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally, we draw some conclusions about where the accuracy of reconstructions in historical linguistics stands and about the leading directions for improving it. PMID:21674034
Chen, Zikuan; Calhoun, Vince D
2016-03-01
Conventionally, independent component analysis (ICA) is performed on an fMRI magnitude dataset to analyze brain functional mapping (AICA). By solving the inverse problem of fMRI, we can reconstruct the brain magnetic susceptibility (χ) functional states. On the reconstructed χ dataspace, we propose an ICA-based brain functional χ mapping method (χICA) to extract task-evoked brain functional maps. A complex division algorithm is applied to a time series of fMRI phase images to extract temporal phase changes (relative to an OFF-state snapshot). A computed inverse MRI (CIMRI) model is used to reconstruct a 4D brain χ response dataset. χICA is implemented by applying a spatial InfoMax ICA algorithm to the reconstructed 4D χ dataspace. In finger-tapping experiments on a 7T system, the χICA-extracted χ-depicted functional map is similar to the SPM-inferred functional χ map, with a spatial correlation of 0.67 ± 0.05. In comparison, the AICA-extracted magnitude-depicted map is correlated with the SPM magnitude map at 0.81 ± 0.05. Understanding why χICA underperforms AICA for task-evoked functional mapping is an ongoing research topic. For task-evoked brain functional mapping, we compare the data-driven ICA method with the task-correlated SPM method. In particular, we compare χICA with AICA for extracting task-correlated timecourses and functional maps. χICA can extract a χ-depicted task-evoked brain functional map from a reconstructed χ dataspace without knowledge of brain hemodynamic responses. The χICA-extracted brain functional χ map reveals a bidirectional BOLD response pattern that is unavailable from, or differs from, AICA. Copyright © 2016 Elsevier B.V. All rights reserved.
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
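A sketch of this pipeline using Pillow, assuming a 2x decimation factor and ordinary JPEG settings; Pillow's UnsharpMask filter stands in for the patent's specific sharpening techniques:

```python
import io
from PIL import Image, ImageFilter

def transmit_reduced(img, factor=2, quality=75):
    """Decimate, JPEG-compress, 'transmit', decompress, interpolate back to
    the original array size, then sharpen edges; img is an RGB or grayscale
    PIL image."""
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.LANCZOS)  # decimation
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)    # predefined compression
    buf.seek(0)                                        # the 'transmitted' bytes
    received = Image.open(buf)                         # inverse (reverse JPEG)
    restored = received.resize((w, h), Image.LANCZOS)  # interpolation
    return restored.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
```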
Processing-optimised imaging of analog geological models by electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Ortiz Alemán, C.; Espíndola-Carmona, A.; Hernández-Gómez, J. J.; Orozco Del Castillo, M. G.
2017-06-01
In this work, the electrical capacitance tomography (ECT) technique is applied to monitoring the internal deformation of geological analog models, which are used to study structural deformation mechanisms, in particular for simulating the migration and emplacement of allochthonous salt bodies. A rectangular ECT sensor was used for internal visualization of analog geologic deformation. The monitoring of analog models consists of reconstructing permittivity images from the capacitance measurements obtained by introducing the model inside the ECT sensor. A simulated annealing (SA) algorithm is used as the reconstruction method, and it is optimized by taking full advantage of some special features in a linearized version of this inverse approach. In the second part of this work, our SA image reconstruction algorithm is applied to synthetic models, where its performance is evaluated in comparison to other commonly used algorithms such as linear back-projection and iterative Landweber methods. Finally, the SA method is applied to visualize two simple geological analog models. Encouraging results were obtained in terms of the quality of the reconstructed images, as interfaces corresponding to the main geological units in the analog model were clearly distinguishable. These results are quite promising for real-time, non-invasive monitoring of the internal deformation of analog geological models.
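For reference, a minimal numpy sketch of the iterative Landweber baseline mentioned above, assuming a linearized model S g = c with sensitivity matrix S and normalized capacitance vector c:

```python
import numpy as np

def landweber_ect(S, c, n_iter=200, relax=None):
    """Landweber iteration g_{k+1} = g_k + a * S^T (c - S g_k) for the
    linearized ECT problem; converges for 0 < a < 2 / sigma_max(S)^2."""
    if relax is None:
        relax = 1.0 / np.linalg.norm(S, 2) ** 2   # safe step from the spectral norm
    g = S.T @ c                                    # linear back-projection start
    for _ in range(n_iter):
        g += relax * (S.T @ (c - S @ g))
        np.clip(g, 0.0, 1.0, out=g)                # keep normalized permittivities physical
    return g
```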
Atoche, Alejandro Castillo; Castillo, Javier Vázquez
2012-01-01
A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is itself conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging from radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped into an efficient high-performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated into a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Tan, He-Ping
2016-11-01
A rapid computational method called the generalized source multi-flux method (GSMFM) was developed to simulate outgoing radiative intensities in arbitrary directions at the boundary surfaces of absorbing, emitting, and scattering media, which served as input for the inverse analysis. A hybrid least-squares QR decomposition-stochastic particle swarm optimization (LSQR-SPSO) algorithm based on the forward GSMFM solution was developed to simultaneously reconstruct the multi-dimensional temperature distribution and the absorption and scattering coefficients of cylindrical participating media. The retrieval results for axisymmetric and non-axisymmetric temperature distributions indicated that the temperature distribution and the scattering and absorption coefficients can be retrieved accurately using the LSQR-SPSO algorithm, even with noisy data. Moreover, the influences of the extinction coefficient and scattering albedo on the accuracy of the estimation were investigated, and the results suggest that the reconstruction accuracy decreases as the extinction coefficient and scattering albedo increase. Finally, a non-contact measurement platform for flame temperature fields based on light field imaging was set up to validate the reconstruction model experimentally.
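A bare-bones sketch of the stochastic particle swarm half of such a hybrid retrieval, assuming a user-supplied misfit function between measured and modeled boundary intensities (the LSQR step and the GSMFM forward model of the paper are omitted):

```python
import numpy as np

def pso_minimize(objective, lo, hi, n_particles=30, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm search for the parameter vector (e.g. nodal temperatures
    and absorption/scattering coefficients) minimizing the misfit objective."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside bounds
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest
```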
NASA Astrophysics Data System (ADS)
Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik
2001-05-01
Emphysema is characterized by destruction of lung tissue with the development of small or large holes within the lung. These areas have Hounsfield unit (HU) values approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step involves computing the antero-posterior density gradient caused by gravity and correcting for it. Motion artefacts are corrected in a third step by use of normalized averaging, thresholding, and region growing. Twenty healthy volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved the separation of the two groups considerably. It needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
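A minimal sketch of the simple density-mask measure that the corrections above feed into, assuming a HU volume, a crude lung mask, and the common -950 HU threshold (all illustrative choices):

```python
import numpy as np

def density_mask_score(hu_volume, threshold=-950):
    """Fraction of lung voxels below a HU threshold; emphysematous holes
    approach -1000 HU. Inverse filtering, gravity-gradient and motion
    corrections would be applied to hu_volume beforehand."""
    lung = hu_volume < -400                       # crude lung segmentation (assumption)
    emphysema = (hu_volume < threshold) & lung
    return emphysema.sum() / max(lung.sum(), 1)
```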
Tomographic phase microscopy: principles and applications in bioimaging [Invited
Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.
2017-01-01
Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and at advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: a constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
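A minimal 2D numpy/scipy sketch of the fixed-point iteration with a constant feedback control `mu` (the paper's adaptive and spatially variant controls are not reproduced; `map_coordinates` stands in for the warping interpolator):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def invert_dvf(v, n_iter=10, mu=1.0):
    """Fixed-point DVF inversion u_{k+1} = u_k - mu * r_k, where
    r_k(x) = u_k(x) + v(x + u_k(x)) is the inverse-consistency residual;
    v has shape (2, ny, nx) on a regular grid, and mu = 1 recovers the
    classical non-adaptive iteration."""
    grid = np.mgrid[0:v.shape[1], 0:v.shape[2]].astype(float)
    u = -v.copy()                                  # small-deformation initial guess
    for _ in range(n_iter):
        coords = grid + u                          # sample v at x + u_k(x)
        v_warped = np.stack([map_coordinates(v[i], coords, order=1, mode='nearest')
                             for i in range(2)])
        u -= mu * (u + v_warped)                   # feedback-controlled update
    return u
```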
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem
NASA Astrophysics Data System (ADS)
Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang
2017-12-01
While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, with minimal coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, so that the vertical temperature structure is represented continuously. By calculating the derivatives of the fitted functions, the vertical temperature gradient of the Argo profiles can be computed at an arbitrary depth. A gridded 3D temperature gradient field is then found by applying inverse distance weighting interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field reconstruction below the surface is realized using the gridded temperature gradient. To confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field derived by the proposed algorithm achieves satisfactory accuracy, with correlation coefficients of 0.99, and a higher spatial resolution (0.25° × 0.25°) that captures smaller-scale characteristics. Both the accuracy and the superiority of the algorithm are thus validated.
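A minimal numpy sketch of the inverse distance weighting step used for the horizontal gridding, assuming profile locations `xy_known` (N x 2), their gradient values, and flattened grid points `xy_query` (Q x 2):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2, eps=1e-12):
    """Inverse distance weighting: each gridded value is a distance-weighted
    mean of the values at the Argo profile locations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)        # eps guards against division by zero
    return (w * values).sum(axis=1) / w.sum(axis=1)
```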
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted to handling large-scale electromagnetic problems while keeping the memory requirements and time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the spirit of domain decomposition methods for finite elements. The idea is to split the computational domain into smaller non-overlapping subdomains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for solving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is tested against measurements extracted from the 3D Fresnel database.
Principle and Reconstruction Algorithm for Atomic-Resolution Holography
NASA Astrophysics Data System (ADS)
Matsushita, Tomohiro; Muro, Takayuki; Matsui, Fumihiko; Happo, Naohisa; Hosokawa, Shinya; Ohoyama, Kenji; Sato-Tomita, Ayana; Sasaki, Yuji C.; Hayashi, Kouichi
2018-06-01
Atomic-resolution holography makes it possible to obtain the three-dimensional (3D) structure around a target atomic site. Translational symmetry of the atomic arrangement of the sample is not necessary, and the 3D atomic image can be measured when the local structure of the target atomic site is oriented. Therefore, 3D local atomic structures such as dopants and adsorbates are observable. Here, atomic-resolution holography, comprising photoelectron holography, X-ray fluorescence holography, neutron holography, and their inverse modes, is treated. Although the measurement methods differ, they can be handled with a unified theory. The algorithm for reconstructing 3D atomic images from holograms plays an important role. Although Fourier transform-based methods have been proposed, they require multiple-energy holograms. In addition, they cannot be directly applied to photoelectron holography because of the phase shift problem. We have developed fitting-based methods for reconstruction from single-energy and photoelectron holograms. The developed methods are applicable to all types of atomic-resolution holography.
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods
Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908
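A minimal dense-matrix sketch of the split Bregman iteration for an l1-regularized least-squares problem, the prototype such CS solvers build on; in CS MRI, `A` would be an undersampled Fourier operator, the l1 term would act on a sparsifying transform, and the GPU part is omitted here:

```python
import numpy as np

def split_bregman_l1(A, b, lam=0.1, mu=1.0, n_iter=50):
    """Solve min_x lam*||x||_1 + 0.5*||A x - b||^2 via the splitting d = x.
    Each x-update is a linear solve (factorized once); the d-update is a
    cheap soft threshold, which is what parallelizes so well on GPUs."""
    n = A.shape[1]
    solve = np.linalg.inv(A.T @ A + mu * np.eye(n))    # fine for small demos
    Atb = A.T @ b
    shrink = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    x = np.zeros(n); d = np.zeros(n); bk = np.zeros(n)
    for _ in range(n_iter):
        x = solve @ (Atb + mu * (d - bk))              # quadratic subproblem
        d = shrink(x + bk, lam / mu)                   # l1 subproblem
        bk += x - d                                    # Bregman update
    return x
```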
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.
van Dongen, Koen W A; Wright, William M D
2006-10-01
Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
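A minimal numpy sketch of conjugate gradients on the normal equations, the workhorse behind such Born-linearized inversion schemes; the dense complex matrix `A` is a stand-in for a discretized scattering operator:

```python
import numpy as np

def cg_normal_equations(A, b, n_iter=50, tol=1e-8):
    """CG applied to A^H A x = A^H b; only products with A and A^H are
    needed, so A could equally be a matrix-free operator."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = A.conj().T @ b                  # residual of the normal equations
    p = r.copy()
    rs = rs0 = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = A.conj().T @ (A @ p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol * rs0:          # relative residual stopping rule
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```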
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
Density reconstruction in multiparameter elastic full-waveform inversion
NASA Astrophysics Data System (ADS)
Sun, Min'ao; Yang, Jizhong; Dong, Liangguo; Liu, Yuzhu; Huang, Chao
2017-12-01
Elastic full-waveform inversion (EFWI) is a quantitative data fitting procedure that recovers multiple subsurface parameters from multicomponent seismic data. As density is involved in addition to P- and S-wave velocities, the multiparameter EFWI suffers from more serious tradeoffs. In addition, compared with P- and S-wave velocities, the misfit function is less sensitive to density perturbation. Thus, a robust density reconstruction remains a difficult problem in multiparameter EFWI. In this paper, we develop an improved scattering-integral-based truncated Gauss-Newton method to simultaneously recover P- and S-wave velocities and density in EFWI. In this method, the inverse Gauss-Newton Hessian has been estimated by iteratively solving the Gauss-Newton equation with a matrix-free conjugate gradient algorithm. Therefore, it is able to properly handle the parameter tradeoffs. To give a detailed illustration of the tradeoffs between P- and S-wave velocities and density in EFWI, wavefield-separated sensitivity kernels and the Gauss-Newton Hessian are numerically computed, and their distribution characteristics are analyzed. Numerical experiments on a canonical inclusion model and a modified SEG/EAGE Overthrust model have demonstrated that the proposed method can effectively mitigate the tradeoff effects, and improve multiparameter gradients. Thus, a high convergence rate and an accurate density reconstruction can be achieved.
NASA Astrophysics Data System (ADS)
Tian, Lei; Waller, Laura
2017-05-01
Microscope lenses can have either large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a 2nd order quasi-Newton's method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and 1st order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope were demonstrated.
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in the definition and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular, but not exclusively, on the null-model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied to both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks from only moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Fast tomographic methods for the tokamak ISTTOK
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.
2008-04-01
The achievement of long-duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms to fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.
Non-destructive testing of ceramic materials using mid-infrared ultrashort-pulse laser
NASA Astrophysics Data System (ADS)
Sun, S. C.; Qi, Hong; An, X. Y.; Ren, Y. T.; Qiao, Y. B.; Ruan, Li-Ming
2018-04-01
The non-destructive testing (NDT) of ceramic materials using a mid-infrared ultrashort-pulse laser is investigated in this study. The discrete ordinate method is applied to solve the transient radiative transfer equation in a 2D semitransparent medium, and the emerging radiative intensity on the boundary serves as input for the inverse analysis. The sequential quadratic programming algorithm is employed as the inverse technique to optimize the objective function, in which the gradient of the objective function with respect to the reconstruction parameters is calculated using the adjoint model. Two reticulated porous ceramics, partially stabilized zirconia and oxide-bonded silicon carbide, are tested. The retrieval results show that the main characteristics of defects, such as optical properties, geometric shapes, and positions, can be accurately reconstructed by the present model. The proposed technique is effective and robust in the NDT of ceramics, even with measurement errors.
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol, and the possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that, we integrated multi-view luminescent images based on the previous reconstruction and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model with a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
An inverse method for estimation of the acoustic intensity in the focused ultrasound field
NASA Astrophysics Data System (ADS)
Yu, Ying; Shen, Guofeng; Chen, Yazhu
2017-03-01
Recently, a new method based on infrared (IR) imaging was introduced. Authors (A. Shaw et al and M. R. Myers et al) have established the relationship between the absorber surface temperature and the incident intensity while the absorber is irradiated by the transducer. Theoretically, a shorter irradiation time makes the estimation more in line with the actual results. However, due to the influence of noise and the performance constraints of the IR camera, it is hard to identify the difference in temperature with a short heating time. An inverse technique is developed to reconstruct the incident intensity distribution using the surface temperature with a shorter irradiation time. The algorithm is validated using surface temperature data generated numerically from a three-layer model developed to calculate the acoustic field in the absorber, the absorbed acoustic energy during the irradiation, and the consequent temperature elevation. To assess the effect of noisy data on the reconstructed intensity profile, different zero-mean noise levels were superimposed on the exact data in the simulations. Simulation results demonstrate that the inversion technique can provide fairly reliable intensity estimation with satisfactory accuracy.
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly and simultaneously. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
Theoretical limit of spatial resolution in diffuse optical tomography using a perturbation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konovalov, A B; Vlasov, V V
2014-03-28
We have assessed the limit of spatial resolution of time-domain diffuse optical tomography (DOT) based on a perturbation reconstruction model. From the viewpoint of structure reconstruction accuracy, three different approaches to solving the inverse DOT problem are compared. The first approach involves reconstruction of diffuse tomograms from straight lines, the second – from average curvilinear trajectories of photons, and the third – from total banana-shaped distributions of photon trajectories. In order to obtain estimates of resolution, we have derived analytical expressions for the point spread function and modulation transfer function, and have also performed a numerical experiment on reconstruction of rectangular scattering objects with circular absorbing inhomogeneities. It is shown that in passing from reconstruction from straight lines to reconstruction using distributions of photon trajectories we can improve resolution by almost an order of magnitude and exceed the reconstruction accuracy of the multi-step algorithms used in DOT.
2017-01-01
Objective: Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods: In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion: Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance: This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
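A minimal scikit-learn sketch of the proposed post-processing idea, assuming training pairs of flattened linear reconstructions `X_lin` and simulated ground-truth images `Y_true` (network size and solver settings are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_postprocessor(X_lin, Y_true):
    """Learn the mapping from a linear EIT solution to a nonlinear-quality
    image, instead of mapping boundary voltages to images directly."""
    net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
    net.fit(X_lin, Y_true)              # rows: flattened conductivity images
    return net

# at run time, roughly: enhanced = net.predict(linear_solve(voltages)[None])[0]
```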
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). In the numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms on the normal derivative constraint, labelled L1TV and L1L2) were compared with L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled ZOT and FOT, and the total variation method, labelled L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative errors. However, when larger noise occurred in some electrodes (for example, signal loss during measurement), the L1TV and L1L2 methods obtained more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
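A minimal numpy sketch of an iteratively reweighted norm loop for the L1-norm data term, assuming a dense transfer matrix `A` and a derivative-like constraint matrix `R` (roughly the L1L2 scheme; the paper's exact reweighting rules may differ):

```python
import numpy as np

def irls_l1_data(A, b, R, lam=1e-3, n_iter=20, eps=1e-6):
    """Approximate min_x ||A x - b||_1 + lam ||R x||_2^2 by repeated weighted
    L2 solves with weights 1/|residual|, so each pass is Tikhonov-like."""
    x = np.linalg.solve(A.T @ A + lam * R.T @ R, A.T @ b)   # plain L2 start
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)        # IRLS reweighting
        Aw = A * w[:, None]                                 # rows scaled: W A
        x = np.linalg.solve(A.T @ Aw + lam * R.T @ R, Aw.T @ b)
    return x
```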
Investigation of the Capability of Compact Polarimetric SAR Interferometry to Estimate Forest Height
NASA Astrophysics Data System (ADS)
Zhang, Hong; Xie, Lei; Wang, Chao; Chen, Jiehong
2013-08-01
The main objective of this paper is to investigate the capability of compact polarimetric SAR interferometry (C-PolInSAR) for forest height estimation. For this, the pseudo fully polarimetric interferometric (F-PolInSAR) covariance matrix is first reconstructed; then the three-stage inversion algorithm, the hybrid algorithm, and the MUSIC and Capon algorithms are applied to both the C-PolInSAR covariance matrix and the pseudo F-PolInSAR covariance matrix. The feasibility of forest height estimation is demonstrated using L-band data generated by the simulator PolSARProSim and X-band airborne data acquired by the East China Research Institute of Electronic Engineering, China Electronics Technology Group Corporation.
Stereo reconstruction from multiperspective panoramas.
Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard
2004-01-01
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
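A minimal numpy sketch of the bias/variance bookkeeping described above, assuming a black-box `reconstruct` function and zero-mean Gaussian noise added to the flux measurements:

```python
import numpy as np

def image_mse_decomposition(reconstruct, truth, flux, sigma, n_runs=100, seed=0):
    """Monte Carlo estimate of squared bias, variance, and their sum (MSE)
    over repeated reconstructions from noisy copies of the measured flux."""
    rng = np.random.default_rng(seed)
    recons = np.stack([reconstruct(flux + sigma * rng.standard_normal(flux.shape))
                       for _ in range(n_runs)])
    bias2 = ((recons.mean(axis=0) - truth) ** 2).mean()   # pixel-averaged bias^2
    var = recons.var(axis=0).mean()                       # pixel-averaged variance
    return bias2, var, bias2 + var
```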
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Zhu, Rixiang
2018-07-01
The Mengku iron deposit is one of the largest magnetite deposits in Xinjiang Province, northwestern China. It is important to accurately delineate the positions and shapes of concealed orebodies for drillhole layout and resource quantity evaluations. Total-field surface and three-component borehole magnetic measurements were carried out in the deposit. We made a joint inversion of the surface and borehole magnetic data to investigate the characteristics of the orebodies. We recovered the distributions of the magnetization intensity using a preconditioned conjugate gradient algorithm. Synthetic examples show that the reconstructed models of the joint inversion yield a better consistency with the true models than those recovered using independent inversion. By using joint inversion, more accurate information is obtained on the position and shape of the orebodies in the Mengku iron deposit. The magnetization distribution of Line 135 reveals that the major magnetite orebodies occur at 200-400 m depth with a lenticular cross-section dipping north-east. The orebodies of Line 143 are modified and buried at 100-200 m depth with an elliptical cross-section caused by fault activities at north-northeast directions. This information is verified by well logs. The borehole component anomalies are combined with surface data to reconstruct the physical property model and improve the ability to distinguish vertical and horizontal directions, which provides an effective approach to prospect buried orebodies.
Imaging of isotropic and anisotropic conductivities from power densities in three dimensions
NASA Astrophysics Data System (ADS)
Monard, François; Rim, Donsub
2018-07-01
We present numerical reconstructions of anisotropic conductivity tensors in three dimensions, from knowledge of a finite family of power density functionals. Such a problem arises, for instance, in the coupled-physics imaging modality ultrasound-modulated electrical impedance tomography. We improve on the algorithms previously derived in Bal et al (2013 Inverse Problems Imaging 7 353–75) and Monard and Bal (2013 Commun. PDE 38 1183–207) for both the isotropic and anisotropic cases, and we address the well-known issue of vanishing determinants in particular. The algorithm is implemented, and we provide numerical results that illustrate the improvements.
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Saillard, Marc
2005-12-01
This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, measurements in both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions: A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method to TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is plugged in. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM). It suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from the adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of Green's function, even though the media involved are lossless.
A Fourier-Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both real and imaginary parts of the permittivity if no prior information is included. M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, an adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any other technique devoted to improving resolution remains to be studied. As done by some other contributors, the model of the incident field is chosen to fit the Fourier-Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. They suggest minimizing a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods, based on the second-order Born approximation, which is nonlinear in terms of contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost. O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to collect the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. The reliability of the data is demonstrated through comparisons between measured and computed scattered fields for both fundamental polarizations.
In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level sets representation. It is assumed that the constitutive materials of the obstacles under test are known and the shape is retrieved. Two approaches are reported. In the first one the obstacles of different constitutive materials are represented in a single level set, while in the second approach several level sets are combined. The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum phase functions to 2D problems. In the kind of inverse problems we are concerned with, it consists of separating the contributions from the field and from the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is a solution deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors combine the fast convergence of the DTA method for generating an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. Conclusion In this special section various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from `aspect-limited' configurations. Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem. Acknowledgments The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.
Experimental investigations on airborne gravimetry based on compressed sensing.
Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun
2014-03-18
Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high-accuracy, large-scale gravity anomaly data reconstruction. Based on airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry data under the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm, Orthogonal Matching Pursuit (OMP), to solve the corresponding minimization problem. The test results reveal that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal, and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results show that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
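As a hedged sketch of the greedy solver the abstract describes, the NumPy routine below implements generic OMP on a random sensing matrix; the matrix, the dimensions, and the roughly 14% sampling ratio in the toy check are illustrative assumptions, not the SGA-WZ processing chain.

```python
# Minimal Orthogonal Matching Pursuit (OMP) sketch; A and all sizes are made up.
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Greedy recovery of a k-sparse x from y ~ A @ x."""
    n = A.shape[1]
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 36, 4                     # 36/256 is about 14% sampling
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print("max recovery error:", np.abs(x_hat - x_true).max())
```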
The shifting zoom: new possibilities for inverse scattering on electrically large domains
NASA Astrophysics Data System (ADS)
Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien
2017-04-01
Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications, such as the investigation of cultural heritage, the characterization of foundations or sub-services, and the identification of unexploded ordnance [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and require neither the inversion of a matrix nor the calculation of the elements of a matrix. In fact, they are essentially based on the adjoint of the linearised scattering operator, which in the end allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This makes it possible to apply a microwave tomography algorithm even to large investigation domains. However, the joining side by side of sequential investigation domains introduces a problem of limited (and asymmetric) maximum view angle with regard to targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than adjacent investigation domains do, but the extra time actually required is minimal because the matrix to be inverted, as well as its singular value decomposition, is calculated once and for all: what is repeated more times is only a fast matrix-vector multiplication. References: [1] M. Pieraccini, L. Noferini, D. Mecatti, C. Atzeni, R. Persico, F. Soldovieri, "Advanced Processing Techniques for Step-frequency Continuous-Wave Penetrating Radar: the Case Study of 'Palazzo Vecchio' Walls (Firenze, Italy)", Research on Nondestructive Evaluation, vol. 17, pp. 71-83, 2006. [2] N. Masini, R. Persico, E. Rizzo, A. Calia, M. T. Giannotta, G. Quarta, A. Pagliuca, "Integrated Techniques for Analysis and Monitoring of Historical Monuments: the case of S. Giovanni al Sepolcro in Brindisi (Southern Italy)", Near Surface Geophysics, vol. 8 (5), pp. 423-432, 2010. [3] E. Pettinelli, A. Di Matteo, E. Mattei, L. Crocco, F. Soldovieri, J. D. Redman, and A. P. Annan, "GPR response from buried pipes: Measurement on field site and tomographic reconstructions", IEEE Transactions on Geoscience and Remote Sensing, vol. 47, n. 8, pp. 2639-2645, Aug. 2009. [4] O. Lopera, E. C. Slob, N. Milisavljevic and S. Lambot, "Filtering soil surface and antenna effects from GPR data to enhance landmine detection", IEEE Transactions on Geoscience and Remote Sensing, vol. 45, n. 3, pp. 707-717, 2007. [5] R. Persico, "Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing", Wiley, 2014. [6] R. Persico, J. Sala, "The problem of the investigation domain subdivision in 2D linear inversions for large scale GPR data", IEEE Geoscience and Remote Sensing Letters, vol. 11, n. 7, pp. 1215-1219, doi 10.1109/LGRS.2013.2290008, July 2014. [7] R. Persico, F. Soldovieri, S. Lambot, "Shifting zoom in 2D linear inversions performed on GPR data gathered along an electrically large investigation domain", Proc. 16th International Conference on Ground Penetrating Radar GPR2016, Hong Kong, June 13-16, 2016.
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply the DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
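To make the pipeline concrete, here is a minimal sketch of steps (1) and (4) only, the block DCT and the delta coding of the DC components; the 8x8 block size is an assumption, and the high-frequency minimization, look-up table, and arithmetic coder are omitted.

```python
# Block DCT plus DC delta coding; block size 8 is an assumed choice.
import numpy as np
from scipy.fftpack import dct

def block_dct2(img, b=8):
    """Apply an orthonormal 2D DCT to each non-overlapping b x b block."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(0, h, b):
        for j in range(0, w, b):
            blk = img[i:i+b, j:j+b].astype(float)
            out[i:i+b, j:j+b] = dct(dct(blk, axis=0, norm='ortho'),
                                    axis=1, norm='ortho')
    return out

img = np.random.randint(0, 256, (64, 64))
coeffs = block_dct2(img)
dc = coeffs[::8, ::8].ravel()                 # DC component of each block
# Step (4): keep the first DC verbatim, then store successive differences.
dc_delta = np.concatenate(([dc[0]], np.diff(dc)))
```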
NASA Astrophysics Data System (ADS)
Fang, Jinwei; Zhou, Hui; Zhang, Qingchen; Chen, Hanming; Wang, Ning; Sun, Pengyuan; Wang, Shucheng
2018-01-01
It is critically important to assess the effectiveness of elastic full waveform inversion (FWI) algorithms when FWI is applied to real land seismic data that include strong surface waves and multiples related to the air-earth boundary. In this paper, we review the realization of the free-surface boundary condition in staggered-grid finite-difference (FD) discretizations of the elastic wave equation, and analyze the impact of the free surface on FWI results. To reduce input/output (I/O) operations in the gradient calculation, we adopt the boundary value reconstruction method to rebuild the source wavefields during the backward propagation of the residual data. A time-domain multiscale inversion strategy is conducted by using a convolutional objective function, and a multi-GPU parallel programming technique is used to further accelerate our elastic FWI. Forward simulation and elastic FWI examples without and with the free surface are shown and analyzed, respectively. Numerical results indicate that elastic FWI without the free surface incorporated fails to recover a good inversion result from the Rayleigh-wave-contaminated observed data. By contrast, when the free surface is incorporated into FWI, the inversion results become better. We also discuss the dependency of the Rayleigh-waveform-incorporated FWI on the accuracy of the initial models, especially the accuracy of the shallow part of the initial models.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
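The projection the authors identify as a fundamental ingredient can be sketched for the Schatten-1 (nuclear) norm: project the singular values onto an l1 ball, then recompose the matrix. The routine below is a minimal NumPy illustration of that vector-to-matrix link, not the paper's full primal-dual solver; the radius and test matrix are arbitrary.

```python
# Projecting a matrix onto a nuclear-norm ball via an l1 projection of
# its singular values (standard sort-based l1 routine).
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a nonnegative vector onto {x : ||x||_1 <= radius}."""
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - radius) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(M, radius):
    """Project M onto the Schatten-1 (nuclear) norm ball of the given radius."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, radius)) @ Vt

M = np.random.randn(5, 5)
P = project_schatten1_ball(M, 1.0)
print(np.linalg.svd(P, compute_uv=False).sum())   # <= 1.0 up to rounding
```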
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, an orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
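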
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
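As a simplified stand-in for the split Bregman solver used in the paper, the toy below minimizes a least-squares-plus-smoothed-TV objective by plain gradient descent, just to show how the TV penalty suppresses high-frequency artifacts; all parameter values are arbitrary and the forward projector is omitted (pure denoising).

```python
# Smoothed-TV denoising by gradient descent; a stand-in for split Bregman.
import numpy as np

def smoothed_tv_denoise(f, lam=0.15, eps=1e-3, step=0.2, iters=300):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # Periodic backward differences approximate the divergence of (px, py).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(0)
phantom = np.kron(np.eye(4), np.ones((16, 16)))      # blocky 64x64 test image
noisy = phantom + 0.3 * rng.standard_normal(phantom.shape)
clean = smoothed_tv_denoise(noisy)                   # edges kept, noise damped
```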
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed using thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. The sparse Jacobian with a block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of the sparse matrix reduction on reconstruction results.
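A minimal sketch of the two ingredients described above, thresholding the Jacobian into sparse storage and a conjugate-gradient solve of regularized normal equations, might look as follows; the shapes, threshold, and Tikhonov factor are assumptions, and the block-wise parallel partitioning of the paper is not reproduced.

```python
# Sparse Jacobian + CG on the regularized normal equations (toy shapes).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
J = rng.standard_normal((500, 2000)) * (rng.random((500, 2000)) < 0.02)
dv = rng.standard_normal(500)                          # boundary voltage data

Js = csr_matrix(np.where(np.abs(J) > 1e-6, J, 0.0))    # thresholded, sparse Jacobian
lam = 1e-2                                             # assumed Tikhonov factor

def normal_op(x):                                      # x -> (J^T J + lam I) x
    return Js.T @ (Js @ x) + lam * x

A = LinearOperator((Js.shape[1],) * 2, matvec=normal_op, dtype=float)
dsigma, info = cg(A, Js.T @ dv, maxiter=200)           # conductivity update
```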
Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.
Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H
2014-03-17
We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
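A hedged sketch of the multi-image RL idea: each iteration applies the usual multiplicative RL correction once per view and averages the corrections. Gaussian PSFs stand in for the real, view-dependent PSFs, and the averaging variant shown here is one simple choice among several.

```python
# Multiview Richardson-Lucy with assumed Gaussian PSFs.
import numpy as np
from scipy.ndimage import gaussian_filter

def rl_multiview(images, psf_sigmas, iters=50, eps=1e-12):
    est = np.full_like(images[0], images[0].mean())
    for _ in range(iters):
        corrections = []
        for img, s in zip(images, psf_sigmas):
            blurred = gaussian_filter(est, s)
            ratio = img / (blurred + eps)
            # Gaussian PSFs are symmetric, so the adjoint blur equals the blur.
            corrections.append(gaussian_filter(ratio, s))
        est = est * np.mean(corrections, axis=0)
    return est

truth = np.zeros((64, 64)); truth[20:28, 30:50] = 1.0
views = [gaussian_filter(truth, s) for s in (1.0, 3.0)]   # sharp + blurry view
merged = rl_multiview(views, (1.0, 3.0))
```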
Tavakoli, Behnoosh; Zhu, Quing
2013-01-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
Azimipour, Mehdi; Sheikhzadeh, Mahya; Baumgartner, Ryan; Cullen, Patrick K; Helmstetter, Fred J; Chang, Woo-Jin; Pashaie, Ramin
2017-01-01
We present our effort in implementing a fluorescence laminar optical tomography scanner which is specifically designed for noninvasive three-dimensional imaging of fluorescence proteins in the brains of small rodents. A laser beam, after passing through a cylindrical lens, scans the brain tissue from the surface while the emission signal is captured by the epi-fluorescence optics and is recorded using an electron multiplication CCD sensor. Image reconstruction algorithms are developed based on Monte Carlo simulation to model light–tissue interaction and generate the sensitivity matrices. To solve the inverse problem, we used the iterative simultaneous algebraic reconstruction technique. The performance of the developed system was evaluated by imaging microfabricated silicon microchannels embedded inside a substrate with optical properties close to the brain as a tissue phantom and ultimately by scanning brain tissue in vivo. Details of the hardware design and reconstruction algorithms are discussed and several experimental results are presented. The developed system can specifically facilitate neuroscience experiments where fluorescence imaging and molecular genetic methods are used to study the dynamics of the brain circuitries.
Fast local reconstruction by selective backprojection for low dose in dental computed tomography
NASA Astrophysics Data System (ADS)
Yan, Bin; Deng, Lin; Han, Yu; Zhang, Feng; Wang, Xian-Chao; Li, Lei
2014-10-01
The high radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce the radiation dose by reconstructing images from truncated data in a short scan. In dental CT, it can reduce the radiation dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other regions by using truncated projections. However, the limit of integration for backprojection varies per PI-line, resulting in low calculation efficiency and poor parallel performance. Recently, a tent BPF has been proposed to improve the calculation efficiency by rearranging the projection, but it involves a memory-consuming data rebinning process. Accordingly, the selective BPF (S-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection. The finite Hilbert inverse is then applied to each PI-line segment. S-BPF avoids the influence of the variable limit of integration by selective backprojection without additional time or memory cost. Both the simulation experiment and the real experiment demonstrated the higher reconstruction efficiency of S-BPF.
Lensfree diffractive tomography for the imaging of 3D cell cultures
NASA Astrophysics Data System (ADS)
Berdeu, Anthony; Momey, Fabien; Dinten, Jean-Marc; Gidrol, Xavier; Picollet-D'hahan, Nathalie; Allier, Cédric
2017-02-01
New microscopes are needed to help reach the full potential of 3D organoid culture studies by gathering large quantitative and systematic data over extended periods of time while preserving the integrity of the living sample. In order to reconstruct large volumes while preserving the ability to catch every single cell, we propose new imaging platforms based on lens-free microscopy, a technique that addresses these needs in the context of 2D cell culture by providing label-free and non-phototoxic acquisition of large datasets. We built lens-free diffractive tomography setups performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed dedicated 3D holographic reconstruction algorithms based on the Fourier diffraction theorem. Nonetheless, holographic setups do not record the phase of the incident wave front, and the biological samples in a Petri dish strongly limit the angular coverage. These limitations introduce numerous artefacts into the sample reconstruction. We developed several methods to overcome them, such as multi-wavelength imaging and iterative phase retrieval. The most promising technique currently developed is based on a regularized inverse-problem approach applied directly to the 3D volume to be reconstructed. 3D reconstructions were performed on several complex samples, such as 3D networks or spheroids embedded in capsules, with large reconstructed volumes of up to 25 mm3 while still being able to identify single cells. To our knowledge, this is the first time that such an inverse-problem approach has been implemented in the context of lens-free diffractive tomography, enabling the reconstruction of large, fully 3D volumes of unstained biological samples.
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posedness of the ECT inverse problem, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on the unknown pixels that exceed the reasonable permittivity range. This strategy not only stabilizes the convergence process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
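A minimal sketch of one INTAC-style iteration, assuming a generic linearized Jacobian in place of the paper's FEM forward model: a Tikhonov-regularized update followed by snapping out-of-range pixels to the known phase permittivities. The bounds below are made-up values for a gas-water system, and the function name is hypothetical.

```python
# One Tikhonov-regularized update with a hard two-phase constraint.
import numpy as np

def intac_step(eps, J, c_meas, c_sim, alpha, eps_lo=1.0, eps_hi=80.0):
    """Tikhonov update of permittivity image eps, then enforce known phases."""
    n = J.shape[1]
    delta = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ (c_meas - c_sim))
    eps = eps + delta
    # Pixels outside the admissible range snap to the known phase permittivities;
    # eps_lo/eps_hi are assumed gas/water values, not taken from the paper.
    return np.clip(eps, eps_lo, eps_hi)
```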
Image reconstruction from cone-beam projections with attenuation correction
NASA Astrophysics Data System (ADS)
Weng, Yi
1997-07-01
In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometry to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired by using a circle-and-line and a helix orbit on a clinical SPECT system. An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.
A fast rebinning algorithm for 3D positron emission tomography using John's equation
NASA Astrophysics Data System (ADS)
Defrise, Michel; Liu, Xuan
1999-08-01
Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.
Development of non-intrusive diagnostic techniques based on optical tomography
NASA Astrophysics Data System (ADS)
Dubot, Fabien
Whether in industrial processes or in medical imaging, the last two decades have seen a growing development of optical diagnostic techniques. The appeal of these methods rests mainly on the facts that they are completely non-invasive, that they use radiation sources that are harmless to humans and the environment, and that they are relatively inexpensive and easy to implement compared with other imaging techniques. One of these techniques is Diffuse Optical Tomography (DOT). This three-dimensional imaging method consists of characterizing the radiative properties of a Semi-Transparent Medium (STM) from near-infrared optical measurements obtained with a set of sources and detectors located on the boundary of the probed domain. It relies in particular on a forward model of light propagation in the STM, providing the predictions, and on an algorithm minimizing a cost function combining the predictions and the measurements, allowing the reconstruction of the parameters of interest. In this work, the forward model is the diffuse approximation of the radiative transfer equation in the frequency domain, while the parameters of interest are the spatial distributions of the absorption and reduced scattering coefficients. This thesis is devoted to the development of a robust inverse method for solving the DOT problem in the frequency domain. To meet this objective, the work is structured in three parts, which constitute the main axes of the thesis. First, a comparison of the damped Gauss-Newton and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms is proposed in the two-dimensional case. Two regularization methods are combined for each of the two algorithms: mesh-based reduction of the dimension of the control space together with Tikhonov penalization for the damped Gauss-Newton algorithm, and mesh-based regularization together with the use of Sobolev gradients, uniform or spatially dependent, in the extraction of the gradient of the cost function, for the BFGS method. The numerical results indicate that the BFGS algorithm outperforms the damped Gauss-Newton algorithm in terms of the quality of the reconstructions obtained, the computation time, and the ease of selecting the regularization parameter. Second, a study of the quasi-independence of the optimal Tikhonov penalization parameter with respect to the dimension of the control space in inverse problems of estimating spatially dependent functions is carried out. This study follows an observation made in the first part of this work, where the Tikhonov parameter, determined by the L-curve method, turns out to be independent of the dimension of the control space in the underdetermined case. This hypothesis is demonstrated theoretically and then verified numerically, first on a linear inverse heat conduction problem and then on the nonlinear inverse DOT problem. The numerical verification relies on the determination of an optimal Tikhonov parameter, defined as the one that minimizes the discrepancies between the targets and the reconstructions.
The theoretical demonstration relies on Morozov's discrepancy principle in the linear case, while in the nonlinear case it rests essentially on the hypothesis that the radiative functions to be reconstructed are normally distributed random variables. In conclusion, the thesis demonstrates that the Tikhonov parameter can be determined using a parametrization of the control variables associated with a coarse mesh, in order to reduce computation times. Third, a wavelet-based multiscale inverse method combined with the BFGS algorithm is developed. This method, which relies on a reformulation of the original inverse problem into a sequence of inverse subproblems from the largest scale to the smallest using the wavelet transform, makes it possible to cope with the local convergence property of the optimizer and with the presence of numerous local minima in the cost function. The numerical results show that the proposed method is more stable with respect to the initial estimate of the radiative properties and provides more accurate final reconstructions than the ordinary BFGS algorithm, while requiring similar computation times. The results of this work are presented in this thesis in the form of four articles. The first article was accepted in the International Journal of Thermal Sciences, the second is accepted in Inverse Problems in Science and Engineering, the third is accepted in the Journal of Computational and Applied Mathematics, and the fourth has been submitted to the Journal of Quantitative Spectroscopy & Radiative Transfer. Ten other articles have been published in peer-reviewed conference proceedings. These articles are available in pdf format on the website of the t3e research chair (www.t3e.info).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca
2014-11-01
Purpose: The scanning beam digital x-ray system (SBDX) is an inverse geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifacts due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients’ CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by the mismatches between the real object and the prior CT volume. Conclusions: The proposed prior-CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.
A new inversion algorithm for HF sky-wave backscatter ionograms
NASA Astrophysics Data System (ADS)
Feng, Jing; Ni, Binbin; Lou, Peng; Wei, Na; Yang, Longquan; Liu, Wen; Zhao, Zhengyu; Li, Xue
2018-05-01
The HF sky-wave backscatter sounding system is capable of measuring large-scale, two-dimensional (2-D) distributions of ionospheric electron density. The leading edge (LE) of a backscatter ionogram (BSI) is widely used for ionospheric inversion since it is hardly affected by any factors other than ionospheric electron density. Traditional BSI inversion methods have failed to distinguish LEs associated with different ionospheric layers, and simply utilize the minimum group path of each operating frequency, which generally corresponds to the LE associated with the F2 layer. Consequently, while the inversion results can provide accurate profiles of the F region below the F2 peak, the diagnostics may not be so effective for other ionospheric layers. In order to resolve this issue, we present a new BSI inversion method using LEs associated with different layers, which can further improve the accuracy of the electron density distribution, especially the profiles of the ionospheric layers below the F2 region. The efficiency of the algorithm is evaluated by computing the mean and the standard deviation of the differences between inverted parameter values and true values obtained from both vertical and oblique incidence sounding. Test results clearly show that the developed method outputs more accurate electron density profiles, owing to its improved ability to acquire the profiles of the layers below the F2 region. Our study can further improve current BSI inversion methods for the reconstruction of the 2-D electron density distribution in a vertical plane aligned with the direction of sounding.
NASA Astrophysics Data System (ADS)
Yu, Jiao; Nie, Erwei; Zhu, Yanying; Hong, Yi
2018-03-01
Biodegradable elastomeric scaffolds for soft tissue repair represent a growing area of biomaterials research. Mechanical strength is one of the key factors to consider in the evaluation of candidate materials and designs for tissue scaffolds. It is desirable to develop non-invasive methods for evaluating the mechanical properties of scaffolds, which would provide options for monitoring temporal mechanical property changes in situ. In this paper, we conduct an in silico simulation and an in vitro evaluation of an elastomeric scaffold using novel ultrasonic shear wave imaging (USWI). The scaffold is fabricated from a biodegradable elastomer, poly(carbonate urethane) urea, using a salt-leaching method. A numerical simulation is performed to test the robustness of the developed inversion algorithm for elasticity map reconstruction, which is then implemented in the phantom experiment. The generation and propagation of shear waves in a homogeneous tissue-mimicking medium with a circular scaffold inclusion is simulated, and the elasticity map is well reconstructed. A PVA phantom experiment is performed to test the ability of USWI combined with the inversion algorithm to non-invasively characterize the mechanical properties of a porous, biodegradable elastomeric scaffold. The elastic properties of the tested scaffold can be easily differentiated from the surrounding medium in the reconstructed image. The ability of the developed method to identify the edge of the scaffold and characterize the elasticity distribution is demonstrated. Preliminary results in this pilot study support the idea of applying the USWI-based method for non-invasive elasticity characterization of tissue scaffolds.
Remote-sensing image encryption in hybrid domains
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing image is the main means of acquiring information from satellites, which always contain some confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both spatial domain and transform domain. First, the low-pass subband coefficients of image DWT (discrete wavelet transform) decomposition are sorted by a PWLCM system in transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with 2D (two-dimensional) Logistic map and XOR operation in spatial domain. The experiment results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy requirements in practice.
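As a toy illustration of the spatial-domain diffusion stage only, the sketch below XORs image bytes with a keystream generated by a 1D logistic map; the paper's DWT coefficient sorting by a PWLCM system and its 2D logistic map are not reproduced, and the key values x0 and r are made up.

```python
# Chaotic XOR diffusion with a 1D logistic map (illustrative key values).
import numpy as np

def logistic_keystream(n, x0=0.3456, r=3.99):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)               # chaotic logistic iteration
        out[i] = int(x * 256) & 0xFF        # quantize each state to one byte
    return out

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks                            # diffusion by XOR
assert np.array_equal(cipher ^ ks, img)      # XOR is its own inverse
```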
A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering
NASA Astrophysics Data System (ADS)
Griesmaier, Roland; Schmiedecke, Christian
2017-03-01
We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far field data-sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to wavelength we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and the positions of an ensemble of scatterers from the given measurements. Furthermore we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potentials and limitations of this approach.
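For orientation, the sketch below implements the classical single-frequency far-field MUSIC indicator on which such methods build, using many incident and receiver directions and unit scattering strengths; the paper's actual setting (one incident direction, several frequencies, few receivers) requires the more careful analysis it provides.

```python
# Classical far-field MUSIC for small scatterers in 2D (Born regime).
import numpy as np

k = 2 * np.pi                                   # wavenumber (wavelength 1)
scatterers = np.array([[0.5, 0.2], [-0.4, -0.3]])
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def steering(z):                                # receiver-side sampling vector
    return np.exp(-1j * k * dirs @ z)

# Multistatic matrix F[l, j] = sum_m exp(i k (d_j - x_l) . z_m), unit strengths.
F = sum(np.outer(steering(z), np.exp(1j * k * dirs @ z)) for z in scatterers)

U, s, _ = np.linalg.svd(F)
Us = U[:, s > 1e-8 * s[0]]                      # signal subspace
def music_indicator(z):
    g = steering(z)
    resid = g - Us @ (Us.conj().T @ g)          # component in the noise subspace
    return 1.0 / np.linalg.norm(resid)

grid = np.linspace(-1, 1, 81)
img = np.array([[music_indicator(np.array([x, y])) for x in grid] for y in grid])
# img peaks at the scatterer locations (0.5, 0.2) and (-0.4, -0.3).
```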
Joint reconstruction of x-ray fluorescence and transmission tomography
Di, Zichao Wendy; Chen, Si; Hong, Young Pyo; Jacobsen, Chris; Leyffer, Sven; Wild, Stefan M.
2017-01-01
X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy resolution of the detector. Another limitation is the effect of self-absorption, which can lead to inaccurate results with dense samples. To recover a higher quality elemental map, we combine x-ray fluorescence detection with a second data modality: conventional x-ray transmission tomography using absorption. By using these combined signals in a nonlinear optimization-based approach, we demonstrate the benefit of our algorithm on real experimental data and obtain an improved quantitative reconstruction of the spatial distribution of dominant elements in the sample. Compared with single-modality inversion based on x-ray fluorescence alone, this joint inversion approach reduces ill-posedness and should result in improved elemental quantification and better correction of self-absorption. PMID:28788848
An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework
NASA Astrophysics Data System (ADS)
Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong
2016-07-01
This paper describes the first part of a series of investigations to develop algorithms for the simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of the available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of aerosol optical properties, in which the major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the inversion framework. The next step of using this framework to study the aerosol information content in GEO-TASO measurements is also discussed.
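The surface-reflectance step can be illustrated in isolation: reconstruct spectra from their first six principal components and check the residual, as in the self-consistency test. Random synthetic spectra below stand in for the USGS library used in the paper.

```python
# Rank-6 PCA reconstruction of reflectance spectra (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(1)
spectra = rng.random((200, 120)).cumsum(axis=1)       # smooth-ish fake spectra
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

pcs = Vt[:6]                                          # first six PCs
coeffs = (spectra - mean) @ pcs.T                     # weighting coefficients
recon = mean + coeffs @ pcs                           # rank-6 reconstruction
rel_err = np.linalg.norm(recon - spectra) / np.linalg.norm(spectra)
print(f"relative reconstruction error: {rel_err:.2%}")
```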
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
Purpose: To develop a tumor response model which could be used to compute the tumor hypoxic fraction using serial volumetric tumor imaging. This algorithm may be used for treatment response assessment and also for guidance of more expensive PET imaging of hypoxia. Methods: The previously developed two-level cell population tumor response model was modified to include a third cell level describing hypoxic and necrotic cells. This third level was considered constant during the radiotherapy treatment; therefore, the inclusion of an additional parameter did not compromise the stability of fitting the model to imaging data. Fitting the model to serial volumetric imaging data was performed using a least squares objective function and a simulated annealing algorithm. The problem of reconstructing radiobiological parameters from serial imaging data was considered as an ill-posed inverse problem described by a Fredholm integral equation of the first kind. Variational regularization was used to stabilize the solutions. Results: To evaluate the performance of the algorithm, we used a set of serial CT imaging data on tumor volume for 14 head and neck cancer patients. The hypoxic fractions were reconstructed for each patient, and the distribution of hypoxic fractions was compared to the distribution of initial hypoxic fractions previously measured using a histograph. The measured distribution and the distribution reconstructed from imaging data are in good agreement. The reconstructed distribution of cell surviving fraction was also in better agreement with in vitro data than that previously obtained using the two-level cell population model. Conclusion: Our results indicate that it is possible to evaluate the initial hypoxic tumor fraction using serial volumetric imaging and a tumor response model. This algorithm can be used for treatment response assessment and guidance of more expensive PET imaging.
Reconstruction of color images via Haar wavelet based on digital micromirror device
NASA Astrophysics Data System (ADS)
Liu, Xingjiong; He, Weiji; Gu, Guohua
2015-10-01
A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, projected onto the color target image by making use of structured illumination, including red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector which has no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization processes are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. The monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, by using the inverse Haar wavelet transform algorithm. A color fusion algorithm is then applied to the three monochrome grayscale images to obtain the final color image. According to the imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images have good quality. This article makes use of Haar wavelet reconstruction, reducing the sampling rate considerably. It provides color information without compromising the resolution of the final image.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
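The core device, a reduced-rank representation obtained by truncating a singular-value decomposition, can be sketched as follows; a random low-rank-plus-noise matrix stands in for the scattering operator, and the truncation threshold is arbitrary.

```python
# Truncated SVD: reduced-rank operator and the stable pseudoinverse it yields.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 60))
A += 0.01 * rng.standard_normal((60, 60))             # measurement noise

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 0.05 * s[0]))                      # retained rank
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]              # reduced-rank operator
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T # regularized pseudoinverse

print("rank kept:", r,
      " rel. approx. error:", np.linalg.norm(A - A_r) / np.linalg.norm(A))
```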
Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans
2010-01-01
The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.
Real-time marker-free motion capture system using blob feature analysis
NASA Astrophysics Data System (ADS)
Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho
2005-02-01
This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions by using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. Then, the 3-dimensional positions of the end-effectors are restored and tracked by using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed by using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct, in real time, the motions of many people wearing various clothes.
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; FINNEY, Charles E A; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating, while materials such as metal are nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone-beam x-ray CT and other inverse problems.
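For reference, plain SIRT with an optional Tikhonov term can be sketched as below; this is the textbook baseline, not PSIRT itself, and the damping constant is illustrative:

```python
import numpy as np

def sirt(A, b, n_iter=200, relax=1.0, tik=0.0):
    """Simultaneous iterative reconstruction technique for min ||Ax-b||^2,
    with an optional Tikhonov penalty tik*||x||^2 folded into the update."""
    m, n = A.shape
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(n)
    for _ in range(n_iter):
        r = b - A @ x                            # current residual
        x = x + relax * C * (A.T @ (R * r)) - relax * tik * x
    return x

A = np.abs(np.random.rand(50, 30)); x_true = np.random.rand(30)
print(np.linalg.norm(sirt(A, A @ x_true) - x_true))  # small after 200 iterations
```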
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
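The soft-thresholding operation referred to above is the proximal map of the l1 penalty on wavelet coefficients. A minimal sketch follows; the generic ista_step wrapper and its forward/adjoint arguments are illustrative placeholders, not the paper's primal-dual iteration:

```python
import numpy as np

def soft_threshold(w, mu):
    """Proximal map of mu*||w||_1: shrink each wavelet coefficient toward
    zero by mu and zero out the small ones."""
    return np.sign(w) * np.maximum(np.abs(w) - mu, 0.0)

def ista_step(w, forward, adjoint, b, mu, step):
    """One generic iterative-shrinkage step for min_w ||K(w) - b||^2 + mu*||w||_1,
    written abstractly with forward/adjoint callables (placeholders)."""
    grad = adjoint(forward(w) - b)          # gradient of the data-fidelity term
    return soft_threshold(w - step * grad, step * mu)

w = np.array([-2.0, -0.3, 0.1, 1.5])
print(soft_threshold(w, 0.5))               # -> [-1.5, 0.0, 0.0, 1.0]
```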
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function comprises a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be calculated easily using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, resulting in fast and robust estimation from low-dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our findings.
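The dual formulation of TV minimization that the authors draw on is easiest to see in the pure denoising setting. Chambolle's dual projection algorithm is a standard instance, sketched below; this is only the denoising building block, not the paper's statistical tomography algorithm:

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]        # forward differences
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):                              # negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam, n_iter=100, tau=0.125):
    """Chambolle's dual algorithm for min_u ||u-f||^2/(2*lam) + TV(u):
    iterate on the dual field p, then set u = f - lam*div(p)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)

img = np.random.rand(64, 64)
print(tv_denoise(img, lam=0.2).shape)
```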
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
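The dimensionality-reduction idea can be sketched generically: expand the source vector in a small smooth basis and fit only the coefficients. In the sketch below the basis matrix is random for illustration; Harmony's actual basis is built from anatomical spherical harmonics:

```python
import numpy as np

# Reduced-basis least-squares: instead of solving for thousands of
# single-dipole amplitudes s in y = L s, expand s = B c in a small basis B
# and fit only the coefficients c, which is a well-posed small problem.
rng = np.random.default_rng(0)
n_sensors, n_sources, n_basis = 64, 5000, 40
L = rng.standard_normal((n_sensors, n_sources))   # lead-field matrix (placeholder)
B = rng.standard_normal((n_sources, n_basis))     # smooth basis (placeholder)
y = rng.standard_normal(n_sensors)                # measurements (placeholder)

G = L @ B                                         # reduced forward operator
c = np.linalg.lstsq(G, y, rcond=None)[0]          # small least-squares fit
s = B @ c                                         # smooth distributed solution
print(s.shape)
```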
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) technique of optimization, based on the steepest descent algorithm, is known to perform poorly and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are compared with the results of existing inversion approaches, with which they are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs is exploited to recover the image from significantly undersampled k-space. The challenge, however, is that the random undersampling introduces incoherent artifacts, adding noise-like interference to the sparsely represented image. Existing recovery algorithms are not capable of fully removing these artifacts, so a denoising procedure is needed to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
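Singular value thresholding itself is compact: it is the proximal map of the nuclear norm. A minimal sketch, with an illustrative threshold on a toy low-rank-plus-noise matrix:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding, the proximal map of tau*||X||_* :
    keep only dominant singular values, giving a low-rank denoised matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)              # shrink, drop the small ones
    return (U * s) @ Vt

# toy: rank-2 'image' plus noise, denoised by SVT (tau chosen by hand here)
rng = np.random.default_rng(1)
low_rank = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))
noisy = low_rank + 0.1 * rng.standard_normal((64, 64))
print(np.linalg.matrix_rank(svt(noisy, tau=2.0)))   # close to 2
```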
NASA Astrophysics Data System (ADS)
Krysta, M.; Kusmierczyk-Michulec, J.; Nikkinen, M.; Carter, J. A.
2011-12-01
In order to support its mission of monitoring compliance with the treaty banning nuclear explosions, the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operates four global networks of, respectively, seismic, infrasound, and hydroacoustic sensors and air samplers accompanied by radionuclide detectors. The role of the International Data Centre (IDC) of the CTBTO is to associate the signals detected in the monitoring networks with the physical phenomena that emitted them, by forming events. One aspect of associating detections with emitters is the problem of inferring the sources of radionuclides from the detections made at CTBTO radionuclide network stations. This task is particularly challenging because the average transport distance between a release point and the detectors is large. Complex processes of turbulent diffusion are responsible for efficient mixing and consequently for decreasing the information content of detections with increasing distance from the source. The problem is generally addressed in a two-step process. In the first step, an atmospheric transport model establishes a link between the detections and the regions of possible source location. In the second step, this link is inverted to infer source information from the detections. In this presentation, we will discuss enhancements of the presently used regression-based inversion algorithm to reconstruct a source of radionuclides. To this end, modern inversion algorithms accounting for prior information and appropriately regularizing an under-determined reconstruction problem will be briefly introduced. Emphasis will be on the CTBTO context and the choice of inversion methods. An illustration of the first tests will be provided using a framework of twin experiments, i.e. fictitious detections in the CTBTO radionuclide network generated with an atmospheric transport model.
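The abstract does not specify which regularization is used; as one standard possibility for such under-determined problems, a Tikhonov-regularized least-squares inversion with a prior can be sketched as below. The matrix, sizes, and prior are invented for illustration and do not come from the CTBTO system:

```python
import numpy as np

# Sketch: detections y relate to a gridded source field q through a
# source-receptor matrix M (computed by an atmospheric transport model).
# With far fewer detections than grid cells, a Tikhonov penalty pulls the
# solution toward a prior q0.
rng = np.random.default_rng(0)
n_det, n_cells = 30, 2000
M = rng.random((n_det, n_cells))                  # placeholder sensitivities
q_true = np.zeros(n_cells); q_true[700] = 5.0     # single release point
y = M @ q_true + 0.01 * rng.standard_normal(n_det)

lam, q0 = 1e-2, np.zeros(n_cells)
# minimize ||M q - y||^2 + lam*||q - q0||^2 via the small dual system
w = np.linalg.solve(M @ M.T + lam * np.eye(n_det), y - M @ q0)
q = q0 + M.T @ w
print(int(np.argmax(q)))                          # ideally near cell 700
```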
Optimal structure and parameter learning of Ising models
Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...
2018-03-16
Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
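The local optimization problem solved per spin can be sketched in its simplest form, as below. This is a bare version of the screening objective without the l1 penalty that the published estimator adds, and the toy data contain no real couplings:

```python
import numpy as np
from scipy.optimize import minimize

def iso_node(samples, i):
    """Interaction screening for one spin i: minimize the empirical average
    of exp(-s_i * (theta . s_rest)) over the couplings theta to spin i."""
    s_i = samples[:, i]
    s_rest = np.delete(samples, i, axis=1)
    def objective(theta):
        return np.mean(np.exp(-s_i * (s_rest @ theta)))  # convex in theta
    theta0 = np.zeros(s_rest.shape[1])
    return minimize(objective, theta0, method="L-BFGS-B").x

# toy data: +/-1 samples from independent spins, so couplings should be ~0
rng = np.random.default_rng(0)
samples = rng.choice([-1.0, 1.0], size=(500, 6))
print(np.round(iso_node(samples, 0), 2))
```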
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-squares QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method and served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm, even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. To gain insight into what governs the reconstruction accuracy, the effects of the choice of detection direction and of the angle between two detection directions were also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
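For the linear stage of such retrievals, scipy's off-the-shelf LSQR solver illustrates the kind of damped least-squares solve involved; the operator and data below are random stand-ins, not the LOS radiation model of the paper:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# A maps the unknown field (e.g. discretized temperatures) to boundary
# intensities b; both are placeholders here.
rng = np.random.default_rng(0)
A = rng.random((120, 80))
x_true = rng.random(80)
b = A @ x_true + 1e-3 * rng.standard_normal(120)

# damp adds mild Tikhonov-style regularization inside LSQR
x, istop, itn, r1norm = lsqr(A, b, damp=1e-3)[:4]
print(itn, np.linalg.norm(x - x_true))
```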
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet (DUV) light is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CD), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures for evaluating the quality of the manufacturing process. To measure these quantities, a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite-dimensional nonlinear operator equation. Reformulating the problem as an optimization problem, a vast number of numerical schemes can be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, using a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed using light in the DUV range. We then determine the expected uncertainties of geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of the input data. In the last step, we use the measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm. The results are compared to the outcome of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
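The Gauss-Newton iteration underlying such SQP variants can be sketched generically. The two-parameter "model" below is invented for illustration and has nothing to do with the actual FEM-based diffraction simulation; there are no constraints or line search here:

```python
import numpy as np

def gauss_newton(f, jac, p0, y, n_iter=20):
    """Bare Gauss-Newton for min_p ||f(p) - y||^2."""
    p = p0.astype(float)
    for _ in range(n_iter):
        r = f(p) - y                                   # residual
        J = jac(p)                                     # Jacobian of the forward model
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]   # least-squares Newton step
    return p

# toy forward model: a smooth function of two 'geometry' parameters
def model(p):
    h, w = p
    t = np.linspace(0, 1, 50)
    return np.exp(-h * t) * np.cos(w * t)

def jac(p, eps=1e-6):                                  # finite-difference Jacobian
    J = np.empty((50, 2))
    for k in range(2):
        dp = np.zeros(2); dp[k] = eps
        J[:, k] = (model(p + dp) - model(p - dp)) / (2 * eps)
    return J

y = model(np.array([0.8, 6.0]))
print(gauss_newton(model, jac, np.array([0.5, 5.0]), y))  # -> approx [0.8, 6.0]
```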
Ionospheric tomography over South Africa: Comparison of MIDAS and ionosondes measurements
NASA Astrophysics Data System (ADS)
Giday, Nigussie M.; Katamzi, Zama T.; McKinnell, Lee-Anne
2016-01-01
This paper aims to show the results of an ionospheric tomography algorithm called the Multi-Instrument Data Analysis System (MIDAS) over the South African region. Recorded data from a network of 49-53 Global Positioning System (GPS) receivers over the South African region were used as input for the inversion. The inversion was made for April, July, October and December, representing the four distinct seasons (autumn, winter, spring and summer, respectively) of the year 2012. MIDAS reconstructions were validated by comparing the maximum electron density of the F2 layer (NmF2) and peak height (hmF2) values predicted by MIDAS to those derived from three South African ionosonde measurements. The diurnal and seasonal trends of the MIDAS NmF2 values were in good agreement with the respective NmF2 values derived from the ionosondes. In addition, good agreement was found between the two measurements, with minimum and maximum coefficients of determination (r2) between 0.84 and 0.96 at all stations and validation days. The seasonal trend of the NmF2 values over the South African region was reproduced by this inversion in good agreement with the ionosonde measurements. Moreover, a comparison of the International Reference Ionosphere (IRI-2012) model NmF2 values with the respective ionosonde-derived NmF2 values showed higher deviation than a similar comparison between the MIDAS reconstruction and the ionosonde measurements. However, the monthly averaged hmF2 values derived from the IRI-2012 model showed better agreement with the ionosonde-derived hmF2 values than the respective MIDAS-reconstructed hmF2 values. The performance of the MIDAS reconstruction was observed to deteriorate with increased geomagnetic activity. MIDAS-reconstructed electron densities were slightly elevated during the three storm periods studied (24 April, 15 July and 8 October), in good agreement with the ionosonde measurements.
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves, these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photo-acoustic tomography require reconstructing the initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free-space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists in solving the initial/boundary value problem for the wave equation back in time on the interval [0, T], with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large, one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
NASA Astrophysics Data System (ADS)
Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guamg-Ming; Li, Zhong
2017-06-01
Multi-wave exploration is an effective means of improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted-wave data are characterized by a low signal-to-noise ratio and low resolution, because conventional deconvolution technology is easily affected by frequency-band limits, leaving limited scope for improving resolution. Spectral inversion techniques can identify thin layers down to λ/8, and this breakthrough beyond band-range limits has greatly improved seismic resolution. The difficulty associated with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on how to improve the vertical resolution of the converted PS-wave for multi-wave data processing. Based on previous research, we propose a least squares inversion algorithm with a total variation constraint, in which we use the total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. Here, we simulate the Gaussian-fitted amplitude spectrum to obtain broadband wavelet data, which we then process to obtain a higher-resolution converted wave. We successfully applied the proposed inversion technology in the processing of high-resolution data from the Penglai region to obtain higher-resolution converted-wave data, which we then verified in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.
Louis, A. K.
2006-01-01
Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source vanish entirely outside of the reconstruction area. For the two-dimensional case, this paper provides special sets of functions that include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle, supported in a part of a disc, from data known on a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
NASA Astrophysics Data System (ADS)
Son, J.; Medina-Cetina, Z.
2017-12-01
We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from site investigations, the primary goal of this study is to reconstruct accurate subsurface profiles of the soil and rock materials under the seafloor. The shallow sediment layers are naturally heterogeneous formations which may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose quasi-Newton and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on the finite difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data as an offshore ground model containing shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of the stratigraphic velocity model reconstruction. We then report on the details of a field data implementation, which shows the complex geologic structures in the northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
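A generic simulated annealing loop, the stochastic optimizer mentioned above, is sketched below. The cooling schedule, step size, and toy misfit (standing in for the waveform misfit) are all illustrative choices:

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.1, T0=1.0, cooling=0.995, n_iter=5000):
    """Minimize `cost` by random perturbations with Metropolis acceptance
    and a geometric cooling schedule."""
    rng = np.random.default_rng(0)
    x, c = x0.copy(), cost(x0)
    T = T0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.shape)      # random move
        cc = cost(cand)
        if cc < c or rng.random() < np.exp(-(cc - c) / T):  # Metropolis rule
            x, c = cand, cc
        T *= cooling                                        # cool down
    return x, c

# toy misfit with many local minima (global minimum at the origin)
cost = lambda v: np.sum(v**2) + 2 * np.sum(1 - np.cos(3 * v))
x, c = simulated_annealing(cost, np.full(4, 2.0))
print(np.round(x, 2), round(c, 3))
```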
NASA Astrophysics Data System (ADS)
Davis, A. B.; Bal, G.; Chen, J.
2015-12-01
Operational remote sensing of microphysical and optical cloud properties is invariably predicated on the assumption of plane-parallel slab geometry for the targeted cloud. The sole benefit of this often-questionable assumption about the cloud is that it leads to one-dimensional (1D) radiative transfer (RT)---a textbook, computationally tractable model. We present new results as evidence that, thanks to converging advances in 3D RT, inverse problem theory, algorithm implementation, and computer hardware, we are at the dawn of a new era in cloud remote sensing where we can finally go beyond the plane-parallel paradigm. Granted, the plane-parallel/1D RT assumption is reasonable for spatially extended stratiform cloud layers, as well as for smoothly distributed background aerosol layers. However, these 1D RT-friendly scenarios exclude cases that are critically important for climate physics. 1D RT---whence operational cloud remote sensing---fails catastrophically for cumuliform clouds that have fully 3D outer shapes and internal structures driven by shallow or deep convection. For these situations, the first order of business in a robust characterization by remote sensing is to abandon the slab-geometry framework and determine the 3D geometry of the cloud, as a first step toward bona fide 3D cloud tomography. With this specific goal in mind, we deliver a proof-of-concept for an entirely new kind of remote sensing applicable to 3D clouds. It is based on highly simplified 3D RT and exploits multi-angular suites of cloud images at high spatial resolution. Airborne sensors like AirMSPI readily acquire such data. The key element of the reconstruction algorithm is a sophisticated solution of the nonlinear inverse problem via linearization of the forward model and an iteration scheme supported, where necessary, by adaptive regularization. Currently, the demo uses a 2D setting to show how either vertical profiles or horizontal slices of the cloud can be accurately reconstructed. Extension to 3D volumes is straightforward, but the next challenge is to accommodate images at lower spatial resolution, e.g., from MISR/Terra. G. Bal, J. Chen, and A.B. Davis (2015). Reconstruction of cloud geometry from multi-angle images, Inverse Problems in Imaging (submitted).
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, circles must be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
Intensity-enhanced MART for tomographic PIV
NASA Astrophysics Data System (ADS)
Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun
2016-05-01
A novel technique to shrink elongated particles and suppress ghost particles in the particle reconstruction of tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity-suppressing factor to improve the quality of particle reconstruction and consequently the precision of velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction, particle elongation, and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial-filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate the ghosts from the actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the applicability of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
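For orientation, the plain MART baseline that IntE-MART builds on is sketched below; IntE-MART adds an inverse-diffusion sharpening and an intensity suppression factor on top of this multiplicative update:

```python
import numpy as np

def mart(A, b, n_iter=20, mu=0.5):
    """Plain MART: x_j <- x_j * (b_i / <a_i, x>)^(mu * a_ij).
    Multiplicative updates keep voxel intensities nonnegative."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):            # loop over line-of-sight equations
            ax = A[i] @ x
            if ax > 0:
                x *= (b[i] / ax) ** (mu * A[i])
    return x

A = np.abs(np.random.rand(40, 25)); x_true = np.random.rand(25) + 0.1
print(np.linalg.norm(mart(A, A @ x_true) - x_true))
```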
Rao, Jing; Ratassepp, Madis; Lisevych, Danylo; Hamzah Caffoor, Mahadhir; Fan, Zheng
2017-12-12
Corrosion is a major safety and economic concern for various industries. In this paper, a novel ultrasonic guided wave tomography (GWT) system based on self-designed piezoelectric sensors is presented for on-line corrosion monitoring of large plate-like structures. Accurate thickness reconstruction of corrosion damage is achieved by using the dispersive regimes of selected guided waves and a reconstruction algorithm based on full waveform inversion (FWI). The system makes use of an array of miniaturised piezoelectric transducers that are capable of exciting and receiving the highly dispersive A0 Lamb wave mode at low frequencies. The scattering from the transducer array has been found to have a small effect on the thickness reconstruction. The efficiency and accuracy of the new system have been demonstrated through continuous forced-corrosion experiments. The FWI-reconstructed thicknesses show good agreement with analytical predictions obtained from Faraday's law and with laser measurements; more importantly, the thickness images closely resemble the actual corrosion sites.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) is applied to characterize diffuse light propagation in the medium, and the statistical-estimation-based MLEM algorithm combined with a filter function is used to solve the inverse problem. We systematically demonstrated the performance of our method using regular-geometry- and digital-mouse-based simulations and a liver-cancer-based in vivo experiment. Graphical abstract: the filtered MLEM-based global reconstruction method for BLT.
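The MLEM core combined with a per-iteration filter can be sketched compactly, as below. This illustrates the filtered-MLEM idea only; the paper's filter function and SPN light-propagation model are more elaborate than the placeholders here:

```python
import numpy as np

def filtered_mlem(A, b, n_iter=50, smooth=None):
    """MLEM iterations x <- x * A^T(b / Ax) / A^T 1, optionally composed
    with a smoothing filter applied after each multiplicative update."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
        if smooth is not None:
            x = smooth(x)                         # e.g. mild low-pass filtering
    return x

A = np.abs(np.random.rand(60, 40)); x_true = np.random.rand(40)
box = lambda v: np.convolve(v, np.ones(3) / 3, mode="same")  # toy filter
print(filtered_mlem(A, A @ x_true, smooth=box).shape)
```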
Reconstructing high-dimensional two-photon entangled states via compressive sensing
Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan
2014-01-01
Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
CLustre: semi-automated lineament clustering for palaeo-glacial reconstruction
NASA Astrophysics Data System (ADS)
Smith, Mike; Anders, Niels; Keesstra, Saskia
2016-04-01
Palaeo-glacial reconstructions, or "inversions", using evidence from the palimpsest landscape are increasingly being undertaken with larger and larger databases. Predominant in landform evidence is the lineament (or drumlin), where the biggest datasets number in excess of 50,000 individual forms. One stage in the inversion process requires the identification of lineaments that are generically similar, and then their subsequent interpretation into a coherent chronology of events. Here we present CLustre, a semi-automated algorithm that clusters lineaments using a locally adaptive region-growing method. This is initially tested using 1,500 model runs on a synthetic dataset, before application to two case studies (where manual clustering has been undertaken by independent researchers): (1) Dubawnt Lake, Canada and (2) Victoria Island, Canada. Results using the synthetic data show that classifications are robust in most scenarios, although specific cases of cross-cutting lineaments may lead to incorrect clusters. Application to the case studies showed a very good match to existing published work, with differences related to limited numbers of unclassified lineaments and parallel cross-cutting lineaments. The value of CLustre comes from the semi-automated, objective application of a classification method that is repeatable. Once classified, summary statistics of lineament groups can be calculated and then used in the inversion.
NASA Astrophysics Data System (ADS)
Majer, C. L.; Meyer, S.; Konrad, S.; Sarli, E.; Bartelmann, M.
2016-07-01
This paper continues a series in which we intend to show how all observables of galaxy clusters can be combined to recover the two-dimensional, projected gravitational potential of individual clusters. Our goal is to develop a non-parametric algorithm for joint cluster reconstruction taking all cluster observables into account. For this reason we focus on the line-of-sight projected gravitational potential, proportional to the lensing potential, in order to extend existing reconstruction algorithms. In this paper, we begin with the relation between the Compton-y parameter and the Newtonian gravitational potential, assuming hydrostatic equilibrium and a polytropic stratification of the intracluster gas. Extending our first publication we now consider a spheroidal rather than a spherical cluster symmetry. We show how a Richardson-Lucy deconvolution can be used to convert the intensity change of the CMB due to the thermal Sunyaev-Zel'dovich effect into an estimate for the two-dimensional gravitational potential. We apply our reconstruction method to a cluster based on an N-body/hydrodynamical simulation processed with the characteristics (resolution and noise) of the ALMA interferometer for which we achieve a relative error of ≲20 per cent for a large fraction of the virial radius. We further apply our method to an observation of the galaxy cluster RXJ1347 for which we can reconstruct the potential with a relative error of ≲20 per cent for the observable cluster range.
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
Bouvier, Adeline; Deleaval, Flavien; Doyley, Marvin M; Yazdani, Saami K; Finet, Gérard; Le Floc'h, Simon; Cloutier, Guy; Pettigrew, Roderic I; Ohayon, Jacques
2016-01-01
The peak cap stress (PCS) amplitude is recognized as a biomechanical predictor of vulnerable plaque (VP) rupture. However, quantifying PCS in vivo remains a challenge since the stress depends on the plaque mechanical properties. In response, an iterative material finite element (FE) elasticity reconstruction method using strain measurements has been implemented for the solution of these inverse problems. Although this approach could resolve the mechanical characterization of VPs, it suffers from major limitations since (i) it is not adapted to characterize VPs exhibiting high material discontinuities between inclusions, and (ii) it does not permit real-time elasticity reconstruction for clinical use. The present theoretical study was therefore designed to develop a direct material-FE algorithm for elasticity reconstruction problems which accounts for material heterogeneities. We originally modified and adapted the extended FE method (Xfem), used mainly in crack analysis, to model material heterogeneities. This new algorithm was successfully applied to six coronary lesions of patients imaged in vivo with intravascular ultrasound. The results demonstrated that the mean relative absolute errors of the reconstructed Young's moduli obtained for the arterial wall, fibrosis, necrotic core, and calcified regions of the VPs decreased from 95.3±15.56%, 98.85±72.42%, 103.29±111.86% and 95.3±10.49%, respectively, to values smaller than 2.6×10^-8 ± 5.7×10^-8% (i.e. close to the exact solutions) when the modified Xfem method was included in our direct elasticity reconstruction method. PMID:24240392
Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei
2014-12-01
We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.
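A minimal real-coded genetic algorithm for the spectra-matching step can be sketched as below. The misfit function here is a trivial stand-in for the FE plus T-matrix spectral simulation (far too large to include), and the damage parameterization (x, y, radius) and all GA settings are illustrative:

```python
import numpy as np

def genetic_search(misfit, bounds, pop=40, gens=100, rng=np.random.default_rng(0)):
    """Evolve candidate damage-parameter vectors until `misfit` is minimized:
    rank by fitness, keep the elite half, recombine pairs, mutate."""
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, lo.size))
    for _ in range(gens):
        f = np.array([misfit(p) for p in P])
        P = P[np.argsort(f)]                               # rank by fitness
        elite = P[: pop // 2]
        mates = rng.integers(0, pop // 2, size=(pop // 2, 2))
        children = 0.5 * (elite[mates[:, 0]] + elite[mates[:, 1]])          # crossover
        children += 0.05 * (hi - lo) * rng.standard_normal(children.shape)  # mutation
        P = np.clip(np.vstack([elite, children]), lo, hi)
    return P[0]

# toy misfit whose minimum is the 'true' defect (x=0.3, y=0.7, r=0.1)
truth = np.array([0.3, 0.7, 0.1])
misfit = lambda p: np.sum((p - truth) ** 2)
print(np.round(genetic_search(misfit, (np.zeros(3), np.ones(3))), 3))
```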
Quantification of thickness loss in a liquid-loaded plate using ultrasonic guided wave tomography
NASA Astrophysics Data System (ADS)
Rao, Jing; Ratassepp, Madis; Fan, Zheng
2017-12-01
Ultrasonic guided wave tomography (GWT) provides an attractive solution to map thickness changes from remote locations. It is based on the velocity-to-thickness mapping employing the dispersive characteristics of selected guided modes. This study extends the application of GWT on a liquid-loaded plate. It is a more challenging case than the application on a free plate, due to energy of the guided waves leaking into the liquid. In order to ensure the accuracy of thickness reconstruction, advanced forward models are developed to consider attenuation effects using complex velocities. The reconstruction of the thickness map is based on the frequency-domain full waveform inversion (FWI) method, and its accuracy is discussed using different frequencies and defect dimensions. Validation experiments are carried out on a water-loaded plate with an irregularly shaped defect using S0 guided waves, showing excellent performance of the reconstruction algorithm.
Fast reconstruction of optical properties for complex segmentations in near infrared imaging
NASA Astrophysics Data System (ADS)
Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador
2017-04-01
The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging, even with the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation, since a numerical solution of the fully nonlinear problem is computationally too expensive. In this paper, we show that a problem of practical interest can be successfully addressed by making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte Carlo simulations, parallelized on a multicore computer and a GPU, respectively.
Photofragment image analysis using the Onion-Peeling Algorithm
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Loock, Hans-Peter
2003-07-01
With the growing popularity of the velocity map imaging technique, a need for the analysis of photoion and photoelectron images arose. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the Onion Peeling Algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters. Program summary. Title of program: Glass Onion. Catalogue identifier: ADRY. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computer: IBM PC. Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT. Programming language used: Delphi 4.0. Memory required to execute with typical data: 18 Mwords. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 9 911 434. Distribution format: zip file. Keywords: photofragment image, onion peeling, anisotropy parameters. Nature of physical problem: Information about the velocity and angular distributions of photofragments forms the basis of the analysis of the photolysis process. Reconstructing the three-dimensional distribution from the photofragment image is the first step; further processing involves angular and radial integration of the inverted image to obtain velocity and angular distributions. Provisions have to be made to correct for slight distortions of the image, and to verify the accuracy of the analysis process. Method of solution: The "Onion Peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion. Restrictions on the complexity of the problem: The maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and centering algorithms to converge. Peaks on the velocity profile separated by less than the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed, but the actual inversion algorithm is stable to distortions of such symmetry in experimental images. Typical running time: The analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approximately
as R^3, where R is the radius of the region of interest: for R=200 pixels it is less than a minute, for R=400 pixels less than 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion. It ranges from less than a second for simple curves to a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R^2 and depends on the curve profile. It is on the order of a few minutes for images with R=500 pixels. Unusual features of the program: Our centering and image correction algorithm is based on Fourier analysis of the radial distribution to ensure the sharpest velocity profile and is insensitive to an uneven intensity distribution. An angular averaging option exists to stabilize the inversion algorithm without losing resolution.
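The onion-peeling idea itself fits in a few lines for a single row of a cylindrically symmetric image: each ring contributes a known chord length to the projection, giving an upper-triangular system solved from the outside in. The sketch below uses unit-width rings and a Gaussian test profile; it illustrates the principle, not the Glass Onion implementation:

```python
import numpy as np

def peel_matrix(n):
    """Chord-length matrix for unit-width rings: ring j >= i contributes
    2*(sqrt((j+1)^2 - i^2) - sqrt(j^2 - i^2)) to projection row i."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    outer = np.sqrt(np.maximum((j + 1.0) ** 2 - i ** 2, 0.0))
    inner = np.sqrt(np.maximum(j * 1.0 * j - i ** 2, 0.0))
    return 2.0 * (outer - inner) * (j >= i)

def onion_peel(projection):
    """Invert the upper-triangular system by peeling from the outermost ring."""
    A = peel_matrix(projection.size)
    f = np.zeros_like(projection)
    for i in range(projection.size - 1, -1, -1):
        f[i] = (projection[i] - A[i, i + 1:] @ f[i + 1:]) / A[i, i]
    return f

f_true = np.exp(-np.linspace(0, 3, 50) ** 2)   # cylindrically symmetric test profile
p = peel_matrix(50) @ f_true                   # simulate the measured projection
print(np.abs(onion_peel(p) - f_true).max())    # ~1e-12: exact up to round-off
```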
2017-01-01
This paper presents a method for formation flight and collision avoidance of multiple UAVs. Because collision avoidance is difficult at the high speeds of UAVs and in unstructured environments, this paper proposes a modified tentacle algorithm to ensure high collision-avoidance performance. Unlike the conventional tentacle algorithm, which uses inverse derivation, the modified tentacle algorithm rapidly matches the radius of each tentacle to the steering command, thereby resolving the heavy data-calculation problem of the conventional tentacle algorithm. Meanwhile, both the speed sets and the tentacles within each speed set are reduced and reconstructed so that the method can be applied to multiple UAVs. Instead of iterative path optimization, the method selects the best tentacle to obtain the UAV collision avoidance path quickly. The simulation results show that the method presented in this paper effectively enhances the performance of formation flight and collision avoidance for multiple high-speed UAVs in unstructured environments. PMID:28763498
Joint reconstruction of x-ray fluorescence and transmission tomography
Di, Zichao; Chen, Si; Hong, Young Pyo; ...
2017-05-30
X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy resolution of the detector. Another limitation is the effect of self-absorption, which can lead to inaccurate results with dense samples. To recover a higher quality elemental map, we combine x-ray fluorescence detection with a second data modality: conventional x-ray transmission tomography using absorption. By using these combined signals in a nonlinear optimization-based approach, we demonstrate the benefit of our algorithm on real experimental data and obtain an improved quantitative reconstruction of the spatial distribution of dominant elements in the sample. Furthermore, compared with single-modality inversion based on x-ray fluorescence alone, this joint inversion approach reduces ill-posedness and should result in improved elemental quantification and better correction of self-absorption.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the wavelet transform. Secondly, the LiveWire shortest path is calculated using a direction search over the control-point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform and of the optimal path searching method based on a control-point-set direction search: the former decomposes and reconstructs the image quickly and is consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All the methods mentioned above contribute substantially to the execution efficiency and robustness of the algorithm.
Radio-Tomographic Images of Post-midnight Equatorial Plasma Depletions
NASA Astrophysics Data System (ADS)
Hei, M. A.; Bernhardt, P. A.; Siefring, C. L.; Wilkens, M.; Huba, J. D.; Krall, J.; Valladares, C. E.; Heelis, R. A.; Hairston, M. R.; Coley, W. R.; Chau, J. L.
2013-12-01
For the first time, post-midnight equatorial plasma depletions (EPDs) have been imaged in the longitude-altitude plane using radio tomography. High-resolution (~10 km × 10 km) electron-density reconstructions were created from total electron content (TEC) data using an array of receivers sited in Peru and the Multiplicative Algebraic Reconstruction Technique (MART) inversion algorithm. TEC data were obtained from the 150 and 400 MHz signals transmitted by the CERTO beacon on the C/NOFS satellite. In-situ electron density data from the C/NOFS CINDI instrument and electron density profiles from the UML Jicamarca ionosonde were used to generate an initial guess for the MART inversion, and also to constrain the inversion process. Observed EPDs had widths of 100-1000 km, spacings of 300-900 km, and often appeared 'pinched off' at the bottom. Well-developed EPDs appeared on an evening with a very small (4 m/s) pre-reversal enhancement (PRE), suggesting that post-midnight enhancements of the vertical plasma drift and/or seeding-induced uplifts (e.g. gravity waves) were responsible for driving the Rayleigh-Taylor instability into the nonlinear regime on this night. On another night, the Jicamarca ISR recorded post-midnight (~0230 LT) eastward electric fields nearly twice as strong as the PRE fields seven hours earlier. These electric fields lifted the whole ionosphere, including embedded EPDs, over a longitude range ~14° wide. CINDI detected a dawn depletion in exactly the area where the reconstruction showed an uplifted EPD. Strong equatorial spread-F observed by the Jicamarca ionosonde during receiver observation times confirmed the presence of ionospheric irregularities.
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs' parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we demonstrate our algorithms in practice on a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
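The memory saving rests on a standard fact: a circulant matrix is diagonalized by the DFT, so it can be applied and (pseudo-)inverted from its first column alone in O(n log n). A minimal CPU-side NumPy sketch of this idea (illustrative names, no GPU code) is:

import numpy as np

def circulant_matvec(c, x):
    # y = C x, where C is the circulant matrix with first column c;
    # only c is stored, and the product is a circular convolution via FFT.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, y, rtol=1e-8):
    # x = C^{-1} y by division in the Fourier domain, with tiny
    # eigenvalues zeroed out to regularize the inversion.
    ch = np.fft.fft(c)
    ch = np.where(np.abs(ch) > rtol * np.abs(ch).max(), ch, np.inf)
    return np.real(np.fft.ifft(np.fft.fft(y) / ch))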
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructing the magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire set of measurements provide a smooth reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of the BC, the parametric equivalent current dipole-based inverse estimation algorithm using the reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), demonstrating that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
Algorithms and Array Design Criteria for Robust Imaging in Interferometry
NASA Astrophysics Data System (ADS)
Kurien, Binoy George
Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse sampling of the object's Fourier transform, and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.
Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo
2016-01-01
To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra-reader, inter-reader and inter-reconstruction-algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra-reader, inter-reader and inter-reconstruction-algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences (p<0.05) among reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV≤5%). Inter-reader variability was larger than intra-reader or inter-reconstruction-algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction-algorithm variability was significantly greater than inter-reader variability (p<0.013). Most of the radiomic features were significantly affected by the reconstruction algorithms. Inter-reconstruction-algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
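As an illustration of the robustness criterion, the short Python sketch below (illustrative, not the study's analysis code, with made-up feature values) computes the coefficient of variation of one radiomic feature across reconstruction algorithms and applies the CV ≤ 5% threshold.

import numpy as np

def coefficient_of_variation(values):
    # CV (%) of one radiomic feature measured under different
    # reconstruction algorithms (e.g. FBP, iterative levels 3 and 5).
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

cv = coefficient_of_variation([4.61, 4.59, 4.55])  # hypothetical entropy values
is_robust = cv <= 5.0  # robustness criterion used in the study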
Inverse scattering and refraction corrected reflection for breast cancer imaging
NASA Astrophysics Data System (ADS)
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing with refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-terabyte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.
Singular value decomposition for the truncated Hilbert transform
NASA Astrophysics Data System (ADS)
Katsevich, A.
2010-11-01
Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
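The ill-posedness can also be explored numerically: discretize a truncated Hilbert transform and compute the SVD of the resulting matrix. In the sketch below (intervals, grid sizes and names are arbitrary assumptions for illustration), the source and observation grids are staggered by half a cell so the principal-value singularity is never sampled.

import numpy as np

def truncated_hilbert_svd(n=400):
    # Midpoint discretization of (Hf)(y) = (1/pi) p.v. integral over [-1, 1]
    # of f(x)/(x - y) dx, sampled for y in [0, 2]; the staggered grids keep
    # x != y everywhere. The decaying singular values expose the instability.
    h = 2.0 / n
    x = -1.0 + h * (np.arange(n) + 0.5)  # source midpoints in [-1, 1]
    y = h * np.arange(n + 1)             # staggered samples in [0, 2]
    K = h / (np.pi * (x[None, :] - y[:, None]))
    return np.linalg.svd(K, compute_uv=False)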
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.
2018-01-01
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. The newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language, and wrappers for Python and MATLAB are provided for easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for ffCO2 emissions, to be used in atmospheric inversions; such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions to within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
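A minimal sketch of the kind of convex sparse-reconstruction step described here, using plain iterative soft thresholding (ISTA) with a 0/1 mask as a simplified stand-in for the paper's boundary treatment (G, W, and all names are illustrative assumptions):

import numpy as np

def ista_sparse_inversion(G, W, y, mask, lam=0.1, n_iter=500):
    # Approximately solve min_w ||G W w - y||^2 + lam ||w||_1 with ISTA.
    # G: observation operator (tower footprints), W: wavelet synthesis,
    # mask: 0/1 vector zeroing weights outside the admissible region.
    A = G @ W
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = w - (A.T @ (A @ w - y)) / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        w *= mask  # crude boundary condition on the parametrization
    return W @ w  # reconstructed ffCO2 emission field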
Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi
2016-02-01
The purpose of this study was to compare the mean hepatic stiffness values obtained by applying two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver diseases (78.9±4.3 years; 12 men and 4 women) were examined using a 3.0-T MRI scanner. The healthy volunteers underwent three consecutive scans: two with a 70-Hz waveform and one with a 50-Hz waveform. The patients with liver disease were scanned with the 70-Hz waveform only. The MRE data for each subject were processed twice to calculate the mean hepatic stiffness (Pa), once using multiscale direct inversion (MSDI) and once using multimodel direct inversion (MMDI). There were no significant differences in the mean stiffness values between the two 70-Hz scans or between the different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa; without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa; without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICCs: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments, such as comparisons across scanners and evaluation of the response to fibrosis therapy.
NASA Astrophysics Data System (ADS)
Huang, Weilin; Wang, Runqiu; Chen, Yangkang
2018-05-01
Microseismic signals are typically weak compared with the strong background noise. To effectively detect weak signals in microseismic data, we propose an approach based on mathematical morphology. We decompose the initial data into several morphological multiscale components. For detection of weak signals, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from its morphological multiscale components. The non-stationary weighting operator is obtained by solving an inverse problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. We provide a detailed algorithmic description and analysis, covering the algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method. We validate the method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average/long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
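The STA/LTA picker mentioned above is standard and easy to sketch; the following Python function (illustrative, with window lengths in samples left to the user) returns the detection ratio that is thresholded, typically around 3-5, to declare an event.

import numpy as np

def sta_lta(trace, n_sta, n_lta):
    # Classic short-term-average / long-term-average detector on signal energy.
    e = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta  # short-window averages
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta  # long-window averages
    # Align both averages so they end at the same sample, then form the ratio.
    return sta[n_lta - n_sta:] / np.maximum(lta, 1e-20)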
Single image super resolution algorithm based on edge interpolation in NSCT domain
NASA Astrophysics Data System (ADS)
Zhang, Mengqun; Zhang, Wei; He, Xinyu
2017-11-01
To preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, yielding the directional sub-band coefficients of the transform domain. According to the scale factor, the high-frequency sub-band coefficients are interpolated to the desired resolution with an edge-directed interpolation method. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate a threshold value; coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands of the same scale and de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in regions of low contrast between target and background. Finally, the low-frequency sub-band is interpolated to the desired resolution with the bilinear method and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT is applied to obtain the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it and several common image reconstruction methods were tested on synthetic, motion-blurred and hyperspectral images. The experimental results show that, compared with traditional single-frame super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain from electrical boundary conditions. This is an ill-posed inverse problem whose solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms involving higher-order differential operators were developed in several previous studies; one of them is total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing on regular grids. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: conductivity changes reconstructed along selected left and right vertical lines are plotted for each reconstructed image and for the ground truth (GT); TV denotes the total variation method and TGV the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
NASA Astrophysics Data System (ADS)
Nguyen, Dinh-Liem; Klibanov, Michael V.; Nguyen, Loc H.; Kolesov, Aleksandr E.; Fiddy, Michael A.; Liu, Hui
2017-09-01
We analyze in this paper the performance of a newly developed globally convergent numerical method for a coefficient inverse problem for the case of multi-frequency experimental backscatter data associated with a single incident wave. These data were collected using a microwave scattering facility at the University of North Carolina at Charlotte. The challenges for the inverse problem under consideration arise not only from its high nonlinearity and severe ill-posedness but also from the facts that the amount of measured data is minimal and that these raw data are contaminated by a significant amount of noise, due to a non-ideal experimental setup. This setup is motivated by our target application in detecting and identifying explosives. We show in this paper how the raw data can be preprocessed and successfully inverted using our inversion method. More precisely, we are able to reconstruct the dielectric constants and the locations of the scattering objects with good accuracy, without using any advanced a priori knowledge of their physical and geometrical properties.
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior; the strategy is therefore very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. Phantom inclusions were filled with the fluorochrome Cy5.5, and optical data were acquired at 60 projections over 360 degrees. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the various optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother in comparison to classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
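A minimal sketch of a Richardson-Lucy/MLEM iteration with a floating, Gaussian-smoothed default prior, in the spirit of (but not identical to) the entropic scheme described above; the system matrix A, relaxation weight beta, and kernel width sigma are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_rl(A, y, shape, n_iter=100, sigma=2.0, beta=0.2):
    # MLEM in Richardson-Lucy form; the floating default is the current
    # estimate smoothed by a fixed-width Gaussian, damping noise patterns.
    x = np.ones(A.shape[1])  # constant initial condition, as in the text
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity (column sums)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x * (A.T @ ratio) / sens  # classic multiplicative MLEM step
        prior = gaussian_filter(x.reshape(shape), sigma).ravel()
        x = (1 - beta) * x + beta * prior  # relax toward the floating default
    return x.reshape(shape)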
A new non-iterative reconstruction method for the electrical impedance tomography problem
NASA Astrophysics Data System (ADS)
Ferreira, A. D.; Novotny, A. A.
2017-03-01
The electrical impedance tomography (EIT) problem consists in determining the distribution of the electrical conductivity of a medium subject to a set of current fluxes, from measurements of the corresponding electrical potentials on its boundary. EIT is probably the most studied inverse problem since the fundamental works by Calderón from the 1980s. It has many relevant applications in medicine (detection of tumors), geophysics (localization of mineral deposits) and engineering (detection of corrosion in structures). In this work, we are interested in reconstructing a number of anomalies whose electrical conductivity differs from the background. Since the EIT problem is written in the form of an overdetermined boundary value problem, the idea is to rewrite it as a topology optimization problem. In particular, a shape functional measuring the misfit between the boundary measurements and the electrical potentials obtained from the model is minimized with respect to a set of ball-shaped anomalies by using the concept of topological derivatives. That is, the objective functional is expanded and then truncated at the second-order term, leading to a quadratic, strictly convex form in the parameters under consideration. A trivial optimization step then leads to a non-iterative second-order reconstruction algorithm. As a result, the reconstruction process is very robust with respect to noisy data and independent of any initial guess. Finally, to show the effectiveness of the devised reconstruction algorithm, some numerical experiments in two spatial dimensions are presented, taking into account total and partial boundary measurements.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, converting the FMT reconstruction problem into a kernel coefficient reconstruction problem; the desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. One advantage of the proposed method is that the anatomical guidance is obtained directly from the anatomical image and included in the forward modeling, so the anatomical image does not need to be segmented into targets and background.
3D Compton scattering imaging and contour reconstruction for a class of Radon transforms
NASA Astrophysics Data System (ADS)
Rigaud, Gaël; Hahn, Bernadette N.
2018-07-01
Compton scattering imaging is a nascent concept arising from the current development of highly sensitive energy-resolving detectors; it exploits the scattered radiation to image the electron density of the studied medium. Such detectors are able to resolve incoming photons in energy. This paper introduces potential 3D modalities in Compton scattering imaging (CSI). The associated measured data are modeled using a class of generalized Radon transforms. The study of this class of operators leads to a filtered back-projection type algorithm that preserves the contours of the sought-for function and offers a fast approach to partially solving the associated inverse problems. Simulation results including Poisson noise demonstrate the potential of this new imaging concept as well as the proposed image reconstruction approach.
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, based on a row scanning compressive ghost imaging scheme. In the encryption process, a scrambling operation is applied to the sparse images obtained by LWT, the XOR operation is then performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, a participant who possesses the correct key-group can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
Linear single-step image reconstruction in the presence of nonscattering regions.
Dehghani, H; Delpy, D T
2002-06-01
There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.
Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente
2017-05-01
Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, which are several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition; in particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at a pixel size of 0.92 µm × 0.92 µm, cut 10 µm thick, spaced 20 µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as the application of a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing the embedded tissue for histological sectioning.
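The diffusion idea can be sketched for pure translations: once a registration sweep has produced per-slice translations, every subsequent sweep operates purely in transformation space, relaxing each slice's translation toward the mean of its neighbours; this is an explicit heat-equation step that low-pass filters the misalignments. The following is an illustrative reduction of TD, not the authors' implementation.

import numpy as np

def transformation_diffusion(t, n_sweeps=200, alpha=0.5):
    # t: (n_slices, 2) per-slice translations from pairwise registration.
    # Each sweep is an explicit diffusion step on the chain of slices.
    t = np.asarray(t, dtype=float).copy()
    for _ in range(n_sweeps):
        interior = 0.5 * (t[:-2] + t[2:])  # neighbour average
        t[1:-1] = (1 - alpha) * t[1:-1] + alpha * interior
        t[0], t[-1] = t[1], t[-2]  # reflecting (Neumann) end conditions
    return t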
NASA Astrophysics Data System (ADS)
Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.
2017-02-01
A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost-effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions, i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which, owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data are obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.
Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography
Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.
2017-01-01
We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454
Ji, Guoli; Ye, Pengchao; Shi, Yijian; Yuan, Leiming; Chen, Xiaojing; Yuan, Mingshun; Zhu, Dehua; Chen, Xi; Hu, Xinyu; Jiang, Jing
2017-01-01
In this study, we attempted to distinguish Tegillarca granosa samples artificially contaminated with three toxic heavy metals, zinc (Zn), cadmium (Cd), and lead (Pb), using laser-induced breakdown spectroscopy (LIBS) and pattern recognition methods. The measured spectra were first processed by a wavelet transform algorithm (WTA), and the resulting characteristic information was then expressed by an information gain algorithm (IGA). The 30 variables obtained were used as input variables for three classifiers: partial least squares discriminant analysis (PLS-DA), support vector machine (SVM), and random forest (RF), among which the RF model exhibited the best performance, with 93.3% discrimination accuracy. In addition, the extracted characteristic information was used to reconstruct the original spectra by inverse WTA, and the corresponding attribution of the reconstructed spectra was discussed. This work indicates that healthy Tegillarca granosa samples can be distinguished from toxic heavy-metal-contaminated ones by pattern recognition analysis combined with LIBS, which requires only minimal pretreatment. PMID:29149053
An Inversion Method for Reconstructing Hall Thruster Plume Parameters from Line Integrated Measurements (Preprint)
Matlock, Taylor S.
2007-06-05
Estimates the plume electron temperature using a published xenon collisional radiative model.
Microwave imaging by three-dimensional Born linearization of electromagnetic scattering
NASA Astrophysics Data System (ADS)
Caorsi, S.; Gragnani, G. L.; Pastorino, M.
1990-11-01
An approach to microwave imaging is proposed that uses a three-dimensional vectorial form of the Born approximation to linearize the equation of electromagnetic scattering. The inverse scattering problem is numerically solved for three-dimensional geometries by means of the moment method. A pseudoinversion algorithm is adopted to overcome ill conditioning. Results show that the method is well suited for qualitative imaging purposes, while its capability for exactly reconstructing the complex dielectric permittivity is affected by the limitations inherent in the Born approximation and in ill conditioning.
Dual-sided coded-aperture imager
Ziock, Klaus-Peter [Clinton, TN
2009-09-22
In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if they were obtained through the mask, the other as if they were obtained through the anti-mask.
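A one-dimensional sketch of mask/anti-mask decoding by circular cross-correlation, with a ±1 mask encoding and illustrative names (a simplified stand-in, not the patented system's algorithm):

import numpy as np

def decode(detector, mask):
    # Cross-correlation decoding of a 1-D coded-aperture exposure;
    # mask entries are +1 (open) / -1 (closed), so decoding the same
    # data with the anti-mask (-mask) simply flips the image sign.
    return np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(mask))))

def which_side(detector, mask):
    img_mask = decode(detector, mask)   # treat data as seen through the mask
    img_anti = decode(detector, -mask)  # treat data as seen through the anti-mask
    # A real source reconstructs as a positive peak only on its own side.
    return "mask side" if img_mask.max() >= img_anti.max() else "anti-mask side"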
Numerical reconstruction of tsunami source using combined seismic, satellite and DART data
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor
2014-05-01
Recent tsunamis, for instance in Japan (2011), in Sumatra (2004), and at the Indian coast (2004), showed that a system producing exact and timely information about tsunamis is of vital importance, and numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (the tsunami source) are required for the direct simulation of tsunamis. Seismic data about the source are usually obtained within a few tens of minutes after an event (seismic wave velocities are about five hundred kilometres per minute, while tsunami waves travel at less than twelve kilometres per minute). This difference in arrival times between seismic and tsunami waves can be used to operationally refine the tsunami source parameters and model the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source from additional measurements of a passing wave is called the inverse tsunami problem. We investigate three inverse problems of determining a tsunami source using three different kinds of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images, and seismic data. These problems are severely ill-posed, so we apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition, and numerical regularization. An algorithm for selecting the truncation number of singular values of the inverse problem operator, consistent with the error level in the measured data, is described and analyzed. In numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving the inverse tsunami problems; these methods minimize the corresponding misfit function, and the gradient of the misfit function is calculated by solving the adjoint problem. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed; we show that combining the three types of data increases the stability and efficiency of the reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-up and earthquakes. The scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773, 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project No. 14 of SB RAS, 'Inverse Problems and Applications: Theory, Algorithms, Software'.
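As an illustration of the truncation-selection step, here is a minimal NumPy sketch of truncated SVD with a discrepancy-principle choice of the rank; the operator A stands in for the discretized forward (source-to-measurement) model, and all names are our assumptions.

import numpy as np

def tsvd_solve(A, y, noise_level):
    # Truncated-SVD solution of A q = y: keep the smallest number of
    # singular components whose residual matches the data error level.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ y
    for r in range(1, len(s) + 1):
        q = Vt[:r].T @ (coeffs[:r] / s[:r])
        if np.linalg.norm(A @ q - y) <= noise_level:
            break
    return q, r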
A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves
NASA Astrophysics Data System (ADS)
Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.
2017-12-01
This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework implemented via Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples, including cases with noisy measured responses and a reduced number of sources and receivers, show that the developed Gauss-Newton inversion method is sufficiently accurate and more efficient than an alternative inversion method.
NASA Astrophysics Data System (ADS)
Stock, Dennis; Meyer, Sven; Sarli, Eleonora; Bartelmann, Matthias; Balestra, Italo; Grillo, Claudio; Koekemoer, Anton; Mercurio, Amata; Nonino, Mario; Rosati, Piero
2015-12-01
We reconstruct the radial profile of the projected gravitational potential of the galaxy cluster MACS J1206 from 592 spectroscopic measurements of velocities of cluster members. To accomplish this, we use a method we have developed recently based on the Richardson-Lucy deprojection algorithm and an inversion of the spherically-symmetric Jeans equation. We find that, within the uncertainties, our reconstruction agrees very well with a potential reconstruction from weak and strong gravitational lensing as well as with a potential obtained from X-ray measurements. In addition, our reconstruction is in good agreement with several common analytic profiles of the lensing potential. Varying the anisotropy parameter in the Jeans equation, we find that anisotropy parameters which are either small, β ≲ 0.2, or decrease with radius yield potential profiles that strongly disagree with that obtained from gravitational lensing. We achieve the best agreement between our potential profile and the profile from gravitational lensing if the anisotropy parameter rises steeply to β ≈ 0.6 within ≈ 0.5 Mpc and stays constant further out.
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, with respect to elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate them, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
Jing, Liwen; Li, Zhao; Wang, Wenjie; Dubey, Amartansh; Lee, Pedro; Meniconi, Silvia; Brunone, Bruno; Murch, Ross D
2018-05-01
An approximate inverse scattering technique is proposed for reconstructing cross-sectional area variation along water pipelines to deduce the size and position of blockages. The technique allows the reconstructed blockage profile to be written explicitly in terms of the measured acoustic reflectivity. It is based upon the Born approximation and provides good accuracy, low computational complexity, and insight into the reconstruction process. Numerical simulations and experimental results are provided for long pipelines with mild and severe blockages of different lengths. Good agreement is found between the inverse result and the actual pipe condition for mild blockages.
NASA Astrophysics Data System (ADS)
Yee, Eugene
2007-04-01
Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
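A minimal random-walk Metropolis sketch of the sampling step, with a generic forward model standing in for the adjoint-based source-receptor relationship and a Gaussian likelihood; the parameterization theta = (x, y, emission rate) and all names are assumptions for illustration.

import numpy as np

def metropolis_source(forward, y_obs, sigma, theta0, n_samples=20000, step=0.1):
    # Random-walk Metropolis sampling of the posterior over source
    # parameters; 'forward' maps theta to predicted sensor concentrations.
    def log_post(theta):
        if theta[-1] < 0:  # flat prior with a nonnegative emission rate
            return -np.inf
        r = y_obs - forward(theta)
        return -0.5 * np.dot(r, r) / sigma ** 2
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)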
2007-07-01
Technical Paper: An Inversion Method for Reconstructing Hall Thruster Plume Parameters from Line Integrated Measurements. A Hall thruster is a high specific impulse electric thruster that produces a highly ionized plasma inside an annular chamber through the use of high ...
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
NASA Technical Reports Server (NTRS)
Mcdade, Ian C.
1991-01-01
Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 Å rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 Å volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
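A minimal sketch of the core ART update used in such tomographic inversions: relaxed Kaczmarz sweeps with a non-negativity constraint, which is one common way to handle noisy data. The paper's specific least-squares and maximum-probability relaxation schemes are not reproduced; the toy ray geometry is an illustrative assumption.

import numpy as np

def art(A, b, n_sweeps=50, relax=0.5, nonneg=True):
    """Algebraic reconstruction technique: relaxed Kaczmarz row-by-row updates."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)       # squared norm of each ray row
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            resid = b[i] - A[i] @ x
            x += relax * resid / row_norms[i] * A[i]
            if nonneg:
                x = np.maximum(x, 0.0)            # emission rates are non-negative
    return x

# Toy problem: 3 "rays", each integrating over 2 of 3 volume elements.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = A @ np.array([2.0, 0.5, 1.0])
print(art(A, b))   # approaches [2.0, 0.5, 1.0]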
Magnetotelluric inversion via reverse time migration algorithm of seismic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Taeyoung; Shin, Changsoo
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion, and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
Sorting signed permutations by inversions in O(nlogn) time.
Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E
2010-03-01
The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
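For concreteness, a tiny sketch of the elementary operation being counted (a signed reversal); the paper's O(nlogn) machinery for choosing which inversions to apply is far more involved and is not shown here.

def apply_inversion(perm, i, j):
    """Reverse the segment perm[i..j] and negate each element (a signed reversal)."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

# Sorting the signed permutation [-1, -3, -2] into the identity in two inversions:
p = [-1, -3, -2]
p = apply_inversion(p, 0, 0)   # -> [1, -3, -2]
p = apply_inversion(p, 1, 2)   # -> [1, 2, 3]
assert p == [1, 2, 3]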
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, EIT conductivity estimation cannot be used in raw form to correctly estimate the shape and size of complex-shaped regional anomalies. An efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton-Raphson method, used for this purpose is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
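A compact sketch of the same optimization pattern with a hypothetical toy forward model (not an EIT solver): scipy's differential_evolution recovers the truncated Fourier coefficients of a closed boundary from noisy samples of its radius. The harmonic count, bounds, and noise level are illustrative assumptions.

import numpy as np
from scipy.optimize import differential_evolution

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def radius(coeffs, th):
    """Boundary radius r(th) as a truncated Fourier series."""
    r = coeffs[0] * np.ones_like(th)
    n_harm = (len(coeffs) - 1) // 2
    for k in range(1, n_harm + 1):
        r += coeffs[2 * k - 1] * np.cos(k * th) + coeffs[2 * k] * np.sin(k * th)
    return r

rng = np.random.default_rng(1)
true = np.array([1.0, 0.2, 0.0, 0.0, 0.1])      # mean radius plus two harmonics
meas = radius(true, theta) + rng.normal(0, 0.01, theta.size)  # stand-in "measurements"

cost = lambda c: np.sum((radius(c, theta) - meas) ** 2)
bounds = [(0.5, 1.5)] + [(-0.5, 0.5)] * 4       # search box for each coefficient
res = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print(res.x)                                     # recovered boundary coefficients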
Algorithm for the classification of multi-modulating signals on the electrocardiogram.
Mita, Mitsuo
2007-03-01
This article discusses an algorithm that measures the electrocardiogram (ECG) and respiration simultaneously and has diagnostic potential for sleep apnoea from ECG recordings. The algorithm combines three particular scale transforms, a(j)(t), u(j)(t), and o(j)(a(j)), with the statistical Fourier transform (SFT). The time and magnitude scale transforms a(j)(t) and u(j)(t) convert the source into a periodic signal, and tau(j) = o(j)(a(j)) confines its harmonics to a few instantaneous components at tau(j), a common instant on the two scales t and tau(j). As a result, the multi-modulating source is decomposed by the SFT and is reconstructed into ECG, respiration, and other signals by the inverse transform. The algorithm is expected to extract partial ventilation and heart rate variability from the scale transforms among a(j)(t), a(j+1)(t), and u(j+1)(t) associated with each modulation. The algorithm has high potential as a clinical checkup tool for the diagnosis of sleep apnoea from ECG recordings.
A statistical-based approach for acoustic tomography of the atmosphere.
Kolouri, Soheil; Azimi-Sadjadi, Mahmood R; Ziemann, Astrid
2014-01-01
Acoustic travel-time tomography of the atmosphere is a nonlinear inverse problem which attempts to reconstruct the temperature and wind velocity fields in the atmospheric surface layer using the dependence of sound speed on temperature and wind velocity along the propagation path. This paper presents a statistical acoustic travel-time tomography algorithm based on a dual state-parameter unscented Kalman filter (UKF) which is capable of reconstructing and tracking, in time, the temperature and wind velocity fields (state variables) as well as the dynamic model parameters within a specified investigation area. An adaptive 3-D spatial-temporal autoregressive model is used to capture the state evolution in the UKF. The observations used in the dual state-parameter UKF process consist of the acoustic times of arrival measured for every pair of transmitter/receiver nodes deployed in the investigation area. The proposed method is applied to the data set collected at the Meteorological Observatory Lindenberg, Germany, as part of the STINHO experiment, and the reconstruction results are presented.
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.
2016-04-01
We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.
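The Gaussian machinery behind such Bayesian statistical inversion can be sketched in a few lines: for a linear model d = A m + noise with prior m ~ N(m0, C), the posterior mean shifts the prior mean by a Kalman-type gain. Here the geometry matrix, the exponential prior covariance, and the ionosonde-like prior mean profile are all invented toy stand-ins, not the paper's Gaussian Markov random field construction.

import numpy as np

def gaussian_posterior(A, d, m0, C, R):
    """Posterior mean/covariance for d = A m + noise, m ~ N(m0, C), noise ~ N(0, R)."""
    S = A @ C @ A.T + R
    K = np.linalg.solve(S, A @ C).T          # gain K = C A^T S^{-1} (S, C symmetric)
    m_post = m0 + K @ (d - A @ m0)
    C_post = C - K @ A @ C
    return m_post, C_post

rng = np.random.default_rng(0)
n = 40                                       # electron-density pixels along altitude
A = rng.uniform(0, 1, size=(10, n))          # 10 integrated (TEC-like) measurements
truth = np.exp(-0.5 * ((np.arange(n) - 22) / 5.0) ** 2)
d = A @ truth + rng.normal(0, 0.05, 10)
m0 = np.exp(-0.5 * ((np.arange(n) - 20) / 6.0) ** 2)   # prior mean, e.g. ionosonde-informed
i = np.arange(n)
C = 0.2 * np.exp(-np.abs(i[:, None] - i[None, :]) / 4.0)  # smooth prior covariance
R = 0.05 ** 2 * np.eye(10)                   # measurement noise covariance
m_post, C_post = gaussian_posterior(A, d, m0, C, R)
print(np.abs(m_post - truth).mean())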
NASA Astrophysics Data System (ADS)
Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei
2017-02-01
Endoscopic diffuse optical tomography (DOT) has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space inside tubular tissue, resulting in a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under this limited measurement condition. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm uses a region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking method cuts down the computational burden. To further reduce the computational complexity, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that obtained from the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients can reach 70% and 80%, respectively, at depths up to 5 mm. Furthermore, two close targets with different depths can be separated from each other. The proposed method will be useful for the development of endoscopic DOT technologies in tubular organs.
Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin
2018-07-01
Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is formulated in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and classical zero-filling (ZF) methods regardless of the undersampling rate, noise level, and image structure. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is thus an effective method for fast MRI reconstruction from undersampled k-space data.
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, just as other iterative algorithms, is the long computation times typically associated with reconstruction. In order for an algorithm to gain clinical acceptance reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce the reconstruction time of the PICCS algorithm. The Compute Unified Device Architecture (CUDA) was used in this implementation.
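A toy sketch of the PICCS idea on a 1D signal, with the constrained problem relaxed to an unconstrained objective (data fidelity plus alpha*TV(x - xp) + (1 - alpha)*TV(x)) minimized by gradient descent on a smoothed TV; the measurement matrix, weights, and prior image are illustrative, and this is unrelated to the paper's GPU/CUDA implementation details.

import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed total-variation norm sum(sqrt(dx^2 + eps))."""
    dx = np.diff(x)
    s = dx / np.sqrt(dx ** 2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= s                                  # d/dx_i of term with dx_i = x_{i+1} - x_i
    g[1:] += s
    return g

def piccs(A, b, x_prior, alpha=0.5, lam=0.1, n_iter=2000, step=1e-2):
    """Gradient descent on data fidelity + lam*(alpha*TV(x - xp) + (1-alpha)*TV(x))."""
    x = x_prior.copy()
    for _ in range(n_iter):
        g_data = A.T @ (A @ x - b)
        g_reg = alpha * tv_grad(x - x_prior) + (1 - alpha) * tv_grad(x)
        x -= step * (g_data + lam * g_reg)
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3, 0.0], 16)          # piecewise-constant object
x_prior = truth + rng.normal(0, 0.02, truth.size)    # FBP-like prior image
A = rng.normal(0, 1, size=(20, truth.size)) / 8.0    # undersampled measurements
b = A @ truth
print(np.linalg.norm(piccs(A, b, x_prior) - truth))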
Pant, Jeevan K; Krishnan, Sridhar
2014-04-01
A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for enhancing its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, called the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)
1999-01-01
A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm (the equation itself is not reproduced in this abstract) to form an image of the medium, wherein W is a matrix relating the output at source and detector positions r_s and r_d, at time t, to position r, and Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise ...
Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Gayen, Swapan K. (Inventor)
2000-01-01
A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm (the equation itself is not reproduced in this abstract) to form an image of the medium, wherein W is a matrix relating the output at source and detector positions r_s and r_d, at time t, to position r, and Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise ...
NASA Astrophysics Data System (ADS)
An, M.; Assumpcao, M.
2003-12-01
The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong tradeoff among parameters and of the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially, in which case Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that seeks only a few optimal solutions cannot properly handle the strong tradeoff between parameters, the uncertainties in the observations, the geophysical complexities, and even the limitations of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably, and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to extract inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná Basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver functions.
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Spangler, Jan L.
2003-01-01
A variational principle is formulated for the inverse problem of full-field reconstruction of three-dimensional plate/shell deformations from experimentally measured surface strains. The formulation is based upon the minimization of a least squares functional that uses the complete set of strain measures consistent with linear, first-order shear-deformation theory. The formulation, which accommodates for transverse shear deformation, is applicable for the analysis of thin and moderately thick plate and shell structures. The main benefit of the variational principle is that it is well suited for C(sup 0)-continuous displacement finite element discretizations, thus enabling the development of robust algorithms for application to complex civil and aeronautical structures. The methodology is especially aimed at the next generation of aerospace vehicles for use in real-time structural health monitoring systems.
Wang, L; Rokhlin, S I
2002-09-01
An inversion method based on the Floquet wave velocity in a periodic medium has been introduced to determine the single-ply elastic moduli of a multi-ply composite. The stability of this algorithm is demonstrated by numerical simulation. The applicability of the plane wave approximation to the velocity measurement in the double-through-transmission self-reference method has been analyzed using a time-domain beam model. The analysis shows that the finite width of the transmitter affects only the amplitudes of the signals and has almost no effect on the time delay. Using this method, the ply moduli of a multi-ply composite have been experimentally determined. While the paper focuses on elastic constant reconstruction from phase velocity measurements by the self-reference double-through-transmission method, the reconstruction methodology is also applicable to the assessment of data collected by other methods.
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images reconstructed with one algorithm are also valid for the other reconstruction algorithms.
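Of the indices listed, the global inhomogeneity (GI) index is easy to state; a sketch using its common literature definition (sum of absolute deviations of the tidal impedance change from its median over lung pixels, normalized by the total), with a placeholder image and lung mask:

import numpy as np

def global_inhomogeneity(tidal_image, lung_mask):
    """GI index: sum |dZ - median(dZ)| / sum dZ over lung pixels (common definition)."""
    dz = tidal_image[lung_mask]
    return np.sum(np.abs(dz - np.median(dz))) / np.sum(dz)

rng = np.random.default_rng(0)
img = rng.uniform(0.5, 1.5, size=(32, 32))     # placeholder tidal impedance-change image
mask = np.zeros((32, 32), bool)
mask[8:24, 4:28] = True                        # placeholder lung region
print(global_inhomogeneity(img, mask))         # 0 for perfectly homogeneous ventilation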
Time-domain wavefield reconstruction inversion
NASA Astrophysics Data System (ADS)
Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan
2017-12-01
Wavefield reconstruction inversion (WRI) is an improved full waveform inversion theory proposed in recent years. The WRI method expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update the model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high computational memory demand and the requirement of a time-frequency transformation with additional computational cost. In this paper, wavefield reconstruction inversion theory is extended into the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Synthetic data examples illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for the reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When these images are used for radiation treatment planning, the artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images of this phantom were taken on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were also compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in images of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan
2013-10-01
To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for routine clinical applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementing the Chirp transform algorithm-DrFT on the graphics processing unit can efficiently calculate the DrFT reconstruction of radial and PROPELLER MRI data.
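The chirp transform (Bluestein) identity underlying such acceleration rewrites n*k = (n^2 + k^2 - (k - n)^2)/2, so a DFT-like sum at equally spaced points becomes a chirp pre-multiplication, an FFT-based convolution with the inverse chirp, and a chirp post-multiplication. A generic sketch (not the authors' DrFT pipeline) that reproduces the ordinary DFT as a special case:

import numpy as np

def chirp_dft(x, m, w):
    """Evaluate X_k = sum_n x[n] * w**(n*k) for k = 0..m-1 via Bluestein's chirp identity."""
    n = len(x)
    L = 1 << int(np.ceil(np.log2(n + m - 1)))    # FFT length for the linear convolution
    a = np.zeros(L, complex)
    a[:n] = x * w ** (np.arange(n) ** 2 / 2.0)   # input pre-multiplied by the chirp
    b = np.zeros(L, complex)
    b[:m] = w ** (-(np.arange(m) ** 2) / 2.0)    # inverse chirp, non-negative lags
    j = np.arange(1, n)
    b[-j] = w ** (-(j ** 2) / 2.0)               # inverse chirp, negative lags (wrapped)
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))[:m]
    return w ** (np.arange(m) ** 2 / 2.0) * conv # chirp post-multiplication

x = np.random.default_rng(0).normal(size=7)
print(np.allclose(chirp_dft(x, 7, np.exp(-2j * np.pi / 7)), np.fft.fft(x)))  # True

With w chosen as a fractional root of unity, the same three FFTs evaluate the spectrum on an arbitrarily spaced uniform frequency grid, which is what makes the trick useful for lines of equally spaced non-Cartesian samples.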
NASA Astrophysics Data System (ADS)
Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong
2017-08-01
Monitoring the extent and evolution of contaminant plumes in local and regional groundwater systems from existing landfills is critical in contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we have built a Kalman filtering cycle to conduct time-lapse data assimilation for monitoring solute transport, using a solute transport experiment on a bench-scale physical model. The data assimilation combines evolution modeling based on a random walk model with observation correction based on the self-potential forward model. Thus, monitored self-potential data can be inverted by the data assimilation technique, and we can reconstruct the dynamic process of the contaminant plume instead of using traditional frame-by-frame static inversion, which may cause inversion artifacts. The data assimilation inversion algorithm was evaluated on noise-added synthetic time-lapse self-potential data. The results of the numerical experiment demonstrate the validity, accuracy, and noise tolerance of the dynamic inversion. To validate the proposed algorithm, we conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimics a real-world contaminant monitoring setup. The results of the physical experiments support the conclusion that unscented Kalman filter (UKF) data assimilation applied to field time-lapse self-potential data is a potentially useful approach for characterizing the transport of contaminant plumes.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim
2017-06-01
Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
2010-04-27
Fragment recovered from the report text: given Dirichlet boundary data D_P̃(x, y) on the entire plane P̃, one can solve the following boundary value problem in the half space below P̃: Δw − s²w ... The field was intended to be a plane wave when reaching the bottom side of the prism of Figure 1, where measurements were conducted; however, a visual inspection of the output experimental data revealed that the initializing wave field was not actually a plane wave.
Purevsuren, Tserenchimed; Batbaatar, Myagmarbayar; Khuyagbaatar, Batbayar; Kim, Kyungsoo; Kim, Yoon Hyuk
2018-03-12
Biomechanical studies have indicated that conventional non-anatomic reconstruction techniques for lateral ankle sprain (LAS) tend to restrict subtalar joint motion compared to intact ankle joints. Excessive restriction of subtalar motion may lead to chronic pain, functional difficulties, and the development of osteoarthritis. Therefore, various anatomic surgical techniques to reconstruct both the anterior talofibular and calcaneofibular ligaments have been introduced. In this study, ankle joint stability was evaluated using a multibody computational ankle joint model to assess two new anatomic reconstruction and three popular non-anatomic reconstruction techniques. An LAS injury model, three popular non-anatomic reconstruction models (Watson-Jones, Evans, and Chrisman-Snook), and two common types of anatomic reconstruction models were developed based on the intact ankle model. The stability of the ankle in both the talocrural and subtalar joints was evaluated under an anterior drawer test (150 N anterior force), an inversion test (3 Nm inversion moment), an internal rotation test (3 Nm internal rotation moment), and a combined loading test (9 Nm inversion and internal rotation moment with a 1800 N compressive force). Our overall results show that the two anatomic reconstruction techniques were superior to the non-anatomic reconstruction techniques in stabilizing both the talocrural and subtalar joints. The restricted subtalar joint motion observed mainly with the Watson-Jones and Chrisman-Snook techniques was not seen in the anatomical reconstructions. The Evans technique was beneficial for the subtalar joint because it does not restrict subtalar motion, although it was insufficient for restoring talocrural joint inversion. The anatomical reconstruction techniques best recovered ankle stability.
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the improved algorithm does not require iterating the reconstruction to convergence for every fixed EOF in order to determine the optimal EOF mode; the convergence criterion is reached only once. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that single optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some of the matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) dataset is reconstructed using the DINEOF, I-DINEOF (proposed in 2015), and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
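The shared core of the DINEOF family is an EM-like iteration: fill the gaps, compute a truncated EOF (SVD) decomposition, replace the gaps with the low-rank reconstruction, and repeat. A minimal sketch with a fixed number of EOF modes; the mode-selection and cross-validation logic that distinguishes DINEOF, I-DINEOF, and VE-DINEOF is omitted, and the synthetic field and gap fraction are illustrative.

import numpy as np

def dineof_fill(X, mask, k=2, n_iter=500, tol=1e-10):
    """Fill missing entries (mask == True) by iterating a rank-k EOF reconstruction."""
    Xf = np.where(mask, 0.0, X)                 # zero initial guess at the gaps
    prev = Xf[mask]
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        recon = (U[:, :k] * s[:k]) @ Vt[:k]     # truncated-EOF approximation
        Xf = np.where(mask, recon, X)           # update only the missing values
        if np.linalg.norm(Xf[mask] - prev) < tol:
            break
        prev = Xf[mask]
    return Xf

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
field = np.outer(np.sin(t), np.cos(t)) + 0.3 * np.outer(np.cos(2 * t), np.sin(3 * t))
mask = rng.uniform(size=field.shape) < 0.2      # 20% missing, e.g. cloud cover
print(np.abs(dineof_fill(field, mask)[mask] - field[mask]).max())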
Markov prior-based block-matching algorithm for superdimension reconstruction of porous media
NASA Astrophysics Data System (ADS)
Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong
2018-04-01
A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regular items are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three different perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and the traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
Numerical Recovering of a Speed of Sound by the BC-Method in 3D
NASA Astrophysics Data System (ADS)
Pestov, Leonid; Bolgova, Victoria; Danilin, Alexandr
We develop a numerical algorithm for solving the inverse problem for the wave equation by the Boundary Control method. The problem we refer to as the forward one is an initial boundary value problem for the wave equation with zero initial data in a bounded domain. The inverse problem is to find the speed of sound c(x) from measurements of waves induced by a set of boundary sources. The time of observation is assumed to be greater than twice the acoustic radius of the domain. The numerical algorithm for sound-speed reconstruction is based on two steps. The first is to find a (sufficiently large) number of controls f_j (the basic control is defined by the position of the source and some time delay) that generate the same number of known harmonic functions, i.e., Δu_j(·, T) = 0, where u_j is the wave generated by the control f_j. After that, a linear integral equation with respect to the speed of sound is obtained. A piecewise constant model of the speed is used. The results of numerical testing of a 3-dimensional model are presented.
A Shearlet-based algorithm for quantum noise removal in low-dose CT images
NASA Astrophysics Data System (ADS)
Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng
2016-03-01
Low-dose CT (LDCT) scanning is a potential way to reduce the population's radiation exposure from X-rays. It is therefore necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because quantum noise can be modeled as a Poisson process, we first transform it using the Anscombe variance-stabilizing transform (VST), producing approximately Gaussian noise with unit variance. Second, the noise-free shearlet coefficients are estimated by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, producing the improved image. The main contribution is the combination of the Anscombe VST with the shearlet transform, by which edge coefficients and noise coefficients can be separated effectively in the high-frequency sub-bands. A number of experiments are performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce quantum noise while enhancing subtle details. It has clear value in clinical application.
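The VST wrap-around described above can be sketched generically: Anscombe forward transform, any Gaussian-noise denoiser, then the inverse transform. Here scipy's gaussian_filter stands in for the shearlet hard-thresholding step, and the simple algebraic inverse is used instead of the exact unbiased inverse; the image content and noise level are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    """Variance-stabilizing transform: Poisson -> approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (the exact unbiased inverse is preferable in practice)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
clean = 20.0 + 10.0 * np.sin(np.linspace(0, 3 * np.pi, 256))[None, :] * np.ones((64, 1))
noisy = rng.poisson(clean).astype(float)        # quantum (Poisson) noise model
stab = anscombe(noisy)                          # approximately Gaussian, sigma ~ 1
den = gaussian_filter(stab, sigma=2.0)          # stand-in for shearlet thresholding
restored = inverse_anscombe(den)
print(np.abs(restored - clean).mean(), np.abs(noisy - clean).mean())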
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms are available, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in overcoming artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, in this paper, improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other approaches (direct FBP, mean-filter-combined FBP, and median-filter-combined FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, the mean-square error (MSE) and the peak signal-to-noise ratio (PSNR), it was found that the reconstruction of the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
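The filtering step at the heart of parallel-beam FBP can be sketched directly: each projection row is multiplied in the Fourier domain by the Ram-Lak ramp, optionally apodized by a Hann (Hanning) window; the wavelet-denoising stage proposed above would precede this step. The sinogram here is a random placeholder.

import numpy as np

def filter_projections(sinogram, window='hann'):
    """Apply the FBP ramp filter (optionally Hann-apodized) to each projection row."""
    n = sinogram.shape[1]
    freqs = np.fft.fftfreq(n)                   # normalized frequency, cycles/sample
    ramp = np.abs(freqs)                        # Ram-Lak ramp filter
    if window == 'hann':
        ramp *= 0.5 * (1 + np.cos(2 * np.pi * freqs))  # Hann taper, zero at Nyquist
    spectra = np.fft.fft(sinogram, axis=1) * ramp
    return np.real(np.fft.ifft(spectra, axis=1))

rng = np.random.default_rng(0)
sino = rng.normal(size=(180, 128))              # placeholder sinogram: 180 views
filtered = filter_projections(sino)
# Back-projecting `filtered` over all view angles would complete the FBP image.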
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
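The core prior-conditioning idea, embedding a diagonal regularization operator into the system by right preconditioning and then running a Krylov solver with early stopping, can be sketched as follows. This toy uses a dense random matrix for the measurement operator and a dirty-image-like diagonal as the prior-conditioner; it is a sketch of the concept, not the actual PRIFIRA implementation.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n, m = 64, 48                        # image pixels, visibility samples (toy sizes)
A = rng.normal(size=(m, n))          # stand-in for the measurement operator
x_true = np.zeros(n); x_true[20:28] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

# Prior-conditioner built from a beamformed ("dirty") image A^H b: pixels with a
# strong dirty-image response are favored by the right preconditioner R.
r = np.abs(A.T @ b)
R = np.diag(r / r.max() + 1e-3)

# Right-preconditioned system: solve A R y = b with a Krylov method, then x = R y.
y = lsqr(A @ R, b, iter_lim=50)[0]   # early stopping acts as extra regularization
x_hat = R @ y
```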
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2017-09-01
A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jing; Guan, Huaiqun; Solberg, Timothy
2011-07-15
Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter thus obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
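A toy sketch of this decomposition idea, under simplifying assumptions (two quadratic component objectives standing in for the real travel-time and group-velocity misfits), is the standard consensus form of the augmented Lagrangian iteration: component models are solved separately, and multiplier updates steer them toward a common model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # model parameters (toy size)
A1 = rng.normal(size=(30, n))            # component 1: e.g. body-wave misfit
A2 = rng.normal(size=(25, n))            # component 2: e.g. surface-wave misfit
m_true = rng.normal(size=n)
d1, d2 = A1 @ m_true, A2 @ m_true

rho = 1.0                                # augmented Lagrangian penalty weight
z = np.zeros(n)                          # common (consensus) model
u1, u2 = np.zeros(n), np.zeros(n)        # scaled Lagrange multipliers

for _ in range(100):
    # Separate component solves: min ||A_i m - d_i||^2 + (rho/2)||m - z + u_i||^2.
    m1 = np.linalg.solve(A1.T @ A1 + rho * np.eye(n), A1.T @ d1 + rho * (z - u1))
    m2 = np.linalg.solve(A2.T @ A2 + rho * np.eye(n), A2.T @ d2 + rho * (z - u2))
    # Multiplier/consensus updates steer the component models toward agreement.
    z = 0.5 * (m1 + u1 + m2 + u2)
    u1 += m1 - z
    u2 += m2 - z
```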
A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).
Wang, Qi; Wang, Huaxiang
2011-04-01
During the past few decades, computerized tomography (CT) has been widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in industry because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement: using radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information about the scanned object can be computed. This is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution need to be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to the Tikhonov system (the MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experimental results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
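A minimal sketch of the ingredients named here, conjugate gradients on the Tikhonov normal equations with a simple diagonal (Jacobi) preconditioner to lower the condition number, might look as follows; the operator, regularization weight, and preconditioner choice are illustrative assumptions rather than the paper's MCGT method.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
m, n = 40, 100                        # few independent measurements, many pixels
A = rng.normal(size=(m, n))           # stand-in sensitivity matrix
x_true = np.zeros(n); x_true[40:55] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = 1e-2
H = A.T @ A + lam * np.eye(n)         # Tikhonov normal-equations operator
rhs = A.T @ b

# Jacobi (diagonal) preconditioner: a crude way to lower the condition number.
d = np.diag(H)
Minv = LinearOperator((n, n), matvec=lambda v: v / d)
x_hat, info = cg(H, rhs, M=Minv, maxiter=200)
```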
Higher order reconstruction for MRI in the presence of spatiotemporal field perturbations.
Wilm, Bertram J; Barmet, Christoph; Pavan, Matteo; Pruessmann, Klaas P
2011-06-01
Despite continuous hardware advances, MRI is frequently subject to field perturbations that are of higher than first order in space and thus violate the traditional k-space picture of spatial encoding. Sources of higher-order perturbations include eddy currents, concomitant fields, thermal drifts, and imperfections of higher-order shim systems. In conventional MRI with Fourier reconstruction, they give rise to geometric distortions, blurring, artifacts, and errors in quantitative data. This work describes an alternative approach in which the entire field evolution, including higher-order effects, is accounted for by viewing image reconstruction as a generic inverse problem. The relevant field evolutions are measured with a third-order NMR field camera. Algebraic reconstruction is then formulated so as to jointly minimize artifacts and noise in the resulting image. It is solved by an iterative conjugate-gradient algorithm that uses explicit matrix-vector multiplication to accommodate arbitrary net encoding. The feasibility and benefits of this approach are demonstrated by examples of diffusion imaging. In a phantom study, it is shown that higher-order reconstruction largely overcomes the variable image distortions that diffusion gradients induce in EPI data. In vivo experiments then demonstrate that the resulting geometric consistency permits straightforward tensor analysis without coregistration. Copyright © 2011 Wiley-Liss, Inc.
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models that consider only the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper, and a dynamic algorithm is proposed that considers both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built that fuses the measurement information and the space constraint of the temperature field with its dynamic evolution information, and robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better than those of static algorithms such as the least-squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is thus provided for temperature field reconstruction by acoustic tomography. PMID:28895930
Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang
2016-12-14
An efficient wide-angle inverse synthetic aperture imaging method that considers spherical wavefront effects and is suitable for the terahertz band is presented. First, the echo signal model under the spherical-wave assumption is established, and a detailed wavefront curvature compensation method accelerated by the 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with those obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and efficiency of the presented method. This imaging method can be used directly in nondestructive detection and can also provide a solution for calculating the far-field radar cross sections (RCSs) of targets in the terahertz regime.
Uncertainty principles for inverse source problems for electromagnetic and elastic waves
NASA Astrophysics Data System (ADS)
Griesmaier, Roland; Sylvester, John
2018-06-01
In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.
Experimental determination of pore shapes using phase retrieval from q -space NMR diffraction
NASA Astrophysics Data System (ADS)
Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm
2018-05-01
This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q -space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.
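The combination of hybrid input-output (HIO) and error-reduction (ER) iterations described here is the classical Fienup scheme; a compact sketch under simplifying assumptions (a fixed loose support instead of the shrinkwrap adaptation, plus a non-negativity constraint, with an illustrative toy object) is:

```python
import numpy as np

def phase_retrieval(modulus, support, n_iter=500, beta=0.9):
    """Alternate HIO and ER updates to recover an image from |FFT| alone."""
    rng = np.random.default_rng(0)
    g = rng.random(modulus.shape) * support          # random start inside support
    for i in range(n_iter):
        G = np.fft.fft2(g)
        G = modulus * np.exp(1j * np.angle(G))       # impose the measured modulus
        g_new = np.real(np.fft.ifft2(G))
        if i % 50 < 40:                              # HIO: feedback on violations
            bad = (~support) | (g_new < 0)
            g = np.where(bad, g - beta * g_new, g_new)
        else:                                        # ER: hard projection
            g = np.where(support & (g_new >= 0), g_new, 0.0)
    return g

# Toy "pore": a disc; only its Fourier modulus and a loose support box are given.
x = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
x[(yy - 32) ** 2 + (xx - 32) ** 2 < 81] = 1.0
support = np.zeros_like(x, dtype=bool)
support[16:48, 16:48] = True
recon = phase_retrieval(np.abs(np.fft.fft2(x)), support)
```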
Frequency-radial duality based photoacoustic image reconstruction.
Akramus Salehin, S M; Abhayapala, Thushara D
2012-07-01
Photoacoustic image reconstruction algorithms are usually slow due to the large sizes of data that are processed. This paper proposes a method for exact photoacoustic reconstruction for the spherical geometry, in the limiting case of a continuous aperture and infinite measurement bandwidth, that is faster than existing methods, namely (1) the backprojection method and (2) the Norton-Linzer method [S. J. Norton and M. Linzer, "Ultrasonic reflectivity imaging in three dimensions: Exact inverse scattering solution for plane, cylindrical and spherical apertures," IEEE Trans. Biomed. Eng. BME-28, 202-220 (1981)]. The initial pressure distribution is expanded using a spherical Fourier-Bessel series. The proposed method estimates the Fourier-Bessel coefficients and subsequently recovers the pressure distribution. A concept of frequency-radial duality is introduced that separates the information from the different radial basis functions by using frequencies corresponding to the Bessel zeros. This approach provides a means to analyze the information obtained given a measurement bandwidth. Using order analysis and numerical experiments, the proposed method is shown to be faster than both the backprojection and the Norton-Linzer methods. Further, the images reconstructed using the proposed methodology were of similar quality to those of the Norton-Linzer method and better than those of the approximate backprojection method.
NASA Astrophysics Data System (ADS)
Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud
2017-11-01
Spatial electron paramagnetic resonance imaging (EPRI) is a recent method for localizing and characterizing free radicals in vivo or in vitro, with applications in the material and biomedical sciences. To improve the quality of EPRI reconstructions, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on total variation and a Besov seminorm as the regularization term. To handle the Besov seminorm, an implementation using the curvelet transform and a sparsity-enforcing L1 norm is proposed. This allows our model to reconstruct both images where acquisition information is missing and images with detail in textured areas, opening possibilities for reducing acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the model is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms, both visually and quantitatively, the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were also successfully obtained.
Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction
Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.
2016-01-01
X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902
CT cardiac imaging: evolution from 2D to 3D backprojection
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke
2004-04-01
The state-of-the-art multiple detector-row CT, which usually employs fan-beam reconstruction algorithms by approximating a cone-beam geometry with a fan-beam geometry, is well recognized as an important modality for cardiac imaging. At present, multiple detector-row CT is evolving into volumetric CT, in which cone-beam reconstruction algorithms are needed to combat cone-beam artifacts caused by large cone angles. An ECG-gated cardiac cone-beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To obtain the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone-beam reconstruction algorithm toward the edge of the CT detector. A helical body phantom is used to evaluate the algorithm's z-coverage and its capability of suppressing cone-beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 × 1.25 mm and normalized pitches of 0.275 and 0.3, respectively, are used to evaluate the algorithm's imaging performance. As a reference, images reconstructed by a fan-beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone-beam reconstruction algorithm outperforms its fan-beam counterpart in cone-beam artifact suppression and z-coverage while temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time and to improve patient comfort and x-ray dose efficiency. Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone-beam reconstruction algorithms will provide better image quality for CT cardiac applications.
From scores to face templates: a model-based approach.
Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar
2007-12-01
Regeneration of templates from match scores has security and privacy implications for any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with the Mahalanobis cosine distance measure, the Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) and the Facial Recognition Technology (FERET) database. With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With a similar operational setup, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With three different levels of score quantization, we achieve 69 percent, 68 percent, and 49 percent probabilities of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that the proposed reconstruction scheme has a 47 percent higher probability of breaking in as a randomly chosen target subject for the commercial system compared to a hill climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill climbing kind of attack where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
A method for reconstructing the image of an object behind thin scattering media via phase modulation is proposed. An optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm, using the correlation coefficient as the fitness function to evaluate the quality of the reconstructed image. The images reconstructed by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that the proposed method achieves better definition and higher speed than the genetic algorithm.
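A stripped-down sketch of such a simulated annealing loop, with a random-phase screen standing in for the scattering medium, the correlation coefficient as fitness, and a simple far-field (single FFT) propagation model, all of which are assumptions for illustration:

```python
import numpy as np

def fitness(mask_phase, medium, target):
    # Correlation coefficient between the reconstructed and target intensities.
    field = np.fft.fft2(medium * np.exp(1j * mask_phase))   # far-field assumption
    img = np.abs(field) ** 2
    return np.corrcoef(img.ravel(), target.ravel())[0, 1]

rng = np.random.default_rng(0)
N = 32
medium = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))     # thin random scatterer
target = np.zeros((N, N)); target[12:20, 12:20] = 1.0

phase = rng.uniform(0, 2 * np.pi, (N, N))                   # SLM phase mask
f = fitness(phase, medium, target)
T = 1.0
for _ in range(20000):
    trial = phase.copy()
    i, j = rng.integers(0, N, size=2)
    trial[i, j] = rng.uniform(0, 2 * np.pi)                 # perturb one SLM pixel
    f_trial = fitness(trial, medium, target)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if f_trial > f or rng.random() < np.exp((f_trial - f) / T):
        phase, f = trial, f_trial
    T *= 0.9995                                             # geometric cooling
```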
Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S
2012-03-01
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao
2018-07-01
This paper presents a quantitative comparison of several widely used interpolation algorithms, i.e., Ordinary Kriging (OrK), Universal Kriging (UnK), planar fit and Inverse Distance Weighting (IDW), based on a grid-based single-shell ionosphere model over China. The experimental data were collected from the Crustal Movement Observation Network of China (CMONOC) and the International GNSS Service (IGS), covering the days of year 60-90 in 2015. The quality of these interpolation algorithms was assessed by cross-validation in terms of both the ionospheric correction performance and Single-Frequency (SF) Precise Point Positioning (PPP) accuracy on an epoch-by-epoch basis. The results indicate that the interpolation models perform better at mid-latitudes than low latitudes. For the China region, the performance of OrK and UnK is relatively better than the planar fit and IDW model for estimating ionospheric delay and positioning. In addition, the computational efficiencies of the IDW and planar fit models are better than those of OrK and UnK.
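Of the four interpolators compared, IDW is the simplest to write down; a short sketch with illustrative toy station data (not the CMONOC/IGS data used in the paper) is:

```python
import numpy as np

def idw(xy_known, v_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights proportional to 1/d^power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ v_known

# Toy ionospheric grid: VTEC-like values at scattered stations (illustrative only).
rng = np.random.default_rng(0)
stations = rng.uniform(0.0, 10.0, size=(20, 2))       # (lon, lat), arbitrary units
vtec = 10.0 + 0.5 * stations[:, 1] + rng.normal(0.0, 0.2, 20)
grid = np.array([(x, y) for x in range(11) for y in range(11)], dtype=float)
vtec_grid = idw(stations, vtec, grid)
```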
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman
2018-01-17
The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system; it combines the advantages of fuzzy inference systems and artificial neural networks to improve performance in applications such as modeling, control, and classification. The neutron pulse-height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was generated randomly, and each generated energy spectrum was then normalized). The randomly generated neutron energy spectra were used as output data of the developed ANFIS code in the training step. Calculating the neutron energy spectrum with conventional methods requires solving an inverse problem with a nearly singular response matrix (with a determinant close to zero), and the solutions obtained with conventional methods unfold the neutron energy spectrum with low accuracy. Iterative algorithms, or intelligent algorithms that avoid solving the inverse problem altogether, are therefore usually preferred for unfolding the energy spectrum; avoiding the inverse problem is the main reason for developing intelligent algorithms such as ANFIS. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources obtained using the developed code were found to be in excellent agreement with the reference data. Moreover, the unfolded energy spectra obtained using ANFIS were more accurate than the results of artificial neural network calculations reported in previously published papers. © The Author(s) 2018. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Magnetic Resonance-Based Electrical Property Tomography (MR- EPT) for Prostate Cancer Grade Imaging
2014-07-01
[Figure residue from the original report: Figure 10 shows a prostate-like gelatin phantom with play-dough inclusions (a single 5 mm inclusion and two 5 mm inclusions providing significant conductivity contrast), the magnitude image (TSE), and the reconstructions: 2D inverse reconstruction, 2D inverse reconstruction with total variation, and 3D inverse reconstruction.]
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
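The paper's completion approach is more elaborate, but the underlying low-rank denoising idea can be illustrated with plain singular-value truncation (Eckart-Young); the matrix sizes and rank here are arbitrary assumptions for the toy example.

```python
import numpy as np

def lowrank_denoise(M, rank):
    """Best rank-r approximation via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

rng = np.random.default_rng(0)
# A genuinely low-rank "phase-space" matrix plus additive noise.
L = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))
noisy = L + 0.5 * rng.normal(size=L.shape)
denoised = lowrank_denoise(noisy, rank=5)
```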
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT.
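A simpler alternating-projection (POCS-style) treatment of the same two constraint sets, the sampled DWT coefficients and the a priori amplitude bounds, can be sketched with PyWavelets; note this illustrates the constraints only and is not Cole's minimum-norm representation, and the signal, wavelet, and sampling mask are illustrative assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0.0, 1.0, n)
x_true = np.clip(np.sin(2 * np.pi * 5 * t) + 0.3, -1.0, 1.0)   # bounded signal
lo, hi = -1.0, 1.0                                             # a priori bounds

wavelet, level = "db2", 3
c_full, slices = pywt.coeffs_to_array(pywt.wavedec(x_true, wavelet, level=level))
mask = rng.random(c_full.shape) < 0.5        # only half the DWT is sampled
c_meas = c_full * mask

x = np.zeros(n)
for _ in range(300):
    # Projection 1: agree with the sampled DWT coefficients.
    c, _ = pywt.coeffs_to_array(pywt.wavedec(x, wavelet, level=level))
    c = np.where(mask, c_meas, c)
    x = pywt.waverec(pywt.array_to_coeffs(c, slices, output_format="wavedec"),
                     wavelet)[:n]
    # Projection 2: enforce the a priori amplitude bounds.
    x = np.clip(x, lo, hi)
```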
Jolivet, Frédéric; Momey, Fabien; Denis, Loïc; Méès, Loïc; Faure, Nicolas; Grosjean, Nathalie; Pinston, Frédéric; Marié, Jean-Louis; Fournier, Corinne
2018-04-02
Reconstruction of phase objects is a central problem in digital holography, whose various applications include microscopy, biomedical imaging, and fluid mechanics. Starting from a single in-line hologram, there is no direct way to recover the phase of the diffracted wave in the hologram plane. The reconstruction of absorbing and phase objects therefore requires the inversion of the non-linear hologram formation model. We propose a regularized reconstruction method that includes several physically-grounded constraints such as bounds on transmittance values, maximum/minimum phase, spatial smoothness or the absence of any object in parts of the field of view. To solve the non-convex and non-smooth optimization problem induced by our modeling, a variable splitting strategy is applied and the closed-form solution of the sub-problem (the so-called proximal operator) is derived. The resulting algorithm is efficient and is shown to lead to quantitative phase estimation on reconstructions of accurate simulations of in-line holograms based on the Mie theory. As our approach is adaptable to several in-line digital holography configurations, we present and discuss the promising results of reconstructions from experimental in-line holograms obtained in two different applications: the tracking of an evaporating droplet (size ∼ 100μm) and the microscopic imaging of bacteria (size ∼ 1μm).
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.
2016-12-01
Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. While mass recovery generally deteriorated with increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy rates. POD is computationally superior requiring only 2.5 mins to complete each inversion compared to 3 hours for CTL to do the same.
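The POD-constrained inversion reduces to estimating a small number of basis coefficients; a schematic sketch, with random stand-ins for the training images and the ER sensitivity matrix (both assumptions, in place of the numerically generated TIs and the true forward model), is:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, n_ti, k = 400, 200, 10        # pixels, training images, retained POD modes

# Training images from a numerical transport model; random smooth fields here.
TIs = rng.normal(size=(npix, n_ti)).cumsum(axis=0)
U, s, _ = np.linalg.svd(TIs - TIs.mean(axis=1, keepdims=True), full_matrices=False)
Phi = U[:, :k]                       # POD basis vectors (physics-based constraint)

G = rng.normal(size=(60, npix))      # stand-in for the ER sensitivity matrix
m_true = Phi @ rng.normal(size=k)    # a model consistent with the training ensemble
d = G @ m_true + 0.01 * rng.normal(size=60)

# Invert for a handful of basis coefficients instead of the full pixel space.
a = np.linalg.lstsq(G @ Phi, d, rcond=None)[0]
m_hat = Phi @ a
```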
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that takes into account the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometrical information of the ROI (region of interest) considering the finite size of the acoustic sensor element. In the new algorithm, not only is the mathematical analysis very simple, but the software implementation is also easy because the FFT is not required. We verify the effectiveness of the proposed reconstruction algorithm through simulation results obtained with the MATLAB k-Wave toolbox.
Theory of the amplitude-phase retrieval in any linear-transform system and its applications
NASA Astrophysics Data System (ADS)
Yang, Guozhen; Gu, Ben-Yuan; Dong, Bi-Zhen
1992-12-01
This paper summarizes the theory of the amplitude-phase retrieval problem in arbitrary linear transform systems and its applications, based on our previous work over the past decade. We give a general statement of the amplitude-phase retrieval problem in an imaging system and derive a set of equations governing the amplitude-phase distribution through rigorous mathematical derivation. We then show that, by using these equations and an iterative algorithm, a variety of amplitude-phase problems can be successfully handled, and we carry out systematic investigations and comprehensive numerical calculations to demonstrate the use of this algorithm in various transform systems. For instance, we have achieved phase retrieval from two intensity measurements in an imaging system with diffraction loss (a non-unitary transform), both theoretically and experimentally, and the recovery of a model real image from its Hartley-transform modulus alone in the one- and two-dimensional cases. We discuss phase retrieval from a single intensity measurement based on the sampling theorem and our algorithm. We also apply the algorithm to the optimal design of the phase-adjusted plate for a phase-adjustment focusing laser accelerator and to a design approach for a single phase-only element implementing optical interconnects. To simulate real measured data more closely, we examine in detail the reconstruction of an image from its spectral modulus corrupted by random noise. The results show that a convergent solution can always be obtained and the quality of the recovered image is satisfactory. We also indicate the relationship and distinction between our algorithm and the original Gerchberg-Saxton algorithm. From these studies, we conclude that our algorithm is well suited to comprehensive phase-retrieval problems in imaging systems and to inverse problems in solid state physics, and may open a new way to solve the inverse source problems that appear extensively in physics.
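For reference, the original Gerchberg-Saxton iteration that the authors compare against (for the unitary Fourier-transform case) can be written in a few lines; the toy data below are illustrative, and the recovered phase is determined only up to the usual trivial ambiguities.

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, n_iter=200):
    """Recover the phase linking measured input- and Fourier-plane amplitudes."""
    rng = np.random.default_rng(0)
    g = amp_in * np.exp(1j * 2 * np.pi * rng.random(amp_in.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = amp_out * np.exp(1j * np.angle(G))   # impose Fourier-plane modulus
        g = np.fft.ifft2(G)
        g = amp_in * np.exp(1j * np.angle(g))    # impose input-plane amplitude
    return np.angle(g)

# Toy example: two intensity (amplitude) measurements, phase unknown.
rng = np.random.default_rng(1)
amp_in = np.ones((64, 64))
true_phase = rng.uniform(0, 2 * np.pi, (64, 64))
amp_out = np.abs(np.fft.fft2(amp_in * np.exp(1j * true_phase)))
est_phase = gerchberg_saxton(amp_in, amp_out)
```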
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.
The algorithm of central axis in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang
2017-09-01
Reverse engineering is an important technique for product imitation and new product development, and its core technology, surface reconstruction, is an active research topic. Among the various surface reconstruction algorithms, reconstruction based on the medial axis is an important approach. This paper surveys the medial axis algorithms used for surface reconstruction, points out the problems of the various methods and the areas where they need improvement, and discusses subsequent surface reconstruction work and the development of axis-based methods.
A reconstruction algorithm for helical CT imaging on PI-planes.
Liang, Hongzhu; Zhang, Cishen; Yan, Ming
2006-01-01
In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large-cone-angle scanning, it is proposed to reconstruct the object point-wise on unique customized tilted PI-planes that are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress cone-angle scanning artifacts. Computer simulations show that the proposed algorithm provides improved imaging performance compared with existing approximate cone-beam reconstruction algorithms.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Iterative algorithms also have additional drawbacks: for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
Integrating prior information into microwave tomography Part 1: Impact of detail on image quality.
Kurrant, Douglas; Baran, Anastasia; LoVetri, Joe; Fear, Elise
2017-12-01
The authors investigate the impact that incremental increases in the level of detail of patient-specific prior information have on image quality and the convergence behavior of an inversion algorithm in the context of near-field microwave breast imaging. A methodology is presented that uses image quality measures to characterize the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The approach permits key aspects that impact the quality of reconstruction of these structures to be identified and quantified. This provides insight into opportunities to improve image reconstruction performance. Patient-specific information is acquired using radar-based methods that form a regional map of the breast. This map is then incorporated into a microwave tomography algorithm. Previous investigations have demonstrated the effectiveness of this approach to improve image quality when applied to data generated with two-dimensional (2D) numerical models. The present study extends this work by generating prior information that is customized to vary the degree of structural detail to facilitate the investigation of the role of prior information in image formation. Numerical 2D breast models constructed from magnetic resonance (MR) scans, and reconstructions formed with a three-dimensional (3D) numerical breast model are used to assess if trends observed for the 2D results can be extended to 3D scenarios. For the blind reconstruction scenario (i.e., no prior information), the breast surface is not accurately identified and internal structures are not clearly resolved. A substantial improvement in image quality is achieved by incorporating the skin surface map and constraining the imaging domain to the breast. Internal features within the breast appear in the reconstructed image. However, it is challenging to discriminate between adipose and glandular regions and there are inaccuracies in both the structural properties of the glandular region and the dielectric properties reconstructed within this structure. Using a regional map with a skin layer only marginally improves this situation. Increasing the structural detail in the prior information to include internal features leads to reconstructions for which the interface that delineates the fat and gland regions can be inferred. Different features within the glandular region corresponding to tissues with varying relative permittivity values, such as a lesion embedded within glandular structure, emerge in the reconstructed images. Including knowledge of the breast surface and skin layer leads to a substantial improvement in image quality compared to the blind case, but the images have limited diagnostic utility for applications such as tumor response tracking. The diagnostic utility of the reconstruction technique is improved considerably when patient-specific structural information is used. This qualitative observation is supported quantitatively with image metrics. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Ghiglieri, Jacopo
We report on a search for new physics in a final state with two same-sign leptons, missing transverse energy, and significant hadronic activity at a center-of-mass energy sqrt(s) = 7 TeV. The data were collected with the CMS detector at the CERN LHC and correspond to an integrated luminosity of 0.98 inverse femtobarns. Data-driven methods are developed to estimate the dominant Standard Model backgrounds. No evidence for new physics is observed. The dominant background to the analysis comes from failures of lepton identification in Standard Model ttbar events. The ttbar production cross section in the dilepton final state is measured using 3.1 inverse picobarns of data; the cross section is measured to be 194 +/- 72 (stat) +/- 24 (syst) +/- 21 (lumi) pb. An algorithm is developed that uses tracking information to improve the reconstruction of missing transverse energy. The reconstruction of missing transverse energy is commissioned using the first collision data recorded at 0.9, 2.36 and 7 TeV. Events with abnormally large values of missing transverse energy are identified as arising from anomalous signals in the calorimeters, and tools are developed to identify and remove these anomalous signals.
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that hold great promise for limited-data cone-beam CT reconstruction with enhanced image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and an appropriate way of selecting the values for each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm preserves the edges of the reconstructed images better while requiring fewer sensitive parameters to tune.
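As a rough illustration of the TV machinery these algorithms share, the sketch below implements one adaptive-weighted TV gradient step of the kind that ASD-POCS-type methods alternate with their data-fidelity updates. The exponential edge weight and the parameters delta and eps are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def aw_tv_gradient(img, delta=0.005, eps=1e-8):
    """Gradient of an edge-preserving (adaptive-weighted) TV term."""
    dx = np.diff(img, axis=0, append=img[-1:, :])   # forward differences
    dy = np.diff(img, axis=1, append=img[:, -1:])
    w = np.exp(-(dx**2 + dy**2) / delta**2)         # small weight at edges
    mag = np.sqrt(w * (dx**2 + dy**2) + eps)        # smoothed weighted TV magnitude
    px, py = w * dx / mag, w * dy / mag
    gx = px - np.roll(px, 1, axis=0)                # negative divergence of (px, py)
    gy = py - np.roll(py, 1, axis=1)
    return gx + gy

img = np.random.rand(64, 64)
img -= 0.02 * aw_tv_gradient(img)   # one TV steepest-descent step
```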
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in the literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms reduce to the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, including image reconstruction on chords of a two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M
2007-09-01
Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from the travel times of sound propagating between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations in both space and time and therefore yields more accurate reconstruction of these fields than algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate the spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence, which is a more general concept than the widely used hypothesis of frozen turbulence. The developed theory is applied to the reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by the University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented, and the errors in the reconstruction of these fields are studied.
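A minimal sketch of the linear-algebra core of a stochastic inversion of this kind is given below: the field estimate is the Gauss-Markov combination of the travel-time data through model-data and data-data covariance matrices. In TDSI those matrices are built from the derived spatial-temporal covariance functions; here they are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_field = 20, 100                          # travel times; grid points
R_md = rng.standard_normal((n_field, n_data))      # field-data covariance (placeholder)
A = rng.standard_normal((n_data, n_data))
R_dd = A @ A.T + 0.1 * np.eye(n_data)              # data covariance plus noise
d = rng.standard_normal(n_data)                    # travel-time fluctuations

field_hat = R_md @ np.linalg.solve(R_dd, d)        # stochastic-inversion estimate
```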
Review of inversion techniques using analysis of different tests
NASA Astrophysics Data System (ADS)
Smaglichenko, T. A.
2012-04-01
Tomographic techniques are tools that estimate the Earth's deep interior by inverting seismic data. Reliable visualization provides an adequate understanding of geodynamic processes for the prediction of natural hazards and the protection of the environment. This presentation focuses on two interrelated factors that affect reliability: the particularities of the geophysical medium and the strategy for choosing an inversion method. Three main techniques are reviewed. First, the standard LSQR algorithm, derived directly from the Lanczos algebraic method; double-difference tomography widely incorporates this algorithm and its extensions. Next, the CSSA technique, or method of subtraction, introduced into seismology by Nikolaev et al. in 1985; it was further developed in 2003 (Smaglichenko et al.) as the coordinate method of possible directions, already known in the theory of numerical methods. Finally, the new differentiated approach (DA) tomography, recently developed by the author for seismology and introduced into applied mathematics as a modification of Gaussian elimination. Different test models are presented that detect various properties of the medium and are of value for the mining sector as well as for the prediction of seismic activity. They are: 1) the checkerboard resolution test; 2) a single anomalous block surrounded by a uniform zone; 3) a large-size structure; 4) the most complicated case, in which the model consists of contrasting layers and the observation response equals zero. The geometry of the experiment for all models is given in the note of Leveque et al., 1993. It was assumed that the errors in the experimental data lie within the limits of pre-assigned accuracy. The testing showed that LSQR is effective when the small-size structure (1) is retrieved, while CSSA works faster in reconstructing the separated anomaly (2). The large-size structure (3) can be reconstructed by applying DA, which uses both Lanczos's method and CSSA as component parts of the inversion process. The difficulty of the model of contrasting layers (4) can be overcome with a priori information that allows the DA implementation. The testing leads us to the following conclusion: careful analysis and weighted assumptions about the characteristics of the medium under investigation should be made before starting the data inversion. The choice of a suitable technique will ensure the reliability of the solution. Nevertheless, DA is preferred in the case of noisy and large data.
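As a hedged illustration of the first technique reviewed above, the following lines run a damped LSQR solve of a linearized tomographic system G m = d using SciPy; the sparse ray-path matrix and the data are synthetic stand-ins.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
G = sprandom(200, 400, density=0.05, random_state=1)   # sparse ray-path matrix
m_true = rng.standard_normal(400)                      # true slowness perturbations
d = G @ m_true + 0.01 * rng.standard_normal(200)       # noisy travel-time residuals

m_est = lsqr(G, d, damp=0.1, iter_lim=100)[0]          # damped least-squares solution
```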
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into a 1D one via the Kronecker product, which sharply increases the dictionary size and the computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to those of 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
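The separable structure that 2D-SL0 exploits can be seen in a few lines: sampling the scene X with range and azimuth matrices A and B gives Y = A X B^T, which matches the Kronecker-product 1D system without ever forming the large dictionary. The sketch below demonstrates only this measurement-model identity, not the 2D-SL0 solver itself; all matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.zeros((64, 64)); X[10, 20] = X[40, 50] = 1.0    # sparse scene (two scatterers)
A = rng.standard_normal((32, 64))                      # range sub-sampling matrix
B = rng.standard_normal((32, 64))                      # azimuth sub-sampling matrix

Y2d = A @ X @ B.T                                      # separable 2D measurement
# equivalent 1D form: vec(A X B^T) = (B kron A) vec(X), column-major vec
Y1d = (np.kron(B, A) @ X.flatten(order="F")).reshape((32, 32), order="F")
assert np.allclose(Y2d, Y1d)                           # same data, no big dictionary
```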
NASA Astrophysics Data System (ADS)
Luo, H.; Zhang, H.; Gao, J.
2016-12-01
Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary to each other, and integrating them helps to determine the resistivity and velocity models of the target region more reliably. Because of the difficulty of finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing the cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems together along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing the seismic and MT inversions and the cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems but suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, the seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by the model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems, and at the same time the balance between data fitting and the structure consistency constraint can be enforced. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures. We will also show results from applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.
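A minimal sketch of the cross-gradient measure that couples the two models is given below; it vanishes wherever the velocity and resistivity models vary in parallel directions. The 2D grids and grid spacing are illustrative assumptions.

```python
import numpy as np

def cross_gradient(m1, m2, h=1.0):
    """z-component of grad(m1) x grad(m2) on a 2D grid with spacing h."""
    g1y, g1x = np.gradient(m1, h)
    g2y, g2x = np.gradient(m2, h)
    return g1x * g2y - g1y * g2x

vel = np.fromfunction(lambda i, j: i + j, (50, 50))   # toy velocity model
res = 2.0 * vel + 1.0                                 # structurally identical resistivity
print(np.abs(cross_gradient(vel, res)).max())         # ~0: structures are consistent
```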
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC) combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or with a blank image. The reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. The 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, in scalable inversion algorithms, and in the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels, and covariance structures derived from easily observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of the Gaussian assumptions inherent in them. We find that the assumption does not appreciably impact the estimates of mean ffCO2 source strengths, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding, and hard thresholding techniques at different reduction factors.
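A minimal sketch of an ISTA-style iteration with a generalized p-threshold in place of the soft threshold is shown below. The specific shrinkage rule (subtracting tau*|z|^(p-1)) is one common choice and is an assumption rather than the paper's exact operator; the masked-FFT operator E and its adjoint Et are toy stand-ins for the undersampled k-space encoding.

```python
import numpy as np

def p_threshold(z, tau, p=0.5, eps=1e-12):
    """Generalized p-thresholding: shrink magnitudes by tau*|z|^(p-1)."""
    shrink = tau * (np.abs(z) + eps) ** (p - 1.0)
    return np.sign(z) * np.maximum(np.abs(z) - shrink, 0.0)

def ista_p(y, E, Et, tau=0.01, p=0.5, n_iter=50, step=1.0):
    x = Et(y)                                        # zero-filled start
    for _ in range(n_iter):
        x = p_threshold(x - step * Et(E(x) - y), step * tau, p)
    return x

rng = np.random.default_rng(8)
truth = np.zeros((64, 64)); truth[20:26, 30:36] = 1.0
mask = rng.random((64, 64)) < 0.3                    # 30% of k-space sampled
E = lambda x: mask * np.fft.fft2(x) / 64.0           # normalized undersampled FFT
Et = lambda y: np.fft.ifft2(mask * y) * 64.0         # its adjoint
x_rec = ista_p(E(truth), E, Et).real                 # sparse recovery
```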
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic information: by inverting seismic data, useful information about the reservoir parameters can be obtained and exploration can be carried out effectively. Pre-stack data are characterised by a large data volume, abundant information, and so on, and their inversion yields rich information about the reservoir parameters. Owing to the large amount of pre-stack seismic data, existing single-machine environments have not been able to meet the computational needs; thus, the development of an efficient and fast method to solve the inversion problem of pre-stack seismic data is urgently needed. The optimisation of the elastic parameters by a genetic algorithm easily falls into a local optimum, which results in a poor inversion effect, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operations of the algorithm; the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
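A sketch of the Gardner-formula initialization idea described above follows: densities in the initial population are tied to the sampled P-velocities rather than drawn independently, which constrains the density inversion. The Gardner constants (0.31 and 0.25, for Vp in m/s and density in g/cc) and the Vp/Vs range are textbook values assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def init_population(n_ind, n_layers, vp_lo=2000.0, vp_hi=4500.0):
    """Initial population of (Vp, Vs, rho) layer models, Gardner-constrained."""
    vp = rng.uniform(vp_lo, vp_hi, size=(n_ind, n_layers))
    vs = vp / rng.uniform(1.6, 2.0, size=(n_ind, n_layers))  # plausible Vp/Vs ratios
    rho = 0.31 * vp ** 0.25                                  # Gardner formula
    return np.stack([vp, vs, rho], axis=-1)

pop = init_population(n_ind=100, n_layers=20)   # 100 candidate models, 20 layers
```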
Evaluation of the spline reconstruction technique for PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra; Gaitanis, Anastasios
2014-04-15
Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) the coefficient of variation (COV) and contrast from the NEMA phantom, (b) the contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real point-source studies. In all simulated phantoms, the SRT exhibits higher contrast and lower bias than FBP at all noise levels, at the cost of an increased COV in the reconstructed images. Finally, in the real studies, whereas the contrast of the cold chambers is similar for both algorithms, the SRT-reconstructed images of the NEMA phantom exhibit slightly higher COV values than those of FBP. In the Derenzo phantom, SRT resolves the 2-mm separated holes slightly better than FBP. The small-animal and human reconstructions via SRT exhibit slightly higher resolution and contrast than the FBP reconstructions. Conclusions: The SRT provides images of higher resolution, higher contrast, and lower bias than FBP, while slightly increasing the noise in the reconstructed images. Furthermore, it eliminates streak artifacts outside the object boundary. Unlike other analytic algorithms, the reconstruction time of SRT is comparable with that of FBP. The source code for SRT will become available in a future release of STIR.
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
Simulation of a fast diffuse optical tomography system based on radiative transfer equation
NASA Astrophysics Data System (ADS)
Motevalli, S. M.; Payani, A.
2016-12-01
Studies show that near-infrared (NIR) light (light with wavelengths between 700 nm and 1300 nm) undergoes two interactions, absorption and scattering, when it penetrates a tissue. Since scattering is the predominant interaction, the calculation of the light distribution in the tissue and the image reconstruction of the absorption and scattering coefficients are very complicated. Analytical and numerical methods, such as the radiative transport equation and the Monte Carlo method, have been used to simulate light penetration in tissue. Recently, several investigators have tried to develop diffuse optical tomography systems. In these systems, NIR light penetrates and passes through the tissue. The light exiting the tissue is then measured by NIR detectors placed around the tissue. These data are collected from all the detectors and transferred to the computational parts (including hardware and software), which produce a cross-sectional image of the tissue after performing some computational processes. In this paper, the results of the simulation of a diffuse optical tomography system are presented. This simulation involves two stages: a) simulation of the forward problem (light penetration in the tissue), performed by solving the diffusion approximation equation in the stationary state using FEM; and b) simulation of the inverse problem (image reconstruction), performed by the optimization algorithm known as the Broyden quasi-Newton method. This method of image reconstruction is faster than other Newton-based optimization algorithms, such as the Levenberg-Marquardt one.
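A minimal sketch of a Broyden quasi-Newton iteration of the kind used for the inverse step appears below: the Jacobian is never recomputed, only rank-one updated, which is what makes the method faster than full Newton-based schemes. The toy residual function stands in for the FEM forward model.

```python
import numpy as np

def broyden_solve(F, x0, J0, n_iter=30):
    """Root-find F(x)=0 with Broyden's rank-one Jacobian updates."""
    x, J = x0.copy(), J0.copy()
    f = F(x)
    for _ in range(n_iter):
        dx = np.linalg.solve(J, -f)                        # Newton-like step
        x_new = x + dx
        f_new = F(x_new)
        J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)  # Broyden update
        x, f = x_new, f_new
    return x

# toy nonlinear residual with root at (1, 1), standing in for the forward model
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
x = broyden_solve(F, x0=np.array([0.5, 0.5]), J0=np.eye(2))
```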
A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm
NASA Astrophysics Data System (ADS)
Auroux, D.; Blum, J.
2008-03-01
This paper deals with a new data assimilation algorithm called Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the equations of the model a relaxation term that forces the model towards the observations. The BFN algorithm consists in repeatedly performing forward and backward integrations of the model with relaxation (or nudging) terms, using opposite signs in the direct and inverse integrations, so as to make the backward evolution numerically stable. This algorithm was first tested on the standard Lorenz model with discrete observations (perfect or noisy) and compared with the variational assimilation method. The same type of study was then performed on the viscous Burgers equation, again comparing with the variational method and focusing on the time evolution of the reconstruction error, i.e. the difference between the reference trajectory and the identified one over a time period composed of an assimilation period followed by a prediction period. The possible use of the BFN algorithm as an initialization for the variational method has also been investigated. Finally, the algorithm was tested on a layered quasi-geostrophic model with sea-surface height observations. The behaviours of the two algorithms were compared in the presence of perfect or noisy observations, and also for imperfect models. This has allowed us to reach a conclusion concerning the relative performances of the two algorithms.
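A minimal sketch of one BFN cycle on a generic model dx/dt = f(x) is given below: the forward pass nudges the state toward the observations with +K, and the backward pass flips the sign of the relaxation so the reversed-time integration stays stable. Euler stepping, the gain K, and the toy oscillator are assumptions for illustration.

```python
import numpy as np

def bfn_cycle(f, x_init, y, dt=0.01, K=5.0, n_cycles=10):
    """y: observations at every time step, shape (n_steps, dim)."""
    x = x_init.copy()
    for _ in range(n_cycles):
        for k in range(len(y)):              # forward: dx/dt = f(x) + K*(y - x)
            x = x + dt * (f(x) + K * (y[k] - x))
        for k in range(len(y) - 1, -1, -1):  # backward: dx/dt = f(x) - K*(y - x)
            x = x - dt * (f(x) - K * (y[k] - x))
    return x   # estimate of the state at the initial time

f = lambda x: np.array([-x[1], x[0]])        # toy oscillator model
y = np.tile(np.array([1.0, 0.0]), (200, 1))  # placeholder observations
x0_est = bfn_cycle(f, np.zeros(2), y)
```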
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
The purpose of this work is to improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction through a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose while preserving good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing the imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost
2016-01-01
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT), named total variation regularized discrete algebraic reconstruction technique (TVR-DART), with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm and is aimed at imaging objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments with simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstructions than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.
Rayleigh wave nonlinear inversion based on the Firefly algorithm
NASA Astrophysics Data System (ADS)
Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou
2014-06-01
Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective and of allowing global searching. The algorithm proves feasible and advantageous for Rayleigh wave inversion with both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
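For concreteness, below is a hedged sketch of the firefly update that drives such a nonlinear inversion: each candidate model moves toward every brighter (lower-misfit) one with an attractiveness that decays with distance, plus a small random walk. The parameter values are typical defaults, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def firefly_step(pop, cost, beta0=1.0, gamma=1.0, alpha=0.2):
    """One firefly-algorithm update over a population of candidate models."""
    new = pop.copy()
    for i in range(len(pop)):
        for j in range(len(pop)):
            if cost[j] < cost[i]:                       # firefly j is brighter
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)      # distance-decaying attraction
                new[i] += beta * (pop[j] - pop[i]) \
                          + alpha * (rng.random(pop.shape[1]) - 0.5)
    return new

pop = rng.uniform(200.0, 800.0, size=(25, 10))   # 25 candidate Vs profiles, 10 layers
cost = np.sum((pop - 500.0) ** 2, axis=1)        # toy misfit function
pop = firefly_step(pop, cost)
```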
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method must be developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete the data processing; thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. In general, the inversion is formulated by incorporating regularizing constraints, and a non-monotone gradient-descent method is then introduced to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest descent gradient algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data are used to demonstrate the practical value of this new fast inversion method.
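A minimal sketch of a non-monotone gradient-descent iteration is shown below, using the Barzilai-Borwein step length, a standard non-monotone choice, on a regularized least-squares objective 0.5*||G m - d||^2 + 0.5*mu*||m||^2. Whether this matches the authors' exact scheme is not stated in the abstract, so treat it as illustrative.

```python
import numpy as np

def bb_descent(G, d, mu=1e-2, n_iter=100):
    """Barzilai-Borwein (non-monotone) descent for ridge-regularized least squares."""
    m = np.zeros(G.shape[1])
    g = G.T @ (G @ m - d) + mu * m        # gradient of the objective
    step = 1e-4                           # small safe first step
    for _ in range(n_iter):
        m_new = m - step * g
        g_new = G.T @ (G @ m_new - d) + mu * m_new
        s, y = m_new - m, g_new - g
        step = (s @ s) / (s @ y + 1e-30)  # BB1 step: misfit may rise between iterations
        m, g = m_new, g_new
    return m

G = np.random.default_rng(7).standard_normal((200, 400))   # toy sensitivity matrix
d = G @ np.ones(400)                                       # synthetic FTG data
m = bb_descent(G, d)
```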
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
A multiwave range test for obstacle reconstructions with unknown physical properties
NASA Astrophysics Data System (ADS)
Potthast, Roland; Schulz, Jochen
2007-08-01
We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A 'range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533-547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhauser, Basel, 1986, pp. 93-102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Gottingen, 1999]. In particular, we propose a new version of the Kirsch-Kress method using the range test and a new approach to the singular sources method based on the range test and the potential method. Numerical examples of reconstructions for all four methods are provided.
Studies of jet mass in dijet and W/Z + jet events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.
Invariant mass spectra for jets reconstructed using the anti-kt and Cambridge-Aachen algorithms are studied for different jet "grooming" techniques in data corresponding to an integrated luminosity of 5 inverse femtobarns, recorded with the CMS detector in proton-proton collisions at the LHC at a center-of-mass energy of 7 TeV. Leading-order QCD predictions for inclusive dijet and W/Z+jet production combined with parton-shower Monte Carlo models are found to agree overall with the data, and the agreement improves with the implementation of jet grooming methods used to distinguish merged jets of large transverse momentum from softer QCD gluon radiation.
Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D
2008-05-01
Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both the data-model misfit and the optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters, which increases the computation speed up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation of this efficient alternative form, using the Sherman-Morrison-Woodbury identity, is shown, and it is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, use of the GLS reconstruction method reduces the error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of the noise of the detector photomultiplier tubes has enabled the use of the GLS method for reconstructing experimental data and has shown promise for better quantification of the target in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
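The measurement-space trick described above can be checked numerically in a few lines: by the Sherman-Morrison-Woodbury identity, a GLS-type update (L + J^T W J)^(-1) J^T W r can be computed by inverting a matrix whose size is the number of measurements. The matrices below are random stand-ins with diagonal weights, an assumption made for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_par = 30, 500                      # under-determined, as in 3D DOT
J = rng.standard_normal((n_meas, n_par))     # Jacobian of the forward model
W = np.diag(rng.uniform(0.5, 2.0, n_meas))   # data-model misfit weights
L = np.diag(rng.uniform(0.5, 2.0, n_par))    # optical-property weights
r = rng.standard_normal(n_meas)              # residual vector

big = np.linalg.solve(L + J.T @ W @ J, J.T @ W @ r)   # n_par x n_par system
Linv_Jt = np.linalg.solve(L, J.T)
small = Linv_Jt @ np.linalg.solve(np.linalg.inv(W) + J @ Linv_Jt, r)  # n_meas system
assert np.allclose(big, small)               # the two forms agree
```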
NASA Technical Reports Server (NTRS)
Chu, W. P.
1977-01-01
Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data can be performed. The radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm. Then the multiwavelength extinction profiles are solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results are shown, illustrating the resolution and noise sensitivity of the inversion algorithms.
NASA Astrophysics Data System (ADS)
Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo
2008-03-01
In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time three-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by adaptively selecting the disparity search range, and it can also increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB compared with conventional algorithms and reduces the synthesis time of the reconstructed image to about 7.02 s compared with conventional algorithms.
NASA Astrophysics Data System (ADS)
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high-performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, using computing hardware more advanced than that typically seen on product MRI scanners, can result in the acquisition and reconstruction of high-quality respiratory-correlated 4D-MRI images in less than five minutes.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.
Tang, Shaojie; Tang, Xiangyang
2016-09-01
The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as their common algorithmic feature, were originally derived for exact helical reconstruction from cone-beam (CB) scan data and for axial reconstruction from fan-beam data, respectively. These two algorithms can be heuristically extended to image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts that exist in off-central-plane images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
NASA Astrophysics Data System (ADS)
Beilina, L.; Cristofol, M.; Li, S.; Yamamoto, M.
2018-01-01
We consider an inverse problem of reconstructing two spatially varying coefficients in an acoustic equation of hyperbolic type using interior data of solutions with suitable choices of initial condition. Using a Carleman estimate, we prove Lipschitz stability estimates which ensure unique reconstruction of both coefficients. Our theoretical results are justified by numerical studies on the reconstruction of two unknown coefficients using noisy backscattered data.
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated singular value decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the conclusions drawn from synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of available recording stations to use in the inversion process.
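A minimal sketch of the r-solution itself is given below: a least-squares inversion of the linear forward operator that retains only the r largest singular values, suppressing the unstable directions of the ill-posed problem. The forward operator and the data are random stand-ins.

```python
import numpy as np

def r_solution(A, d, r):
    """Truncated-SVD least-squares solution keeping the r largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ d) / s[:r])

rng = np.random.default_rng(6)
A = rng.standard_normal((300, 120))       # forward operator: source -> water level
d = A @ rng.standard_normal(120) + 0.05 * rng.standard_normal(300)  # noisy records
src = r_solution(A, d, r=30)              # regularized source estimate
```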
A methodology for image quality evaluation of advanced CT systems.
Wilson, Joshua M; Christianson, Olav I; Richard, Samuel; Samei, Ehsan
2013-03-01
This work involved the development of a phantom-based method to quantify the performance of tube current modulation and iterative reconstruction in modern computed tomography (CT) systems. The quantification included resolution, HU accuracy, noise, and noise texture, accounting for the impact of contrast, prescribed dose, reconstruction algorithm, and body size. A 42-cm-long, 22.5-kg polyethylene phantom was designed to model four body sizes. Each size was represented by a uniform section, for the measurement of the noise-power spectrum (NPS), and a feature section containing various rods, for the measurement of HU and the task-based modulation transfer function (TTF). The phantom was scanned on a clinical CT system (GE, 750HD) using a range of tube current modulation settings (NI levels) and reconstruction methods (FBP and ASIR30). An image quality analysis program was developed to process the phantom data and calculate the targeted image quality metrics as a function of contrast, prescribed dose, and body size. The phantom fabrication closely followed the design specifications. In terms of tube current modulation, the tube current and resulting image noise varied as a function of phantom size, as expected from the manufacturer specification: from the 16- to 37-cm section, the HU contrast for each rod was inversely related to phantom size, and the noise was relatively constant (<5% change). With iterative reconstruction, the TTF exhibited a contrast dependency, with better performance for higher-contrast objects. At low noise levels, the TTFs of iterative reconstruction were better than those of FBP, but at higher noise, that superiority was not maintained at all contrast levels. Relative to FBP, the NPS of iterative reconstruction exhibited an ~30% decrease in magnitude and a 0.1 mm^-1 shift in the peak frequency. Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes. The testing platform enabled robust NPS, TTF, HU, and pixel noise measurements as a function of body size, capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.
Estimation of the gravitational wave polarizations from a nontemplate search
NASA Astrophysics Data System (ADS)
Di Palma, Irene; Drago, Marco
2018-01-01
Gravitational wave astronomy is just beginning, after the recent success of the four direct detections of binary black hole (BBH) mergers and the first observation of a binary neutron star inspiral, with the expectation of many more events to come. Given the possibility of detecting waves from astrophysical processes that are not exactly modeled, it is fundamental to be able to calculate the polarization waveforms in searches that use nontemplate algorithms. In such a case, the waveform polarizations are the only quantities that contain direct information about the generating process. We present the performance of a new valuable tool to estimate the inverse solution of gravitational wave transient signals, starting from the analysis of the signal properties of a nontemplate algorithm that is open to a wider class of gravitational signals not covered by template algorithms. We highlight the contributions to the wave polarization associated with the detector response, the sky localization, and the polarization angle of the source. We present the performance of the method and its implications using two main classes of transient signals, representing the limiting cases of the simplest and most complicated morphologies. The performance is encouraging for the tested waveforms: the correlation between the original and the reconstructed waveforms spans from better than 80% for simple morphologies to better than 50% for complicated ones. For a nontemplate search these results can be considered satisfactory for reconstructing the astrophysical progenitor.
NASA Astrophysics Data System (ADS)
Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing
2017-11-01
In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.
Investigating the Use of the Intel Xeon Phi for Event Reconstruction
NASA Astrophysics Data System (ADS)
Sherman, Keegan; Gilfoyle, Gerard
2014-09-01
The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code, making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC-optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. Work supported by the University of Richmond and the US Department of Energy.
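For readers unfamiliar with the filter, a minimal numpy sketch of the predict/update cycle the abstract describes follows; the matrices F, H, Q, and R are generic placeholders, not the CLAS12 track model, and the MIC port vectorizes exactly these small matrix products and inversions:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman filter cycle: predict the state, then blend in measurement z.
    x: state estimate, P: its covariance; F: state transition, H: measurement
    model, Q/R: process/measurement noise covariances."""
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: the Kalman gain weighs the prediction against the measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # the matrix inversion targeted for the MIC
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```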
High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology
NASA Astrophysics Data System (ADS)
Rajan, K.; Patnaik, L. M.; Ramakrishna, J.
1997-08-01
Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating in a few iterations tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have executed the algorithms on a 4-node IBM SP2 parallel computer. The results show that the execution time of the algorithm on an EH(3,1) is better than that of a 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
Low dose reconstruction algorithm for differential phase contrast imaging.
Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Marco, Stampanoni
2011-01-01
Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refractive index, rather than the attenuation coefficient, in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that builds on compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI can be transformed into an already-solved problem in transmission CT. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
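A sketch of the discretization idea (assumed shapes and operators, not the authors' code): differential projections are modeled by composing a finite-difference operator along the detector with an ordinary CT system matrix:

```python
import numpy as np
from scipy.sparse import diags, csr_matrix

def differential_forward(R, n_det):
    """Compose a first-order difference D across detector pixels with an
    absorption-CT system matrix R (n_det x n_pixels, one view) so that the
    differential phase projections are modeled as y = (D @ R) @ x."""
    D = diags([-1.0, 1.0], [0, 1], shape=(n_det - 1, n_det), format='csr')
    return D @ csr_matrix(R)   # the "linear partial derivative matrix" model
```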
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response, making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession-model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of accuracy equivalent to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion-limited and kinetically limited recession models.
Optimization-based reconstruction for reduction of CBCT artifact in IGRT
NASA Astrophysics Data System (ADS)
Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan
2016-04-01
Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image guided radiation therapy (IGRT) by providing 3D spatial information of the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that the ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used in approximating the shape of a set of planar points. However, these algorithms cannot guarantee optimal boundaries for the varied cavities and holes they reconstruct. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary through iterative removal of triangles from the Delaunay triangulation. Our algorithm is divided into two main steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims mainly to detect holes and pure cavities. Cavities and holes are conceptualized as structures in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical measure, the compactness of a point, formed from the length variation of the edges incident to that point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness over the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes over varying point set densities and distributions.
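A hedged reading of the "compactness of a point" in code, using scipy's Delaunay triangulation for illustration; the exact statistic in the paper may differ from the std/mean variation assumed here:

```python
import numpy as np
from scipy.spatial import Delaunay

def point_compactness(points):
    """For each point in an (n, 2) array, compute the variation (std/mean)
    of the lengths of Delaunay edges incident to it; boundaries of cavities
    and holes show a sharp gradient in this measure."""
    tri = Delaunay(points)
    incident = [[] for _ in range(len(points))]
    for simplex in tri.simplices:            # each triangle contributes 3 edges
        for a, b in [(0, 1), (1, 2), (2, 0)]:
            i, j = simplex[a], simplex[b]
            d = np.linalg.norm(points[i] - points[j])
            incident[i].append(d)
            incident[j].append(d)
    return np.array([np.std(e) / np.mean(e) if e else 0.0 for e in incident])
```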
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
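The penalty-weighting idea can be sketched as follows; this is a schematic weighted-TV gradient under assumed forward differences and a fixed edge threshold, not the authors' GPU implementation:

```python
import numpy as np

def ep_tv_gradient(u, eps=1e-8, edge_thresh=0.1):
    """Schematic gradient of an edge-preserving TV penalty: pixels whose
    local gradient magnitude exceeds edge_thresh get a small weight, so
    smoothing acts mainly on the non-edge parts of the image u."""
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    w = np.where(mag > edge_thresh, 0.1, 1.0)   # per-pixel penalty weight
    # Negative divergence of w * normalized gradient gives the TV gradient.
    px, py = w * gx / mag, w * gy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div
```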
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and the blood pools of the ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. The kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented a relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
Wave optics theory and 3-D deconvolution for the light field microscope
Broxton, Michael; Grosenick, Logan; Yang, Samuel; Cohen, Noy; Andalman, Aaron; Deisseroth, Karl; Levoy, Marc
2013-01-01
Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method. PMID:24150383
Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.
2017-01-01
The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem, and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can lead to erroneous estimates in the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM including both magnitude- and susceptibility-based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both the structure edges and tiny features to be recovered while noise and artifacts are reduced. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating the magnitude prior. In the S-step, the susceptibility map is fitted in the spatial domain using weighted constraints derived from the initial susceptibility map from the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed in multiple head positions, with the supine position closest to the gold standard COSMOS result. PMID:27019480
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering
Tang, Shaojie; Tang, Xiangyang
2016-01-01
Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms, which are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing at off-central planes in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512
NASA Astrophysics Data System (ADS)
Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.
2017-08-01
The tokamak WEST aims at testing ITER divertor high heat flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma-facing components can pollute the plasma core, where they cause radiative cooling in the soft x-ray (SXR) range, which is detrimental to energy confinement and plasma stability. SXR diagnostics give valuable information for monitoring impurities and studying their transport. The WEST SXR diagnostic is composed of two new cameras based on the Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction using a synthetic diagnostic recently developed and coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario, the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz, and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
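The floating-point analogue of what the program computes can be written in a few lines; this numpy check (not the 'C' program itself) confirms that multiplying 2D DFTs is equivalent to 2D cyclic convolution, the identity the polynomial-transform decomposition exploits exactly over the integers:

```python
import numpy as np

def cyclic_conv2d(a, b):
    """2D cyclic convolution via the convolution theorem: multiply the 2D
    DFTs and invert. The polynomial-transform program computes the same
    result without floating-point round-off."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Quick check against the definition of cyclic convolution.
rng = np.random.default_rng(0)
a, b = rng.random((4, 4)), rng.random((4, 4))
direct = np.zeros((4, 4))
for m in range(4):
    for n in range(4):
        for i in range(4):
            for j in range(4):
                direct[m, n] += a[i, j] * b[(m - i) % 4, (n - j) % 4]
assert np.allclose(direct, cyclic_conv2d(a, b))
```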
Ahmad, Moiz; Balter, Peter; Pan, Tinsu
2011-01-01
Purpose: Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4–6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. Methods: The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Results: Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has a 3–8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. Conclusions: 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts. PMID:21992381
pyGIMLi: An open-source library for modelling and inversion in geophysics
NASA Astrophysics Data System (ADS)
Rücker, Carsten; Günther, Thomas; Wagner, Florian M.
2017-12-01
Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical, but also hydrological, methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different ways of coupling individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data from a simulated tracer experiment is presented that allows direct reconstruction of the underlying hydraulic conductivity distribution of the aquifer. Another example demonstrates the improvement of jointly inverting ERT and ultrasonic data with respect to saturation by a new approach that incorporates petrophysical relations in the inversion. Potential applications of the presented framework are manifold and include time-lapse, constrained, joint, and coupled inversions of various geophysical and hydrological data sets.
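The core of such a generalized inversion framework can be conveyed by a generic damped Gauss-Newton loop; the sketch below is schematic numpy, not the pyGIMLi API, and the forward/jacobian callables stand in for any physical forward operator:

```python
import numpy as np

def gauss_newton(forward, jacobian, data, m0, lam=1.0, n_iter=10):
    """Generic damped Gauss-Newton inversion loop.
    forward(m) -> predicted data; jacobian(m) -> sensitivity matrix J;
    lam is a simple damping/regularization weight."""
    m = m0.copy()
    for _ in range(n_iter):
        r = data - forward(m)                 # data residual
        J = jacobian(m)
        # Regularized normal equations: (J^T J + lam I) dm = J^T r.
        dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
        m += dm
    return m
```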
The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction
NASA Astrophysics Data System (ADS)
Zhang, K.
2016-12-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that operates within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is fully automatic, incurs no additional cost, and avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory footprint, and added topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy. MT data from surface, seabed and underground stations can all be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. From the comparison in Figure 1, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for inversion with topography, and is therefore useful for studies of the continental shelf with continuous exploration across land, sea and the subsurface. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rocks and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by the delineation of the diorite pluton uplift range. The test results show that high-quality data processing and an efficient inversion method are important guarantees for electromagnetic prospecting of porphyry ore.
UV Reconstruction Algorithm And Diurnal Cycle Variability
NASA Astrophysics Data System (ADS)
Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara
2009-03-01
UV reconstruction is a method for estimating surface UV from available actinometric and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To develop an improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland, were collected with the following instruments: a NILU-UV multi-channel radiometer, a Kipp&Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used to reconstruct UV at four Polish sites, Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane, since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters
NASA Astrophysics Data System (ADS)
Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada
2011-01-01
We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: first, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence; second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms, in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation in the mechanical linear scanning framework is also reported.
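To illustrate the cyclic smoothing idea (not the authors' CSG filter, which also interpolates between slices), scipy's standard Savitzky-Golay filter can be applied along the slice axis with a wrapped window:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for a stack of parallel, evenly spaced B-scan slices.
volume = np.random.rand(64, 128, 128)       # (slices, rows, cols)

# mode='wrap' makes the least-squares fitting window cyclic at the ends,
# the analogue of the cyclic indicator function in the CSG objective.
smoothed = savgol_filter(volume, window_length=7, polyorder=2,
                         axis=0, mode='wrap')
```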
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed Sensing (CS) can use the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, the orthogonal matching pursuit algorithm (OMP) and the total variation minimization algorithm (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
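A toy recovery experiment along these lines, using scikit-learn's OMP implementation with illustrative sizes (the hardware system and the TV solver are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Sense a sparse block with a random measurement matrix, then recover via OMP.
rng = np.random.default_rng(1)
n_pix, n_meas, k = 256, 96, 10               # 16x16 block, 96 measurements
x = np.zeros(n_pix)
x[rng.choice(n_pix, k, replace=False)] = 1.0  # k-sparse test block
Phi = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
y = Phi @ x                                   # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
x_hat = omp.fit(Phi, y).coef_
print(np.linalg.norm(x - x_hat))              # small when recovery succeeds
```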
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
NASA Astrophysics Data System (ADS)
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
2009-12-30
FA9550-06-1-0107 for “A Study of the 3-D Reconstruction of Heliospheric Vector Magnetic Fields from Faraday-Rotation Inversion” for work performed...from 2005 – 2009 by the University of California at San Diego. There are three aspects to this research: 1) The inversion of simple synthetic Faraday...rotation measurements that can be used to demonstrate the feasibility of performing this inversion when and if Faraday-rotation observations become
Mathematics of tsunami: modelling and identification
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey
2015-04-01
Tsunami (long waves in deep water) motion caused by underwater earthquakes is described by the shallow water equations
$$
\begin{cases}
\eta_{tt} = \operatorname{div}\left(g H(x,y)\, \nabla \eta\right), & (x,y)\in\Omega,\; t\in(0,T),\\
\eta|_{t=0} = q(x,y), \quad \eta_t|_{t=0} = 0, & (x,y)\in\Omega.
\end{cases} \tag{1}
$$
The bottom relief characteristics H(x,y) and the initial perturbation data (a tsunami source q(x,y)) are required for the direct simulation of tsunamis. The main difficulty of tsunami modelling is the very large size of the computational domain (Ω = 500 × 1000 kilometres in space and about one hour of computational time T for one metre of initial perturbation amplitude max|q|). Calculating the function η(x,y,t) of three variables on Ω × (0,T) requires large computing resources. We construct a new algorithm to numerically determine the moving tsunami wave height S(x,y), based on a kinematic-type approach and an analytical representation of the fundamental solution. The proposed algorithm for determining the function of two variables S(x,y) reduces the number of operations by a factor of 1.5 compared with solving problem (1). If all functions are independent of the variable y (the one-dimensional case), the moving tsunami wave height satisfies the well-known Airy-Green formula $S(x) = S(0)\,\sqrt[4]{H(0)/H(x)}$. The problem of identifying the parameters of a tsunami source from additional measurements of a passing wave is called the inverse tsunami problem. We investigate two different inverse problems of determining a tsunami source q(x,y) using two kinds of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements and satellite altimeter wave-form images. These problems are severely ill-posed. The main idea consists in combining the two types of measured data to reconstruct the source parameters. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. The algorithm for selecting the truncation number of singular values of the inverse problem operator, in agreement with the error level in the measured data, is described and analysed. In the numerical experiments we used the conjugate gradient method for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function; to calculate the gradient of the misfit function, an adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the two data types increases the stability and efficiency of the tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Institute of Computational Mathematics and Mathematical Geophysics of SB RAS, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-up and earthquakes. Special scientific plug-in components are embedded in a purpose-built GIS-type graphic shell for easy data retrieval, visualization and processing. We demonstrate the tsunami simulation plug-in on historical tsunami events (the 2004 Indian Ocean tsunami, the 2006 Simushir tsunami, and others).
This work was supported by the Ministry of Education and Science of the Russian Federation.
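Among the regularization techniques listed in the tsunami abstract above, truncated SVD is the easiest to show compactly; a minimal numpy sketch, with the truncation level k standing in for the abstract's error-matched selection rule:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularization for an ill-posed linear inverse problem
    A q = b: keep only the k largest singular values, discarding directions
    dominated by measurement noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```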
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge; optimization is therefore needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account in choosing the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
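For context, the fixed point that ordered subsets and over-relaxation accelerate is the classic EM update; a schematic numpy version follows (dense matrices for brevity, unlike any real-time implementation):

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Classic (unaccelerated) EM update for emission/transmission-style
    tomography: x <- x / (A^T 1) * A^T (y / (A x)). Ordered subsets apply
    this same update to angular subsets of the data to speed convergence."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])         # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)      # forward projection, guarded
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x
```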
NASA Astrophysics Data System (ADS)
Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.
2017-09-01
The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, representing methodological progress on the computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
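A generic marching solver for a Volterra integral equation of the second kind, f(t) = g(t) + ∫₀ᵗ K(t,s) f(s) ds, using trapezoidal quadrature; the optoacoustic kernel itself is not reproduced, so K and g below are caller-supplied assumptions:

```python
import numpy as np

def volterra2(g, K, t):
    """Solve f(t) = g(t) + int_0^t K(t,s) f(s) ds on a uniform grid t by
    trapezoidal quadrature and forward marching."""
    n, h = len(t), t[1] - t[0]
    f = np.empty(n)
    f[0] = g(t[0])                            # the integral vanishes at t=0
    for i in range(1, n):
        s = h * (0.5 * K(t[i], t[0]) * f[0]
                 + sum(K(t[i], t[j]) * f[j] for j in range(1, i)))
        # Solve for f[i], which appears implicitly via the trapezoidal
        # weight at s = t_i.
        f[i] = (g(t[i]) + s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return f
```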
2014-09-01
The goal of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. Progress includes: (i) developed time-of-flight extraction algorithms to perform USCT, (ii) developing image reconstruction algorithms for USCT, (iii) developed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bielecki, J.; Scholz, M.; Drozdowicz, K.
A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high-resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction at JET.
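The Phillips-Tikhonov step can be sketched generically; here L is a second-difference smoothness operator and alpha the regularization parameter, a schematic of the method rather than the KN3 implementation (which also imposes physical constraints during regularization):

```python
import numpy as np

def phillips_tikhonov(A, b, alpha):
    """Minimize ||A g - b||^2 + alpha * ||L g||^2 with L a second-difference
    (smoothness) operator, via the regularized normal equations."""
    n = A.shape[1]
    # Second-difference operator of shape (n - 2, n).
    L = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))[1:-1]
    return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)
```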
Reduced projection angles for binary tomography with particle aggregation.
Al-Rifaie, Mohammad Majid; Blackwell, Tim
This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and assessing whether this impacts image reconstruction. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections, and compared to filtered back projection, a random search algorithm, and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms on small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data is inevitable.
Binary optimization for source localization in the inverse problem of ECG.
Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf
2014-09-01
The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to the heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods were tested: a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet
NASA Astrophysics Data System (ADS)
Salucci, Marco; Tenuti, Lorenza; Nardin, Cristina; Oliveri, Giacomo; Viani, Federico; Rocca, Paolo; Massa, Andrea
2014-05-01
The application of non-destructive testing and evaluation (NDT/NDE) methodologies in civil engineering has attracted growing interest in recent years because of its potential impact in several different scenarios. As a consequence, Ground Penetrating Radar (GPR) technologies have been widely adopted as an instrument for the inspection of the structural stability of buildings and for the detection of cracks and voids. In this framework, the development and validation of GPR algorithms and methodologies represents one of the most active research areas within the ELEDIA Research Center of the University of Trento. More specifically, great efforts have been devoted to the development of inversion techniques based on the integration of deterministic and stochastic search algorithms with multi-focusing strategies. These approaches proved effective in mitigating the effects of both the nonlinearity and the ill-posedness of microwave imaging problems, which are the well-known issues arising in GPR inverse scattering formulations. In particular, a regularized multi-resolution approach based on the Inexact Newton Method (INM) has recently been applied to subsurface prospecting, showing a remarkable advantage over a single-resolution implementation [1]. Moreover, the use of multi-frequency or frequency-hopping strategies to exploit the information coming from GPR data collected in the time domain and transformed into frequency components has been proposed as well. In this framework, the effectiveness of multi-resolution multi-frequency techniques has been proven on synthetic data generated with numerical models such as GprMax [2]. The application to GPR of inversion algorithms based on Bayesian Compressive Sampling (BCS) is also currently under investigation, in order to exploit their capability to provide satisfactory reconstructions in the presence of single and multiple sparse scatterers [3][4]. Furthermore, multi-scaling approaches exploiting level-set-based optimization have been developed for the qualitative reconstruction of multiple and disconnected homogeneous scatterers [5]. Finally, the real-time detection and classification of subsurface scatterers has been investigated by means of learning-by-examples (LBE) techniques, such as Support Vector Machines (SVM) [6]. Acknowledgment - This work was partially supported by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". References: [1] M. Salucci, D. Sartori, N. Anselmi, A. Randazzo, G. Oliveri, and A. Massa, "Imaging Buried Objects within the Second-Order Born Approximation through a Multiresolution Regularized Inexact-Newton Method", 2013 International Symposium on Electromagnetic Theory (EMTS), Hiroshima, Japan, May 20-24, 2013 (invited). [2] A. Giannopoulos, "Modelling ground penetrating radar by GprMax", Construct. Build. Mater., vol. 19, no. 10, pp. 755-762, 2005. [3] L. Poli, G. Oliveri, P. Rocca, and A. Massa, "Bayesian compressive sensing approaches for the reconstruction of two-dimensional sparse scatterers under TE illumination", IEEE Trans. Geosci. Remote Sensing, vol. 51, no. 5, pp. 2920-2936, May 2013. [4] L. Poli, G. Oliveri, and A. Massa, "Imaging sparse metallic cylinders through a Local Shape Function Bayesian Compressive Sensing approach", Journal of the Optical Society of America A, vol. 30, no. 6, pp. 1261-1272, 2013. [5] M. Benedetti, D. Lesselier, M. Lambert, and A. Massa, "Multiple shapes reconstruction by means of multi-region level sets", IEEE Trans. Geosci. Remote Sensing, vol. 48, no. 5, pp. 2330-2342, May 2010. [6] L. Lizzi, F. Viani, P. Rocca, G. Oliveri, M. Benedetti and A. Massa, "Three-dimensional real-time localization of subsurface objects - From theory to experimental validation", 2009 IEEE International Geoscience and Remote Sensing Symposium, vol. 2, pp. II-121-II-124, July 12-17, 2009.
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of DBT were previously performed using phantoms with unrealistic models and with background and noise that are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT using various breast phantoms; validation was also performed using patient images. DBT was performed using a prototype unit optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered back-projection (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed using the different reconstruction algorithms. In-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed from reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed using the EM algorithm were slightly higher than those obtained using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, for small and low-contrast microcalcifications, the FBP algorithm reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. Texture analysis showed higher contrast and lower homogeneity for the FBP algorithm than for the other algorithms. Patient images reconstructed using the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms using various kinds of breast phantoms and patient cases. Future work using these algorithms, considering the type of breast and the acquisition techniques used (e.g., angular range, dose distribution), should include actual patients or patient-like phantoms to increase the potential for practical applications.
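For reference, a minimal sketch of one common contrast-to-noise ratio definition of the kind used to compare such reconstructions; the boolean ROI masks are hypothetical inputs, and the exact definition used in the study may differ.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    # CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI)
    s = image[signal_mask].mean()
    b = image[background_mask].mean()
    return abs(s - b) / image[background_mask].std()
```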
Wind reconstruction algorithm for Viking Lander 1
NASA Astrophysics Data System (ADS)
Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter
2017-06-01
The wind measurement sensors of Viking Lander 1 (VL1) were fully operational only for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind sensor failures, enabling the processing of wind data over the complete VL1 mission. The heater element of the quadrant sensor, which provided an auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Despite the failures, it was still possible to reconstruct the wind measurement data, because the failed components did not prevent the determination of wind direction and speed: some components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating its operation. The number of sols with available wind data is thereby extended from 350 to 2245.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial X-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the projection data of an adjacent artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. A small region of interest (ROI) setting and reverse processing were also applied to improve performance. Both algorithms reduced artifacts at the cost of slightly decreased gray levels. The OS-EM algorithm and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved performance.
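A minimal sketch of the OS-EM update examined above, assuming a dense system matrix A (measurements x voxels) and measured projections y; the subset list, restoration steps, and the ROI logic of the study are omitted, so this only illustrates the core multiplicative iteration.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=10, eps=1e-12):
    # Ordered-subset EM: one multiplicative update per projection subset.
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for rows in subsets:                    # e.g. interleaved view indices
            As = A[rows]
            ratio = y[rows] / (As @ x + eps)    # measured / forward-projected
            x *= (As.T @ ratio) / (As.T.sum(axis=1) + eps)
    return x
```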
Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.
Li, Liang; Wang, Bigong; Wang, Ge
2016-01-01
In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients, and an initial MRI image of a new patient is reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results for different under-sampling factors show that the proposed algorithm performs significantly better than reconstruction using the DL algorithm from MRI data alone.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes this divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation to form the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulations and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential for quantitative PET imaging.
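A DL prior of this kind rests on sparse coding of image patches over a learned dictionary. Below is a minimal orthogonal matching pursuit sketch of that sparse-coding step, with dictionary D (pixels x atoms), patch p, and sparsity k as hypothetical inputs; dictionary training (e.g. K-SVD) and the MAP iteration itself are not shown.

```python
import numpy as np

def omp(D, p, k):
    # Greedy sparse code: pick the atom most correlated with the residual,
    # then re-fit the coefficients of all selected atoms by least squares.
    residual, selected = p.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        selected.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, selected]
        coef, *_ = np.linalg.lstsq(sub, p, rcond=None)
        residual = p - sub @ coef
    code = np.zeros(D.shape[1])
    code[selected] = coef
    return code
```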
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.
Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can produce satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
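A sketch of the two-step scheme under simplifying assumptions: plain ramp-filtered FBP (scikit-image's iradon) stands in for the generalized FBP/BPF algorithms of the paper, and the two sinograms are assumed already co-registered; trunc_hd, global_ld, and roi_rows are hypothetical arrays.

```python
import numpy as np
from skimage.transform import iradon

def roi_recon(trunc_hd, global_ld, theta, roi_rows):
    # Step (a): graft the truncated normal-dose projections onto the
    # global low-dose sinogram (rows = detector bins, columns = views).
    sino = global_ld.copy()
    sino[roi_rows, :] = trunc_hd
    # Step (b): reconstruct; plain FBP as a stand-in for generalized FBP/BPF.
    return iradon(sino, theta=theta, filter_name='ramp')
```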
Zhang, Lingli; Zeng, Li; Guo, Yumeng
2018-01-01
Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which leads to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is first to investigate the distorted regions of reconstructed images affected by slope artifacts, and then to present a new iterative reconstruction method for the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified nonlocal means (PNLM); (3) use l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method achieved the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yields higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain the first and higher derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for the inhibition of slope artifacts. Therefore, the new method addresses the limited-angle CT reconstruction problem effectively and has practical significance.
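A minimal sketch of the l0W ingredient alone, using PyWavelets with an orthogonal wavelet in place of the tight framelets; the SART data-fidelity step and the PNLM prior-image compensation are omitted, and the threshold is a hypothetical parameter.

```python
import numpy as np
import pywt

def l0w_step(img, thresh, wavelet='db4', level=3):
    # Hard-threshold the detail coefficients: iterative hard thresholding
    # promotes sparsity of the wavelet representation, as described above.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                        # keep the approximation band
    for detail in coeffs[1:]:
        out.append(tuple(np.where(np.abs(d) >= thresh, d, 0.0)
                         for d in detail))
    return pywt.waverec2(out, wavelet)       # inverse wavelet transform
```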
Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Shatokhina, Iuliia; Obereder, Andreas; Ramlau, Ronny
2014-08-01
We present a fast wavefront reconstruction algorithm developed for an extreme adaptive optics system equipped with a pyramid wavefront sensor on a 42 m telescope. The method is called the Preprocessed Cumulative Reconstructor with domain decomposition (P-CuReD). The algorithm is based on the theoretical relationship between pyramid and Shack-Hartmann wavefront sensor data and consists of two consecutive steps: a data preprocessing step, followed by an application of the CuReD algorithm, a fast method for wavefront reconstruction from Shack-Hartmann sensor data. Closed-loop simulation results show that the P-CuReD method provides the same reconstruction quality as, and is significantly faster than, a matrix-vector multiplication (MVM) reconstructor.
Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja
2017-09-01
To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms, 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of reconstruction algorithm. PDRP expression correlated strongly between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in PDRP expression among algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. The PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials.
NOTE: A BPF-type algorithm for CT with a curved PI detector
NASA Astrophysics Data System (ADS)
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-01
Helical cone-beam CT is widely used nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both the longitudinal and transversal directions. In practical CT systems, detectors are expensive and account for a large share of the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
NASA Astrophysics Data System (ADS)
McLaughlin, Joyce; Renzi, Daniel
2006-04-01
Transient elastography and supersonic imaging are promising new techniques for characterizing the elasticity of soft tissues. In these methods, an 'ultrafast imaging' system (up to 10 000 frames s^-1) follows in real time the propagation of a low-frequency shear wave, and the displacement of the propagating shear wave is measured as a function of time and space. Here we develop a fast level-set-based algorithm for finding the shear wave speed from the interior positions of the propagating front. We compare the performance of the level curve methods developed here and our previously developed distance methods (McLaughlin J and Renzi D 2006 Shear wave speed recovery in transient elastography and supersonic imaging using propagating fronts Inverse Problems 22 681-706). We give reconstruction examples from synthetic data and from data obtained from a phantom experiment performed by Mathias Fink's group (Laboratoire Ondes et Acoustique, ESPCI, Université Paris VII).
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). However, in practical applications, the polychromaticity of the X-ray beam often causes beam hardening problems in image reconstruction. Beam hardening artifacts, which manifest as cupping, streaks and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV), and state-of-the-art beam hardening correction methods consider only this ideal scanning configuration, so they often run into problems in interior tomography due to projection truncation. To address this problem, this paper proposes a beam hardening correction method based on the Radon inversion transform for interior tomography. Experimental results show that, compared to conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. The presented method is therefore of both theoretical and practical significance for artifact correction in industrial CT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Zichao; Chen, Si; Hong, Young Pyo
X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy resolution of the detector. Another limitation is the effect of self-absorption, which can lead to inaccurate results with dense samples. To recover a higher quality elemental map, we combine x-ray fluorescence detection with a second data modality: conventional x-ray transmission tomography using absorption. By using these combined signals in a nonlinear optimization-based approach, we demonstrate the benefit of our algorithm on real experimental data and obtain an improved quantitative reconstruction of the spatial distribution of dominant elements in the sample. Furthermore, compared with single-modality inversion based on x-ray fluorescence alone, this joint inversion approach reduces ill-posedness and should result in improved elemental quantification and better correction of self-absorption.
Geng, Xiaobing; Xie, Zhenghui; Zhang, Lijun; Xu, Mei; Jia, Binghao
2018-03-01
An inverse source estimation method is proposed to reconstruct emission rates using local air concentration sampling data. It combines the nonlinear least squares-based ensemble four-dimensional variational data assimilation (NLS-4DVar) algorithm with a transfer coefficient matrix (TCM) created using FLEXPART, a Lagrangian atmospheric dispersion model. The method was tested in twin experiments and in experiments with actual Cs-137 concentrations measured around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Emission rates can be reconstructed sequentially as a nuclear accident progresses, which is important in the response to a nuclear emergency. With pseudo-observations generated continuously, most of the emission rates were estimated accurately, except when the wind blew off land toward the sea and at extremely slow wind speeds near the FDNPP. Because of the long duration of accidents and the variability of meteorological fields, monitoring networks composed only of land stations in a local area cannot provide enough information to support an emergency response. The errors in the estimation against the real observations from the FDNPP accident stemmed from a shortage of observations, a lack of data quality control, and an atmospheric dispersion model run without refinement and appropriate meteorological data. The proposed method should be developed further to meet the requirements of a nuclear emergency response.
Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L
2015-07-07
Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform (DLT) algorithm, which does not model refraction explicitly, to reconstruct position information. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, the difference between the actual and reconstructed position of a marker, and found that it is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and its errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras and reconstructed the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
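The per-interface building block of such a ray tracer is Snell's law in vector form; a sketch follows, where d and n are unit vectors and n points toward the incident medium. This is the generic optics formula, not the authors' calibration code.

```python
import numpy as np

def refract(d, n, n1, n2):
    # Refract unit direction d at an interface with unit normal n,
    # going from refractive index n1 into n2 (Snell's law, vector form).
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

Tracing each camera-to-marker ray through the air-glass and glass-water interfaces with this step, then intersecting the refracted rays from several cameras, yields a refraction-corrected 3D position.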
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-01-01
The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
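To illustrate the prototyping style, here is a sketch of one of the simplest CP instances: nonnegative least squares, min_x 0.5||Kx - b||^2 subject to x >= 0. The step sizes satisfy the tau*sigma*||K||^2 <= 1 convergence condition; K and b are hypothetical dense arrays standing in for a CT system model and sinogram.

```python
import numpy as np

def cp_nnls(K, b, n_iter=200):
    L = np.linalg.norm(K, 2)                 # spectral norm of K
    tau = sigma = 1.0 / L                    # tau * sigma * L^2 = 1
    x = np.zeros(K.shape[1]); xbar = x.copy(); y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        # Dual ascent: prox of the conjugate of F(z) = 0.5||z - b||^2.
        y = (y + sigma * (K @ xbar - b)) / (1.0 + sigma)
        # Primal descent: prox of G = indicator of the nonnegative orthant.
        x_new = np.maximum(x - tau * (K.T @ y), 0.0)
        xbar = 2.0 * x_new - x               # extrapolation with theta = 1
        x = x_new
    return x
```

Swapping F and G for other convex terms (TV seminorms, data-divergence terms) is exactly the kind of rapid prototyping the article demonstrates.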
Using sparse regularization for multi-resolution tomography of the ionosphere
NASA Astrophysics Data System (ADS)
Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.
2015-10-01
Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; it is usually posed as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias, so the STEC cannot be treated in absolute terms but only in relative terms. Measurements are collected from receivers that are not evenly distributed in space, and together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space, which can limit the accuracy with which CIT resolves ionospheric structures and processes. Some inversion techniques are based on ℓ2 minimization algorithms (e.g. Tikhonov regularization), and a standard approach of this kind, using spherical harmonics, is implemented here as a reference against which to compare the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on ℓ1 minimization and wavelet basis functions, chosen for their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result for an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, and this is used as a way of evaluating the accuracy of the method. The technique is demonstrated in a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization for estimating the coefficients. This is particularly true for an uneven observation geometry and especially for multi-resolution CIT.
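A minimal sketch of the ℓ1 synthesis formulation in iterative soft-thresholding (ISTA) form, with Phi standing for the composition of the ray-geometry matrix and the wavelet synthesis operator that maps coefficients c to relative STEC data b; the bias calibration and the actual solver used in the paper are not reproduced.

```python
import numpy as np

def ista_l1(Phi, b, lam, n_iter=300):
    # min_c 0.5||Phi c - b||^2 + lam ||c||_1 via iterative soft thresholding.
    step = 1.0 / np.linalg.norm(Phi, 2)**2   # 1 / Lipschitz constant
    c = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = c - step * (Phi.T @ (Phi @ c - b))                    # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return c
```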
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, reconstruction algorithms are needed that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on combining gradient-based Douglas-Rachford splitting with discrete wavelet packet shrinkage image denoising to design an algorithm for the reconstruction of large-scale, reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample, imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source, demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
Image reconstruction by domain-transform manifold learning.
Zhu, Bo; Liu, Jeremiah Z; Cauley, Stephen F; Rosen, Bruce R; Rosen, Matthew S
2018-03-21
Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction-automated transform by manifold approximation (AUTOMAP)-which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
A review of ocean chlorophyll algorithms and primary production models
NASA Astrophysics Data System (ADS)
Li, Jingwen; Zhou, Song; Lv, Nan
2015-12-01
This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from ocean chlorophyll concentration. Through a comparison of the five inversion algorithms, it sums up their advantages and disadvantages, and it briefly analyzes the trend of ocean primary production models.
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
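A sketch of one SPIRAL-type iteration under the formulation stated above, with an l1 penalty and the nonnegativity constraint; A, y, and the curvature alpha of the separable quadratic approximation are hypothetical, and the backtracking, TV, and multiscale variants are omitted.

```python
import numpy as np

def spiral_l1_step(A, y, x, lam, alpha, eps=1e-12):
    # Gradient of the negative Poisson log-likelihood f(x) = 1'Ax - y'log(Ax).
    grad = A.T @ (1.0 - y / (A @ x + eps))
    # Minimize the separable quadratic model plus lam*||x||_1 over x >= 0;
    # the prox reduces to a one-sided soft threshold.
    return np.maximum(x - grad / alpha - lam / alpha, 0.0)
```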
No-reference image quality assessment for horizontal-path imaging scenarios
NASA Astrophysics Data System (ADS)
Rios, Carlos; Gladysz, Szymon
2013-05-01
There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
Spectral ballistic imaging: a novel technique for viewing through turbid or obstructing media.
Granot, Er'el; Sternklar, Shmuel
2003-08-01
We propose a new method for viewing through turbid or obstructing media. The medium is illuminated with a modulated cw laser, and the amplitude and phase of the transmitted (or reflected) signal are measured. This process is repeated for a set of wavelengths across a certain wide band. In this way we acquire the Fourier transform of the temporal output, and with this information we can reconstruct the temporal shape of the transmitted signal by computing the inverse transform. The proposed method retains the advantages of the first-light technique (high resolution, simple algorithms, insensitivity to boundary conditions, etc.) without its main deficiencies: complex and expensive equipment.
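The reconstruction step amounts to an inverse discrete Fourier transform of the sampled spectrum; a sketch under the assumption of uniformly spaced modulation frequencies, with all array names hypothetical.

```python
import numpy as np

def temporal_profile(freq_spacing_hz, amplitude, phase):
    # Complex spectrum from the measured amplitude and phase per frequency;
    # the inverse transform then recovers the temporal output shape.
    spectrum = amplitude * np.exp(1j * phase)
    signal = np.fft.ifft(spectrum)
    t = np.arange(len(signal)) / (len(signal) * freq_spacing_hz)
    return t, signal
```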
NASA Astrophysics Data System (ADS)
Silkworth, Inga
A search is presented for the standard model Higgs boson (H) decaying to bottom quarks and produced in association with a Z boson. The search uses 8 TeV center-of-mass energy proton-proton collision data recorded by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.0 inverse femtobarns. The Z boson is reconstructed from two oppositely charged leptons, either electrons or muons. Two techniques for reconstructing the Higgs candidate are discussed: the standard method using two jets reconstructed with the anti-kt algorithm, and a second technique using jet substructure that was developed for highly boosted massive particles. Upper limits at the 95% confidence level on the production cross section times branching ratio, relative to the standard model expectation, are derived for a Higgs boson in the mass range 110-135 GeV. The results from the ZH channel are combined with five other channels, and an excess of events is observed consistent with the standard model Higgs boson, with a local significance of 2.1 standard deviations at 125 GeV.
Optimization of compressive 4D-spatio-spectral snapshot imaging
NASA Astrophysics Data System (ADS)
Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing
2017-10-01
In this paper, a modified 3D computational reconstruction method for the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scene. These elemental images, with one-dimensional spectral information and different perspectives, are then captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of the 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of the 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects in a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.
Optical 3D watermark based digital image watermarking for telemedicine
NASA Astrophysics Data System (ADS)
Li, Xiao Wei; Kim, Seok Tae
2013-12-01
The region of interest (ROI) of a medical image is an area containing important diagnostic information that must be stored without any distortion. This paper presents a 3D watermark based medical image watermarking scheme in which the watermark is applied to the non-ROI of the medical image, preserving the ROI. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed by the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of traditional watermarking methods that have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
A 3D inversion for all-space magnetotelluric data with static shift correction
NASA Astrophysics Data System (ADS)
Zhang, Kun
2017-04-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters of MT (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique with no added cost, and it avoids additional field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory footprint and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.
Detection of reflector surface from near field phase measurements
NASA Technical Reports Server (NTRS)
Ida, Nathan
1991-01-01
The deviation of a reflector antenna surface from a perfect parabolic shape causes degradation of the performance of the antenna. The problem of determining the shape of the reflector surface in a reflector antenna using near-field phase measurements is not a new one. A recent issue of the IEEE Transactions on Antennas and Propagation (June 1988) contained numerous descriptions of the use of these measurements: holographic reconstruction or inverse Fourier transform. Holographic reconstruction makes use of measurement of the far field of the reflector and then applies the Fourier transform relationship between the far field and the current distribution on the reflector surface. Inverse Fourier transformation uses the phase measurements to determine the far-field pattern using the method of Kerns. After the far-field pattern is established, an inverse Fourier transform is used to determine the phases in a plane between the reflector surface and the plane in which the near-field measurements were taken. These calculations are time consuming, since they involve a relatively large number of operations. A much faster method can be used to determine the position of the reflector: it makes use of simple geometric optics to determine the path length of the ray from the feed to the reflector and from the reflector to the measurement point. For small physical objects and low frequencies, diffraction effects have a major effect on the error, and the algorithm provides incorrect results. It is believed that the effect is less noticeable for large distortions such as antenna warping, and more noticeable for small, localized distortions, such as bumps and depressions caused by impact damage.
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high-resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. Otherwise, truncation occurs in the projection data, resulting in artifacts in the reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can solve the data truncation problem if the projection data acquired at low resolution are utilized to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolution, respectively, are carried out. Using the image reconstructed from the sparse projection data acquired at low resolution as the prior, a high-resolution microscopic image is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in the reduction of artifacts caused by data truncation, providing a practical solution for high-quality and reliable interior tomography in microscopic CT applications.
anisotropic microseismic focal mechanism inversion by waveform imaging matching
NASA Astrophysics Data System (ADS)
Wang, L.; Chang, X.; Wang, Y.; Xue, Z.
2016-12-01
The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave and assumes a double-couple (DC) source and isotropic media, which is usually not the case for induced seismicity. For induced seismic events, inappropriate source and media models in the inversion, by introducing ambiguity or strong simulation errors, seriously reduce the effectiveness of the inversion. First, the focal mechanism can contain a significant non-DC source component: in general, the source contains three components, DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the media affects travel times and waveforms, generating inversion bias. Focal mechanism inversion is commonly formulated as moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method achieves moment tensor imaging: it can construct images of the MT elements in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. Full waveform inversion can also retrieve the MT: the source position and MT are reconstructed simultaneously, but this method requires extensive numerical computation, and the source position and MT influence each other during the inversion. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also test our method on a microseismic data set recorded with surface acquisition.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve the large linear systems of normal equations generated in geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past, the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
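A sketch of the idea behind partitioned inversion via the Schur complement; unlike the original SOLVE program, which kept only small partitions in core at a time, this in-memory version only illustrates the recursion, and the block size is a hypothetical tuning parameter.

```python
import numpy as np

def partitioned_inverse(M, block=256):
    # Invert a symmetric positive definite M by splitting it into
    # [[A, B], [B', C]] and recursing on A and the Schur complement S.
    n = M.shape[0]
    if n <= block:
        return np.linalg.inv(M)
    k = n // 2
    A, B, C = M[:k, :k], M[:k, k:], M[k:, k:]
    Ai = partitioned_inverse(A, block)
    S = C - B.T @ Ai @ B                     # Schur complement of A in M
    Si = partitioned_inverse(S, block)
    AiB = Ai @ B
    return np.block([[Ai + AiB @ Si @ AiB.T, -AiB @ Si],
                     [-Si @ AiB.T, Si]])
```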
2011-01-01
Background: Gene regulatory networks play essential roles in living organisms to control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for research on gene regulatory networks. While relevance score based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide gene regulatory networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of a transcription factor to regulate target genes. However, classic relevance score based gene regulatory network reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results: This work integrates TFA prediction algorithms with relevance score based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance score based algorithms. The method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions were applied to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. The reconstructed gene regulatory network, including the predicted new regulatory links, shows promising biological significance: many of the new links are verified by known TF binding site information, and many others can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network was applied to a recent transcriptome analysis of E. coli during isobutanol stress; in addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs were detected using our reconstructed network. Conclusions: The GTRNetwork algorithm introduces the hidden TFA layer into classic relevance score-based gene regulatory network reconstruction. Integrating TFA biological information with regulatory network reconstruction algorithms significantly improves the detection of new links and reduces the rate of false positives. The application of GTRNetwork to E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997
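A toy sketch of the two-stage idea, assuming an expression matrix E (genes x samples) and a known TF-gene connectivity matrix C (genes x TFs); a plain least-squares TFA estimate and Pearson correlation stand in for the specific TFA prediction algorithms and relevance score functions compared in the paper.

```python
import numpy as np

def tfa_relevance(E, C):
    # Stage 1: estimate TF activities from E ~= C @ TFA (least squares).
    tfa, *_ = np.linalg.lstsq(C, E, rcond=None)        # (TFs x samples)
    # Stage 2: relevance score of each TF-gene pair = Pearson correlation
    # between the TFA profile and the gene expression profile.
    Ez = (E - E.mean(1, keepdims=True)) / E.std(1, keepdims=True)
    Tz = (tfa - tfa.mean(1, keepdims=True)) / tfa.std(1, keepdims=True)
    return (Ez @ Tz.T) / E.shape[1]                    # (genes x TFs)
```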
Simulation and performance of an artificial retina for 40 MHz track reconstruction
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-05
We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in the pixel and silicon detectors at LHCb at the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We find the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.
Optimization-Based Approach for Joint X-Ray Fluorescence and Transmission Tomographic Inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Zichao; Leyffer, Sven; Wild, Stefan M.
2016-01-01
Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate both X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstruction of the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that compared with the single-modality inversion (i.e., the X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. Therefore, the challenges in X-ray fluorescence tomography arising mainly from the effects of self-absorption in the sample are partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves
Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or the backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can produce satisfactory reconstructions, with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018
Systematics in lensing reconstruction: dark matter rings in the sky?
NASA Astrophysics Data System (ADS)
Ponente, P. P.; Diego, J. M.
2011-11-01
Context. Non-parametric lensing methods are a useful way of reconstructing the lensing mass of a cluster without making assumptions about the way the mass is distributed in the cluster. These methods are particularly powerful in the case of galaxy clusters with a large number of constraints. The advantage of not assuming implicitly that the luminous matter follows the dark matter is particularly interesting in those cases where the cluster is in a non-relaxed dynamical state. On the other hand, non-parametric methods have several limitations that should be taken into account carefully. Aims: We explore some of these limitations and focus on their implications for the possible ring of dark matter around the galaxy cluster CL0024+17. Methods: We project three background galaxies through a mock cluster of known radial density profile and obtain a map for the arcs (θ map). We also calculate the shear field associated with the mock cluster across the whole field of view (3.3 arcmin). Combining the positions of the arcs and the two-direction shear, we perform an inversion of the lens equation using two separate methods, the biconjugate gradient and quadratic programming (QADP), to reconstruct the convergence map of the mock cluster. Results: We explore the space of solutions of the convergence map and compare the radial density profiles to the density profile of the mock cluster. When the inversion matrix algorithms are forced to find the exact solution, we encounter systematic effects resembling ring structures that clearly depart from the original convergence map. Conclusions: Overfitting lensing data with a non-parametric method can produce ring-like structures similar to the alleged one in CL0024.
A Multi-Source Inverse-Geometry CT system: Initial results with an 8 spot x-ray source array
Baek, Jongduk; De Man, Bruno; Uribe, Jorge; Longtin, Randy; Harrison, Daniel; Reynolds, Joseph; Neculaes, Bogdan; Frutschy, Kristopher; Inzinna, Louis; Caiafa, Antonio; Senzig, Robert; Pelc, Norbert J.
2014-01-01
We present initial experimental results of a rotating-gantry multi-source inverse-geometry CT (MS-IGCT) system. The MS-IGCT system was built with a single module of 2×4 x-ray sources and a 2D detector array. It produced a 75 mm in-plane field-of-view (FOV) with 160 mm axial coverage in a single gantry rotation. To evaluate system performance, a 2.5 inch diameter uniform PMMA cylinder phantom, a 200 μm diameter tungsten wire, and a euthanized rat were scanned. Each scan acquired 125 views per source, and the gantry rotation time was 1 second per revolution. Geometric calibration was performed using a bead phantom. The scanning parameters were 80 kVp, 125 mA, and a 5.4 μs pulse per source location per view. A data normalization technique was applied to the acquired projection data, and beam hardening and spectral nonlinearities of each detector channel were corrected. For image reconstruction, the projection data of each source row were rebinned into a full cone-beam data set, and the FDK algorithm was used. The reconstructed volumes from the upper and lower source rows shared an overlap volume, which was combined in image space. The images of the uniform PMMA cylinder phantom showed good uniformity and no apparent artefacts. The measured in-plane MTF showed 13 lp/cm at 10% cutoff, in good agreement with expectations. The rat data were also reconstructed reliably. The initial experimental results from this rotating-gantry MS-IGCT system demonstrated its ability to image a complex anatomical object without any significant image artefacts and to achieve high image resolution and large axial coverage in a single gantry rotation. PMID:24556567
Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Askan, A.; /Carnegie Mellon U.; Akcelik, V.
2009-04-30
We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill-posedness and multiple minima. To overcome ill-posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components, to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
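A schematic of the regularized objective, in our notation and simplified from the constrained formulation described above: u(m; x_r, t) is the wavefield predicted at receiver x_r for material model m, d_r the corresponding seismogram, and α the regularization weight.

```latex
\min_{m}\;\; \frac{1}{2} \sum_{r} \int_{0}^{T} \bigl| u(m;\, x_r, t) - d_r(t) \bigr|^{2}\, dt
\;+\; \alpha \int_{\Omega} \lvert \nabla m \rvert \, d\Omega
\qquad \text{subject to the anelastic wave equation.}
```

The total variation term penalizes oscillatory components of m while still admitting jump discontinuities, which is why it suits media with sharp material contrasts.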
Material Interface Reconstruction in VisIt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, J S
In this paper, we first survey a variety of approaches to material interface reconstruction and their applicability to visualization, and we investigate the details of the current reconstruction algorithm in the VisIt scientific analysis and visualization tool. We then provide a novel implementation of the original VisIt algorithm that makes use of a wide range of the finite element zoo during reconstruction. This approach results in dramatic improvements in quality and performance without sacrificing the strengths of the VisIt algorithm as it relates to visualization.
The Effect of Flow Velocity on Waveform Inversion
NASA Astrophysics Data System (ADS)
Lee, D.; Shin, S.; Chung, W.; Ha, J.; Lim, Y.; Kim, S.
2017-12-01
Waveform inversion is a velocity modeling technique that reconstructs accurate subsurface physical properties; the final updated model can generate synthetic data that match the observed data. Flow velocity, like several other factors, affects observed data in seismic exploration; despite this, there is insufficient research on its relationship with waveform inversion. In this study, synthetic data generated with flow velocity taken into account were used in waveform inversion, and the influence of flow velocity on waveform inversion was analyzed. Measuring the flow velocity generally requires additional equipment; however, for situations where only seismic data are available, the flow velocity was calculated by a fixed-point iteration method using the direct wave in the observed data. Further, a new waveform inversion was proposed that can incorporate the calculated flow velocity. We used a wave equation that accommodates flow velocity, following the study by Käser and Dumbser, and enhanced the efficiency of the computation by applying the back-propagation method. To verify the proposed algorithm, six data sets were generated using the Marmousi2 model, each with a different flow velocity in the range 0-50, i.e., 0, 2, 5, 10, 25, and 50. The inversion results from these data sets were then compared and analyzed against the results obtained without the use of flow velocity. The analysis demonstrates that waveform inversion is not affected significantly when the flow velocity is small; however, when the flow velocity is large, factoring it into the waveform inversion produces superior results. This research was supported by the Basic Research Project(17-3312, 17-3313) of the Korea Institute of Geoscience and Mineral Resources(KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
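The abstract does not spell out the fixed-point update, but the flavor of the computation can be shown with a toy two-way traveltime model of the kind used in ultrasonic flow metering; the rearrangement into v = g(v) below is an illustrative stand-in, not the authors' scheme.

```python
# Generic fixed-point iteration: repeatedly apply x <- g(x) until convergence.
def fixed_point(g, x0, tol=1e-9, max_iter=500):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

c, L, v_true = 1500.0, 3000.0, 25.0             # wave speed, path length, flow
t_obs = L / (c + v_true) + L / (c - v_true)     # two-way (down/up-flow) traveltime

# Rearranging t = L/(c+v) + L/(c-v) into v = g(v) gives a convergent iteration.
g = lambda v: c - L / (t_obs - L / (c + v))
print(fixed_point(g, 0.0))                      # -> ~25.0
```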
Image reconstruction by domain-transform manifold learning
NASA Astrophysics Data System (ADS)
Zhu, Bo; Liu, Jeremiah Z.; Cauley, Stephen F.; Rosen, Bruce R.; Rosen, Matthew S.
2018-03-01
Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction—automated transform by manifold approximation (AUTOMAP)—which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
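A heavily simplified sketch of the idea, assuming PyTorch is available: a small fully connected network is trained on supervised (sensor data, image) pairs to learn the inverse of a fixed encoding, here a 1D discrete Fourier transform standing in for an MRI-like acquisition. The layer sizes and training loop are illustrative, not the published AUTOMAP architecture.

```python
import torch
import torch.nn as nn

n = 16 * 16                        # image pixels
m = 2 * n                          # real + imaginary sensor samples
net = nn.Sequential(
    nn.Linear(m, n), nn.Tanh(),    # fully connected "domain transform" layers
    nn.Linear(n, n), nn.Tanh(),
    nn.Linear(n, n),               # (the paper follows with sparse conv layers)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x = torch.rand(64, n)                      # random training "images"
    y = torch.fft.fft(x, dim=1)                # fixed encoding: row-wise DFT
    y = torch.cat([y.real, y.imag], dim=1)     # stack into a real sensor vector
    loss = ((net(y) - x) ** 2).mean()          # supervised reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```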
Black hole algorithm for determining model parameter in self-potential data
NASA Astrophysics Data System (ADS)
Sungkono; Warnana, Dwa Desa
2018-01-01
Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many applications. However, the inversion of SP data is often highly nonlinear; consequently, local search algorithms based on gradient approaches have often failed to find the global optimum solution of such nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm is constructed by analogy with the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines the model parameters and the model uncertainty, indicating that BHA has strong potential as an innovative approach to SP data inversion.
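For readers unfamiliar with BHA, a compact sketch follows. The objective is a stand-in; for SP inversion it would be the misfit between observed and modeled self-potential anomalies. Population size, iteration count, and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
misfit = lambda x: np.sum((x - 3.0) ** 2, axis=-1)   # stand-in SP misfit

n_stars, n_dim, lo, hi = 30, 4, -10.0, 10.0
stars = rng.uniform(lo, hi, (n_stars, n_dim))        # candidate models ("stars")

for it in range(500):
    fit = misfit(stars)
    b = int(np.argmin(fit))
    bh = stars[b].copy()                             # best star becomes the black hole
    stars += rng.random((n_stars, 1)) * (bh - stars) # all stars drift toward it
    # Event horizon: stars that approach too closely are swallowed and
    # re-emitted at random positions, which maintains exploration.
    radius = misfit(bh) / (np.sum(misfit(stars)) + 1e-12)
    swallowed = np.linalg.norm(stars - bh, axis=1) < radius
    swallowed[b] = False                             # the black hole itself survives
    stars[swallowed] = rng.uniform(lo, hi, (swallowed.sum(), n_dim))

print(stars[np.argmin(misfit(stars))])               # converges toward [3, 3, 3, 3]
```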
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
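In schematic form, the iterative strategy solves the PWLS normal equations (AᵀWA + βR)s = AᵀWy by conjugate gradients rather than by a direct matrix inversion. The toy below uses dense matrices and a stand-in penalty to show the structure; the paper's implementation additionally exploits the sparsity of the sinogram-restoration problem and preconditions the system.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 120, 80
A = rng.standard_normal((m, n))            # stand-in degradation model
W = np.diag(rng.uniform(0.5, 2.0, m))      # statistical weights (inverse variances)
R = np.eye(n)                              # stand-in roughness penalty
beta = 0.1
y = A @ rng.standard_normal(n)             # noisy, degraded measurements

H = A.T @ W @ A + beta * R                 # PWLS Hessian (symmetric positive definite)
b = A.T @ W @ y

x = np.zeros(n)
r = b - H @ x
p = r.copy()
for _ in range(200):                       # conjugate-gradient iterations
    Hp = H @ p
    alpha = (r @ r) / (p @ Hp)
    x += alpha * p
    r_new = r - alpha * Hp
    if np.linalg.norm(r_new) < 1e-8:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```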
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Here, sparse representation theory is introduced into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution reconstruction, a super-resolution algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy of training a single overcomplete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean-distance computation to improve the quality of the reconstructed image as a whole. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
3-D CSEM data inversion algorithm based on simultaneously active multiple transmitters concept
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin Kumar; Israil, Mohammad
2017-05-01
We present an algorithm for efficient 3-D inversion of marine controlled-source electromagnetic data. The efficiency is achieved by exploiting the redundancy in the data. The data redundancy is reduced by compressing the data through stacking of the responses of transmitters that are in close proximity. This stacking is equivalent to synthesizing the data as if the multiple transmitters were simultaneously active. The redundancy in data arising from close transmitter spacing has been studied through singular value analysis of the Jacobian formed in 1-D inversion. This study reveals that the transmitter spacing of 100 m, typically used in marine data acquisition, does result in redundancy in the data. In the proposed algorithm, the data are compressed through stacking, which leads to both a computational advantage and a reduction in noise. The performance of the algorithm for noisy data is demonstrated through studies on two types of noise, viz., uncorrelated additive noise and correlated non-additive noise. It is observed that in the case of uncorrelated additive noise, up to a moderately high (10 percent) noise level, the algorithm addresses the noise as effectively as the traditional full-data inversion. However, when the noise level in the data is high (20 percent), the algorithm outperforms the traditional full-data inversion in terms of data misfit. Similar results are obtained in the case of correlated non-additive noise, and the algorithm performs better when the level of noise is high. The inversion results of a real field data set are also presented to demonstrate the robustness of the algorithm. The significant computational advantage in all cases presented makes this algorithm a better choice.
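The compression step is easy to illustrate: responses of transmitters within a stacking bin are combined as if those sources fired simultaneously, so uncorrelated noise averages down while the common signal survives. The sketch below is a toy with illustrative sizes, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx, group = 40, 12, 4                  # 40 transmitters stacked in groups of 4
signal = rng.standard_normal((n_tx // group, n_rx))
data = np.repeat(signal, group, axis=0)        # nearby transmitters see ~ the same earth
data += 0.2 * rng.standard_normal(data.shape)  # uncorrelated additive noise

# Stack each group of neighboring transmitters into one "simultaneous" source.
stacked = data.reshape(n_tx // group, group, n_rx).mean(axis=1)
print(np.abs(stacked - signal).mean())         # residual noise ~ 0.2 / sqrt(group)
```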
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE, such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization, and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L² norm without any physical justification. When it is known a priori that the objects are compact, as with cracks and voids, then by choosing a "minimum support" functional instead of the minimum squared L² norm, an image can be obtained that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the nonlinear inversion. (Abstract shortened by UMI.).
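One common form of the minimum support functional, in our notation, is

```latex
\varphi_{\mathrm{MS}}(m) \;=\; \int_{V} \frac{m(\mathbf{r})^{2}}{m(\mathbf{r})^{2} + \beta^{2}}\, dV,
```

where β is a small focusing parameter: as β → 0 the integrand approaches the indicator of the support of m, so minimizing φ_MS subject to fitting the data favors the most compact scatterer consistent with the measurements.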
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a general approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a two-piece 2nd-order polynomial has better mapping accuracy than a single high-order polynomial or a two-piece linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into two pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal two-piece 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry of the intermediate matrix can be written as the sum of a few basic terms, which can be pre-calculated into look-up tables. Since solving the matrix reduces to looking up values in tables, the computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot-point search to find the optimal pivot that minimizes the MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared with the traditional exhaustive search in two-piece 2nd-order polynomial inverse tone mapping with a continuity constraint.
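The table-lookup trick can be sketched as follows: every entry of the least-squares normal equations for a second-order fit is a sum of lᵏ or h·lᵏ over pixels on one side of the pivot, so cumulative per-bin tables make each candidate pivot an O(1) evaluation. The Python below uses synthetic LDR/HDR pairs; the variable names and the 8-247 search range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
l = rng.integers(0, 256, 100_000)                             # LDR pixel values
h = (l / 255.0) ** 2.2 + 0.01 * rng.standard_normal(l.size)   # toy HDR pairs
lf = l.astype(float)

# Cumulative tables: S[k][p] = sum of l^k over pixels with l <= p; similarly
# T[k] for h*l^k and U for h^2.  With these, any pivot costs O(1) to evaluate.
S = [np.cumsum(np.bincount(l, weights=lf ** k, minlength=256)) for k in range(5)]
T = [np.cumsum(np.bincount(l, weights=h * lf ** k, minlength=256)) for k in range(3)]
U = np.cumsum(np.bincount(l, weights=h * h, minlength=256))

def piece_sse(s, t, u):
    """Fit one 2nd-order piece from moment sums; return its residual SSE."""
    G = np.array([[s[i + j] for j in range(3)] for i in range(3)])
    b = np.array(t)
    c = np.linalg.solve(G, b)          # normal equations G c = b
    return u - c @ b                   # SSE = sum h^2 - c^T b

best = min(
    (piece_sse([S[k][p] for k in range(5)],
               [T[k][p] for k in range(3)], U[p])
     + piece_sse([S[k][-1] - S[k][p] for k in range(5)],
                 [T[k][-1] - T[k][p] for k in range(3)], U[-1] - U[p]), p)
    for p in range(8, 248))
print("optimal pivot:", best[1])
```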
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART, and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. The results indicate that the BIMART and SMART algorithms produce reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
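As a reference point for the algorithms compared, a bare-bones MART iteration is shown below, with toy sizes and a dense stand-in weighting matrix; real tomo-PIV implementations work with sparse ray-voxel weights.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_vox = 60, 40
W = rng.random((n_pix, n_vox))
W[W < 0.8] = 0.0                        # sparse-ish pixel-voxel weighting matrix
E_true = rng.random(n_vox)              # true voxel intensities
I = W @ E_true                          # recorded pixel intensities

E = np.ones(n_vox)                      # uniform first guess
mu = 1.0                                # relaxation exponent
for sweep in range(50):
    for i in range(n_pix):              # one multiplicative update per ray
        proj = W[i] @ E
        if proj > 0:
            E *= (I[i] / proj) ** (mu * W[i])
```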
NASA Astrophysics Data System (ADS)
Humphries, T.; Winn, J.; Faridani, A.
2017-08-01
Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
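A skeleton of the superiorization loop (not the paper's polyenergetic algorithm): between applications of a basic reconstruction operator, the iterate is nudged along a TV-descent direction with summable step sizes. The smoothed TV gradient and the box-projection "basic step" in the usage lines are illustrative stand-ins.

```python
import numpy as np

def tv_grad(x):
    """Gradient-like descent direction for a smoothed isotropic TV of a 2D image."""
    eps = 1e-8
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
    return -div                                  # roughly d(TV)/dx

def superiorize(x, basic_step, n_iter=50, a=0.5):
    for k in range(n_iter):
        g = tv_grad(x)
        g /= max(np.linalg.norm(g), 1e-12)
        x = x - (a ** k) * g                     # summable TV-reducing perturbations
        x = basic_step(x)                        # one sweep of the basic algorithm
    return x

# Usage sketch: a box-constraint projection as a stand-in "basic" operator.
x0 = np.random.default_rng(8).random((32, 32))
x = superiorize(x0, lambda z: np.clip(z, 0.0, 1.0))
```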
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan
2010-01-01
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack–Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463
Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B
2010-04-01
Cone-beam CT (CBCT) plays an important role in image-guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients, who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model; a multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables CBCT to be reconstructed under a scanning protocol with as little as 0.1 mA s/projection. Compared with the currently widely used full-fan head-and-neck scanning protocol of approximately 360 projections at 0.4 mA s/projection, an overall 36- to 72-fold dose reduction is estimated for our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably, and its high computational efficiency makes the iterative CBCT reconstruction approach applicable in real clinical environments.
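The forward-backward splitting structure is generic: a gradient step on the data-fidelity term followed by a proximal step on the regularizer. In the sketch below a soft-thresholding (ℓ1) prox stands in for the paper's total variation prox so the example stays self-contained; the sizes and the sparse test signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 50, 100                                 # underdetermined, like few-view CBCT
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size <= 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(300):
    grad = A.T @ (A @ x - y)                   # forward (gradient) step
    z = x - t * grad
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # backward (prox) step
```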
USDA-ARS?s Scientific Manuscript database
Determination of the optical properties of intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of the inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...
Toushmalani, Reza
2013-01-01
The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to Newton's method that is also used for training artificial neural networks (ANNs). In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault; the parameters used by each algorithm are given for the individual tests. The inverse solutions reveal that the fault-model parameters agree well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
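A minimal PSO of the kind described, with a stand-in objective in place of the fault-model gravity misfit; the swarm size and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
f = lambda X: np.sum((X - 2.0) ** 2, axis=1)      # stand-in gravity misfit

n_p, n_d = 25, 3
X = rng.uniform(-10, 10, (n_p, n_d))              # particle positions
V = np.zeros_like(X)                              # particle velocities
P, Pf = X.copy(), f(X)                            # personal bests and their misfits
g = P[np.argmin(Pf)]                              # global best

w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social weights
for it in range(300):
    r1, r2 = rng.random((n_p, n_d)), rng.random((n_p, n_d))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = X + V
    fx = f(X)
    better = fx < Pf
    P[better], Pf[better] = X[better], fx[better]
    g = P[np.argmin(Pf)]

print(g)                                          # ~ [2, 2, 2]
```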
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, a current research focus in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as large data-storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed-sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained separately. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients need to be measured. Here we use sparse representation and random projection to obtain the measured values of the high-frequency coefficients; afterwards, the coefficients of each image block are fused via the absolute-maximum selection rule and/or the regional standard-deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of our experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.
2D joint inversion of CSAMT and magnetic data based on cross-gradient theory
NASA Astrophysics Data System (ADS)
Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao
2017-06-01
A two-dimensional forward and inverse algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data from the entire region (near, transition, and far zones) and deal with the effects of artificial sources. First, a regularization factor is introduced into the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than single-method inversion. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
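The cross-gradient function referred to above is, in 2D and in our notation,

```latex
t(x,z) \;=\; \frac{\partial m_{1}}{\partial x}\,\frac{\partial m_{2}}{\partial z}
\;-\; \frac{\partial m_{1}}{\partial z}\,\frac{\partial m_{2}}{\partial x},
```

and driving t toward zero forces the gradients of the two property models (here resistivity and magnetic susceptibility) to be parallel or antiparallel, i.e., structurally consistent, without imposing any fixed relationship between the property values themselves.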
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone-beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation about the third axis. As a result, the capability of suppressing helical and cone-beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is employed to optimize both the suppression of helical and cone-beam artifacts and the noise characteristics. A helical body phantom is used to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating the artifact index and noise characteristics, showing that matched view weighting significantly improves both helical artifact suppression and the noise characteristics or dose efficiency in comparison with the non-matched view weighting case. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan-beam filtered backprojection and demands no extra computational cost.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
Information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating the input to a dynamic system when the system output and the impulse response functions are the known quantities. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task, and the sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
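For reference, the BPDN problem referred to above is commonly written (notation ours; A is assembled from the impulse response functions, b is the measured response, and x is the sparse load vector)

```latex
\min_{x}\; \lVert x \rVert_{1} \quad \text{s.t.} \quad \lVert A x - b \rVert_{2} \le \varepsilon,
```

or, for a suitable λ, the equivalent unconstrained form min_x ½‖Ax − b‖₂² + λ‖x‖₁. The ℓ1 norm promotes solutions in which only a few load components are active, matching the a priori knowledge about the excitation.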