Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from a degraded image with only partial or no information about the degradation and the imaging system. It is a bilinear, ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and to obtain meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur kernel is known, as has been done in a few existing works. By studying inverse filters for signal and image restoration problems, we observe their oscillatory structure. Inspired by this structure, we propose to regularize the inverse filter with the star norm, while regularizing the image obtained by convolving the inverse filter with the degraded image using the total variation. The proposed minimization model is shown to be convex, and we solve it with a first-order primal-dual method. Numerical examples for blind image restoration show that the proposed method outperforms several existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality, and time consumption.
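The first-order primal-dual scheme mentioned above can be sketched on the simplest convex TV-regularized model. The following is an illustrative Chambolle-Pock-style solver for 1D TV denoising, not the paper's actual model (which couples an inverse filter with a star-norm term); the function name, step sizes, and iteration count are choices of this sketch.

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, n_iter=300, tau=0.25, sigma=0.25):
    """Minimize 0.5*||x - y||^2 + lam*TV(x) with a first-order
    primal-dual (Chambolle-Pock) iteration."""
    n = len(y)
    x = y.copy()
    x_bar = x.copy()
    p = np.zeros(n - 1)                  # dual variable for the gradient
    D = lambda u: np.diff(u)             # forward difference (discrete gradient)
    Dt = lambda q: np.concatenate(([-q[0]], -np.diff(q), [q[-1]]))  # adjoint of D
    for _ in range(n_iter):
        # dual ascent, then projection onto the lam-ball (TV's dual constraint)
        p = np.clip(p + sigma * D(x_bar), -lam, lam)
        # primal descent: proximal step on the quadratic data-fidelity term
        x_new = (x - tau * Dt(p) + tau * y) / (1.0 + tau)
        x_bar = 2 * x_new - x
        x = x_new
    return x
```

With tau = sigma = 0.25 the step-size condition tau*sigma*||D||^2 <= 1 holds, since the 1D difference operator satisfies ||D||^2 <= 4.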
Acoustic Inversion in Optoacoustic Tomography: A Review
Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel
2013-01-01
Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so-called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
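As a minimal illustration of the first algorithm class (time-domain back-projection), the sketch below delay-and-sums detector signals over a reconstruction grid. The geometry, sound speed, and function names are assumptions of this sketch, not of the review.

```python
import numpy as np

def backproject(signals, t, det_pos, grid, c=1500.0):
    """Naive delay-and-sum back-projection: each image point accumulates,
    from every detector, the sample at the corresponding acoustic
    time-of-flight distance/c."""
    img = np.zeros(len(grid))
    for s, d in zip(signals, det_pos):
        dist = np.linalg.norm(grid - d, axis=1)           # point-to-detector distances
        idx = np.clip(np.searchsorted(t, dist / c), 0, len(t) - 1)
        img += s[idx]                                     # sample at time-of-flight
    return img / len(det_pos)
```

A point absorber then reconstructs to a peak at its true position, since only there do the delayed signals from all detectors add coherently.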
An inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Caudill, Lester F., Jr.
1994-01-01
This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.
Inverse transport problems in quantitative PAT for molecular imaging
NASA Astrophysics Data System (ADS)
Ren, Kui; Zhang, Rongting; Zhong, Yimin
2015-12-01
Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstructions. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
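For reference, the classical Tikhonov-regularized solution that underlies the loss function described above has a closed form for a linear sensitivity model; the matrix sizes and parameter values below are illustrative, and the paper's actual loss adds robustness and low-rank terms solved by a split-Bregman-style iteration.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution of the linear inverse problem A x = b:
    x = argmin ||A x - b||^2 + lam * ||x||^2 = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Increasing lam trades data fit for stability: the solution norm shrinks while the residual grows, which is how the regularization tames ill-posedness.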
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate and high-volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems.
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each submitted paper was reviewed by two reviewers, and 15 papers were accepted. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.
2007-02-28
Z. Mu, R. Plemmons, and P. Santago, "Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response," International Journal of Imaging Systems and Technology, ...1767-1782, 2006. ...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies.
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique that combines the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field from the measured ultrasound data via acoustic inverse problem approaches. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described as a concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power-law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with this approach.
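The spectral parameterization described above (concentration-weighted chromophore spectra plus a Mie-type power law for scattering) can be written compactly. The reference wavelength, array shapes, and function names here are assumptions of this sketch, not of the paper.

```python
import numpy as np

def absorption(extinction, concentrations):
    """mu_a at each wavelength: concentration-weighted sum of known
    chromophore absorption spectra.  extinction: (n_wavelengths, n_chrom)."""
    return np.asarray(extinction) @ np.asarray(concentrations)

def reduced_scattering(wavelengths, amplitude, exponent, ref=500.0):
    """Mie-type exponential power law: mu_s' = a * (lambda / ref)**(-b)."""
    return amplitude * (np.asarray(wavelengths) / ref) ** (-exponent)
```

In the inverse problem the concentrations and the power-law pair (amplitude, exponent) are the spatially varying unknowns; here they are scalars for illustration.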
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate and high-volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems.
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each submitted paper was reviewed by two reviewers, and nine papers were accepted. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet
A multi-frequency iterative imaging method for discontinuous inverse medium problem
NASA Astrophysics Data System (ADS)
Zhang, Lei; Feng, Lixin
2018-06-01
The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and a fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to the frequency, from low to high. We also discuss the initial-guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
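The generalized cross-validation criterion used for regularization-parameter selection can be sketched for a linear Tikhonov problem; the SVD-based form below is one standard way to evaluate it, and the matrix sizes are illustrative rather than taken from the paper.

```python
import numpy as np

def gcv(A, b, lam):
    """Generalized cross-validation score for Tikhonov regularization:
    GCV(lam) = m ||A x_lam - b||^2 / trace(I - A A_lam^+)^2.
    The regularization parameter is chosen to minimize this score."""
    m = len(b)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam)              # Tikhonov filter factors
    r = b - U @ (f * (U.T @ b))          # residual A x_lam - b
    return m * (r @ r) / (m - np.sum(f)) ** 2
```

The filter-factor identities trace(A A_lam^+) = sum(f) and A x_lam = U diag(f) U^T b make the SVD form equivalent to the direct matrix computation.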
Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S
2014-09-01
Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit: a Total Variation method, a non-decreasing TMP inverse, and a spline-based inverse. Two of these methods take advantage of the temporal structure of the heart potentials, while the third leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.
The impact of approximations and arbitrary choices on geophysical images
NASA Astrophysics Data System (ADS)
Valentine, Andrew P.; Trampert, Jeannot
2016-01-01
Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an 'exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. Here, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of 'hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed based on theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results.
We illustrate this using a simple example, based on imaging the density structure of a vibrating string.
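The point about hybrid inversion can also be made with a toy linear system: applying an inverse operator built from an approximate theory to highly accurate synthetic data does not remove the error caused by the approximation. Everything below (operator sizes, the 0.2 perturbation level) is an illustrative assumption, not the paper's vibrating-string example.

```python
import numpy as np

np.random.seed(0)
n = 6
x_true = np.random.randn(n)
G_exact = np.random.randn(n, n) + 3 * np.eye(n)   # "exact" forward theory
G_approx = G_exact + 0.2 * np.random.randn(n, n)  # tractable approximation

# Consistent inversion: approximate operator applied to its own predictions
x_consistent = np.linalg.solve(G_approx, G_approx @ x_true)
# 'Hybrid' inversion: approximate inverse applied to exact synthetic data
x_hybrid = np.linalg.solve(G_approx, G_exact @ x_true)

err_consistent = np.linalg.norm(x_consistent - x_true)
err_hybrid = np.linalg.norm(x_hybrid - x_true)
```

The consistent pairing recovers the model exactly, while the hybrid pairing inherits an error proportional to the operator mismatch, illustrating that the inverse operator matters as much as the forward data.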
MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
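A generic MUSIC imaging sketch may help fix ideas (it uses point scatterers and a Green's-function-like steering vector, not the sound-hard arc and Bessel-series analysis of the paper; all geometry and thresholds below are assumptions): build the multistatic response matrix, extract the noise subspace, and evaluate the pseudospectrum.

```python
import numpy as np

def music_pseudospectrum(sensors, K, grid, k=2 * np.pi):
    """MUSIC: the test vector at a true scatterer location lies in the
    signal subspace of K, so its noise-subspace projection vanishes and
    the pseudospectrum 1/||P_noise a(z)|| peaks there."""
    U, s, _ = np.linalg.svd(K)
    n_sig = int(np.sum(s > 1e-8 * s[0]))   # numerical signal-subspace rank
    Un = U[:, n_sig:]                      # noise-subspace basis
    ps = []
    for z in grid:
        r = np.linalg.norm(sensors - z, axis=1)
        a = np.exp(1j * k * r) / r         # Green's-function-like test vector
        a = a / np.linalg.norm(a)
        ps.append(1.0 / (np.linalg.norm(Un.conj().T @ a) + 1e-15))
    return np.array(ps)
```

Because the imaging is non-iterative, one SVD of the data matrix suffices, and the map is evaluated independently at every grid point.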
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labelling. Despite the efforts made in recent years, this problem has not been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and implemented via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure finds the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning stage.
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem for integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
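A collocation-type solution of a second-kind integral equation can be sketched with the Nyström method, collocating at the nodes of a trapezoidal quadrature rule; the paper's actual scheme uses splines, and the kernel below is a made-up smooth example with a known solution.

```python
import numpy as np

def fredholm2_nystrom(kernel, y, a, b, n=64):
    """Solve x(t) - int_a^b K(t, s) x(s) ds = y(t) by collocating at
    trapezoidal quadrature nodes (Nystrom method): the integral is
    replaced by a weighted sum, giving the linear system (I - K W) x = y."""
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))    # trapezoidal weights
    w[0] /= 2.0
    w[-1] /= 2.0
    A = np.eye(n) - kernel(t[:, None], t[None, :]) * w
    return t, np.linalg.solve(A, y(t))
```

For the separable kernel K(t, s) = t*s on [0, 1], the choice x(t) = t gives y(t) = t - t/3 = (2/3)t, which makes a convenient accuracy check.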
Toward 2D and 3D imaging of magnetic nanoparticles using EPR measurements.
Coene, A; Crevecoeur, G; Leliaert, J; Dupré, L
2015-09-01
Magnetic nanoparticles (MNPs) are an important asset in many biomedical applications. An effective working of these applications requires an accurate knowledge of the spatial MNP distribution. A promising, noninvasive, and sensitive technique to visualize MNP distributions in vivo is electron paramagnetic resonance (EPR). Currently only 1D MNP distributions can be reconstructed. In this paper, the authors propose extending 1D EPR toward 2D and 3D using computer simulations to allow accurate imaging of MNP distributions. To find the MNP distribution belonging to EPR measurements, an inverse problem needs to be solved. The solution of this inverse problem highly depends on the stability of the inverse problem. The authors adapt 1D EPR imaging to realize the imaging of multidimensional MNP distributions. Furthermore, the authors introduce partial volume excitation in which only parts of the volume are imaged to increase stability of the inverse solution and to speed up the measurements. The authors simulate EPR measurements of different 2D and 3D MNP distributions and solve the inverse problem. The stability is evaluated by calculating the condition measure and by comparing the actual MNP distribution to the reconstructed MNP distribution. Based on these simulations, the authors define requirements for the EPR system to cope with the added dimensions. Moreover, the authors investigate how EPR measurements should be conducted to improve the stability of the associated inverse problem and to increase reconstruction quality. The approach used in 1D EPR can only be employed for the reconstruction of small volumes in 2D and 3D EPRs due to numerical instability of the inverse solution. The authors performed EPR measurements of increasing cylindrical volumes and evaluated the condition measure. This showed that a reduction of the inherent symmetry in the EPR methodology is necessary. 
By reducing the symmetry of the EPR setup, quantitative images of larger volumes can be obtained. The authors found that, by selectively exciting parts of the volume, the reconstruction quality could be increased even further while reducing the number of measurements. Additionally, the inverse solution of this activation method degrades more slowly for increasing volumes. Finally, the methodology was applied to noisy EPR measurements: using the reduced symmetry of the EPR setup and the partial activation method, an increase in reconstruction quality of ≈80% can be seen, with a speedup of the measurements of 10%. Applying the aforementioned requirements to the EPR setup and stabilizing the EPR measurements showed a tremendous increase in noise robustness, thereby making EPR a valuable method for quantitative imaging of multidimensional MNP distributions.
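The condition measure used above to quantify the stability of the inverse problem can be sketched as the ratio of extreme singular values of the (simulated) system matrix; the toy matrices below are illustrative and merely show how nearly identical voxel signatures, as produced by a symmetric setup, inflate it.

```python
import numpy as np

def condition_measure(A):
    """Ratio of largest to smallest singular value of the system matrix;
    large values indicate an unstable (ill-conditioned) inverse problem."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]
```

Reducing the setup's symmetry makes the voxel signatures (columns) more distinguishable, which lowers this measure and stabilizes the reconstruction.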
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is expressed in two stages. First, images of the initial pressure distribution created by the absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model, and the inverse problem is solved using a Bayesian approach. Spatial distributions of the optical properties of the imaged target are estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.
NASA Astrophysics Data System (ADS)
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms, etc.
The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology, Finland), Masahiro Yamamoto (University of Tokyo, Japan), Gunther Uhlmann (University of Washington) and Jun Zou (Chinese University of Hong Kong). IPIA is a recently formed organization that intends to promote the field of inverse problems at all levels. See http://www.inverse-problems.net/. IPIA awarded the first Calderón prize at the opening of the conference to Matti Lassas (see first article in the Proceedings). There was also a general meeting of IPIA during the workshop. This was probably the largest conference ever on IP, with 350 registered participants. The program consisted of 18 invited speakers and the Calderón Prize Lecture given by Matti Lassas. Another integral part of the program was the more than 60 mini-symposia that covered a broad spectrum of the theory and applications of inverse problems, focusing on recent developments in medical imaging, seismic exploration, remote sensing, industrial applications, and numerical and regularization methods in inverse problems. Another important related topic was image processing, in particular the advances that have allowed for significant enhancement of widely used imaging techniques. For more details on the program see the web page: http://www.pims.math.ca/science/2007/07aip. These proceedings reflect the broad spectrum of topics covered in AIP 2007. The conference and these proceedings would not have happened without the contributions of many people. I thank all my fellow organizers, the invited speakers, the speakers and organizers of mini-symposia for making this an exciting and vibrant event. I also thank PIMS, NSF and MITACS for their generous financial support. I take this opportunity to thank the PIMS staff, particularly Ken Leung, for making the local arrangements.
Also thanks are due to Stephen McDowall for his help in preparing the schedule of the conference and Xiaosheng Li for the help in preparing these proceedings. I also would like to thank the contributors of this volume and the referees. Finally, many thanks are due to Graham Douglas and Elaine Longden-Chapman for suggesting publication in Journal of Physics: Conference Series.
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired in a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
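The TV-L1 cost described above can be minimized, in simplified 1D form, by gradient descent on a smoothed surrogate. The paper's efficient nonlinear gradient scheme and its optoacoustic forward model A are replaced here by illustrative stand-ins (an identity operator, a hand-picked step size, and smoothing parameter eps).

```python
import numpy as np

def tv_l1_inversion(A, b, W, lam1=0.01, lam2=0.01,
                    eps=1e-6, n_iter=500, step=1e-2):
    """Minimize ||A x - b||^2 + lam1*||W x||_1 + lam2*TV(x) by gradient
    descent on a smoothed surrogate, using |u| ~= sqrt(u^2 + eps)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = 2.0 * A.T @ (A @ x - b)                   # data-fidelity gradient
        u = W @ x
        g += lam1 * W.T @ (u / np.sqrt(u**2 + eps))   # smoothed L1 gradient
        d = np.diff(x)
        q = d / np.sqrt(d**2 + eps)                   # smoothed TV gradient,
        g += lam2 * np.concatenate(([-q[0]], -np.diff(q), [q[-1]]))  # via D^T
        x -= step * g
    return x
```

In the paper W is a sparsifying transform (e.g. a wavelet basis) and A the model-based acoustic operator; both are identity matrices in the toy test below.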
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space from a noisy observation vector y of its image under a known, possibly non-linear map G. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention, especially in the medical imaging community. Our key result shows that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which, as we show, coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates; second, the result analytically justifies existing uses of the MAP estimate in finite but high-dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block-sparse structure of the target image, sparse solutions for multiple measurement vectors (MMV) can be applied in ISAR imaging, and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are greatly reduced compared with traditional SBL methods.
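To see the bottleneck the inverse-free method targets, here is a minimal sketch of the conventional M-SBL baseline (EM-style updates, as in the Wipf-Rao formulation), in which every iteration explicitly inverts an m-by-m matrix. Function names and toy dimensions are illustrative; this is not the authors' IF-MSBL algorithm:

```python
import numpy as np

def msbl(Phi, Y, noise_var=1e-3, n_iter=50):
    """Conventional sparse Bayesian learning for multiple measurement
    vectors (M-SBL). Note the explicit matrix inverse in every iteration,
    the O(m^3) step that inverse-free SBL variants are designed to avoid."""
    m, n = Phi.shape
    gamma = np.ones(n)                     # one hyperparameter per row of X
    for _ in range(n_iter):
        # Data covariance and its inverse: the per-iteration bottleneck
        Sinv = np.linalg.inv(noise_var * np.eye(m) + (Phi * gamma) @ Phi.T)
        Mu = gamma[:, None] * (Phi.T @ Sinv @ Y)        # posterior mean of X
        # Diagonal of the posterior covariance of each row
        sigma = gamma - gamma**2 * np.einsum('ij,jk,ki->i', Phi.T, Sinv, Phi)
        gamma = np.mean(Mu**2, axis=1) + sigma          # EM hyperparameter update
    return Mu, gamma
```

Rows with large learned gamma mark the recovered support; the abstract's point is that replacing the `np.linalg.inv` step with inverse-free updates preserves this behavior at much lower cost.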
Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.
Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti
2006-02-01
Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted l1-prior and a total variation (TV) prior, together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
FOREWORD: Imaging from coupled physics Imaging from coupled physics
NASA Astrophysics Data System (ADS)
Arridge, S. R.; Scherzer, O.
2012-08-01
Due to the increased demand for tomographic imaging in applied sciences, such as medicine, biology and nondestructive testing, the field has expanded enormously in the past few decades. The common task of tomography is to image the interior of three-dimensional objects from indirect measurement data. In practical realizations, the specimen to be investigated is exposed to probing fields. A variety of these, such as acoustic, electromagnetic or thermal radiation, amongst others, have been advocated in the literature. In all cases, the field is measured after interaction with internal mechanisms of attenuation and/or scattering, and images are reconstructed using inverse-problem techniques, representing spatial maps of the parameters of these perturbation mechanisms. In the majority of these imaging modalities, either the useful contrast is of low resolution, or high resolution images are obtained with limited contrast or quantitative discriminatory ability. In the last decade, an alternative phenomenon has become of increasing interest, although its origins can be traced much further back; see Widlak and Scherzer [1], Kuchment and Steinhauer [2], and Seo et al [3] in this issue for references to this historical context. Rather than using the same physical field for probing and measurement, with a contrast caused by perturbation, these methods exploit the generation of a secondary physical field which can be measured in addition to, or without, the often dominating effect of the primary probe field. These techniques are variously called 'hybrid imaging' or 'multimodality imaging'. However, in this article and special section we suggest the term 'imaging from coupled physics' (ICP) to more clearly distinguish this methodology from those that simply measure several types of data simultaneously. The key idea is that contrast induced by one type of radiation is read by another kind, so that both high resolution and high contrast are obtained simultaneously. 
As with all new imaging techniques, the discovery of physical principles which can be exploited to yield information about internal physical parameters has led, hand in hand, to the development of new mathematical methods for solving the corresponding inverse problems. In many cases, the coupled physics imaging problems are expected to be much better posed than conventional tomographical imaging problems. Still, at the current state of research, there remains a variety of open mathematical questions regarding uniqueness, existence and stability. In this special section we have invited contributions from many of the leading researchers in the mathematics, physics and engineering of these techniques to survey and to elaborate on these novel methodologies, and to present recent research directions. Historically, one of the best studied strongly ill-posed problems in the mathematical literature is the Calderón problem occurring in conductivity imaging, and one of the first examples of ICP is the use of magnetic resonance imaging (MRI) to detect internal current distributions. This topic, known as current density imaging (CDI) or magnetic resonance electrical impedance tomography (MREIT), and its related technique of magnetic resonance electrical property tomography (MREPT), is reviewed by Widlak and Scherzer [1], and also by Seo et al [3], where experimental studies are documented. Mathematically, several of the ICP problems can be analyzed in terms of the 'p-Laplacian', which raises interesting research questions of non-linear partial differential equations. One approach for analyzing and for the solution of the CDI problem, using characteristics of the 1-Laplacian, is discussed by Tamasan and Veras [4]. Moreover, Moradifam et al [5] present a novel iterative algorithm based on Bregman splitting for solving the CDI problem. 
Probably the most active research areas in ICP are related to acoustic detection, because most of these techniques rely on the photoacoustic effect wherein absorption of an ultrashort pulse of light, having propagated by multiple scattering some distance into a diffusing medium, generates a source of acoustic waves that are propagated with hyperbolic stability to a surface detector. A complementary problem is that of 'acousto-optics', which uses focused acoustic waves as the primary field to induce perturbations in optical or electrical properties, which are thus spatially localized. Similar physical principles apply to implement ultrasound modulated electrical impedance tomography (UMEIT). These topics are included in the review of Widlak and Scherzer [1], and Kuchment and Steinhauer [2] offer a general analysis of their structure in terms of pseudo-differential operators. 'Acousto-electrical' imaging is analyzed as a particular case by Ammari et al [6]. In the paper by Tarvainen et al [7], the photo-acoustic problem is studied with respect to different models of the light propagation step. In the paper by Monard and Bal [8], a more general problem for the reconstruction of an anisotropic diffusion parameter from power density measurements is considered; here, issues of uniqueness with respect to the number of measurements are of great importance. A distinctive, and highly important, example of ICP is that of elastography, in which the primary field is low-frequency ultrasound giving rise to mechanical displacement that reveals information on the local elasticity tensor. As in all the methods discussed in this section, this contrast mechanism is measured internally, with a secondary technique, which in this case can be either MRI or ultrasound. McLaughlin et al [9] give a comprehensive analysis of this problem. Our intention for this special section was to provide both an overview and a snapshot of current work in this exciting area. 
The increasing interest, and the involvement of cross-disciplinary groups of scientists, will continue to lead to the rapid expansion and important new results in this novel area of imaging science. References [1] Widlak T and Scherzer O 2012 Inverse Problems 28 084008 [2] Kuchment P and Steinhauer D 2012 Inverse Problems 28 084007 [3] Seo J K, Kim D-H, Lee J, Kwon O I, Sajib S Z K and Woo E J 2012 Inverse Problems 28 084002 [4] Tamasan A and Veras J 2012 Inverse Problems 28 084006 [5] Moradifam A, Nachman A and Timonov A 2012 Inverse Problems 28 084003 [6] Ammari H, Garnier J and Jing W 2012 Inverse Problems 28 084005 [7] Tarvainen T, Cox B T, Kaipio J P and Arridge S R 2012 Inverse Problems 28 084009 [8] Monard F and Bal G 2012 Inverse Problems 28 084001 [9] McLaughlin J, Oberai A and Yoon J R 2012 Inverse Problems 28 084004
Improved resistivity imaging of groundwater solute plumes using POD-based inversion
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.; Khan, T.
2012-12-01
We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows for these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics: concentration root-mean-square error (RMSE), peak concentration, and total solute mass. In addition, results for POD inversion based on 3 different data densities (120, 300, and 560 data points) and varying numbers of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. 
Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
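The basis-extraction step described above can be sketched in a few lines: stack the Monte Carlo training images as columns of a snapshot matrix and take a truncated SVD. This is a generic POD sketch under illustrative names, not the authors' code:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Extract a POD basis from training images.
    snapshots: (n_pixels, n_samples) matrix, one simulated plume per column.
    Returns the sample mean, the first n_modes left singular vectors
    (the basis images), and the singular values."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U[:, :n_modes], s

def project(image, mean, basis):
    """Coefficients of an image in the POD subspace and its
    low-dimensional reconstruction; the inversion then searches over
    these coefficients instead of the full pixel grid."""
    c = basis.T @ (image - mean.ravel())
    return c, mean.ravel() + basis @ c
```

The singular-value decay indicates how many basis images are needed, which is the trade-off the abstract explores with 100, 300, and 500 modes.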
Model based approach to UXO imaging using the time domain electromagnetic method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavely, E.M.
1999-04-01
Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model based imaging capability i.e. the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
Inverse problems in biomechanical imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Oberai, Assad A.
2016-03-01
It is now well recognized that a host of imaging modalities (a list that includes Ultrasound, MRI, Optical Coherence Tomography, and optical microscopy) can be used to "watch" tissue as it deforms in response to an internal or external excitation. The result is a detailed map of the deformation field in the interior of the tissue. This deformation field can be used in conjunction with a model of the material's mechanical response to determine the spatial distribution of material properties of the tissue by solving an inverse problem. Images of material properties thus obtained can be used to quantify the health of the tissue. Recently, they have been used to detect, diagnose and monitor cancerous lesions, detect vulnerable plaque in arteries, diagnose liver cirrhosis, and possibly detect the onset of Alzheimer's disease. In this talk I will describe the mathematical and computational aspects of solving this class of inverse problems, and their applications in biology and medicine. In particular, I will discuss the well-posedness of these problems and quantify the amount of displacement data necessary to obtain a unique property distribution. I will describe an efficient algorithm for solving the resulting inverse problem. I will also describe some recent developments based on Bayesian inference in estimating the variance in the estimates of material properties. I will conclude with the applications of these techniques in diagnosing breast cancer and in characterizing the mechanical properties of cells with sub-cellular resolution.
NASA Astrophysics Data System (ADS)
Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.
2008-12-01
Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
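The vector-to-matrix link mentioned in the abstract can be illustrated for the Schatten-1 (nuclear) norm: project the singular values onto an l1 ball and recompose. This is a hedged, generic sketch (illustrative function names; the Duchi-style l1 projection below assumes a nonnegative input, which holds for singular values), not the authors' full primal-dual solver:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a nonnegative vector onto the l1 ball
    of the given radius (sort-and-threshold algorithm)."""
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # Largest index where the running threshold is still active
    rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(M, radius):
    """Project a matrix onto the Schatten-1 (nuclear norm) ball:
    take the SVD, project the singular values onto the l1 ball,
    and recompose. The same pattern works for other Schatten-p norms
    with the corresponding lp-ball projection."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, radius)) @ Vt
```

Because the Schatten-p norm of a matrix is the lp norm of its singular vector, the matrix projection inherits the cost of one SVD plus a cheap vector projection, which is what makes the primal-dual iterations practical.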
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
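The L-curve procedure can be sketched generically: sweep the regularization parameter, record the residual norm and solution norm for each Tikhonov solution, and look for the corner of the resulting curve on a log-log plot. A minimal sketch in normal-equations form (the paper uses the generalized SVD instead; names are illustrative):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution: argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve(A, b, lams):
    """Residual and solution norms over a sweep of regularization
    parameters. Plotting log(residual) vs log(solution norm) gives the
    L-curve; its corner marks the optimum lambda."""
    pts = []
    for lam in lams:
        x = tikhonov(A, b, lam)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts
```

As lambda grows, the solution norm shrinks monotonically while the residual grows, which is why the corner of the curve balances data fit against regularization.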
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique suffers from meshing difficulties and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to combine the advantages of the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogenous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
Computed inverse resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
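Of the three deconvolution solvers listed, inverse filtering with a truncated filter is the simplest to illustrate: divide by the kernel spectrum only where it is safely away from zero. A 1D toy sketch (the actual problem is a 3D dipole-kernel deconvolution; names and the threshold value are illustrative):

```python
import numpy as np

def truncated_inverse_filter(y, K, tau=0.1):
    """Deconvolve y = k * x by dividing by the kernel spectrum K = fft(k),
    but only at frequencies where |K| > tau; the rest are zeroed so the
    ill-posed division does not amplify noise."""
    Y = np.fft.fft(y)
    X = np.zeros_like(Y)
    mask = np.abs(K) > tau
    X[mask] = Y[mask] / K[mask]
    return np.real(np.fft.ifft(X))
```

Frequencies where the kernel nearly vanishes are simply discarded, which is the source of the truncation artifacts that motivate the TV-regularized alternative the abstract favors.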
Minimizing EIT image artefacts from mesh variability in finite element models.
Adler, Andy; Lionheart, William R B
2011-07-01
Electrical impedance tomography (EIT) solves an inverse problem to estimate the conductivity distribution within a body from electrical stimulation and measurements at the body surface, where the inverse problem is based on a solution of Laplace's equation in the body. Most commonly, a finite element model (FEM) is used, largely because of its ability to describe irregular body shapes. In this paper, we show that simulated variations in the positions of internal nodes within a FEM can result in serious image artefacts in the reconstructed images. Such variations occur when designing FEM meshes to conform to conductivity targets, but the effects may also be seen in other applications of absolute and difference EIT. We explore the hypothesis that these artefacts result from changes in the projection of the anisotropic conductivity tensor onto the FEM system matrix, which introduces anisotropic components into the simulated voltages, which cannot be reconstructed onto an isotropic image, and appear as artefacts. The magnitude of the anisotropic effect is analysed for a small regular FEM, and shown to be proportional to the relative node movement as a fraction of element size. In order to address this problem, we show that it is possible to incorporate a FEM node movement component into the formulation of the inverse problem. These results suggest that it is important to consider artefacts due to FEM mesh geometry in EIT image reconstruction.
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
Numerous numerical simulations have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied in limited-view inverse scattering problems. However, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for imaging a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
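The imaging functional at the heart of MUSIC can be sketched for small point scatterers in a hypothetical 2D free-space setup with a scalar Green's function (not the crack model analyzed in the paper; geometry and names are illustrative): build the multistatic response matrix, extract the noise subspace, and evaluate a functional that blows up at scatterer locations.

```python
import numpy as np

def music_pseudospectrum(sensors, scatterers, grid, k=2 * np.pi):
    """MUSIC sketch for point scatterers (Born approximation).
    Builds the multistatic response matrix K = sum_m g(z_m) g(z_m)^T,
    takes its noise subspace Un, and evaluates
    I(z) = 1 / ||Un^H g(z)||, which peaks at the scatterer locations."""
    def g(z):
        d = np.linalg.norm(sensors - z, axis=1)
        v = np.exp(1j * k * d) / d          # 2D-toy Green's-function vector
        return v / np.linalg.norm(v)
    K = sum(np.outer(g(z), g(z)) for z in scatterers)   # symmetric MSR matrix
    U, s, _ = np.linalg.svd(K)
    m = len(scatterers)      # signal-space dimension (estimated from the
    Un = U[:, m:]            # singular-value gap in practice)
    return np.array([1.0 / np.linalg.norm(Un.conj().T @ g(z)) for z in grid])
```

With noise-free data the functional is essentially infinite at the true locations; the paper's necessary condition concerns when this separation survives in the limited-view, noisy setting.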
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy in the reconstructed images.
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been primarily used to image non-conductive media only, since if the conductivity of the imaged object is high, the capacitance measuring circuit will be nearly short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods are the linear single-step Tikhonov method and the iterative total variation regularization method, and use two sets of ECT data to reconstruct the image in time-difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi
2009-01-01
Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field B0, and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal-void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantifying a susceptibility distribution using MRI.
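Ignoring the noise weighting, the COSMOS idea reduces to a per-k-space-point least-squares combination of the dipole kernels D_i(k) = 1/3 - (k·b̂_i)²/|k|² from the different orientations. A uniform-weight NumPy sketch (variable names are my own; the published method additionally weights by phase noise and masks signal voids):

```python
import numpy as np

def dipole_kernel(shape, b0_dir):
    """Unit dipole kernel D(k) = 1/3 - (k.b0)^2/|k|^2 for a given B0 direction."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kz**2 + ky**2 + kx**2
    kb = kz*b0_dir[0] + ky*b0_dir[1] + kx*b0_dir[2]
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0/3.0 - kb**2 / k2
    D[k2 == 0] = 0.0
    return D

def cosmos(fields, b0_dirs, eps=1e-6):
    """Uniform-weight least-squares susceptibility map from fieldmaps acquired
    at several B0 orientations, combined point-by-point in k-space."""
    shape = fields[0].shape
    num = np.zeros(shape, dtype=complex)
    den = np.zeros(shape)
    for f, b0 in zip(fields, b0_dirs):
        D = dipole_kernel(shape, b0)
        num += D * np.fft.fftn(f)
        den += D**2
    return np.real(np.fft.ifftn(num / (den + eps)))
```

With three roughly orthogonal orientations, the zero cones of the individual kernels intersect only at k = 0, which is what stabilizes the inversion.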
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective: This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods: The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inversion, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results: Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver offers noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions: The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI in two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
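Of the three deconvolution approaches listed, inverse filtering with a truncated filter is the easiest to sketch: divide in k-space by the dipole kernel, clipping its magnitude away from zero near the magic-angle cone where it vanishes. A hedged NumPy sketch (threshold value and the B0-along-z convention are illustrative assumptions):

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 for B0 along the first axis."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kz**2 + ky**2 + kx**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0/3.0 - kz**2 / k2
    D[k2 == 0] = 0.0
    return D

def tkd_deconvolve(fieldmap, thresh=0.15):
    """Truncated inverse filtering: k-space division by the dipole kernel,
    with |D| clipped to the threshold near its zero cone."""
    D = dipole_kernel(fieldmap.shape)
    Dt = np.where(np.abs(D) < thresh, thresh * np.where(D < 0, -1.0, 1.0), D)
    return np.real(np.fft.ifftn(np.fft.fftn(fieldmap) / Dt))
```

The truncation trades a controlled amount of underestimation near the cone for stability; the Tikhonov and TV solvers address the same null space with explicit regularization instead.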
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2015-02-01
Multi-frequency subspace migration imaging techniques are usually adopted for non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. This technique is known to be fast, effective, and robust, and it can be applied not only to full-view but also to limited-view inverse problems, provided a suitable number of incident fields are applied and the corresponding scattered fields collected. However, in many works the application of such techniques is heuristic. Motivated by this, the present study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrary-shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far-field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of these expressions reveals certain properties of subspace migration and explains why using multiple frequencies enhances the imaging resolution. In particular, we carefully analyze the subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode.
Various results of numerical simulations performed on the far-field data affected by large amounts of random noise are similar to the analytical results derived in this study, and they provide a direction for future studies.
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inverting a generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the outer integral term related to illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then solve the over-determined problem for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method obtains more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability of the method.
Model-based elastography: a survey of approaches to the inverse elasticity problem
Doyley, M M
2012-01-01
Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas (quasi-static, harmonic, and transient) and describes inversion schemes for each elastographic imaging approach. Approaches include: first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
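For the harmonic case, the simplest direct inversion scheme assumes local homogeneity, so the time-harmonic displacement obeys μ∇²u + ρω²u = 0 and the shear modulus follows algebraically as μ = -ρω²u/∇²u. A 1D NumPy sketch of this idea (illustrative only; practical implementations filter the data and work in 3D):

```python
import numpy as np

def helmholtz_inversion(u, dx, rho, omega):
    """Algebraic shear-modulus estimate mu = -rho*omega^2*u / laplacian(u),
    valid under a local-homogeneity assumption (periodic boundaries here)."""
    lap = (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2
    return -rho * omega**2 * u / lap

# Plane shear wave u = cos(k x + phase): the estimate is the constant
# mu = rho*omega^2/k^2, up to the discrete-Laplacian correction.
x = np.arange(64)
k = 2*np.pi / 64
u = np.cos(k*x + 0.3)
mu = helmholtz_inversion(u, dx=1.0, rho=1000.0, omega=2*np.pi*50)
```

Pointwise division by the Laplacian is what makes this approach noise-sensitive, which is one motivation for the iterative, model-based schemes the survey covers.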
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are: Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy) October 2000 Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard) December 2001 Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler) December 2002 Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew) December 2004 Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard) December 2005 Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco) February 2009 In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. 
The construction of this special issue also differed somewhat from that of preceding ones. In addition to the invitations sent to specific research groups involved in electromagnetic inverse problems, the Guest Editors also solicited recommendations, from a large number of experts, of potential authors who were thereupon encouraged to contribute. Moreover, an open call for contributions was published on the homepage of Inverse Problems in order to attract as wide a scope of contributions as possible. This special issue's attempt at generality might also define its limitations: by no means could this collection of papers be exhaustive or complete, and as Guest Editors we are well aware that many exciting topics and potential contributions will be missing. This, however, also determines its very special flavor: besides addressing electromagnetic inverse problems in a broad sense, there were only a few restrictions on the contributions considered for this section. One requirement was plausible evidence of either novelty or the emergent nature of the technique or application described, judged mainly by the referees, and in some cases by the Guest Editors. The technical quality of the contributions always remained a stringent condition of acceptance, final adjudication (possibly questionable either way, not always positive) being made in most cases once a thorough revision process had been carried out. Therefore, we hope that the final result presented here constitutes an interesting collection of novel ideas and applications, properly refereed and edited, which will find its own readership and which can stimulate significant new research in the topics represented. Overall, as Guest Editors, we feel quite fortunate to have obtained such a strong response to the call for this issue and to have a really wide-ranging collection of high-quality contributions which, indeed, can be read from the first to the last page with sustained enthusiasm. 
A large number of applications and techniques are represented, overall via 16 contributions with 45 authors in total. This shows, in our opinion, that electromagnetic imaging and inversion remain amongst the most challenging and active research areas in applied inverse problems today. Below, we give a brief overview of the contributions included in this issue, ordered alphabetically by the surname of the leading author. 1. The complexity of handling potential randomness of the source in an inverse scattering problem is not minor, and the literature on this configuration is far from replete. The contribution by G Bao, S N Chow, P Li and H Zhou, `Numerical solution of an inverse medium scattering problem with a stochastic source', exemplifies how to hybridize Wiener chaos expansion with a recursive linearization method in order to solve the stochastic problem as a set of decoupled deterministic ones. 2. In cases where the forward problem is expensive to evaluate, database methods might become a reliable method of choice, while enabling one to deliver more information on the inversion itself. The contribution by S Bilicz, M Lambert and Sz Gyimóthy, `Kriging-based generation of optimal databases as forward and inverse surrogate models', describes such a technique which uses kriging for constructing an efficient database with the goal of achieving an equidistant distribution of points in the measurement space. 3. Anisotropy remains a considerable challenge in electromagnetic imaging, which is tackled in the contribution by F Cakoni, D Colton, P Monk and J Sun, `The inverse electromagnetic scattering problem for anisotropic media', via the fact that transmission eigenvalues can be retrieved from a far-field scattering pattern, yielding, in particular, lower and upper bounds of the index of refraction of the unknown (dielectric anisotropic) scatterer. 4. So-called subspace optimization methods (SOM) have attracted a lot of interest recently in many fields. 
The contribution by X Chen, `Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium', illustrates how to address a realistic situation in which the medium containing the unknown obstacles is not homogeneous, via blending a properly developed SOM with a finite-element approach to the required Green's functions. 5. H Egger, M Hanke, C Schneider, J Schöberl and S Zaglmayr, in their contribution `Adjoint-based sampling methods for electromagnetic scattering', show how to efficiently develop sampling methods without explicit knowledge of the dyadic Green's function once an adjoint problem has been solved at much lower computational cost. This is demonstrated by examples in demanding propagative and diffusive situations. 6. Passive sensor arrays can be employed to image reflectors from ambient noise via proper migration of cross-correlation matrices into their embedding medium. This is investigated, and resolution, in particular, is considered in detail, as a function of the characteristics of the sensor array and those of the noise, in the contribution by J Garnier and G Papanicolaou, `Resolution analysis for imaging with noise'. 7. A direct reconstruction technique based on the conformal mapping theorem is proposed and investigated in depth in the contribution by H Haddar and R Kress, `Conformal mapping and impedance tomography'. This paper expands on previous work, with inclusions in homogeneous media, convergence results, and numerical illustrations. 8. 
The contribution by T Hohage and S Langer, `Acceleration techniques for regularized Newton methods applied to electromagnetic inverse medium scattering problems', focuses on a spectral preconditioner intended to accelerate regularized Newton methods as employed for the retrieval of a local inhomogeneity in a three-dimensional vector electromagnetic case, while also illustrating the implementation of a Lepskiĭ-type stopping rule outsmarting a traditional discrepancy principle. 9. Geophysical applications are a rich source of practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, `Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion, and incorporation of prior knowledge, such as in hydrocarbon recovery. 10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, `Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier--Stokes fluid flow model together with time-varying concentration distribution. 11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, `On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose. 12. 
The contribution by R Potthast, `A study on orthogonality sampling' envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media. 13. The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, `Contrast-enhanced microwave imaging of breast tumors: a computational study using 3D realistic numerical phantoms', aims at microwave medical imaging, namely the early detection of breast cancer. The use of contrast enhancing agents is discussed in detail and a number of reconstructions in three-dimensional geometry of realistic numerical breast phantoms are presented. 14. The contribution by D A Subbarayappa and V Isakov, `Increasing stability of the continuation for the Maxwell system', discusses enhanced log-type stability results for continuation of solutions of the time-harmonic Maxwell system, adding a fresh chapter to the interesting story of the study of the Cauchy problem for PDE. 15. In their contribution, `Recent developments of a monotonicity imaging method for magnetic induction tomography in the small skin-depth regime', A Tamburrino, S Ventre and G Rubinacci extend the recently developed monotonicity method toward the application of magnetic induction tomography in order to map surface-breaking defects affecting a damaged metal component. 16. 
The contribution by F Viani, P Rocca, M Benedetti, G Oliveri and A Massa, `Electromagnetic passive localization and tracking of moving targets in a WSN-infrastructured environment', contributes to what could still be seen as a niche problem, yet one that is both useful in terms of applications, e.g., security, and challenging in terms of methodologies and experiments, in particular in view of the complexity of the environments in which this endeavor is to take place and the variability of the wireless sensor networks employed. To conclude, we would like to thank Kate Watt and Zoë Crossman, as past and present Publishers of the Journal, for their able and tireless work on what was definitely a long and exciting journey (sometimes a little discouraging when reports were not arriving, or authors were late, or Guest Editors overwhelmed) that started from a thorough discussion at the `Manchester workshop on electromagnetic inverse problems' held in mid-June 2009, between Kate Watt and the Guest Editors. We gratefully acknowledge that W W Symes gave us his full backing to carry out this special issue and that A K Louis completed it successfully. Last, but not least, the staff of Inverse Problems should be thanked, since they work together to make it a premier journal.
Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks
NASA Astrophysics Data System (ADS)
Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li
2016-06-01
Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter α_k, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods, as well as conventional least-squares inversion.
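The K-medoids sample-design step relies on standard K-medoids clustering to pick representative models. A minimal PAM-style NumPy sketch (a generic implementation, not the authors' code) is:

```python
import numpy as np

def k_medoids(X, k, iters=20, seed=0):
    """Minimal K-medoids clustering: alternate nearest-medoid assignment with
    per-cluster medoid updates. Returns indices of the k medoid samples."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)               # assign points
        new = medoids.copy()
        for j in range(k):
            idx = np.where(labels == j)[0]
            # medoid = cluster member minimizing total within-cluster distance
            new[j] = idx[np.argmin(D[np.ix_(idx, idx)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids
```

Because medoids are actual samples (unlike K-means centroids), they can directly seed the training set so the network sees each region of model space.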
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial to the physics of image formation and its detection in the presence of noises. The rest of the chapter focuses on the resolution of the inverse problem
2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Brossier, R.; Virieux, J.; Operto, S.
2008-12-01
Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method for reconstructing physical parameters of the Earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D P-SV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy that helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We present the main features of the parallel modeling/inversion algorithm, assess its scalability, and illustrate its performance with realistic synthetic case studies.
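The low-to-high frequency hierarchy matters because high-frequency misfits are oscillatory in the model parameters, so a descent method started far from the truth locks onto the wrong cycle (cycle skipping). A toy NumPy sketch that estimates a single time shift from per-frequency phase misfits illustrates the effect (entirely illustrative, not the paper's algorithm):

```python
import numpy as np

def fit_shift(freqs, t_true, t=0.0, steps=300):
    """Estimate a time shift by gradient descent on the per-frequency phase
    misfit J(t) = |exp(-2j*pi*f*t) - exp(-2j*pi*f*t_true)|^2, sweeping the
    frequencies in the given order (multiresolution continuation)."""
    for f in freqs:
        lr = 0.01 / f**2                    # step size kept stable per frequency
        for _ in range(steps):
            # dJ/dt = 4*pi*f*sin(2*pi*f*(t - t_true))
            t -= lr * 4*np.pi*f * np.sin(2*np.pi*f*(t - t_true))
    return t

t_multi = fit_shift([0.5, 1.0, 2.0, 4.0], t_true=0.4)  # low-to-high continuation
t_high = fit_shift([4.0], t_true=0.4)                   # high frequency only
```

The low-frequency misfit has a basin of attraction wide enough to reach the true shift, after which the higher frequencies only refine it; starting directly at the highest frequency converges to a cycle-skipped local minimum.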
Grid-Independent Compressive Imaging and Fourier Phase Retrieval
ERIC Educational Resources Information Center
Liao, Wenjing
2013-01-01
This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…
Image processing and reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chartrand, Rick
2012-06-15
This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
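A concrete instance of the nonconvex relaxation alluded to above is ℓ_p regularization with p < 1, commonly attacked by iteratively reweighted least squares (IRLS), where each pass solves a weighted ridge problem. A hedged NumPy sketch (a generic IRLS scheme, not necessarily the talk's exact algorithm):

```python
import numpy as np

def irls_lp(A, b, p=0.5, lam=1e-3, iters=30, eps=1e-8):
    """IRLS for min ||Ax - b||^2 + lam * ||x||_p^p with p < 1 (nonconvex).
    Each iteration solves a ridge problem with smoothed weights |x|^(p-2)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # minimum-norm start
    for _ in range(iters):
        w = (x**2 + eps) ** (p/2 - 1.0)             # large weight on small entries
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```

On compressive-sensing-style problems (sparse unknown, underdetermined A), taking p < 1 typically recovers sparser solutions from fewer measurements than the convex p = 1 choice, which is the kind of improvement the talk refers to.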
1993-02-01
[Fragmentary record] The recoverable text concerns noise amplification induced by inverse filtering in image deblurring, regularization strategies and parameter selection (constrained least squares (CLS) and Wiener approaches), and analysis of the probability of convergence to within a prescribed tolerance.
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body from electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in reconstructed images in the presence of noise. In this paper, a statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method than with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing.
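For intuition, the nonnegatively constrained likelihood viewpoint connects to the classical EM update for linear models with nonnegative data (the Richardson-Lucy iteration), in which each multiplicative step preserves nonnegativity automatically. The sketch below is this generic iteration, illustrative only; the paper itself solves the constrained problem with a gradient projection-reduced Newton (GPRN) scheme:

```python
import numpy as np

def em_nonneg(A, b, iters=500):
    """Multiplicative EM (Richardson-Lucy style) iteration for A @ x ~ b with
    A, b >= 0. Iterates stay nonnegative by construction, so no negative-value
    artifacts can appear in the reconstruction."""
    x = np.ones(A.shape[1])
    s = A.sum(axis=0) + 1e-12               # column sums (sensitivity terms)
    for _ in range(iters):
        x *= (A.T @ (b / (A @ x + 1e-12))) / s
    return x
```

Because the update multiplies rather than subtracts, no explicit projection is needed, which is precisely the property that removes the negativity artifacts of unconstrained regularization.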
NASA Astrophysics Data System (ADS)
Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong
2018-05-01
In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems arising in the regularization of inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized problems to be nonnegative and sparse, and then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
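One standard penalty with the stated effect is λΣᵢxᵢ plus the indicator of the nonnegative orthant: its proximal map is a one-sided soft threshold, so a gradient-type (proximal gradient) iteration stays nonnegative and sparse at every step. A NumPy sketch under that assumed penalty (the paper's exact functional and step rules may differ):

```python
import numpy as np

def prox_grad_nonneg_sparse(A, b, lam=0.1, iters=500):
    """Proximal gradient for min 0.5*||Ax - b||^2 + lam*sum(x) s.t. x >= 0.
    The prox of this nonnegative-sparsity penalty is one-sided soft thresholding:
    prox(v) = max(v - lam/L, 0), so every iterate is nonnegative and sparse."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)               # gradient of the data-fit term
        x = np.maximum(x - (g + lam) / L, 0.0)   # gradient step + prox
    return x
```

The semismooth Newton variant accelerates the same fixed-point equation with second-order information; the sketch above corresponds only to the first-order, gradient-type side.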
Inverse Diffusion Curves Using Shape Optimization.
Zhao, Shuang; Durand, Fredo; Zheng, Changxi
2018-07-01
The inverse diffusion curve problem focuses on the automatic creation of diffusion curve images that resemble user-provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on the resulting color fields via a partial differential equation (PDE). We introduce a new approach, complementary to previous methods, that optimizes curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems.
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2013 was a one-day workshop held in May 2013 which attracted around 60 attendees. Each of the submitted papers has been reviewed by three reviewers. Among the accepted papers, there are seven oral presentations, five posters and one invited poster (On a deconvolution challenge presented by C Vonesch from EPFL, Switzerland). In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR Ondes, GDR MOA, GDR MSPC). The program committee acknowledges the following research laboratories CMLA, LMT, LSV, LURPA, SATIE. 
Laure Blanc-Féraud and Pierre-Yves Joubert Workshop co-chair Laure Blanc-Féraud, I3S laboratory and INRIA Nice Sophia-Antipolis, France Pierre-Yves Joubert, IEF, Paris-Sud University, CNRS, France Technical program committee Gilles Aubert, J-A Dieudonné Laboratory, CNRS and University of Nice-Sophia Antipolis, France Nabil Anwer, LURPA, ENS Cachan, France Alexandre Baussard, ENSTA Bretagne, Lab-STICC, France Marc Bonnet, ENSTA, ParisTech, France Antonin Chambolle, CMAP, Ecole Polytechnique, CNRS, France Oliver Dorn, School of Mathematics, University of Manchester, UK Cécile Durieu, SATIE, ENS Cachan, CNRS, France Gérard Favier, I3S Laboratory, University of Nice Sophia-Antipolis, France Mário Figueiredo, Instituto Superior Técnico, Lisbon, Portugal Laurent Fribourg, LSV, ENS Cachan, CNRS, France Marc Lambert, L2S Laboratory, CNRS, SupElec, Paris-Sud University, France Dominique Lesselier, L2S Laboratory, CNRS, SupElec, Paris-Sud University, France Matteo Pastorino, DIBE, University of Genoa, Italy Christian Rey, LMT, ENS Cachan, CNRS, France Simon Setzer, Saarland University, Germany Cedric Vonesch, EPFL, Switzerland Local chair Sophie Abriet, SATIE Laboratory, ENS Cachan, France Béatrice Bacquet, SATIE Laboratory, ENS Cachan, France Lydia Matijevic, LMT Laboratory, ENS Cachan France Invited speakers Jérôme Idier, IRCCyN (UMR CNRS 6597), Ecole Centrale de Nantes, France Massimo Fornasier, Faculty of Mathematics, Technical University of Munich, Germany Matthias Fink, Institut Langevin, ESPCI, Université Paris Diderot, France
NASA Astrophysics Data System (ADS)
Talukdar, Karabi; Behera, Laxmidhar
2018-03-01
Imaging below the basalt for hydrocarbon exploration is a global problem because of poor penetration and significant loss of seismic energy due to scattering, attenuation, absorption and mode-conversion when the seismic waves encounter a highly heterogeneous and rugose basalt layer. The conventional (short-offset) seismic data acquisition, processing and modeling techniques adopted by the oil industry generally fail to image hydrocarbon-bearing sub-trappean Mesozoic sediments hidden below the basalt, which is a serious problem for hydrocarbon exploration worldwide. To overcome this difficulty of sub-basalt imaging, we have generated dense synthetic seismic data with the help of elastic finite-difference full-wave modeling using a staggered-grid scheme for the model derived from ray-trace inversion of sparse wide-angle seismic data acquired along the Sinor-Valod profile in the Deccan Volcanic Province of India. The full-wave synthetic seismic data were processed and imaged using conventional seismic data processing techniques with Kirchhoff pre-stack time and depth migrations. The seismic image obtained correlates with all the structural features of the model obtained through ray-trace inversion of wide-angle seismic data, validating the effectiveness of the robust elastic finite-difference full-wave modeling approach for imaging below thick basalts. Using full-wave modeling also allows us to decipher small-scale heterogeneities imposed in the model as a measure of the rugose basalt interfaces, which could not be handled by ray-trace inversion. Furthermore, we were able to accurately image thin low-velocity hydrocarbon-bearing Mesozoic sediments sandwiched between and hidden below two thick sequences of high-velocity basalt layers lying above the basement.
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
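The subspace split can be illustrated with a small linear-algebra sketch. All forms here are assumptions for the illustration: a random ill-conditioned matrix stands in for the transport operator, the data are noiseless, and the "minimization" over the high-frequency complement is replaced by a direct reduced least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned linear model y = K f (stand-in for the transport operator).
n = 40
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / (1.0 + np.arange(n)) ** 2           # rapidly decaying spectrum
K = U0 @ np.diag(s) @ V0.T
f_true = rng.standard_normal(n)
y = K @ f_true                                 # noiseless data for the sketch

# SVD split: the leading r singular directions (the stable, "low-frequency"
# subspace) are recovered analytically in closed form.
r = 10
Uk, sk, Vkt = np.linalg.svd(K)
c_lo = (Uk[:, :r].T @ y) / sk[:r]              # closed-form coefficients
f_lo = Vkt[:r].T @ c_lo

# The remaining "high-frequency" part would be found by iterative
# minimization; here we solve the small reduced system directly.
V_hi = Vkt[r:].T
c_hi, *_ = np.linalg.lstsq(K @ V_hi, y - K @ f_lo, rcond=None)
f = f_lo + V_hi @ c_hi
```

With noiseless data the analytic step reproduces the low-frequency coefficients of the true solution exactly, which is the point of splitting before minimizing.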
Imaging model for the scintillator and its application to digital radiography image enhancement.
Wang, Qian; Zhu, Yining; Li, Hongwei
2015-12-28
Digital Radiography (DR) images obtained by an OCD-based (optical coupling detector) Micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution are achieved. By analyzing the radiative transfer process of visible light in the scintillator, scattering is recognized as the main factor leading to low contrast. The associated blurring effect is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is then established. When solving the inverse problem, pre-correction of the x-ray intensity, a dark channel prior based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can improve the contrast of DR images dramatically as well as effectively reduce blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
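The dark-channel step can be sketched for a grayscale DR-like image. This is a generic sketch of the prior, not the paper's pipeline: the window size `k`, the weight `omega`, and the transmission floor used in the dehazing-style correction are assumed values.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def dark_channel(img, k=3):
    """Local-minimum filter: the 'dark channel' of a grayscale image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    return sliding_window_view(padded, (k, k)).min(axis=(-1, -2))

def transmission(img, airlight, omega=0.9, k=3):
    """Haze-removal style transmission estimate t = 1 - omega * dark(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, k)

rng = np.random.default_rng(2)
img = rng.random((16, 16))                      # synthetic stand-in for a DR image
airlight = img.max()                            # crude "airlight" estimate
tmap = transmission(img, airlight)
# Scene-radiance style correction with a floored transmission map.
dehazed = (img - airlight) / np.clip(tmap, 0.1, 1.0) + airlight
```

The dark channel is by construction a pointwise lower bound on the image, and the transmission map stays in [0.1, 1] after flooring, which keeps the correction stable.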
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
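The sum-of-outer-products idea with closed-form block updates can be sketched directly. This is a simplified illustration of the approach described above, not the authors' implementation: the data are random, the ℓ0 weight `lam` and problem sizes are assumptions, and each sweep updates one rank-one pair (atom, sparse code row) at a time with the exact closed-form minimizers.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, J = 16, 100, 8
Y = rng.standard_normal((n, N))                 # data matrix (columns = signals)
D = rng.standard_normal((n, J))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
C = np.zeros((J, N))                            # sparse codes (rows)
lam = 1.0                                       # assumed l0 penalty weight

def obj():
    return np.sum((Y - D @ C) ** 2) + lam ** 2 * np.count_nonzero(C)

vals = [obj()]
for _ in range(5):
    for j in range(J):
        R = Y - D @ C + np.outer(D[:, j], C[j])   # residual excluding pair j
        # Closed-form sparse-code update: hard-threshold correlations at lam.
        b = R.T @ D[:, j]
        C[j] = np.where(np.abs(b) > lam, b, 0.0)
        # Closed-form atom update: normalized residual direction.
        Rc = R @ C[j]
        nrm = np.linalg.norm(Rc)
        if nrm > 0:                                # degenerate case: keep old atom
            D[:, j] = Rc / nrm
    vals.append(obj())
```

Each block update exactly minimizes the objective over its block, so the objective is non-increasing across sweeps, mirroring the convergence behavior studied in the paper.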
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problem setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
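A minimal discrete sketch of the weighted-TV saddle-point idea in one dimension, solved with a first-order primal-dual (Chambolle-Pock type) iteration as in the paper's numerics. Everything concrete here is an assumption for illustration: anisotropic TV, a uniform weight vector `w`, and hand-picked step sizes satisfying the usual convergence condition.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # piecewise-constant signal
y = clean + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)        # forward-difference operator, (n-1, n)
w = 0.3 * np.ones(n - 1)              # spatial TV weights (uniform placeholder)

def primal(x):
    return 0.5 * np.sum((x - y) ** 2) + np.sum(w * np.abs(D @ x))

# Saddle-point form: min_x max_{|p| <= w} <D x, p> + 0.5 ||x - y||^2,
# solved by the first-order primal-dual iteration.
tau = sigma = 0.25                    # tau * sigma * ||D||^2 <= 1 since ||D|| <= 2
x = y.copy()
x_bar = x.copy()
p = np.zeros(n - 1)
for _ in range(300):
    p = np.clip(p + sigma * (D @ x_bar), -w, w)             # dual: project onto |p| <= w
    x_new = (x - tau * (D.T @ p) + tau * y) / (1.0 + tau)   # primal: prox of data term
    x_bar = 2 * x_new - x                                    # over-relaxation
    x = x_new
```

Note that the dual step only needs the pointwise constraint |p| ≤ w, which is exactly why no explicit formula for the (relaxed) weighted TV functional is required.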
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
NASA Astrophysics Data System (ADS)
Chen, Hao; Zhang, Xinggan; Bai, Yechao; Tang, Lan
2017-01-01
In inverse synthetic aperture radar (ISAR) imaging, migration through resolution cells (MTRC) occurs when the rotation angle of the moving target is large, degrading image resolution. To solve this problem, an ISAR imaging method based on segmented preprocessing is proposed. In this method, the echoes of a large rotating target are divided into several small segments, and every segment can generate a low-resolution image without MTRC. Then, each low-resolution image is rotated back to the original position. After image registration and phase compensation, a high-resolution image can be obtained. Simulation and real experiments show that the proposed algorithm can deal with radar systems with different range and cross-range resolutions and significantly compensate for the MTRC.
Time domain localization technique with sparsity constraint for imaging acoustic sources
NASA Astrophysics Data System (ADS)
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
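Of the two sparse solvers mentioned, orthogonal matching pursuit is the simpler to sketch. The dictionary below is an assumed orthonormal stand-in for the array steering matrix (which makes recovery exact in this toy); a real propagation dictionary would be merely well-conditioned, not orthonormal.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, refit by least squares."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))   # orthonormal stand-in dictionary
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [2.0, -1.5, 1.0]               # three "sources" with amplitudes
y = Q @ x_true                                        # noiseless array data
x_hat = omp(Q, y, 3)
```

The least-squares refit at every step is what distinguishes OMP from plain matching pursuit and is why the recovered amplitudes, not just positions, are accurate.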
NASA Technical Reports Server (NTRS)
Berger, B. S.; Duangudom, S.
1973-01-01
A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists of reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
Extraction of skin-friction fields from surface flow visualizations as an inverse problem
NASA Astrophysics Data System (ADS)
Liu, Tianshu
2013-12-01
Extraction of high-resolution skin-friction fields from surface flow visualization images as an inverse problem is discussed from a unified perspective. The surface flow visualizations used in this study are luminescent oil-film visualization and heat-transfer and mass-transfer visualizations with temperature- and pressure-sensitive paints (TSPs and PSPs). The theoretical foundations of these global methods are the thin-oil-film equation and the limiting forms of the energy- and mass-transport equations at a wall, which are projected onto the image plane to provide the relationships between a skin-friction field and the relevant quantities measured by using an imaging system. Since these equations can be re-cast in the same mathematical form as the optical flow equation, they can be solved by using the variational method in the image plane to extract relative or normalized skin-friction fields from images. Furthermore, in terms of instrumentation, essentially the same imaging system for measurements of luminescence can be used in these surface flow visualizations. Examples are given to demonstrate the applications of these methods in global skin-friction diagnostics of complex flows.
Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.
Shaheen, Anjuman; Rajpoot, Kashif
2015-08-01
Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remain a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of the C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, thus making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has high contrast and low noise, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
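The abstract does not spell out the histogram analysis; a common choice consistent with it is Otsu's between-class-variance threshold, sketched here on synthetic bimodal intensities (the bimodal test data and the simple intensity flip are assumptions, not the paper's exact rule).

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Histogram-based threshold (Otsu): maximize between-class variance."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)          # class-0 mean mass
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[int(np.argmax(sigma_b))]

rng = np.random.default_rng(6)
# Synthetic bimodal intensities: dark "myocardium" and bright "cavity" classes.
img = np.concatenate([rng.normal(0.25, 0.03, 500), rng.normal(0.75, 0.03, 500)])
img = np.clip(img, 0.0, 1.0).reshape(25, 40)
thr = otsu_threshold(img)
inverted = 1.0 - img                     # simple intensity flip about the range
```

After inversion, pixels above the estimated threshold (the bright cavity) land below it, which is the appearance swap the method relies on.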
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
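The discrete-domain regularizer itself is compact to state: denoising with a graph Laplacian prior reduces to a single linear solve. In this sketch a chain graph with unit weights stands in for the patch neighborhood graph, and `mu` is an assumed regularization weight, a deliberately minimal instance of the prior analyzed above.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # piecewise-constant signal
y = clean + 0.1 * rng.standard_normal(n)

# Chain-graph Laplacian L = Deg - W with unit weights between neighbors.
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0
L = np.diag(W.sum(axis=1)) - W

# argmin_x ||x - y||^2 + mu * x^T L x  has the closed form (I + mu L) x = y.
mu = 2.0
x = np.linalg.solve(np.eye(n) + mu * L, y)
```

Since the solution minimizes the regularized objective, its Laplacian quadratic form (graph smoothness) is guaranteed to be no larger than that of the noisy input.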
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, ACO has seldom been used to invert gravity and magnetic data. On the basis of the continuous and multi-dimensional objective function for optimization inversion of potential field data, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of Gaussian mapping between the objective function value and the quantity of pheromone. The method can analyze the search results in real time and promote the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary-representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
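A minimal node-partition-style ACO sketch: each continuous model variable is discretized into nodes, ants pick nodes with pheromone-weighted probabilities, and deposits follow an exponential (Gaussian-type) mapping of the misfit. The quadratic toy misfit, grid sizes, ant counts, and deposit/evaporation constants are all illustrative assumptions, not the NP-ACO parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "inversion": recover two model variables (e.g. depth and radius of a
# buried body) by minimizing a stand-in misfit over discretized node grids.
true_v = np.array([2.0, 0.5])

def misfit(v):
    return np.sum((v - true_v) ** 2)          # stand-in objective

grids = [np.linspace(0.5, 5.0, 20), np.linspace(0.1, 1.0, 20)]
pher = [np.ones(20), np.ones(20)]             # pheromone trail per node
best, best_val = None, np.inf

for _ in range(40):                            # colony iterations
    for _ in range(10):                        # ants per iteration
        picks = [rng.choice(20, p=p / p.sum()) for p in pher]
        v = np.array([g[i] for g, i in zip(grids, picks)])
        f = misfit(v)
        if f < best_val:
            best, best_val = picks, f
        # Exponential mapping: better (smaller) misfit deposits more pheromone.
        for p, i in zip(pher, picks):
            p[i] += np.exp(-f)
    for p in pher:
        p *= 0.9                               # evaporation
```

The exponential deposit keeps strong differentiation between good and bad ants, which is the behavior the abstract contrasts with ant-cycle-style mappings.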
Monotonicity based imaging method for time-domain eddy current problems
NASA Astrophysics Data System (ADS)
Su, Z.; Ventre, S.; Udpa, L.; Tamburrino, A.
2017-12-01
Eddy current imaging is an example of inverse problem in nondestructive evaluation for detecting anomalies in conducting materials. This paper introduces the concept of time constants and associated natural modes in eddy current imaging. The monotonicity of time constants is then described and applied to develop a non-iterative imaging method. The proposed imaging method has a low computational cost which makes it suitable for real-time operations. Full 3D numerical examples prove the effectiveness of the method in realistic scenarios. This paper is dedicated to Professor Guglielmo Rubinacci on the occasion of his 65th Birthday.
Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.
2017-02-01
In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) algorithm to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
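The GPSR idea is easy to sketch: split x = u - v with u, v >= 0, so the l1-regularized least-squares problem becomes a bound-constrained quadratic that projected gradient solves using only products with A and its transpose. The random sensing matrix, sizes, step size, and regularization weight below are illustrative assumptions (real GPSR also uses Barzilai-Borwein steps, omitted here for brevity).

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in sensitivity matrix
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]          # sparse conductivity change
y = A @ x_true + 0.01 * rng.standard_normal(m)

tau_reg = 0.05                                  # l1 weight (assumed)
lam_max = np.linalg.eigvalsh(A.T @ A).max()
alpha = 0.5 / lam_max                           # step <= 1/L for the split problem

def objective(u, v):
    x = u - v
    return 0.5 * np.sum((y - A @ x) ** 2) + tau_reg * np.sum(u + v)

u = np.zeros(n)
v = np.zeros(n)
vals = [objective(u, v)]
for _ in range(500):
    g = A.T @ (A @ (u - v) - y)                 # only mat-vec products needed
    u = np.maximum(0.0, u - alpha * (g + tau_reg))   # projected gradient step
    v = np.maximum(0.0, v - alpha * (-g + tau_reg))
    vals.append(objective(u, v))
x_hat = u - v
```

With a step size below the inverse Lipschitz constant of the split objective, each projected-gradient step is guaranteed not to increase the objective.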
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
Conference logo This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, applications (bio-medical imaging, non-destructive evaluation etc). NCMIP 2012 was a one-day workshop. Each of the submitted papers was reviewed by 2 to 4 reviewers. Among the accepted papers, there are 8 oral presentations and 5 posters. Three international speakers were invited for a long talk. This second edition attracted 60 registered attendees in May 2012. NCMIP 2012 was supported by Institut Farman (ENS Cachan) and endorsed by the following French research networks (GDR ISIS, GDR Ondes, GDR MOA, GDR MSPC). The program committee acknowledges the following laboratories CMLA, LMT, LSV, LURPA, SATIE, as well as DIGITEO Network. Laure Blanc-Féraud and Pierre-Yves Joubert Workshop Co-chairs Laure Blanc-Féraud, I3S laboratory, CNRS, France Pierre-Yves Joubert, IEF laboratory, Paris-Sud University, CNRS, France Technical Program Committee Alexandre Baussard, ENSTA Bretagne, Lab-STICC, France Marc Bonnet, ENSTA, ParisTech, France Jerôme Darbon, CMLA, ENS Cachan, CNRS, France Oliver Dorn, School of Mathematics, University of Manchester, UK Mário Figueiredo, Instituto Superior Técnico, Lisbon, Portugal Laurent Fribourg, LSV, ENS Cachan, CNRS, France Marc Lambert, L2S Laboratory, CNRS, SupElec, Paris-Sud University, France Anthony Quinn, Trinity College, Dublin, Ireland Christian Rey, LMT, ENS Cachan, CNRS, France Joachim Weickert, Saarland University, Germany Local Chair Alejandro Mottini, Morpheme group I3S-INRIA Sophie Abriet, SATIE, ENS Cachan, CNRS, France Béatrice Bacquet, SATIE, ENS Cachan, CNRS, France Reviewers Gilles Aubert, J-A Dieudonné Laboratory, CNRS and University of Nice-Sophia Antipolis, France Alexandre Baussard, ENSTA 
Bretagne, Lab-STICC, France Laure Blanc-Féraud, I3S laboratory, CNRS, France Marc Bonnet, ENSTA, ParisTech, France Jerôme Darbon, CMLA, ENS Cachan, CNRS, France Oliver Dorn, School of Mathematics, University of Manchester, UK Gérard Favier, I3S laboratory, CNRS, France Mário Figueiredo, Instituto Superior Técnico, Lisbon, Portugal Laurent Fribourg, LSV, ENS Cachan, CNRS, France Jérôme Idier, IRCCyN, CNRS, France Pierre-Yves Joubert, IEF laboratory, Paris-Sud University, CNRS, France Marc Lambert, L2S Laboratory, CNRS, SupElec, Paris-Sud University, France Dominique Lesselier, L2S Laboratory, CNRS, SupElec, Paris-Sud University, France Anthony Quinn, Trinity College, Dublin, Ireland Christian Rey, LMT, ENS Cachan, CNRS, France Simon Setzer, Saarland University, Germany Joachim Weickert, Saarland University, Germany Invited speakers Antonin Chambolle: CMAP, Ecole Polytechnique, CNRS, France Matteo Pastorino: University of Genoa, Italy Michael Unser: Ecole polytechnique Fédérale de Lausanne, Switzerland
Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.
2004-01-01
Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions.
A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.
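The overdetermined character of the object-based parameterization can be seen in a linearized toy version (the paper's algorithm is a constrained non-linear least-squares fit; the layer geometry and all numbers below are invented for illustration): with only a few object parameters and many rays, the system needs no damping or smoothness regularization.

```python
import numpy as np

rng = np.random.default_rng(8)
n_rays, n_layers = 60, 4
# Path length of each crosswell ray through each of a few layers
# (the "objects"); many rays, few unknowns -> overdetermined.
L = rng.uniform(0.0, 5.0, size=(n_rays, n_layers))
ds_true = np.array([0.02, 0.05, 0.0, 0.01])   # slowness change per layer
dt = L @ ds_true + 0.001 * rng.normal(size=n_rays)  # travel-time changes

# Ordinary least squares suffices -- no prior information needed.
ds_est, *_ = np.linalg.lstsq(L, dt, rcond=None)
```

With 4 unknowns and 60 rays, the estimate is stable against the travel-time noise without any of the regularization a pixel-based grid would require.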
Solving ill-posed inverse problems using iterative deep neural networks
NASA Astrophysics Data System (ADS)
Adler, Jonas; Öktem, Ozan
2017-12-01
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
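The gradient-like scheme can be sketched in a toy linear setting. The convolutional network of the paper is replaced here by a fixed, hand-crafted combination of the data-discrepancy and regulariser gradients; the operator `A`, step size and iteration count are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Toy linear inverse problem: recover x from y = A x + noise.
rng = np.random.default_rng(0)
n = 32
A = rng.normal(size=(n, n)) / np.sqrt(n)
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=n)

def grad_data(x):
    # Gradient of the data discrepancy 0.5 * ||A x - y||^2.
    return A.T @ (A @ x - y)

def grad_reg(x):
    # Gradient of a Tikhonov regulariser 0.5 * ||x||^2.
    return x

def update(x, g_data, g_reg, step=0.1, lam=1e-3):
    # Stand-in for the learned update: in the paper a CNN takes both
    # gradients as input channels and outputs the increment.
    return x - step * (g_data + lam * g_reg)

x = np.zeros(n)
for _ in range(200):
    x = update(x, grad_data(x), grad_reg(x))
```

The point of the architecture is that `update` sees both gradients at every iteration; replacing the fixed combination above by a trained network is what makes the scheme "partially learned".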
NASA Astrophysics Data System (ADS)
Dorn, Oliver; Lionheart, Bill
2010-11-01
This proceeding combines selected contributions from participants of the Workshop on Electromagnetic Inverse Problems which was hosted by the University of Manchester in June 2009. The workshop was organized by the two guest editors of this conference proceeding and ran in parallel to the 10th International Conference on Electrical Impedance Tomography, which was guided by Bill Lionheart, Richard Bayford, and Eung Je Woo. Both events shared plenary talks and several selected sessions. One reason for combining these two events was the goal of bringing together scientists from various related disciplines who normally might not attend the same conferences, and to enhance discussions between these different groups. So, for example, one day of the workshop was dedicated to the broader area of geophysical inverse problems (including inverse problems in petroleum engineering), where participants from the EIT community and from the medical imaging community were also encouraged to participate, with great success. Other sessions concentrated on microwave medical imaging, on inverse scattering, or on eddy current imaging, with active feedback also from geophysically oriented scientists. Furthermore, several talks addressed such diverse topics as optical tomography, photoacoustic tomography, time reversal, or electrosensing fish. As a result of the workshop, speakers were invited to contribute extended papers to this conference proceeding. All submissions were thoroughly reviewed and, after a thoughtful revision by the authors, combined in this proceeding. The resulting set of six papers presenting the work of in total 22 authors from 5 different countries provides a very interesting overview of several of the themes which were represented at the workshop. These can be divided into two important categories, namely (i) modelling and (ii) data inversion. 
The first three papers of this selection, as outlined below, focus more on modelling aspects, being an essential component of any successful inversion, whereas the other three papers discuss novel inversion techniques for specific applications. In the first contribution, with the title A Novel Simplified Mathematical Model for Antennas used in Medical Imaging Applications, the authors M J Fernando, M Elsdon, K Busawon and D Smith discuss a new technique for modelling the current across a monopole antenna from which the radiation fields of the antenna can be calculated very efficiently in specific medical imaging applications. This new technique is then tested on two examples, a quarter wavelength and a three quarter wavelength monopole antenna. The next contribution, with the title An investigation into the use of a mixture model for simulating the electrical properties of soil with varying effective saturation levels for sub-soil imaging using ECT by R R Hayes, P A Newill, F J W Podd, T A York, B D Grieve and O Dorn, considers the development of a new visualization tool for monitoring soil moisture content surrounding certain seed breeder plants. An electrical capacitance tomography technique is employed for verifying how efficiently each plant utilises the water and nutrients available in the surrounding soil. The goal of this study is to help in developing and identifying new drought tolerant food crops. In the third contribution Combination of Maximin and Kriging Prediction Methods for Eddy-Current Testing Database Generation by S Bilicz, M Lambert, E Vazquez and S Gyimóthy, a novel database generation technique is proposed for its use in solving inverse eddy-current testing problems. For avoiding expensive repeated forward simulations during the creation of this database, a kriging interpolation technique is employed for filling uniformly the data output space with sample points. Mathematically this is achieved by using a maximin formalism. 
The paper 2.5D inversion of CSEM data in a vertically anisotropic earth by C Ramananjaona and L MacGregor considers controlled-source electromagnetic techniques for imaging the earth in a marine environment. It focuses in particular on taking into account anisotropy effects in the inversion. Results of this technique are demonstrated from simulated and from real field data. Furthermore, in the contribution Multiple level-sets for elliptic Cauchy problems in three-dimensional domains by A Leitão and M Marques Alves the authors consider a TV-H1 regularization technique for multiple level-set inversion of elliptic Cauchy problems. Generalized minimizers are defined and convergence and stability results are provided for this method, in addition to several numerical experiments. Finally, in the paper Development of in-vivo fluorescence imaging with the matrix-free method, the authors A Zacharopoulos, A Garofalakis, J Ripoll and S Arridge address a recently developed non-contact fluorescence molecular tomography technique where the use of non-contact acquisition systems poses new challenges on computational efficiency during data processing. The matrix-free method is designed to reduce computational cost and memory requirements during the inversion. Reconstructions from a simulated mouse phantom are provided for demonstrating the performance of the proposed technique in realistic scenarios. We hope that this selection of strong and thought-provoking papers will help stimulate further cross-disciplinary research in the spirit of the workshop. We thank all authors for providing us with this excellent set of high-quality contributions. We also thank EPSRC for having provided funding for the workshop under grant EP/G065047/1. Oliver Dorn, Bill Lionheart School of Mathematics, University of Manchester, Alan Turing Building, Oxford Rd Manchester, M13 9PL, UK E-mail: oliver.dorn@manchester.ac.uk, bill.lionheart@manchester.ac.uk Guest Editors
Inverting a dispersive scene's side-scanned image
NASA Technical Reports Server (NTRS)
Harger, R. O.
1983-01-01
Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods developed for the radar system are applied to the sonar case.
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
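A minimal numerical sketch of a Neumann-series recovery of this kind, with a generic random contraction `K` standing in for the linearized small-attenuation SPECT operator (dimensions and scaling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# Contraction K (||K|| < 1), mimicking the small-attenuation regime
# under which the linearized operator is provably invertible.
K = 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)
f_true = rng.normal(size=n)
g = f_true - K @ f_true          # data: g = (I - K) f

# Neumann series: f = sum_k K^k g, accumulated term by term.
f = np.zeros(n)
term = g.copy()
for _ in range(100):
    f += term
    term = K @ term
```

Each pass requires only one application of `K`, which is what makes a Neumann-series iteration attractive when the operator is available only as a forward map.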
Data compression strategies for ptychographic diffraction imaging
NASA Astrophysics Data System (ADS)
Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas
2017-12-01
Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers, and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object oriented approach.
This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam
2008-02-01
Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm to identify the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected in the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has so far been established only partially, e.g., in the full-view problem, for an unknown target of dielectric contrast or for a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM, polarization). Hence, we perform further research to analyze the MUSIC-type imaging functional and to account for some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various results of numerical simulation are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
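The mechanics of a MUSIC-type functional can be sketched with a far-field array toy: a sample covariance stands in for the multi-static response matrix, and the functional peaks wherever the test vector is orthogonal to the noise subspace. The geometry, noise level and angles below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(9)
n_rx, n_snap = 10, 300
d = 0.5                                    # element spacing in wavelengths
true_angles = [-30.0, 10.0]

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_rx)) / np.sqrt(n_rx)

A = np.stack([steering(a) for a in true_angles], axis=1)
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
X = A @ S + 0.05 * (rng.normal(size=(n_rx, n_snap))
                    + 1j * rng.normal(size=(n_rx, n_snap)))
R = X @ X.conj().T / n_snap                # response/covariance matrix

w, V = np.linalg.eigh(R)                   # eigenvalues ascending
En = V[:, :-2]                             # noise subspace (2 sources)

def music(theta):
    # Imaging functional: large where the steering vector is
    # (nearly) orthogonal to the noise subspace.
    a = steering(theta)
    return 1.0 / np.linalg.norm(En.conj().T @ a) ** 2

spectrum = np.array([music(t) for t in np.arange(-60, 61)])
```

The sharp peaks at the true directions come entirely from the orthogonality test; the signal subspace itself is never inverted.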
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
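The multi-resolution idea can be sketched with a toy hierarchy of "solvers" of increasing accuracy and notional cost; here truncated Taylor approximations of exp stand in for physics simulations at increasing resolution, and the target quantity is E[exp(U)] for uniform U. All levels and sample counts are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(6)

def approx(x, level):
    # Level-l "solver": truncated Taylor series for exp(x);
    # accuracy (and notional cost) grow with the level.
    return sum(x ** k / math.factorial(k) for k in range(level + 2))

# Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with many cheap samples at level 0 and few expensive ones on top.
L = 4
samples = [4000, 2000, 1000, 500, 250]
est = 0.0
for level in range(L + 1):
    x = rng.uniform(0.0, 1.0, size=samples[level])
    if level == 0:
        est += approx(x, 0).mean()
    else:
        # Using the same draws on both levels couples the correction,
        # keeping its variance (and required sample count) small.
        est += (approx(x, level) - approx(x, level - 1)).mean()
```

The telescoping sum lets most of the sampling effort sit at the cheap coarse level while the fine levels only correct the bias, which is the core of the Multi-level Monte Carlo cost saving.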
Liu, Tian; Liu, Jing; de Rochefort, Ludovic; Spincemaille, Pascal; Khalidov, Ildar; Ledoux, James Robert; Wang, Yi
2011-09-01
Magnetic susceptibility varies among brain structures and provides insights into the chemical and molecular composition of brain tissues. However, the determination of an arbitrary susceptibility distribution from the measured MR signal phase is a challenging, ill-conditioned inverse problem. Although a previous method named calculation of susceptibility through multiple orientation sampling (COSMOS) has solved this inverse problem both theoretically and experimentally using multiple angle acquisitions, it is often impractical to carry out on human subjects. Recently, the feasibility of calculating the brain susceptibility distribution from a single-angle acquisition was demonstrated using morphology enabled dipole inversion (MEDI). In this study, we further improved the original MEDI method by sparsifying the edges in the quantitative susceptibility map that do not have a corresponding edge in the magnitude image. Quantitative susceptibility maps generated by the improved MEDI were compared qualitatively and quantitatively with those generated by calculation of susceptibility through multiple orientation sampling. The results show a high degree of agreement between MEDI and calculation of susceptibility through multiple orientation sampling, and the practicality of MEDI allows many potential clinical applications. Copyright © 2011 Wiley-Liss, Inc.
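The ill-conditioning of the dipole inversion can be sketched in k-space. Thresholding the kernel below (truncated k-space division) is only a crude stand-in for the morphology prior used by MEDI; the grid size, source and threshold are illustrative assumptions.

```python
import numpy as np

n = 32
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                       # avoid 0/0 at the DC term
D = 1.0 / 3.0 - kz**2 / k2              # unit dipole kernel in k-space

chi = np.zeros((n, n, n))
chi[12:20, 12:20, 12:20] = 1.0          # toy susceptibility source
field = np.fft.ifftn(D * np.fft.fftn(chi)).real   # forward model

# Naive division by D blows up near the cone where D ~ 0;
# zeroing those coefficients gives a stable (if biased) inverse.
mask = np.abs(D) > 0.1
Dinv = np.zeros_like(D)
Dinv[mask] = 1.0 / D[mask]
chi_rec = np.fft.ifftn(Dinv * np.fft.fftn(field)).real
```

The coefficients lost on the cone are exactly where a spatial prior (such as MEDI's morphology constraint) must supply the missing information.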
Sparseness- and continuity-constrained seismic imaging
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.
2005-04-01
Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]
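The sparsity-promoting ingredient of such a formulation can be sketched with iterative soft thresholding (ISTA). The curvelet dictionary is replaced by the identity basis and the TV/anisotropic-diffusion term is omitted, so this shows only the l1 part; the operator and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 80
A = rng.normal(size=(m, n)) / np.sqrt(m)      # stand-in modelling operator
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = 3.0   # sparse reflectivity
y = A @ x_true + 0.01 * rng.normal(size=m)

def soft(v, t):
    # Soft thresholding: proximal map of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2        # step <= 1/L for convergence
x = np.zeros(n)
for _ in range(500):
    x = soft(x - step * A.T @ (A @ x - y), step * lam)
```

Each iteration is a gradient step on the data misfit followed by the l1 proximal map; a joint l1/TV objective would simply interleave a second proximal or diffusion step.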
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The l1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
On the Duality of Forward and Inverse Light Transport.
Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi
2011-10-01
Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
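The "matrix-vector multiplications only" flavour of the inversion can be sketched with a small dense stand-in for the transport matrix (a random nonnegative operator, not a rendered scene). The iteration converges with an error whose dominant component alternates in sign, echoing the oscillatory convergence of the inverse series.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
R = np.abs(rng.normal(size=(n, n)))
F = 0.4 * R / np.linalg.norm(R, 2)    # one-bounce operator, norm 0.4 < 1
T = np.linalg.inv(np.eye(n) - F)      # full transport: sum of all bounces

direct = rng.random(n)                # direct illumination (unknown)
observed = T @ direct                 # image with interreflections

# Richardson iteration: only matrix-vector products with T are needed,
# never an explicit inverse. The dominant eigenvalue of I - T is
# negative here, so the leading error component flips sign each step.
x = np.zeros(n)
for _ in range(300):
    x = x + (observed - T @ x)
```

For realistic resolutions `T` is far too large to invert directly, which is why such product-only schemes (and their radiosity/Monte Carlo/wavelet analogues) matter.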
Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2001-01-01
A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.
Radar studies of the atmosphere using spatial and frequency diversity
NASA Astrophysics Data System (ADS)
Yu, Tian-You
This work provides results from a thorough investigation of atmospheric radar imaging including theory, numerical simulations, observational verification, and applications. The theory is generalized to include the existing imaging techniques of coherent radar imaging (CRI) and range imaging (RIM), which are shown to be special cases of three-dimensional imaging (3D Imaging). Mathematically, the problem of atmospheric radar imaging is posed as an inverse problem. In this study, the Fourier, Capon, and maximum entropy (MaxEnt) methods are proposed to solve the inverse problem. After the introduction of the theory, numerical simulations are used to test, validate, and exercise these techniques. Statistical comparisons of the three methods of atmospheric radar imaging are presented for various signal-to-noise ratio (SNR), receiver configuration, and frequency sampling. The MaxEnt method is shown to generally possess the best performance for low SNR. The performance of the Capon method approaches the performance of the MaxEnt method for high SNR. In limited cases, the Capon method actually outperforms the MaxEnt method. The Fourier method generally tends to distort the model structure due to its limited resolution. Experimental justification of CRI and RIM is accomplished using the Middle and Upper (MU) Atmosphere Radar in Japan and the SOUnding SYstem (SOUSY) in Germany, respectively. A special application of CRI to the observation of polar mesosphere summer echoes (PMSE) is used to show direct evidence of wave steepening and possibly explain gravity wave variations associated with PMSE.
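The Fourier (matched-filter) and Capon spectra can be sketched for a one-dimensional receiver array; the geometry, SNR and source angles below are illustrative assumptions, and the MaxEnt method is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_rx, n_snap = 8, 200
d = 0.5                                   # receiver spacing in wavelengths
true_angles = [-20.0, 25.0]

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_rx)) / np.sqrt(n_rx)

# Simulated snapshots: two uncorrelated sources plus receiver noise.
A = np.stack([steering(a) for a in true_angles], axis=1)
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
X = A @ S + 0.1 * (rng.normal(size=(n_rx, n_snap))
                   + 1j * rng.normal(size=(n_rx, n_snap)))
R = X @ X.conj().T / n_snap               # sample covariance

def fourier_power(theta):
    # Fourier (beamforming) spectrum: limited resolution, high sidelobes.
    a = steering(theta)
    return np.real(a.conj() @ R @ a)

def capon_power(theta):
    # Capon (minimum-variance) spectrum: adaptive nulling of other sources.
    a = steering(theta)
    return 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))
```

Scanning both spectra over angle reproduces the qualitative ranking reported above: Capon's peaks are far sharper than Fourier's at good SNR, at the price of needing a well-conditioned covariance estimate.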
Mathematical Problems in Synthetic Aperture Radar
NASA Astrophysics Data System (ADS)
Klein, Jens
2010-10-01
This thesis is concerned with problems related to Synthetic Aperture Radar (SAR). The thesis is structured as follows: The first chapter explains what SAR is, and the physical and mathematical background is illuminated. The following chapter points out a problem with a divergent integral in a common approach and proposes an improvement. Numerical comparisons are shown that indicate that the improvements allow for a superior image quality. Thereafter the problem of limited data is analyzed. In a realistic SAR-measurement the data gathered from the electromagnetic waves reflected from the surface can only be collected from a limited area. However the reconstruction formula requires data from an infinite distance. The chapter gives an analysis of the artifacts which can obscure the reconstructed images due to this problem. Additionally, some numerical examples are shown that point to the severity of the problem. In chapter 4 the fact that data is available only from a limited area is used to propose a new inversion formula. This inversion formula has the potential to make it easier to suppress artifacts due to limited data and, depending on the application, can be refined to a fast reconstruction formula. In the penultimate chapter a solution to the problem of left-right ambiguity is presented. This problem has existed since the invention of SAR and is caused by the geometry of the measurements. As a result, only symmetric images can be obtained. With the solution from this chapter it is possible to reconstruct not only the even part of the reflectivity function, but also the odd part, thus making it possible to reconstruct asymmetric images. Numerical simulations are shown to demonstrate that this solution is not affected by stability problems as other approaches have been. The final chapter develops some continuative ideas that could be pursued in the future.
Reducing the uncertainty in the fidelity of seismic imaging results
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Zou, Z.
2017-12-01
A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect in uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifact. Key challenges to achieving the fidelity of seismic imaging include: (1) Difficulty in telling signal from artifact and noise; (2) Limitations in signal-to-noise ratio and seismic illumination; and (3) The multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they are in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach of seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called imaging space, where the output reflection images of the subsurface are formed based on an imaging condition.
A good example is reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on an estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.
NASA Astrophysics Data System (ADS)
Jia, Ningning; Lam, Edmund Y.
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, a lower k1 factor makes the mask more sensitive to process variations, so robustness to major process variations, such as focus and dose variations, is desired. In this paper, we treat the focus variation as a stochastic variable and the mask design as a machine learning problem. The stochastic gradient descent approach, a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
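The abstract's core idea, treating a random process parameter as a stochastic variable and training the design by stochastic gradient descent, can be sketched in a toy setting. The 1-D "mask", Gaussian-blur imaging operator, target pattern, learning rate, and focus distribution below are all illustrative assumptions, not the paper's lithography model:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                      # mask pixels (1-D toy, not a real lithography model)
target = (np.sin(np.linspace(0, 3 * np.pi, n)) > 0).astype(float)

def blur_matrix(focus, n=n):
    """Toy 'imaging' operator: a Gaussian blur whose width grows with defocus."""
    i = np.arange(n)
    K = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2.0 * (1.0 + focus ** 2)))
    return K / K.sum(axis=1, keepdims=True)

def loss(mask, focus):
    return 0.5 * np.sum((blur_matrix(focus) @ mask - target) ** 2)

mask = np.full(n, 0.5)      # initial gray mask
lr = 0.5
for step in range(500):
    f = rng.normal(0.0, 1.0)           # sample the stochastic focus variable
    A = blur_matrix(f)
    grad = A.T @ (A @ mask - target)   # gradient of 0.5*||A m - t||^2 at this focus
    mask = np.clip(mask - lr * grad, 0.0, 1.0)  # keep transmission physical

# Monte Carlo estimate of the expected loss, before and after training.
init = np.mean([loss(np.full(n, 0.5), rng.normal()) for _ in range(200)])
final = np.mean([loss(mask, rng.normal()) for _ in range(200)])
```

Each SGD step uses a fresh focus sample, so the trained mask minimizes the *expected* error over the focus distribution rather than the error at one nominal condition.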
Singular value decomposition for the truncated Hilbert transform
NASA Astrophysics Data System (ADS)
Katsevich, A.
2010-11-01
Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
Quantitative imaging technique using the layer-stripping algorithm
NASA Astrophysics Data System (ADS)
Beilina, L.
2017-07-01
We present the layer-stripping algorithm for the solution of the hyperbolic coefficient inverse problem (CIP). Our numerical examples show quantitative reconstruction of small tumor-like inclusions in two dimensions.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
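The source-encoding idea, combining all sources into one random "supersource" so that a single simulation yields an unbiased estimate of the full multi-source misfit, can be illustrated with a toy linear forward model. The matrices below stand in for wave-equation solves and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model: one matrix H_i per source (stand-in for wave solves).
n_src, m, n = 8, 20, 10
H = [rng.standard_normal((m, n)) for _ in range(n_src)]
x_true = rng.standard_normal(n)
data = [Hi @ x_true for Hi in H]

def full_misfit(x):
    """Conventional objective: one forward solve per source."""
    return sum(np.sum((Hi @ x - di) ** 2) for Hi, di in zip(H, data))

def encoded_misfit(x):
    """One forward solve for a random Rademacher-encoded 'supersource'."""
    w = rng.choice([-1.0, 1.0], size=n_src)
    Hw = sum(wi * Hi for wi, Hi in zip(w, H))      # encoded operator
    dw = sum(wi * di for wi, di in zip(w, data))   # encoded data
    return np.sum((Hw @ x - dw) ** 2)

# Because E[w_i w_j] = 0 for i != j, the cross terms vanish in expectation and
# the encoded misfit is an unbiased estimator of the full multi-source misfit.
x = rng.standard_normal(n)
estimate = np.mean([encoded_misfit(x) for _ in range(5000)])
exact = full_misfit(x)
rel_err = abs(estimate - exact) / exact
```

This is why a stochastic gradient method applies naturally: each encoded residual gives a cheap, unbiased gradient sample of the full objective.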
The Distortion of a Body's Visible Shape at Relativistic Speeds
ERIC Educational Resources Information Center
Arkadiy, Leonov
2009-01-01
The problem of obtaining the apparent equation of motion and shape of a moving body from its arbitrary given equation of motion in special relativity is considered. The inverse problem of obtaining the body's equation of motion from a known equation of motion of its image is also discussed. Some example solutions of this problem are considered. As…
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, which is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function - Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using a limited-memory BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney as the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear dependence of the electric field distribution on its boundary condition, and with the divergence caused by errors introduced by the ill-conditioned sensitivity matrix and by the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method in which the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
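The role of a limited iteration count as a regularizer can be sketched with a plain conjugate gradients solver on a deliberately ill-conditioned system. This is a generic illustration of the semiconvergence effect, not the paper's multistep EIT algorithm; the matrix spectrum and noise level are assumptions chosen for the demonstration:

```python
import numpy as np

def conjugate_gradients(A, b, n_iter):
    """Plain CG for a symmetric positive definite system A x = b.
    Stopping after a few iterations acts as implicit regularization."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-30:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
n = 30
# Ill-conditioned SPD 'sensitivity' matrix: eigenvalues decay geometrically.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(10.0 ** -np.linspace(0, 6, n)) @ Q.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)  # noisy data

x_few = conjugate_gradients(A, b, 5)     # early stopping: stable estimate
x_many = conjugate_gradients(A, b, 300)  # over-iterating amplifies the noise
err_few = np.linalg.norm(x_few - x_true)
err_many = np.linalg.norm(x_many - x_true)
```

The early-stopped solution resolves only the well-determined spectral components, which is exactly why limiting the iteration count controls the errors amplified by the ill-conditioned matrix.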
NASA Astrophysics Data System (ADS)
Rundell, William; Somersalo, Erkki
2008-07-01
The Inverse Problems International Association (IPIA) awarded the first Calderón Prize to Matti Lassas for his outstanding contributions to the field of inverse problems, especially in geometric inverse problems. The Calderón Prize is given to a researcher under the age of 40 who has made distinguished contributions to the field of inverse problems broadly defined. The first Calderón Prize Committee consisted of Professors Adrian Nachman, Lassi Päivärinta, William Rundell (chair), and Michael Vogelius. William Rundell, for the Calderón Prize Committee. Prize ceremony: the ceremony awarding the Calderón Prize. Matti Lassas is on the left; he and William Rundell are on the right. Photos by P Stefanov. Brief Biography of Matti Lassas: Matti Lassas was born in 1969 in Helsinki, Finland, and studied at the University of Helsinki. He finished his Master's studies in 1992 in three years and earned his PhD in 1996. His PhD thesis, written under the supervision of Professor Erkki Somersalo, was entitled `Non-selfadjoint inverse spectral problems and their applications to random bodies'. Already in his thesis, Matti demonstrated a remarkable command of different fields of mathematics, bringing together the spectral theory of operators, the geometry of Riemannian surfaces, Maxwell's equations and stochastic analysis. He has continued to develop all of these branches in the framework of inverse problems, with the most remarkable results perhaps being in the field of differential geometry and inverse problems. Matti has always been a very generous researcher, sharing his ideas with his numerous collaborators. He has authored over sixty scientific articles, including a monograph on inverse boundary spectral problems with Alexander Kachalov and Yaroslav Kurylev and over forty articles in peer-reviewed journals of the highest standards. To get an idea of the wide range of Matti's interests, it is enough to say that he also holds three US patents on medical imaging applications.
Matti is currently professor of mathematics at Helsinki University of Technology, where he has created his own line of research with talented young researchers around him. He is a central figure in the Centre of Excellence in Inverse Problems Research of the Academy of Finland. Matti Lassas has previously won several awards in his home country, including the prestigious Väisälä Prize of the Finnish Academy of Science and Letters in 2004. He is a highly esteemed colleague, teacher and friend, and the Great Diving Beetle of the Finnish Inverse Problems Society (http://venda.uku.fi/research/FIPS/), an honorary title for a person who has no fear of the deep. Erkki Somersalo
Ale, Angelique; Schulz, Ralf B; Sarantopoulos, Athanasios; Ntziachristos, Vasilis
2010-05-01
The performance of two newly introduced and previously suggested methods is studied; these incorporate priors into inversion schemes associated with data from a recently developed hybrid x-ray computed tomography and fluorescence molecular tomography system, the latter based on CCD camera photon detection. The unique data set studied attains accurately registered data of highly spatially sampled photon fields propagating through tissue along 360-degree projections. Structural prior information was included in the inverse problem by adding a penalty term to the minimization function utilized for image reconstruction. Results were compared with simulated and experimental data from a lung inflammation animal model and against inversions achieved without priors. The importance of using priors over stand-alone inversions is also showcased with highly spatially sampled simulated and experimental data. The approach of optimal performance in resolving fluorescent biodistribution in small animals is also discussed. Inclusion of prior information from x-ray CT data in the reconstruction of the fluorescence biodistribution leads to improved agreement between the reconstruction and validation images for both simulated and experimental data.
Theory of the amplitude-phase retrieval in any linear-transform system and its applications
NASA Astrophysics Data System (ADS)
Yang, Guozhen; Gu, Ben-Yuan; Dong, Bi-Zhen
1992-12-01
This paper summarizes the theory of the amplitude-phase retrieval problem in any linear transform system and its applications, based on our previous works over the past decade. We describe the general statement of the amplitude-phase retrieval problem in an imaging system and derive, by rigorous mathematical derivation, a set of equations governing the amplitude-phase distribution. We then show that, by using these equations and an iterative algorithm, a variety of amplitude-phase problems can be successfully handled. We carry out systematic investigations and comprehensive numerical calculations to demonstrate the use of this new algorithm in various transform systems. For instance, we have achieved phase retrieval from two intensity measurements in an imaging system with diffraction loss (non-unitary transform), both theoretically and experimentally, and the recovery of a real model image from its Hartley-transform modulus alone in one- and two-dimensional cases. We discuss phase retrieval from a single intensity measurement based on the sampling theorem and our algorithm. We also apply this algorithm to provide an optimal design of the phase-adjusted plate for a phase-adjustment focusing laser accelerator and a design approach for a single phase-only element implementing optical interconnects. In order to closely simulate real measured data, we examine in detail the reconstruction of an image from its spectral modulus corrupted by random noise. The results show that a convergent solution can always be obtained and that the quality of the recovered image is satisfactory. We also indicate the relationship and distinction between our algorithm and the original Gerchberg-Saxton algorithm. From these studies, we conclude that our algorithm shows great capability to deal with comprehensive phase-retrieval problems in imaging systems and the inverse problem in solid state physics.
It may open a new way to solve important inverse source problems appearing extensively in physics.
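The original Gerchberg-Saxton algorithm mentioned above, recovering phase from measured moduli in two planes by alternating projections, can be sketched in a few lines. The 1-D signal and FFT transform pair below are illustrative assumptions; the authors' algorithm generalizes this scheme to non-unitary transforms:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
# Ground truth: known amplitudes in both planes, unknown phases.
amp_obj = rng.random(n) + 0.5                 # object-plane modulus (measured)
phase_true = rng.uniform(-np.pi, np.pi, n)
f_true = amp_obj * np.exp(1j * phase_true)
amp_fourier = np.abs(np.fft.fft(f_true))      # Fourier-plane modulus (measured)

def fourier_residual(f):
    return np.linalg.norm(np.abs(np.fft.fft(f)) - amp_fourier)

# Gerchberg-Saxton: alternately enforce the measured modulus in each plane.
f = amp_obj * np.exp(1j * rng.uniform(-np.pi, np.pi, n))  # random initial phase
res0 = fourier_residual(f)
for _ in range(200):
    F = np.fft.fft(f)
    F = amp_fourier * np.exp(1j * np.angle(F))   # impose Fourier modulus
    g = np.fft.ifft(F)
    f = amp_obj * np.exp(1j * np.angle(g))       # impose object modulus
res = fourier_residual(f)
```

A key property of this error-reduction iteration is that the modulus mismatch never increases from one iteration to the next, though it can stagnate short of the true phase.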
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and numerical results for a pseudo-sine absorption problem, a two-cube problem and a two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
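Orthogonal matching pursuit, the sparse reconstruction routine named in the abstract, can be sketched generically: greedily select the atom most correlated with the residual, then re-fit all selected atoms by least squares. The orthonormal-column dictionary below is an assumption chosen so that recovery is exact; it is not the paper's transport operator:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: pick k atoms greedily, re-fit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
m, n, k = 48, 32, 4
A, _ = np.linalg.qr(rng.standard_normal((m, n)))  # dictionary with orthonormal columns
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                    # limited measurements
x_hat = omp(A, y, k)
```

With orthonormal columns the correlations identify the true support exactly, so the k-sparse signal is recovered to machine precision; for general underdetermined dictionaries recovery holds only under incoherence conditions.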
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). However, in practical applications, the polychromaticity of the X-ray beam often causes beam hardening problems in image reconstruction. Beam hardening artifacts, which manifest as cupping, streaks and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV), and state-of-the-art beam hardening correction methods only consider this ideal scanning configuration, so they often suffer from projection truncation in interior tomography. To address this problem, this paper proposes a beam hardening correction method based on the inverse Radon transform for interior tomography. Experimental results show that, compared to conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. The presented method is therefore of both theoretical and practical significance for artifact correction in industrial CT.
NASA Astrophysics Data System (ADS)
Boughariou, Jihene; Zouch, Wassim; Slima, Mohamed Ben; Kammoun, Ines; Hamida, Ahmed Ben
2015-11-01
Electroencephalography (EEG) and magnetic resonance imaging (MRI) are noninvasive neuroimaging modalities. They are widely used and can be complementary: the fusion of these modalities may enhance emerging research fields targeting better exploration of brain activity. Such research has attracted many scientific investigators, especially toward providing a convenient and helpful advanced clinical-aid tool enabling better neurological exploration. Our present research is set in the context of EEG inverse problem resolution and investigates an advanced estimation methodology for the localization of cerebral activity. Our focus is, therefore, on the integration of temporal priors into the low-resolution brain electromagnetic tomography (LORETA) formalism to solve the inverse problem in EEG. The main idea behind our proposed method is the integration of a temporal projection matrix within the LORETA weighting matrix. A hyperparameter governs this temporal integration, and its importance becomes evident in obtaining a regularized, smooth solution. Our experimental results clearly confirm the impact of the optimization procedure adopted for the temporal regularization parameter compared with the LORETA method.
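A minimal sketch of the weighted minimum-norm machinery underlying LORETA-type estimators is given below. The lead-field matrix, the diagonal column-norm weighting (LORETA itself uses a discrete Laplacian weighting), and the regularization value are illustrative assumptions; the temporal projection matrix proposed in the abstract is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 16, 100
L = rng.standard_normal((n_sensors, n_sources))    # hypothetical lead-field matrix
# Diagonal weighting by inverse squared column norms (a simple stand-in for
# the LORETA weighting matrix; depth weighting is one common motivation).
W = np.diag(1.0 / np.linalg.norm(L, axis=0) ** 2)
y = rng.standard_normal(n_sensors)                 # toy scalp measurements

lam = 1e-6  # regularization hyperparameter; larger values give smoother fits
# Weighted minimum-norm estimate: x = W L^T (L W L^T + lam I)^{-1} y
G = L @ W @ L.T
x = W @ L.T @ np.linalg.solve(G + lam * np.eye(n_sensors), y)

rel_residual = np.linalg.norm(L @ x - y) / np.linalg.norm(y)
```

Among the infinitely many source distributions consistent with the 16 measurements, this closed form picks the one of minimal weighted norm; the hyperparameter lam trades data fit against solution smoothness, which is exactly the dial the abstract's temporal regularization parameter extends.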
SU-E-J-174: Adaptive PET-Based Dose Painting with Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darwish, N; Mackie, T; Thomadsen, B
2014-06-01
Purpose: PET imaging can be converted into a dose prescription directly. Due to the variability of the intensity of the PET image, a PET-based prescription may be superior to a uniform dose prescription. Furthermore, unlike image reconstruction, where the image solution is not known in advance, the prescribed dose is known a priori from the PET image. Therefore, optimum beam orientations are derivable. Methods: We assume the PET image to be the prescribed dose and invert it to determine the energy fluence. The same method used to reconstruct tissue images from projections can be used to solve the inverse problem of determining beam orientations and modulation patterns from a dose prescription [10]. Unlike standard tomographic reconstruction of images from measured projection profiles, the inversion of the prescribed dose results in a photon fluence which may be negative and therefore unphysical. Two-dimensional modulated beams can be modelled in terms of the attenuated or exponential Radon transform of the prescribed dose function (assumed to be the PET image in this case), the application of a Ram-Lak filter, and inversion by backprojection. Unlike the case in PET processing, however, the filtered beam obtained from the inversion represents a physical photon fluence. Therefore, a positivity constraint for the fluence (setting negative fluence to zero) must be applied (Brahme et al 1982, Bortfeld et al 1990). Results: Truncating the negative profiles from the PET data results in an approximation of the deliverable energy fluence. Backprojection of the deliverable fluence is an approximation of the dose delivered, and the resulting deliverable dose is comparable to the original PET image. Conclusion: It is possible to use PET data or images as a direct indicator of deliverable fluence for cylindrical radiotherapy systems such as TomoTherapy.
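The filter-then-truncate step described in the Methods, ramp filtering followed by setting negative fluence to zero, can be sketched for a single projection profile. The box-shaped "prescription" profile below is an illustrative assumption, not TomoTherapy data:

```python
import numpy as np

def ram_lak_filter(projection):
    """Apply a ramp (Ram-Lak) filter to one projection profile via the FFT."""
    n = len(projection)
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

# Toy 'prescription' profile (stand-in for a forward-projected dose slice).
x = np.linspace(-1, 1, 128)
projection = np.where(np.abs(x) < 0.4, 1.0, 0.0)

filtered = ram_lak_filter(projection)      # generally has negative lobes
fluence = np.clip(filtered, 0.0, None)     # positivity constraint on fluence

has_negatives = bool(np.any(filtered < 0))
all_nonneg = bool(np.all(fluence >= 0))
```

The ramp filter zeroes the DC component, so any nonnegative input necessarily produces negative lobes in the filtered profile; clipping them to zero is the positivity constraint, and it is this truncation that makes the delivered dose only an approximation of the prescription.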
Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Jacques, Diederik; Linde, Niklas
2018-01-01
Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2-D and 3-D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2-D and 3-D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few million pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion) as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2-D steady state flow and 3-D transient hydraulic tomography with and without direct conditioning data are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2-D case, the inversion rapidly explores the posterior model distribution. For the 3-D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions to within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a 10-fold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
Imaging of isotropic and anisotropic conductivities from power densities in three dimensions
NASA Astrophysics Data System (ADS)
Monard, François; Rim, Donsub
2018-07-01
We present numerical reconstructions of anisotropic conductivity tensors in three dimensions, from knowledge of a finite family of power density functionals. Such a problem arises, for instance, in the coupled-physics imaging modality ultrasound modulated electrical impedance tomography. We improve on the algorithms previously derived in Bal et al (2013 Inverse Problems Imaging 7 353–75) and Monard and Bal (2013 Commun. PDE 38 1183–207) for both the isotropic and anisotropic cases, and we address in particular the well-known issue of vanishing determinants. The algorithm is implemented, and we provide numerical results that illustrate the improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
Imaging with cross-hole seismoelectric tomography
Araji, A.H.; Revil, A.; Jardani, A.; Minsley, Burke J.; Karaoulis, M.
2012-01-01
We propose a cross-hole imaging approach based on seismoelectric conversions (SC) associated with the transmission of seismic waves from seismic sources located in one borehole to receivers (electrodes) located in a second borehole. The seismoelectric (seismic-to-electric) problem is solved using Biot theory coupled with a generalized Ohm's law with an electrokinetic streaming current contribution. The components of the displacement of the solid phase, the fluid pressure, and the electrical potential are solved using a finite element approach with perfectly matched layer (PML) boundary conditions for the seismic waves and boundary conditions mimicking an infinite material for the electrostatic problem. We develop an inversion algorithm using the electrical disturbances recorded in the second borehole to localize the position of the heterogeneities responsible for the SC. Because of the ill-posed nature of the inverse problem (inherent to all potential-field problems), regularization is used to constrain the solution at each time in the SC time window between the time of the seismic shot and the first arrival of the seismic waves in the second borehole. All the inverted volumetric current source densities are aggregated together to produce an image of the position of the heterogeneities between the two boreholes. Two simple synthetic case studies are presented to test this concept. The first corresponds to a vertical discontinuity between two homogeneous sub-domains; the second to a poroelastic inclusion (partially saturated with oil) embedded in a homogeneous poroelastic formation. In both cases, the position of the heterogeneity is recovered using only the electrical disturbances associated with the SC. That said, a joint inversion of the seismic and seismoelectric data could improve these results.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized as an inverse problem: given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution; moreover, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of the electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; FINNEY, Charles E A; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while materials such as metal are nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone-beam x-ray CT and other inverse problems.
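As a rough illustration of the SIRT family (not the authors' PSIRT code), the classical simultaneous update with an optional Tikhonov-style damping term can be sketched as follows; the relaxation factor `lam` and damping weight `alpha` are illustrative knobs, not values from the paper:

```python
import numpy as np

def sirt(A, b, n_iter=500, lam=1.0, alpha=0.0):
    """SIRT with optional Tikhonov-style damping (toy sketch).

    A     : (m, n) system matrix of ray sums
    b     : (m,)   measured projections
    alpha : damping weight (0 recovers classical SIRT)
    """
    # Row/column-sum scalings used by SIRT (guard against empty rows/cols).
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # 1 / row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # 1 / column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x
        # Damped simultaneous update; the alpha*x term adds regularization.
        x = x + lam * C * (A.T @ (R * residual)) - lam * alpha * x
    return x

# Tiny toy problem: two "rays" through two pixels.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true
x_rec = sirt(A, b, n_iter=500)
```

The relaxation parameter `lam` controls the convergence speed, which is the quantity the authors tune toward near-optimality.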
P- and S-wave Receiver Function Imaging with Scattering Kernels
NASA Astrophysics Data System (ADS)
Hansen, S. M.; Schmandt, B.
2017-12-01
Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numerical simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources, resulting in a global-scale forward problem. Various approximation strategies have been proposed to reduce the computational burden, such as hybrid methods that embed a heterogeneous regional-scale model in a 1D global model. In this study, we focus specifically on the problem of scattered-wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels typically used for full waveform inversion. The forward problem is approximated using ray theory, yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy-stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point (CCP) stacking methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data.
This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
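The source-encoding idea can be illustrated on a toy linear problem: instead of evaluating the misfit for every source, each stochastic-gradient step draws one random Rademacher encoding vector and works with a single encoded source and encoded data set. This is only a schematic stand-in for the WISE method (the real forward operator is the acoustic wave equation; all sizes, names, and step sizes here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-in for the wave-equation forward operator: one
# matrix per source.
n_src, m, n = 8, 20, 10
ops = [rng.standard_normal((m, n)) for _ in range(n_src)]
c_true = rng.standard_normal(n)           # "speed-of-sound" model
data = [A_i @ c_true for A_i in ops]      # per-source measurements

def encoded_gradient(c):
    """Gradient of the misfit for one random Rademacher encoding w.

    Sources and data are summed as A_w = sum_i w_i A_i, d_w = sum_i w_i d_i,
    so one residual evaluation replaces n_src of them.
    """
    w = rng.choice([-1.0, 1.0], size=n_src)
    A_w = sum(w[i] * ops[i] for i in range(n_src))
    d_w = sum(w[i] * data[i] for i in range(n_src))
    return A_w.T @ (A_w @ c - d_w)

c = np.zeros(n)                           # initial model estimate
step = 1e-3
for _ in range(3000):
    c -= step * encoded_gradient(c)       # stochastic gradient descent
```

In expectation over the Rademacher weights, the cross terms cancel and the encoded gradient equals the full multi-source gradient, which is what makes the acceleration legitimate.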
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, which is based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which is an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model.
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBI T-matrix method for inversion of CSEM data in anisotropic media is both accurate and efficient.
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Buss, James R.; Kopriva, Ivica
2004-04-01
We propose a physics approach to solve a physical inverse problem, namely choosing the unique equilibrium solution at the minimum free energy H = E - T₀S (which includes the Wiener least-mean-squares solution, minimum E, and ICA, maximum S, as special cases). "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing at the single-pixel level in real-world cases of remote sensing, early tumor detection and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, or selected, by means of the absolute minimum of the isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.
The attitude inversion method of geostationary satellites based on unscented particle filter
NASA Astrophysics Data System (ADS)
Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao
2018-04-01
The attitude information of geostationary satellites is difficult to obtain since they appear as non-resolved images on ground-based observation equipment used in space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strongly non-linear character of the photometric-data inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the idea of the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based attitude inversion and on the particle degradation and dilution of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method can effectively solve the problems of particle degradation and depletion in PF-based attitude inversion and the unsuitability of the UKF for strongly non-linear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, in the case of inversion with large attitude error, the method can invert the attitude with few particles and high precision.
Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data
Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.
2003-01-01
The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, as it allows deeper penetration than ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies are applied in the range from 0 to 5 MHz. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used, with the aid of the singular value decomposition and/or the conjugate gradient method, in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.
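The stabilizing role of the singular value decomposition in a linearized (Born-type) inversion step can be sketched on a synthetic ill-posed system; the operator, spectrum decay, and truncation rank below are invented for illustration and are not taken from the VETEM system:

```python
import numpy as np

def tsvd_solve(G, d, rank):
    """Solve G m = d by truncated SVD, discarding small singular values."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < rank, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ d))

rng = np.random.default_rng(1)
n = 12
# Synthetic operator with a geometrically decaying spectrum (ill-posed).
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s_true = 10.0 ** -np.arange(n)                  # 1, 1e-1, ..., 1e-11
G = Q1 @ np.diag(s_true) @ Q2.T
m_true = rng.standard_normal(n)
d = G @ m_true + 1e-8 * rng.standard_normal(n)  # small data noise
# Keep only the 6 singular values that sit above the noise level.
m_rec = tsvd_solve(G, d, rank=6)
```

Truncation sacrifices the model components carried by the small singular values, but it prevents the noise from being amplified by their reciprocals, which is the trade-off a conjugate gradient solver with early stopping makes implicitly.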
NASA Astrophysics Data System (ADS)
Sun, Xiao-Dong; Ge, Zhong-Hui; Li, Zhen-Chun
2017-09-01
Although conventional reverse time migration can be perfectly applied to structural imaging, it lacks the capability of enabling detailed delineation of a lithological reservoir due to irregular illumination. To obtain reliable reflectivity of the subsurface, it is necessary to solve the imaging problem using inversion. Least-squares reverse time migration (LSRTM), also known as linearized reflectivity inversion, aims to obtain relatively high-resolution, amplitude-preserving images by including the inverse of the Hessian matrix. In practice, the conjugate gradient algorithm has proven to be an efficient iterative method for LSRTM. The velocity gradient can be derived from a cross-correlation between observed and simulated data, making LSRTM independent of the wavelet signature and thus more robust in practice. Tests on synthetic and marine data show that LSRTM has good potential for reservoir description and four-dimensional (4D) seismic imaging compared to traditional RTM and Fourier finite-difference (FFD) migration. This paper investigates the first-order approximation of LSRTM, which is known as the linear Born approximation. For more complex geological structures, however, a higher-order approximation should be considered to improve imaging quality.
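The conjugate gradient iteration used for LSRTM can be sketched generically as CGLS on the normal equations; here a random matrix stands in for the Born modelling (demigration) operator, so this is an illustrative sketch rather than the paper's implementation:

```python
import numpy as np

def cgls(L, d, n_iter=30):
    """Conjugate gradients on the normal equations L^T L m = L^T d.

    L plays the role of the Born/demigration operator; applying L.T to a
    residual is the migration step.
    """
    m = np.zeros(L.shape[1])
    r = d - L @ m                # data residual
    g = L.T @ r                  # gradient (migrated residual)
    p = g.copy()
    gg = g @ g
    for _ in range(n_iter):
        q = L @ p                # demigrate the search direction
        alpha = gg / (q @ q)
        m += alpha * p
        r -= alpha * q
        g = L.T @ r
        gg_new = g @ g
        p = g + (gg_new / gg) * p
        gg = gg_new
    return m

rng = np.random.default_rng(2)
L = rng.standard_normal((40, 15))   # toy linear modelling operator
m_true = rng.standard_normal(15)    # "reflectivity" model
d = L @ m_true                      # observed data
m_rec = cgls(L, d, n_iter=30)
```

Each iteration implicitly applies one more term of an expansion of the inverse Hessian (L^T L)^-1, which is how the iteration sharpens the plain migrated image L^T d.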
NASA Astrophysics Data System (ADS)
Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie; Benna, Mehdi; Kofman, Wlodek; Herique, Alain
We investigate the inverse problem of imaging the internal structure of comet 67P/Churyumov-Gerasimenko from CONSERT radiotomography data by using a coupled regularized inversion of the Helmholtz equations. A first set of Helmholtz equations, written w.r.t. a basis of 3D Hankel functions, describes the wave propagation outside the comet at large distances; a second set, written w.r.t. a basis of 3D Zernike functions, describes the wave propagation throughout the comet with a variable permittivity. Both sets are connected by continuity equations over a sphere that surrounds the comet. This approach, derived from GPS water vapor tomography of the atmosphere, will permit a full 3D inversion of the internal structure of the comet, contrary to traditional approaches that use a discretization of space at a fraction of the radiowave wavelength.
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
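Sparse regularization of this kind can be illustrated with the iterative soft-thresholding algorithm (ISTA) on a toy compressed-sensing problem; the random sensing matrix below stands in for the spherical wavelet operators of the paper, and the sizes, sparsity level, and regularization weight are all invented:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, n_iter=5000):
    """ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1 (generic sketch)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data term, then soft-threshold.
        x = soft_threshold(x - step * Phi.T @ (Phi @ x - y), step * lam)
    return x

rng = np.random.default_rng(3)
Phi = rng.standard_normal((30, 60)) / np.sqrt(30)   # underdetermined system
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]              # sparse coefficients
y = Phi @ x_true
x_rec = ista(Phi, y, lam=1e-3, n_iter=5000)
```

Reweighting the l1 norm, as discussed in the abstract, amounts to using a per-coefficient threshold in `soft_threshold` instead of the single scalar used here.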
Approximated transport-of-intensity equation for coded-aperture x-ray phase-contrast imaging.
Das, Mini; Liang, Zhihua
2014-09-15
Transport-of-intensity equations (TIEs) allow better understanding of image formation and assist in simplifying the "phase problem" associated with phase-sensitive x-ray measurements. In this Letter, we present for the first time to our knowledge a simplified form of TIE that models x-ray differential phase-contrast (DPC) imaging with coded-aperture (CA) geometry. The validity of our approximation is demonstrated through comparison with an exact TIE in numerical simulations. The relative contributions of absorption, phase, and differential phase to the acquired phase-sensitive intensity images are made readily apparent with the approximate TIE, which may prove useful for solving the inverse phase-retrieval problem associated with these CA-geometry-based DPC systems.
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging by using inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out by using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT). It was found that the selection of regularization parameter follow a heuristic rule as used in the Levenberg-Marquardt algorithm so that the iteration is stable.
Super-resolution Time-Lapse Seismic Waveform Inversion
NASA Astrophysics Data System (ADS)
Ovcharenko, O.; Kazei, V.; Peter, D. B.; Alkhalifah, T.
2017-12-01
Time-lapse seismic waveform inversion is a technique which allows tracking changes in reservoirs over time. Such monitoring is computationally expensive, and it is therefore barely feasible to perform it on the fly. Most of the expense is related to the numerous FWI iterations at high temporal frequencies, which are inevitable since the low-frequency components cannot resolve fine-scale features of a velocity model. Inverted velocity changes are also blurred when there is noise in the data, so the problem of low-resolution images is well known. One of the problems intensively tackled by the computer vision research community is the recovery of high-resolution images from their low-resolution versions. Using artificial neural networks to reach super-resolution from a single downsampled image is one of the leading solutions to this problem. Each pixel of the upscaled image is affected by all the pixels of its low-resolution version, which enables the workflow to recover features that are likely to occur in the corresponding environment. In the present work, we adopt this machine learning image enhancement technique to improve the resolution of time-lapse full-waveform inversion. We first invert the baseline model with conventional FWI. Then we run a few iterations of FWI on a set of monitoring data to find the desired model changes. These changes are blurred, and we enhance their resolution by using a deep neural network. The network is trained to map low-resolution model updates predicted by FWI into the real perturbations of the baseline model. For supervised training of the network, we generate a set of random perturbations of the baseline model and perform FWI on noisy data from the perturbed models. We test the approach on a realistic perturbation of the Marmousi II model and demonstrate that it outperforms conventional convolution-based deblurring techniques.
NASA Astrophysics Data System (ADS)
Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif
2018-02-01
Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT), which requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
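The nonuniform discrete Fourier transform at the heart of this approach can be written directly as an explicit matrix-vector product. This is a naive O(N^2) sketch, not the scaled transform proposed in the paper; on uniformly spaced k-samples it reduces to the ordinary DFT:

```python
import numpy as np

def ndft(k, f, n_out):
    """Direct nonuniform DFT of samples f taken at k-space positions k.

    k is normalized so one unit spans the full spectral period; the cost
    is O(n_out * len(k)), i.e. no FFT speedup.
    """
    depth = np.arange(n_out)
    E = np.exp(-2j * np.pi * np.outer(depth, k))   # explicit transform matrix
    return E @ f

# Sanity check: uniform k-spacing reproduces the ordinary DFT.
n = 16
k_uniform = np.arange(n) / n
f = np.random.default_rng(4).standard_normal(n)
spectrum = ndft(k_uniform, f, n)
```

For the nonuniformly sampled interferograms of SS-OCT, `k` would instead hold the measured non-equidistant wavenumbers, avoiding the interpolation or hardware-clock step described above.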
Bilinear Inverse Problems: Theory, Algorithms, and Applications
NASA Astrophysics Data System (ADS)
Ling, Shuyang
We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. 
Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed, and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
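The flavor of the SVD-based approach to self-calibration can be illustrated on the gain-calibration model y = diag(d) A x. Writing s = 1/d turns the bilinear relation into a homogeneous linear system in (s, x) whose null vector, found via the SVD, recovers both unknowns up to the inherent global scale. The use of several snapshots and of one known reference gain below are assumptions made so the toy problem is well posed; this is a sketch in the spirit of the thesis, not its algorithm verbatim:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, K = 20, 5, 3                         # sensors, signal dim, snapshots
A = rng.standard_normal((m, n))            # known system matrix
d_true = 1.0 + 0.5 * rng.uniform(size=m)   # unknown sensor gains
X_true = rng.standard_normal((n, K))
Y = d_true[:, None] * (A @ X_true)         # measurements y_k = diag(d) A x_k

# With s = 1/d, each snapshot gives diag(y_k) s - A x_k = 0:
# stack the blocks into one homogeneous system B v = 0.
blocks = []
for k in range(K):
    blk = np.zeros((m, m + n * K))
    blk[:, :m] = np.diag(Y[:, k])
    blk[:, m + k * n : m + (k + 1) * n] = -A
    blocks.append(blk)
B = np.vstack(blocks)
v = np.linalg.svd(B)[2][-1]        # right singular vector of smallest s.v.
s_rec = v[:m]
# Resolve the global scale using the (assumed known) first gain.
scale = (1.0 / d_true[0]) / s_rec[0]
d_rec = 1.0 / (s_rec * scale)
```

The scale ambiguity resolved in the last two lines is intrinsic to all bilinear models: d and x can trade a common factor without changing y.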
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is considered as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point-spread-function blurring as well as additive noise, and it therefore generates high-resolution images with more image details preserved or restored. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and look visually more pleasant.
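The observation-model formulation can be sketched in a toy 1D version: a circular blur followed by decimation, and a quadratic cost minimized by plain gradient descent, which plays the role of the Hopfield network's energy minimization. The ridge penalty and all sizes are illustrative assumptions:

```python
import numpy as np

n = 16                                  # high-resolution signal length
psf = np.array([0.25, 0.5, 0.25])       # blur point spread function
H = np.zeros((n, n))
for i in range(n):
    for off, w in zip((-1, 0, 1), psf):
        H[i, (i + off) % n] = w         # circular convolution matrix
D = np.zeros((n // 2, n))
D[np.arange(n // 2), 2 * np.arange(n // 2)] = 1.0   # decimation by 2
A = D @ H                               # observation model: blur + downsample

rng = np.random.default_rng(6)
x_true = rng.standard_normal(n)
y = A @ x_true                          # low-resolution observation

# Minimize 0.5*||y - A x||^2 + 0.5*lam*||x||^2 by gradient descent
# (a stand-in for the Hopfield network minimizing the same energy).
lam, step = 1e-2, 0.2
x = np.zeros(n)
for _ in range(10000):
    x -= step * (A.T @ (A @ x - y) + lam * x)
```

The regularizer is needed because decimation makes A underdetermined: many high-resolution signals explain the same low-resolution observation.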
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
NASA Astrophysics Data System (ADS)
Zhou, Renjie; So, Peter T. C.; Yaqoob, Zahid; Jin, Di; Hosseini, Poorya; Kuang, Cuifang; Singh, Vijay Raj; Kim, Yang-Hyo; Dasari, Ramachandra R.
2017-02-01
Most quantitative phase microscopy systems are unable to provide depth-resolved information for measuring complex biological structures. Optical diffraction tomography provides a non-trivial solution by reconstructing the object in 3D from multiple measurements realized in different ways. Previously, our lab developed a reflection-mode dynamic speckle-field phase microscopy (DSPM) technique which can perform depth-resolved measurements in a single shot. This system is thus suitable for measuring dynamics in a layer of interest in the sample. DSPM can also be used for tomographic imaging, which promises to solve the long-standing "missing cone" problem in 3D imaging. However, the 3D imaging theory for this type of system has not been developed in the literature. Recently, we developed an inverse scattering model to rigorously describe the imaging physics of DSPM. Our model is based on diffraction tomography theory and speckle statistics. Using our model, we first precisely calculated the defocus response and the depth resolution of our system. We then calculated the 3D coherence transfer function to link the 3D object structure with the axially scanned imaging data. From this transfer function, we found that in reflection mode an excellent sectioning effect exists in the low lateral spatial frequency region, allowing us to solve the "missing cone" problem. Currently, we are working on using this coherence transfer function to reconstruct layered structures and complex cells.
Simulation of Forward and Inverse X-ray Scattering From Shocked Materials
NASA Astrophysics Data System (ADS)
Barber, John; Marksteiner, Quinn; Barnes, Cris
2012-02-01
The next generation of high-intensity, coherent light sources should generate sufficient brilliance to perform in-situ coherent x-ray diffraction imaging (CXDI) of shocked materials. In this work, we present beginning-to-end simulations of this process. This includes the calculation of the partially-coherent intensity profiles of self-amplified stimulated emission (SASE) x-ray free electron lasers (XFELs), as well as the use of simulated, shocked molecular-dynamics-based samples to predict the evolution of the resulting diffraction patterns. In addition, we will explore the corresponding inverse problem by performing iterative phase retrieval to generate reconstructed images of the simulated sample. The development of these methods in the context of materials under extreme conditions should provide crucial insights into the design and capabilities of shocked in-situ imaging experiments.
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. 
Department of Energy laboratories.
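The Poisson-likelihood Bayesian setup can be illustrated in a deliberately scalar toy: infer one attenuation-like parameter from photon counts using random-walk Metropolis sampling. The paper's models are high-dimensional with structured priors and tailored MCMC; the incident intensity, prior width, and all tuning constants here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy scalar analogue of the radiographic setting: photon counts with
# mean I0 * exp(-theta) for an attenuation-like parameter theta.
I0 = 10000.0
theta_true = 0.7
counts = rng.poisson(np.exp(-theta_true) * I0, size=50)

def log_post(theta):
    """Poisson log-likelihood plus a weak Gaussian prior (constants dropped)."""
    if theta < 0:
        return -np.inf
    mu = np.exp(-theta) * I0
    return np.sum(counts * np.log(mu) - mu) - 0.5 * (theta / 10.0) ** 2

samples = []
theta, lp = 0.5, log_post(0.5)
for _ in range(20000):
    prop = theta + 0.01 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept test
        theta, lp = prop, lp_prop
    samples.append(theta)
post_mean = np.mean(samples[5000:])               # discard burn-in
```

The spread of the retained samples, not just their mean, is the point of the Bayesian treatment: it is the uncertainty estimate that a single reconstructed value cannot provide.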
Direct integration of the inverse Radon equation for X-ray computed tomography.
Libin, E E; Chakhlov, S V; Trinca, D
2016-11-22
A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional x-ray tomography is formulated. In this approach, the Fourier transform is not used, which makes it possible to create practical computing algorithms with a more reliable mathematical substantiation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.
Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2017-12-01
Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such inversion allows us to quantify the uncertainties of the inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data due to their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.
Time reversal imaging, Inverse problems and Adjoint Tomography
NASA Astrophysics Data System (ADS)
Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.
2010-12-01
With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately, and new applications are being developed, in particular time-reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems, and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, to provide a theoretical understanding of the time-reversal method on the global scale. It is shown how to relate time-reversal methods, on one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics, and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time-reversal techniques make possible automatic location in time and space as well as retrieval of the focal mechanism of earthquakes or unknown environmental sources. We present here some applications of these techniques at the global scale on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004) and Haiti (Jan. 2010), as well as glacial earthquakes and seismic hum.
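The link between cross-correlation and travel-time (Green function) retrieval mentioned above can be illustrated with a minimal 1D sketch; the record length, delay, and noise-like source are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy illustration (1D, assumed setup): a long noise-like source signal
# recorded at two receivers with different travel times.  The cross-
# correlation of the two records peaks at the differential travel time,
# which is the essence of Green-function retrieval from noise correlations.
n, delay = 4096, 37                     # delay in samples between receivers
s = rng.standard_normal(n)
r1 = s
r2 = np.roll(s, delay)                  # receiver 2 sees the signal later

xc = np.correlate(r2, r1, mode="full")  # lags run from -(n-1) to (n-1)
lag = np.argmax(xc) - (n - 1)
print(lag)   # 37
```

The recovered lag is the inter-receiver travel time; in the elastic Earth the same principle operates component-wise on the full Green tensor.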
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE such as X-ray computed tomography (CT), ultrasonic, and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum squared L2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computational cost.
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.)
On the Forward Scattering of Microwave Breast Imaging
Lui, Hoi-Shun; Fhager, Andreas; Persson, Mikael
2012-01-01
Microwave imaging for breast cancer detection has been of significant interest for the last two decades. Recent studies focus on solving the imaging problem using an inverse scattering approach. Efforts have mainly been focused on the development of inverse scattering algorithms, experimental setups, antenna design, and clinical trials. However, the success of microwave breast imaging also relies heavily on the quality of the forward data, such that the tumor inside the breast volume is well illuminated. In this work, a numerical study of the forward scattering data is conducted. The scattering behavior of simple breast models under different polarization states and aspect angles of illumination is considered. Numerical results demonstrate that better data contrast can be obtained when the breast volume is illuminated using the cross-polarized components in the linear polarization basis or the co-polarized components in the circular polarization basis. PMID:22611371
Test images for the maximum entropy image restoration method
NASA Technical Reports Server (NTRS)
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument, as well as the ever-present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of constructing these images is presented.
NASA Astrophysics Data System (ADS)
Son, J.; Medina-Cetina, Z.
2017-12-01
We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from the Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock material profiles under the seafloor. The shallow sediment layers are naturally formed heterogeneous formations which may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose quasi-Newton and simulated annealing methods as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data as an offshore ground model which contains shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of the stratigraphic velocity model reconstruction. We then report on the details of a field data implementation, which shows the complex geologic structures of the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise to improve the subsurface imaging of ground models and the computational efficiency required for full waveform inversion.
We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response requires separating these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an ill-posed inverse problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstructing radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed.
We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
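As a toy illustration of this kind of parameter reconstruction, the sketch below fits a sum of a decaying and a regrowing exponential by simulated annealing on a least-squares objective. The functional form and all parameter values are assumptions for illustration, not the authors' two-level cell population model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed toy response: relative tumor volume as a decaying (inactivated)
# plus a regrowing (surviving) exponential, parameterized by the surviving
# fraction sf and the doubling time td (days).
t = np.arange(0, 40.0, 2.0)

def model(p):
    sf, td = p
    return (1 - sf) * np.exp(-0.2 * t) + sf * np.exp(np.log(2) * t / td)

p_true = (0.05, 10.0)
data = model(p_true) + 0.01 * rng.standard_normal(t.size)

def cost(p):
    if not (0 < p[0] < 1 and 1 < p[1] < 100):   # physical bounds
        return np.inf
    return np.sum((model(p) - data) ** 2)

# Simulated annealing: random perturbations accepted by a temperature-
# dependent Metropolis rule, with geometrically decreasing temperature.
p = np.array([0.5, 50.0]); best = p.copy(); T = 1.0
for _ in range(20000):
    q = p + rng.standard_normal(2) * [0.02, 1.0] * T
    if np.log(rng.uniform() + 1e-300) < (cost(p) - cost(q)) / max(T, 1e-6):
        p = q
        if cost(p) < cost(best):
            best = p.copy()
    T *= 0.9995
print(best)   # best-found (surviving fraction, doubling time)
```

Without a regularization term the fit can wander between near-equivalent exponential decompositions, which is the instability the paper addresses with variational regularization.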
Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.
Fessler, J A; Booth, S D
1999-01-01
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
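The benefit of circulant preconditioning is easy to reproduce in a toy shift-invariant problem, where the FFT diagonalizes the Hessian exactly; in the shift-variant problems the paper targets, a circulant preconditioner is only an approximation, which is why better preconditioners are needed. Everything below (kernel, sizes, regularization weight) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# Hypothetical shift-invariant blur: circular convolution with a Gaussian
# kernel, so A = F^H diag(|h|^2 + lam) F is diagonalized exactly by the FFT.
grid = np.arange(n)
kernel = np.exp(-0.5 * ((grid - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()
h_hat = np.fft.fft(np.roll(kernel, -n // 2))

def A(x):          # Hessian of a regularized least-squares objective (SPD)
    return np.real(np.fft.ifft(np.abs(h_hat) ** 2 * np.fft.fft(x))) + 1e-2 * x

def M_inv(r):      # circulant preconditioner: the exact inverse in this toy
    return np.real(np.fft.ifft(np.fft.fft(r) / (np.abs(h_hat) ** 2 + 1e-2)))

b = rng.standard_normal(n)

def pcg(Aop, rhs, Minv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients; returns solution and iterations."""
    x = np.zeros_like(rhs)
    r = rhs - Aop(x); z = Minv(r); p = z.copy()
    it = 0
    for it in range(1, maxit + 1):
        Ap = Aop(p)
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(rhs):
            break
        z_new = Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, it

x_pre, it_pre = pcg(A, b, M_inv)
x_id, it_id = pcg(A, b, lambda r: r)   # unpreconditioned CG for comparison
print(it_pre, it_id)                   # far fewer iterations when preconditioned
```

With nonuniform noise weights or edge-preserving penalties, `A` loses its circulant structure and `M_inv` is no longer exact, which motivates the paper's more accurate preconditioners.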
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be cast mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution converts the 2D problem into 1D via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we introduce the 2D-SL0 algorithm for the reconstruction. It is shown that 2D-SL0 achieves results equivalent to those of 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
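The equivalence between the 2D formulation Y = A X Bᵀ and the Kronecker-vectorized 1D formulation vec(Y) = kron(B, A) vec(X), and the reason the 2D route is cheaper, can be checked numerically (all sizes below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: sub-sampled measurements of a 64x64 reflectivity map,
# with separate range (A) and azimuth (B) measurement matrices.
m, n = 32, 64
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
X = np.zeros((n, n)); X[10, 20] = 1.0; X[40, 50] = -2.0   # sparse scene

# 2D form: Y = A X B^T  (two small matrix products)
Y2d = A @ X @ B.T

# Equivalent 1D form via the Kronecker identity vec(A X B^T) = (B kron A) vec(X),
# using column-major (Fortran-order) vectorization.  The Kronecker dictionary
# is (m*m) x (n*n) -- here 1024 x 4096 instead of two 32x64 matrices, which
# is why the 2D formulation is so much cheaper in memory and computation.
Y1d = (np.kron(B, A) @ X.reshape(-1, order="F")).reshape((m, m), order="F")

print(np.allclose(Y2d, Y1d))   # True: the two formulations agree
```

2D-SL0 exploits exactly this structure, operating on the small factors A and B rather than on the explicit Kronecker dictionary.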
Image restoration, uncertainty, and information.
Yu, F T
1969-01-01
Some physical interpretations of image restoration are discussed. From the theory of information, the unrealizability of an inverse filter can be explained by degradation of information, which is due to distortion of the recorded image. Image restoration is a time and space problem, which can be recognized from the theory of relativity (the problem of image restoration is related to Heisenberg's uncertainty principle in quantum mechanics). A detailed discussion of the relationship between information and energy is given. Two general results may be stated: (1) restoration of the image from the distorted signal is possible only if it satisfies the detectability condition; however, the restored image can, at best, only approach the maximum allowable time criterion. (2) Restoration of an image by superimposing the distorted signal (due to smearing) is a physically unrealizable method; however, this restoration procedure may be achieved by the expenditure of an infinite amount of energy.
NASA Astrophysics Data System (ADS)
Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan
2017-03-01
Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5% of the true values.
Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis
2009-04-01
Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, regularization parameter selection and blocky effect. Two fractional order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
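A majorization-minimization step of this kind can be sketched in 1D, with an integer-order difference operator standing in for the paper's fractional-order operator; each MM step majorizes the TV term by a quadratic, leaving a linear system (here solved directly rather than by conjugate gradients):

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal 1D TV-L2 MM sketch (assumed setup): denoise a noisy step signal by
# minimizing 0.5||x - y||^2 + lam * ||D x||_1 via iteratively reweighted
# quadratic majorizers of the TV term.
n = 100
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)          # first-difference matrix (99 x 100)
lam, eps = 1.0, 1e-8

x = y.copy()
for _ in range(50):
    w = 1.0 / (np.abs(D @ x) + eps)     # majorizer weights 1/|Dx| (eps-guarded)
    # Each MM step solves (I + lam * D^T W D) x = y
    x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)

print(x[:45].std(), x[55:].std())       # near-zero: flat within each segment
```

The result is piecewise constant with the step preserved, which is the TV behavior the paper refines (and whose blocky effect it mitigates) with fractional-order operators.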
Bai, Chen; Xu, Shanshan; Duan, Junbo; Jing, Bowen; Yang, Miao; Wan, Mingxi
2017-08-01
Pulse-inversion subharmonic (PISH) imaging can display information relating to pure cavitation bubbles while excluding that of tissue. Although plane-wave-based ultrafast active cavitation imaging (UACI) can monitor the transient activities of cavitation bubbles, its resolution and cavitation-to-tissue ratio (CTR) are barely satisfactory but can be significantly improved by introducing eigenspace-based (ESB) adaptive beamforming. PISH and UACI are a natural combination for imaging of pure cavitation activity in tissue; however, it raises two problems: 1) the ESB beamforming is hard to implement in real time due to the enormous amount of computation associated with the covariance matrix inversion and eigendecomposition and 2) the narrowband characteristic of the subharmonic filter will incur a drastic degradation in resolution. Thus, in order to jointly address these two problems, we propose a new PISH-UACI method using novel fast ESB (F-ESB) beamforming and cavitation deconvolution for nonlinear signals. This method greatly reduces the computational complexity by using F-ESB beamforming through dimensionality reduction based on principal component analysis, while maintaining the high quality of ESB beamforming. The degraded resolution is recovered using cavitation deconvolution through a modified convolution model and compressive deconvolution. Both simulations and in vitro experiments were performed to verify the effectiveness of the proposed method. Compared with the ESB-based PISH-UACI, the entire computation of our proposed approach was reduced by 99%, while the axial resolution gain and CTR were increased by 3 times and 2 dB, respectively, confirming that satisfactory performance can be obtained for monitoring pure cavitation bubbles in tissue erosion.
Inverse imaging of the breast with a material classification technique.
Manry, C W; Broschat, S L
1998-03-01
In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.
3D lens-free time-lapse microscopy for 3D cell culture
NASA Astrophysics Data System (ADS)
Berdeu, Anthony; Momey, Fabien; Laperrousaz, Bastien; Bordy, Thomas; Gidrol, Xavier; Dinten, Jean-Marc; Picollet-D'hahan, Nathalie; Allier, Cédric
2017-07-01
We propose a new imaging platform based on lens-free time-lapse microscopy for 3D cell culture and its dedicated algorithm, which relies on a fully 3D regularized inverse problem approach. First 3D+t results are presented.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
NASA Astrophysics Data System (ADS)
Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.
2015-05-01
To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions, so simplifying assumptions must be made. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite-element-based direct inversion technique in which only the coupling terms in the equations of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement at one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach that assumes local homogeneity. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations, the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 after using the proposed method instead of local inversion with the homogeneity assumption; similarly, in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.
Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M
2016-10-10
Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
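A standard building block for Poisson inverse problems of this type is the Richardson-Lucy (Poisson maximum-likelihood) iteration; the 1D deblurring setup below is a hypothetical analogue with a known background rate, not the authors' HSRL algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1D analogue: photon counts y ~ Poisson(H x + b), where H blurs
# the true backscatter profile x and b is a known background rate.
# Richardson-Lucy iterations maximize the Poisson likelihood and keep the
# estimate non-negative by construction.
n = 100
x_true = np.zeros(n); x_true[30:35] = 50.0; x_true[60] = 80.0
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
H = np.array([np.roll(np.pad(kernel, (0, n - 5)), i - 2) for i in range(n)])
y = rng.poisson(H @ x_true + 1.0)            # background rate b = 1

x = np.ones(n)                               # strictly positive start
for _ in range(200):
    # multiplicative update: x <- x * H^T(y / (Hx + b)) / H^T 1
    x *= H.T @ (y / (H @ x + 1.0)) / H.sum(axis=0)

print(np.abs(x - x_true).mean())             # residual reconstruction error
```

The paper's contribution replaces this per-profile treatment with image-level regularized Poisson inversion, exploiting spatial and temporal structure across profiles.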
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
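The shrink step can be illustrated in its simplest form: with a Gaussian surrogate for the likelihood and an ordinary (non-generalized) Laplacian prior, the maximum a posteriori update soft-thresholds the unregularized update. The paper's inverse-quadratic and inverse-cubic shrinkage rules generalize this idea:

```python
import numpy as np

# Simplified analogue of the shrink step (assumed form, not the authors'
# exact rule): soft-thresholding, the MAP solution of
#   argmin_x 0.5*(x - u)^2 + threshold*|x|
def shrink(u, threshold):
    return np.sign(u) * np.maximum(np.abs(u) - threshold, 0.0)

u = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])   # unregularized ML update
print(shrink(u, 1.0))                        # [-2., -0., 0., 0.5, 3.]
```

Small unregularized updates are driven to zero while large ones are retained (shifted), which is how the Bayesian reconstruction suppresses noise without destroying high-Z targets.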
Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno
2018-05-28
Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and pre-strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress), which in turn are used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, both having their inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot methodology to compute the in vivo stress state. However, cumbersome and problem-specific derivations of the formulations and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, have prevented broad adoption of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated on an arterial tube and a porcine biventricular myocardium model. The computational power and efficiency of the methodology were shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model. Copyright © 2018 Elsevier Ltd. All rights reserved.
Three-Dimensional Anisotropic Acoustic and Elastic Full-Waveform Seismic Inversion
NASA Astrophysics Data System (ADS)
Warner, M.; Morgan, J. V.
2013-12-01
Three-dimensional full-waveform inversion is a high-resolution, high-fidelity, quantitative, seismic imaging technique that has advanced rapidly within the oil and gas industry. The method involves the iterative improvement of a starting model using a series of local linearized updates to solve the full non-linear inversion problem. During the inversion, forward modeling employs the full two-way three-dimensional heterogeneous anisotropic acoustic or elastic wave equation to predict the observed raw field data, wiggle-for-wiggle, trace-by-trace. The method is computationally demanding; it is highly parallelized, and runs on large multi-core multi-node clusters. Here, we demonstrate what can be achieved by applying this newly practical technique to several high-density 3D seismic datasets that were acquired to image four contrasting sedimentary targets: a gas cloud above an oil reservoir, a radially faulted dome, buried fluvial channels, and collapse structures overlying an evaporite sequence. We show that the resulting anisotropic p-wave velocity models match in situ measurements in deep boreholes, reproduce detailed structure observed independently on high-resolution seismic reflection sections, accurately predict the raw seismic data, simplify and sharpen reverse-time-migrated reflection images of deeper horizons, and flatten Kirchhoff-migrated common-image gathers. We also show that full-elastic 3D full-waveform inversion of pure pressure data can generate a reasonable shear-wave velocity model for one of these datasets. For two of the four datasets, the inclusion of significant transversely isotropic anisotropy with a vertical axis of symmetry was necessary in order to fit the kinematics of the field data properly. For the faulted dome, the full-waveform-inversion p-wave velocity model recovers the detailed structure of every fault that can be seen on coincident seismic reflection data.
Some of the individual faults represent high-velocity zones, some represent low-velocity zones, some have more complex internal structure, and some are visible merely as offsets between two regions with contrasting velocity. Although this has not yet been demonstrated quantitatively for this dataset, it seems likely that at least some of this fine structure in the recovered velocity model is related to the detailed lithology, strain history and fluid properties within the individual faults. We have here applied this technique to seismic data that were acquired by the extractive industries; however, this inversion scheme is immediately scalable and applicable to a much wider range of problems, given sufficient quality and density of observed data. Potential targets range from shallow magma chambers beneath active volcanoes, through whole-crustal sections across plate boundaries, to regional and whole-Earth models.
Terahertz reflection imaging using Kirchhoff migration.
Dorney, T D; Johnson, J L; Van Rudd, J; Baraniuk, R G; Symes, W W; Mittleman, D M
2001-10-01
We describe a new imaging method that uses single-cycle pulses of terahertz (THz) radiation. This technique emulates data-collection and image-processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a simple migration procedure to solve the inverse problem; this permits us to reconstruct the location and shape of targets. These results demonstrate the feasibility of the THz system as a test-bed for the exploration of new seismic processing methods involving complex model systems.
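The Kirchhoff migration step the abstract refers to can be made concrete with a toy zero-offset example: each image point sums the recorded traces along its diffraction hyperbola, so energy focuses at true scatterer positions. This is a minimal sketch under a constant-velocity assumption with made-up grid parameters, not the authors' THz processing chain.

```python
import numpy as np

# Toy zero-offset Kirchhoff migration with a constant wave speed.
# A point scatterer produces a diffraction hyperbola in the recorded data;
# summing each image point's hyperbola focuses energy at the scatterer.
c = 2.0                       # wave speed (made-up units)
nx, nz, nt = 41, 41, 400      # receivers, depth points, time samples
dx, dz, dt = 1.0, 1.0, 0.05
x0, z0 = 20, 15               # true scatterer location (grid indices)

xs = np.arange(nx) * dx
data = np.zeros((nx, nt))
for ix, x in enumerate(xs):   # synthesize a spike on the diffraction curve
    r = np.hypot(x - x0 * dx, z0 * dz)
    it = int(round(2.0 * r / c / dt))        # two-way traveltime (samples)
    if it < nt:
        data[ix, it] = 1.0

image = np.zeros((nx, nz))
for ixi in range(nx):         # Kirchhoff summation over all receivers
    for izi in range(1, nz):
        r = np.hypot(xs - ixi * dx, izi * dz)
        its = np.rint(2.0 * r / c / dt).astype(int)
        ok = its < nt
        image[ixi, izi] = data[np.arange(nx)[ok], its[ok]].sum()

peak = np.unravel_index(np.argmax(image), image.shape)
print(peak)   # the image should peak at (or next to) the true scatterer
```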
Markov chain Monte Carlo techniques and spatial-temporal modelling for medical EIT.
West, Robert M; Aykroyd, Robert G; Meng, Sha; Williams, Richard A
2004-02-01
Many imaging problems, such as imaging with electrical impedance tomography (EIT), can be shown to be ill-posed inverse problems: either there is no unique solution, or the solution does not depend continuously on the data. As a consequence, the solution of inverse problems based on measured data alone is unstable, particularly if the mapping between the solution distribution and the measurements is also nonlinear, as in EIT. To deliver a practical, stable solution, it is necessary to make considerable use of prior information or regularization techniques. The role of a Bayesian approach is therefore of fundamental importance, especially when coupled with Markov chain Monte Carlo (MCMC) sampling to provide information about solution behaviour. Spatial smoothing is a commonly used approach to regularization. In the human thorax EIT example considered here, nonlinearity increases the difficulty of imaging using only boundary data, leading to reconstructions which are often rather too smooth. In particular, in medical imaging the resistivity distribution usually contains substantial jumps at the boundaries of different anatomical regions. With spatial smoothing these boundaries can be masked by blurring. This paper focuses on the medical application of EIT to monitor lung and cardiac function; it uses explicit geometric information regarding anatomical structure and incorporates temporal correlation. Some simple properties are assumed known, or at least reliably estimated from separate studies, whereas others are estimated from the voltage measurements. This structural formulation will also allow direct estimation of clinically important quantities, such as ejection fraction and residual capacity, along with assessment of precision.
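The Bayesian machinery described above (a prior, a likelihood, and MCMC sampling of the posterior) can be sketched on a toy linear problem. The real EIT forward map is nonlinear; the matrix A, noise level, and smoothing-prior scale below are made-up stand-ins.

```python
import numpy as np

# Toy Metropolis-Hastings sampler for a linear inverse problem y = A x + noise
# with a Gaussian smoothing prior: a minimal sketch of Bayesian/MCMC inversion.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(8, n))                  # made-up linear forward operator
x_true = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
y = A @ x_true + 0.05 * rng.normal(size=8)   # noisy boundary "measurements"

def log_post(x):
    misfit = np.sum((y - A @ x) ** 2) / (2 * 0.05 ** 2)   # Gaussian likelihood
    smooth = np.sum(np.diff(x) ** 2) / (2 * 0.5 ** 2)     # smoothing prior
    return -(misfit + smooth)

x, lp = np.ones(n), log_post(np.ones(n))
samples = []
for it in range(20000):
    prop = x + 0.02 * rng.normal(size=n)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        x, lp = prop, lp_prop
    if it >= 5000:                                # discard burn-in
        samples.append(x.copy())

post_mean = np.mean(samples, axis=0)
print(np.round(post_mean, 2))   # posterior mean should sit near x_true
```

Beyond the point estimate, the retained samples also give the posterior spread, which is the "information about solution behaviour" the abstract mentions.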
NASA Astrophysics Data System (ADS)
Edler, Karl T.
The issue of eddy currents induced by the rapid switching of magnetic field gradients is a long-standing problem in magnetic resonance imaging. A new method for dealing with this problem is presented whereby spatial harmonic components of the magnetic field are continuously sensed, through their temporal rates of change, and corrected. In this way, the effects of the eddy currents on multiple spatial harmonic components of the magnetic field can be detected and corrections applied during the rise time of the gradients. Sensing the temporal changes in each spatial harmonic is made possible with specially designed detection coils. However, to make the design of these coils possible, general relationships between the spatial harmonics of the field, scalar potential, and vector potential are found within the quasi-static approximation. These relationships allow the vector potential to be found from the field (an inverse curl operation) and may be of use beyond the specific problem of detection coil design. Using the detection coils as sensors, methods are developed for designing a negative feedback system to control the eddy current effects and for optimizing that system with respect to image noise and distortion. The design methods are successfully tested in a series of proof-of-principle experiments, which lead to a discussion of how to incorporate similar designs into an operational MRI. Keywords: magnetic resonance imaging, eddy currents, dynamic shimming, negative feedback, quasi-static fields, vector potential, inverse curl
Regularized magnetotelluric inversion based on a minimum support gradient stabilizing functional
NASA Astrophysics Data System (ADS)
Xiang, Yang; Yu, Peng; Zhang, Luolei; Feng, Shaokong; Utada, Hisashi
2017-11-01
Regularization is used to solve the ill-posed problem of magnetotelluric inversion, usually by adding a stabilizing functional to the objective functional, which allows us to obtain a stable solution. Among a number of possible stabilizing functionals, smoothing constraints are most commonly used; these produce spatially smooth inversion results. However, in some cases the focused imaging of a sharp electrical boundary is necessary. Although past works have proposed functionals that may be suitable for the imaging of a sharp boundary, such as the minimum support and minimum gradient support (MGS) functionals, they involve some difficulties and limitations in practice. In this paper, we propose a minimum support gradient (MSG) stabilizing functional as another possible choice of focusing stabilizer. In this approach, we calculate the gradient of the minimum support stabilizing functional of the model, which affects both the stability and the sharp boundary focus of the inversion. We then apply the discrete weighted-matrix form of each stabilizing functional to build a unified form of the objective functional, allowing us to perform regularized inversion with a variety of stabilizing functionals in the same framework. By comparing the one-dimensional and two-dimensional synthetic inversion results obtained using the MSG stabilizing functional with those obtained using other stabilizing functionals, we demonstrate that the MSG results are not only capable of clearly imaging a sharp geoelectrical interface but are also quite stable and robust. Overall good performance in terms of both data fitting and model recovery suggests that this stabilizing functional is effective and useful in practical applications.
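The contrast between a smoothing stabilizer and a focusing one can be seen numerically. The sketch below uses the minimum gradient support (MGS) functional cited in the abstract, not the authors' proposed MSG variant; the 1-D models and the focusing parameter beta are made up, and exact conventions vary between papers.

```python
import numpy as np

# Compare a classical smoothing stabilizer with the MGS focusing stabilizer
# on two 1-D resistivity models: a sharp step and a smooth ramp.
beta = 0.1
m_sharp = np.r_[np.zeros(50), np.ones(50)]   # step model (sharp boundary)
m_smooth = np.linspace(0.0, 1.0, 100)        # ramp model (smooth transition)

def smoothing(m):
    """Smoothing stabilizer: sum of squared gradients."""
    return float(np.sum(np.diff(m) ** 2))

def mgs(m):
    """Minimum gradient support: sum of |grad m|^2 / (|grad m|^2 + beta^2)."""
    g2 = np.diff(m) ** 2
    return float(np.sum(g2 / (g2 + beta ** 2)))

for name, m in (("sharp", m_sharp), ("smooth", m_smooth)):
    print(name, round(smoothing(m), 4), round(mgs(m), 4))
# The smoothing stabilizer penalizes the step far more than the ramp, while
# MGS scores the two models about the same: this indifference to gradient
# magnitude is what lets focusing stabilizers admit sharp boundaries.
```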
Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.
NASA Astrophysics Data System (ADS)
Xia, Jake Jiqing
The electromagnetic inverse scattering problems considered in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of Riccati equations are derived and distinguished. New renormalized formulae based on inverting one specific type of Riccati equation are derived. Relations between the inverse methods of the Green's function, the Riccati equation and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application in borehole inversion. In Chapter 4, the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to the inversion of underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6.
A stratified medium model is used to derive the guidance condition and the reflection coefficient. Results obtained in Chapter 6 provide the physical foundation for applying the inverse methods to the waveguide design problem. In Chapter 7, a global guidance condition for a continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Frequency Domain Full-Waveform Inversion in Imaging Thrust Related Features
NASA Astrophysics Data System (ADS)
Jaiswal, P.; Zelt, C. A.
2010-12-01
Seismic acquisition in rough terrain such as mountain belts suffers from problems related to near-surface conditions, such as statics, inconsistent energy penetration, rapid decay of signal, and imperfect receiver coupling. Moreover, in the presence of weakly compacted soil, strong ground roll may obscure the reflection arrivals at near offsets, further diminishing the scope for estimating a reliable near-surface image through conventional processing. Traveltime and waveform inversion not only overcome the simplistic assumptions inherent in conventional processing, such as hyperbolic moveout and the convolution model, but also use parts of the seismic coda, such as the direct arrival and refractions, that are discarded in the latter. Traveltime and waveform inversion are model-based methods that honour the physics of wave propagation. Given the right set of preconditioned data and starting model, waveform inversion in particular has been realized as a powerful tool for velocity model building. This paper examines two case studies on waveform inversion using real data from the Naga Thrust Belt in Northeast India. Waveform inversion in this paper is performed in the frequency domain and is multiscale in nature, i.e., the inversion progressively ascends from the lower to the higher end of the frequency spectrum, increasing the wavenumber content of the recovered model. Since the real data are band limited, the success of waveform inversion depends on how well the starting model can account for the missing low wavenumbers. In this paper it is observed that the required starting model can be prepared using the regularized inversion of direct and reflected arrival times.
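The multiscale, low-to-high frequency sweep described above is what lets waveform inversion avoid cycle skipping when the starting model is far from the truth. The toy sketch below reduces the idea to fitting a single arrival time from monochromatic phase misfits; all frequencies and values are made up for illustration.

```python
import numpy as np

# Toy picture of the multiscale frequency sweep: estimate one arrival time
# from monochromatic misfits 1 - cos(w * (t - t_true)). Starting directly at
# the highest frequency cycle-skips; ascending the spectrum and warm-starting
# each stage keeps the estimate in the correct basin of attraction.
t_true = 0.73            # "observed" arrival time (s)

def descend(t_start, freqs, steps=300):
    t = t_start
    for f in freqs:                            # ascend the frequency ladder
        w = 2.0 * np.pi * f
        for _ in range(steps):                 # gradient descent on the misfit
            t -= (0.5 / w) * np.sin(w * (t - t_true))
        # the current estimate warm-starts the next, higher frequency
    return t

multiscale = descend(0.2, (0.5, 1.0, 2.0, 4.0, 8.0))
high_only = descend(0.2, (8.0,))
print(round(multiscale, 3), round(high_only, 3))  # only the sweep recovers 0.73
```

The low frequency has a basin wide enough to capture the poor starting guess, and each stage hands the next one a model already inside its (narrower) basin, mirroring how the recovered wavenumber content grows during the sweep.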
Initial evaluation of discrete orthogonal basis reconstruction of ECT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E.B.; Donohue, K.D.
1996-12-31
Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.
Self-prior strategy for organ reconstruction in fluorescence molecular tomography
Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen
2017-01-01
The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum with the assumption that the regions with higher fluorescence concentration have larger energy intensity, then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtains the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094
2017-01-01
Objective: Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods: In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion: Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance: This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered as an ill-posed inverse problem described by the Fredholm integral equation of the first kind because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data.
Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
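The abstract's core observation, that a sum of exponential processes leads to an ill-posed reconstruction requiring regularization, can be reproduced on a toy problem. The decay rates, amplitudes, noise level, and regularization weight below are made up, and plain Tikhonov regularization stands in for the authors' variational regularization with simulated annealing.

```python
import numpy as np

# Fitting amplitudes of a sum of exponentials (a discrete Fredholm equation
# of the first kind). Two nearly degenerate decay rates make the design
# matrix ill-conditioned; Tikhonov regularization keeps the fit stable.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)
rates = np.array([0.2, 0.21, 0.8])     # first two rates nearly degenerate
A = np.exp(-np.outer(t, rates))        # ill-conditioned design matrix
c_true = np.array([1.0, 0.5, 2.0])
y = A @ c_true + 1e-2 * rng.normal(size=t.size)

c_naive = np.linalg.lstsq(A, y, rcond=None)[0]    # unregularized fit
lam = 1e-2                                        # Tikhonov weight (tuning knob)
c_tik = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)
print(np.round(c_naive, 2), np.round(c_tik, 2))
# the naive fit is free to trade off the two near-degenerate exponentials,
# while the regularized solution stays bounded and fits the data well
```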
Mitsuhashi, Kenji; Poudel, Joemini; Matthews, Thomas P.; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-01-01
Photoacoustic computed tomography (PACT) is an emerging imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to an inverse source problem in which the initial pressure distribution is recovered from measurements of the radiated wavefield. A major challenge in transcranial PACT brain imaging is compensation for aberrations in the measured data due to the presence of the skull. Ultrasonic waves undergo absorption, scattering and longitudinal-to-shear wave mode conversion as they propagate through the skull. To properly account for these effects, a wave-equation-based inversion method should be employed that can model the heterogeneous elastic properties of the skull. In this work, a forward model based on a finite-difference time-domain discretization of the three-dimensional elastic wave equation is established and a procedure for computing the corresponding adjoint of the forward operator is presented. Massively parallel implementations of these operators employing multiple graphics processing units (GPUs) are also developed. The developed numerical framework is validated and investigated in computer-simulation and experimental phantom studies whose designs are motivated by transcranial PACT applications. PMID:29387291
NASA Astrophysics Data System (ADS)
Petrov, P.; Newman, G. A.
2010-12-01
Quantitative imaging of subsurface objects is an essential part of modern geophysical technology, important in oil and gas exploration and wide-ranging engineering applications. A significant advancement in developing a robust, high-resolution imaging technology is concerned with using the different geophysical measurements (gravity, EM and seismic) that sense the subsurface structure. A joint image of the subsurface geophysical attributes (velocity, electrical conductivity and density) requires the consistent treatment of the different geophysical data (electromagnetic and seismic) due to their differing physical nature: diffusive and attenuated propagation of electromagnetic energy versus nonlinear, multiply scattered wave propagation of seismic energy. Recent progress has been reported in the solution of this problem by reducing the complexity of the seismic wave field. Work by Shin and Cha (2008, 2009) suggests that low-pass filtering the seismic trace via a Laplace-Fourier transformation can be an effective approach for obtaining seismic data that has spatial resolution similar to EM data. The effect of the Laplace-Fourier transformation on the low-pass filtered trace changes the modeling of the seismic wave field from multi-wave propagation to diffusion. The key benefit of the transformation is that diffusive wave-field inversion works well for both seismic (Shin and Cha, 2008) and electromagnetic (Commer and Newman, 2008; Newman et al., 2010) data sets. Moreover, the different data sets can also be matched for similar and consistent resolution. Finally, the low-pass seismic image is also an excellent choice for a starting model when analyzing the entire seismic waveform to recover the high spatial frequency components of the seismic image, its reflectivity (Shin and Cha, 2009). Without a good starting model, full waveform seismic imaging and migration can encounter serious difficulties.
To produce seismic wave fields consistent for joint imaging in the Laplace-Fourier domain, we have developed a 3D code for full-wave-field simulation in elastic media that takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we define the material properties, such as density and Lame constants, not at nodal points but within cells. This second-order finite-difference method, formulated on a cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning the seismic wave propagation problems in the frequency domain. References: Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press.
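The Laplace-domain transformation referenced from Shin & Cha (2008) can be sketched directly: damping a trace with exp(-st) before integrating emphasizes early arrivals and produces the smooth, diffusion-like data the abstract exploits. The trace below is a made-up pair of Gaussian arrivals.

```python
import numpy as np

# Laplace transform of a seismic trace, as used in Laplace-domain waveform
# inversion: P(s) = integral of p(t) * exp(-s t) dt. Larger damping constants
# s progressively mute the later arrival, acting as a low-pass/early-time filter.
dt = 0.004
t = np.arange(0.0, 4.0, dt)
# made-up trace: an early arrival at 1.0 s and a weaker late arrival at 2.5 s
trace = np.exp(-((t - 1.0) / 0.05) ** 2) + 0.5 * np.exp(-((t - 2.5) / 0.05) ** 2)

def laplace_value(p, t, s):
    """Riemann-sum approximation of the damped integral."""
    return float(np.sum(p * np.exp(-s * t)) * (t[1] - t[0]))

vals = [laplace_value(trace, t, s) for s in (1.0, 5.0, 10.0)]
print([round(v, 5) for v in vals])   # values shrink as damping s grows
```

Sweeping s (and, in the Laplace-Fourier variant, a complex frequency s + iw) gives a family of such smooth data, which is what makes the seismic inversion behave like the diffusive EM one.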
Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response
Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.
2016-01-01
Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420
A filtering approach to edge preserving MAP estimation of images.
Humphrey, David; Taubman, David
2011-05-01
The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a wide-sense stationary (WSS) Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and means of determining and refining the segmentation are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and inverse filtering, depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
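The Wiener-filter machinery this paper builds on can be sketched in one dimension. This is the classical frequency-domain Wiener deconvolution, not the paper's segmented MAP algorithm; the blur, noise level, and noise-to-signal parameter are made up.

```python
import numpy as np

# One-dimensional Wiener deconvolution: restore a blurred, noisy signal with
# the frequency-domain filter W = conj(H) / (|H|^2 + nsr), which interpolates
# between inverse filtering (nsr -> 0) and heavy suppression of noisy bands.
rng = np.random.default_rng(1)
n = 256
x = np.zeros(n); x[100:140] = 1.0            # piecewise-constant scene
h = np.zeros(n); h[:9] = 1.0 / 9.0           # 9-tap box blur (circular)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
y += 0.01 * rng.normal(size=n)               # additive measurement noise

H = np.fft.fft(h)
nsr = 1e-3                                   # noise-to-signal ratio knob
W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * W))

err_wiener = np.linalg.norm(x_hat - x)
err_blurred = np.linalg.norm(y - x)
print(round(err_wiener, 2), round(err_blurred, 2))  # restoration should win
```

The single global nsr is exactly the limitation the paper's region-wise priors address: a per-region Wiener solution can adapt the trade-off to local statistics.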
Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre
2013-01-01
The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current-scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.
NASA Astrophysics Data System (ADS)
Yang, Li; Wang, Ye; Liu, Huikai; Yan, Guanghui; Kou, Wei
2014-11-01
Overheating components inside an object, such as an electric control cabinet, a moving object, or a running machine, can easily lead to equipment failure or fire. In recent years, infrared remote sensing has been used to inspect the surface temperature of an object in order to identify the overheating components inside it; in particular, using infrared thermal imaging of surface temperature to identify internal overheating elements in an electric control cabinet has important practical applications. In this paper, a test bench of an electric control cabinet was established and an experimental study was conducted on the inverse identification of internal overheating components using infrared thermal imaging. A heat transfer model of the electric control cabinet was built, and the temperature distribution of the cabinet with an internal overheating element was simulated using the finite volume method (FVM). The outer surface temperature of the cabinet was measured using an infrared thermal imager. Combining computer image processing with infrared temperature measurement, the surface temperature distribution of the cabinet was extracted, and the position and temperature of the internal overheating element were identified using an inverse heat transfer problem (IHTP) identification algorithm. The results show that for a single overheating element inside the cabinet, the identification errors of temperature and position were 2.11% and 5.32%; for multiple overheating elements, the identification errors of temperature and position were 3.28% and 15.63%. The feasibility and effectiveness of the IHTP method and the correctness of the FVM-based identification algorithm were validated.
NASA Astrophysics Data System (ADS)
McLaughlin, Joyce; Renzi, Daniel
2006-04-01
Transient elastography and supersonic imaging are promising new techniques for characterizing the elasticity of soft tissues. In these techniques, an 'ultrafast imaging' system (up to 10 000 frames s⁻¹) follows in real time the propagation of a low-frequency shear wave. The displacement of the propagating shear wave is measured as a function of time and space. Here we develop a fast level-set-based algorithm for finding the shear wave speed from the interior positions of the propagating front. We compare the performance of the level curve methods developed here with our previously developed distance methods (McLaughlin J and Renzi D 2006 Shear wave speed recovery in transient elastography and supersonic imaging using propagating fronts Inverse Problems 22 681-706). We give reconstruction examples from synthetic data and from data obtained from a phantom experiment performed by Mathias Fink's group (Laboratoire Ondes et Acoustique, ESPCI, Université Paris VII).
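The front-based idea underlying both the level curve and distance methods can be illustrated with the Eikonal relation |∇T| = 1/c: given an arrival-time map T of the propagating front, the local shear wave speed is the reciprocal of the gradient magnitude. The sketch below is not the authors' algorithm, only a finite-difference check of this relation on a synthetic plane wave.

```python
import numpy as np

# Sketch of the front-based principle (not the paper's level-set code):
# if T(x, y) is the arrival time of the shear wave front, the Eikonal
# equation |grad T| = 1/c gives the local speed c = 1/|grad T|.

def shear_speed_from_arrival(T, dx):
    """Estimate local wave speed from an arrival-time map T (s) on a grid
    with spacing dx (m), via finite-difference gradients."""
    Ty, Tx = np.gradient(T, dx)            # gradients along rows and columns
    grad_mag = np.sqrt(Tx**2 + Ty**2)
    return 1.0 / np.maximum(grad_mag, 1e-12)  # guard against flat regions

# Synthetic check: a plane wave moving in +x at 3 m/s has T = x / 3.
dx = 0.001
x = np.arange(0, 0.05, dx)
T = np.tile(x / 3.0, (20, 1))
c = shear_speed_from_arrival(T, dx)
```

In practice the measured front is noisy, which is exactly why the paper fits level curves of the front rather than differentiating raw arrival times directly.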
3D and 4D magnetic susceptibility tomography based on complex MR images
Chen, Zikuan; Calhoun, Vince D
2014-11-11
Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
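The convolution being inverted is governed by the standard k-space dipole kernel, D(k) = 1/3 - k_z²/|k|², whose zeros on a conical surface make the deconvolution ill-posed and motivate the TV regularization. A minimal sketch of that kernel follows; the function name and grid are illustrative, not from the patent.

```python
import numpy as np

# The standard k-space dipole kernel relating a susceptibility distribution
# to the T2* field perturbation (phase):  D(k) = 1/3 - kz^2 / |k|^2.
# CIMRI inverts this convolution with TV regularization; only the kernel
# itself is sketched here.

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    kx = np.fft.fftfreq(shape[0], voxel_size[0])
    ky = np.fft.fftfreq(shape[1], voxel_size[1])
    kz = np.fft.fftfreq(shape[2], voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - KZ**2 / k2
    D[0, 0, 0] = 0.0  # convention for the undefined k = 0 term
    return D

D = dipole_kernel((32, 32, 32))
# D vanishes where kz^2/|k|^2 = 1/3 (the magic-angle cone), which is what
# makes the inversion ill-posed and motivates regularized reconstruction.
```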
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems, depending on the a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information.
Insufficient a priori information during the inversion is the reason why refraction methods often may not produce the desired results, or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems as well. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
Ship dynamics for maritime ISAR imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2008-02-01
Demand is increasing for imaging ships at sea. Conventional SAR fails because the ships are usually in motion, both with a forward velocity and with the other linear and angular motions that accompany sea travel. Because the target itself is moving, this becomes an Inverse-SAR, or ISAR, problem. Developing useful ISAR techniques and algorithms is considerably aided by first understanding the nature and characteristics of ship motion; consequently, a brief study of some principles of naval architecture sheds useful light on this problem, and we attempt to do so here. Ship motions are analyzed for their impact on range-Doppler imaging using Inverse Synthetic Aperture Radar (ISAR). A framework for analysis is developed, and limitations of simple ISAR systems are discussed.
Connection of Scattering Principles: A Visual and Mathematical Tour
ERIC Educational Resources Information Center
Broggini, Filippo; Snieder, Roel
2012-01-01
Inverse scattering, Green's function reconstruction, focusing, imaging and the optical theorem are subjects usually studied as separate problems in different research areas. We show a physical connection between the principles because the equations that rule these "scattering principles" have a similar functional form. We first lead the reader…
Three Dimensional Inverse Synthetic Aperture Radar Imaging
1995-12-01
unfortunately produces a blurred image. To correct this problem, a deblurring filter must be applied to the data. It is preferred in some applications to...when the pulse is an impulse in time. So in order to get a high degree of downrange resolution directly it would be necessary to transmit the entire...bandwidth of frequencies simultaneously such as in an Impulse Radar. This would prove to be extremely difficult if not impossible. Luckily, the same
Obtaining sparse distributions in 2D inverse problems.
Reci, A; Sederman, A J; Gladden, L F
2017-08-01
The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing system properties in the case of sparse inverse problems, i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems: relaxation-relaxation, T1-T2, and diffusion-relaxation, D-T2, correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal-to-noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter- and intra-particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization is able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise. Copyright © 2017.
Published by Elsevier Inc.
Recovering an elastic obstacle containing embedded objects by the acoustic far-field measurements
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2018-01-01
Consider the inverse scattering problem of time-harmonic acoustic waves by a 3D bounded elastic obstacle which may contain embedded impenetrable obstacles inside. We propose a novel and simple technique to show that the elastic obstacle can be uniquely recovered from the acoustic far-field pattern at a fixed frequency, regardless of its contents. Our method is based on constructing a well-posed modified interior transmission problem on a small domain and makes use of an a priori estimate for both the acoustic and elastic wave fields in the usual H¹-norm. In the case when there is no obstacle embedded inside the elastic body, our method gives a much simpler proof of the uniqueness result obtained previously in the literature (Natroshvili et al 2000 Rend. Mat. Serie VII 20 57-92; Monk and Selgas 2009 Inverse Problems Imaging 3 173-98).
Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.
Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens
2005-05-01
Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope effectively with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabelled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of the proposed algorithm.
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
NASA Astrophysics Data System (ADS)
Goossens, Bart; Luong, Hiêp; Philips, Wilfried
2017-08-01
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
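The proximal operators that solvers such as SDMM and PPXA evaluate are defined by prox_f(v) = argmin_x f(x) + ½‖x − v‖². Two classic closed forms are sketched below; this is illustrative, not the GASPACHO implementation.

```python
import numpy as np

# Two textbook proximal operators, the building blocks of proximal solvers
# such as SDMM and PPXA (sketch only, not the GASPACHO code):
#   prox_f(v) = argmin_x f(x) + 0.5*||x - v||^2.

def prox_l1(v, lam):
    """Prox of lam*||x||_1: soft thresholding (a sparsity prior)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_sq_l2(v, lam):
    """Prox of (lam/2)*||x||^2: simple shrinkage (a Tikhonov-style prior)."""
    return v / (1.0 + lam)

v = np.array([2.0, -0.3, 0.9])
shrunk_l1 = prox_l1(v, 0.5)    # large entries pulled in by 0.5, small ones zeroed
shrunk_l2 = prox_sq_l2(v, 1.0) # all entries uniformly halved
```

Because a proximal solver touches the data-fit and regularization terms only through such prox evaluations (and through applications of the linear operators and their adjoints), the matrix-free, automatically derived implementations described in the abstract become possible.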
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be an ill-conditioned problem. In order to yield a unique solution, weighted minimum-norm least-squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods (standard weighted minimum norm, L1-norm, LORETA-FOCUSS, and Shrinking LORETA-FOCUSS) is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors than the other methods.
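The FOCUSS ingredient of the algorithm can be sketched as a recursive reweighted minimum-norm iteration: each step solves a weighted minimum-norm problem with weights taken from the previous estimate, which progressively concentrates the solution onto a few active sources. The toy dimensions below are illustrative, not the paper's head model, and the sketch omits the shrinking/re-refinement of the source space.

```python
import numpy as np

# Bare-bones FOCUSS-style iteration (illustrative; Shrinking LORETA-FOCUSS
# additionally prunes and re-refines the candidate source space).  Each
# pass solves a weighted minimum-norm problem whose weights come from the
# previous solution, focusing the estimate onto a few active sources.

def focuss(A, b, n_iter=10):
    x = np.linalg.pinv(A) @ b             # minimum-norm initialization
    for _ in range(n_iter):
        W = np.diag(np.abs(x))            # reweight by the current estimate
        x = W @ np.linalg.pinv(A @ W) @ b # weighted minimum-norm solution
    return x

# Underdetermined demo: 8 sensors, 20 candidate sources, 2 truly active.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 12]] = [1.0, -1.5]
b = A @ x_true
x_hat = focuss(A, b)
```

The plain minimum-norm start spreads energy over all candidates; the reweighting is what recovers a focal distribution, which is the behavior the abstract's shrinking variant accelerates.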
MT+, integrating magnetotellurics to determine earth structure, physical state, and processes
Bedrosian, P.A.
2007-01-01
As one of the few deep-earth imaging techniques, magnetotellurics provides information on both the structure and physical state of the crust and upper mantle. Magnetotellurics is sensitive to electrical conductivity, which varies within the earth by many orders of magnitude and is modified by a range of earth processes. As with all geophysical techniques, magnetotellurics has a non-unique inverse problem and has limitations in resolution and sensitivity. As such, an integrated approach, either via the joint interpretation of independent geophysical models, or through the simultaneous inversion of independent data sets, is valuable, and at times essential to an accurate interpretation. Magnetotelluric data and models are increasingly integrated with geological, geophysical and geochemical information. This review considers recent studies that illustrate the ways in which such information is combined, from qualitative comparisons to statistical correlation studies to multi-property inversions. Also emphasized are the range of problems addressed by these integrated approaches, and their value in elucidating earth structure, physical state, and processes. © Springer Science+Business Media B.V. 2007.
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. Furthermore, this type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
Microwave imaging by three-dimensional Born linearization of electromagnetic scattering
NASA Astrophysics Data System (ADS)
Caorsi, S.; Gragnani, G. L.; Pastorino, M.
1990-11-01
An approach to microwave imaging is proposed that uses a three-dimensional vectorial form of the Born approximation to linearize the equation of electromagnetic scattering. The inverse scattering problem is numerically solved for three-dimensional geometries by means of the moment method. A pseudoinversion algorithm is adopted to overcome ill conditioning. Results show that the method is well suited for qualitative imaging purposes, while its capability for exactly reconstructing the complex dielectric permittivity is affected by the limitations inherent in the Born approximation and in ill conditioning.
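In a discretized setting, the Born linearization above reduces imaging to a linear system: the scattered field is (approximately) the Green's matrix applied to the object function weighted by the incident field, and the pseudoinversion step can be realized, for example, with a truncated SVD. The toy sketch below uses a random stand-in for the Green's matrix; it illustrates the algebra, not the paper's 3D moment-method discretization.

```python
import numpy as np

# Schematic Born-linearized inversion in a toy discrete setting: under the
# first Born approximation the scattered field depends linearly on the
# object function chi,  u_s = G @ diag(u_inc) @ chi,  so imaging reduces to
# a linear system.  Ill-conditioning is handled, as in the paper, by a
# pseudoinverse, here realized as a truncated SVD.

def tsvd_pinv(M, tol=1e-3):
    """Pseudoinverse that discards singular values below tol * s_max."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    keep = s > tol * s[0]
    return (Vt[keep].T / s[keep]) @ U[:, keep].T

rng = np.random.default_rng(2)
n_meas, n_vox = 40, 60
G = rng.standard_normal((n_meas, n_vox))  # stand-in for the Green's matrix
u_inc = rng.standard_normal(n_vox)        # incident field in each voxel
A = G * u_inc                             # equals G @ diag(u_inc)
chi_true = np.zeros(n_vox)
chi_true[25:30] = 1.0                     # object function (contrast)
u_s = A @ chi_true                        # Born-approximated scattered data
chi_hat = tsvd_pinv(A) @ u_s              # regularized inversion
```

Truncating the small singular values is the regularization that tames ill-conditioning, at the price of the qualitative (rather than exactly quantitative) reconstructions the abstract describes.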
An inverse approach to determining spatially varying arterial compliance using ultrasound imaging
NASA Astrophysics Data System (ADS)
McGarry, Matthew; Li, Ronny; Apostolakis, Iason; Nauleau, Pierre; Konofagou, Elisa E.
2016-08-01
The mechanical properties of arteries are implicated in a wide variety of cardiovascular diseases, many of which are expected to involve a strong spatial variation in properties that can be depicted by diagnostic imaging. A pulse wave inverse problem (PWIP) is presented, which can produce spatially resolved estimates of vessel compliance from ultrasound measurements of the vessel wall displacements. The 1D equations governing pulse wave propagation in a flexible tube are parameterized by the spatially varying properties, discrete cosine transform components of the inlet pressure boundary conditions, a viscous loss constant and a resistance outlet boundary condition. Gradient descent optimization is used to fit displacements from the model to the measured data by updating the model parameters. Inversion of simulated data showed that the PWIP can accurately recover the correct compliance distribution and inlet pressure under realistic conditions, even under high simulated measurement noise. Silicone phantoms with known compliance contrast were imaged with a clinical ultrasound system. The PWIP produced spatially and quantitatively accurate maps of the phantom compliance compared to independent static property estimates, and of the known locations of stiff inclusions (which were as small as 7 mm). The PWIP is necessary for these phantom experiments, as the spatiotemporal resolution, measurement noise and compliance contrast do not allow accurate tracking of the pulse wave velocity using traditional approaches (e.g. 50% upstroke markers). Results from simulations indicate that reflections generated from material interfaces may negatively affect wave velocity estimates, whereas these reflections are accounted for in the PWIP and do not cause problems.
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an ℓ¹ data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because most efficient algorithms for ℓ¹-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
Computing Fourier integral operators with caustics
NASA Astrophysics Data System (ADS)
Caday, Peter
2016-12-01
Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards, and forward and inverse discrete Fourier transforms, which can be computed in O(N^(n+(n-1)/2) log N) time for an n-dimensional FIO. The same preprocessed data also allow computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm's output, and easily extendible MATLAB/C++ source code is available from the author.
Separated Component-Based Restoration of Speckled SAR Images
2013-01-01
unsupervised change detection from SAR amplitude imagery," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 10, pp. 2972-2982, Oct. 2006. [5] F. Argenti, T... Sens., vol. 40, no. 10, pp. 2196-2212, Oct. 2002. [13] F. Argenti and L. Alparone, "Speckle removal from SAR images in the undecimated wavelet domain... iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, no. 11, pp. 1413
Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution
1989-11-01
to image larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the... well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode... slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the
A microwave tomography strategy for structural monitoring
NASA Astrophysics Data System (ADS)
Catapano, I.; Crocco, L.; Isernia, T.
2009-04-01
The capability of electromagnetic waves to penetrate optically dense regions can be conveniently exploited to provide highly informative images of the internal status of manmade structures in a non-destructive and minimally invasive way. In this framework, as an alternative to the widely adopted radar techniques, Microwave Tomography approaches are worth considering. As a matter of fact, they can accurately reconstruct the permittivity and conductivity distributions of a given region from the knowledge of a set of incident fields and measurements of the corresponding scattered fields. As far as cultural heritage conservation is concerned, this allows one not only to detect the anomalies which can possibly damage the integrity and the stability of the structure, but also to characterize their morphology and electrical features, which are useful information for properly addressing repair actions. However, since a nonlinear and ill-posed inverse scattering problem has to be solved, proper regularization strategies and sophisticated data processing tools have to be adopted to ensure the reliability of the results. To pursue this aim, in recent years considerable attention has been focused on the advantages introduced by diversity in data acquisition (multi-frequency/static/view data) [1,2], as well as on the analysis of the factors affecting the solution of an inverse scattering problem [3]. Moreover, it has been shown in [4] how the degree of nonlinearity of the relationship between the scattered field and the electromagnetic parameters of the targets can be changed by properly choosing the mathematical model adopted to formulate the scattering problem.
Exploiting the above results, in this work we propose an imaging procedure in which the inverse scattering problem is formulated as an optimization problem: the mathematical relationship between data and unknowns is expressed by means of a convenient integral-equation model, and the sought solution is defined as the global minimum of a cost functional. In particular, a local minimization scheme is adopted, together with a pre-processing step devoted to preliminarily assessing the location and shape of the anomalies. The effectiveness of the proposed strategy has been preliminarily assessed by means of numerical examples concerning the diagnostics of masonry structures, which will be shown at the Conference. [1] O. M. Bucci, L. Crocco, T. Isernia, and V. Pascazio, "Subsurface inverse scattering problems: Quantifying, qualifying and achieving the available information," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 5, pp. 2527-2538, 2001. [2] R. Persico, R. Bernini, and F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the distorted Born approximation," IEEE Trans. Antennas Propag., vol. 53, no. 6, pp. 1875-1887, Jun. 2005. [3] I. Catapano, L. Crocco, M. D'Urso, and T. Isernia, "On the effect of support estimation and of a new model in 2-D inverse scattering problems," IEEE Trans. Antennas Propag., vol. 55, no. 6, pp. 1895-1899, 2007. [4] M. D'Urso, I. Catapano, L. Crocco, and T. Isernia, "Effective solution of 3-D scattering problems via series expansions: Applicability and a new hybrid scheme," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 3, pp. 639-648, 2007.
A new approach to ultrasonic elasticity imaging
NASA Astrophysics Data System (ADS)
Hoerig, Cameron; Ghaboussi, Jamshid; Fatemi, Mostafa; Insana, Michael F.
2016-04-01
Biomechanical properties of soft tissues can provide information about local health status. The cells in pathological tissues often form a stiff extracellular environment, which is a sensitive, early diagnostic indicator of disease. Quasi-static ultrasonic elasticity imaging provides a way to image the mechanical properties of tissues. Strain images provide a map of relative tissue stiffness, but ambiguities and artifacts limit their diagnostic value. Accurately mapping the intrinsic mechanical parameters of a region may increase diagnostic specificity. However, the inverse problem, whereby force and displacement estimates are used to estimate a constitutive matrix, is ill-conditioned. Our method avoids many of the issues involved in solving the inverse problem, such as unknown boundary conditions and incomplete information about the stress field, by building an empirical model directly from measured data. Surface force and volumetric displacement data gathered during imaging are used in conjunction with the Autoprogressive method to teach artificial neural networks the stress-strain relationship of tissues. The Autoprogressive algorithm has been used successfully in many civil engineering applications and to estimate ocular pressure and corneal stiffness; here, we are expanding its use to any tissues imaged ultrasonically. We show that force-displacement data recorded with an ultrasound probe and displacements estimated at a few points in the imaged region can be used to estimate the full stress and strain vectors throughout an entire model while assuming only conservation laws. We also demonstrate methods to parameterize the mechanical properties based on the stress-strain response of trained neural networks. This method is a fundamentally new approach to medical elasticity imaging that for the first time provides full stress and strain vectors from one set of observation data.
Hamilton, S J
2017-05-22
Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. Image reconstruction in EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, which makes the reconstruction task challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, and both absolute and time-difference imaging gives the user great flexibility without a high computational cost.
Thin-section ratiometric Ca2+ images obtained by optical sectioning of fura-2 loaded mast cells
1992-01-01
The availability of the ratiometric Ca2+ indicator dyes fura-2 and indo-1, together with advances in digital imaging and computer technology, has made it possible to detect Ca2+ changes in single cells with high temporal and spatial resolution. However, the optical properties of the conventional epifluorescence microscope do not produce a perfect image of the specimen. Instead, the observed image is a spatially low-pass-filtered version of the object and is contaminated with out-of-focus information. As a result, the image has reduced contrast and an increased depth of field. This problem is especially important for measurements of localized Ca2+ concentrations. One solution is to use a scanning confocal microscope, which detects only in-focus information, but this approach has several disadvantages for low-light fluorescence measurements in living cells. An alternative approach is to use digital image processing and a deblurring algorithm to remove the out-of-focus information using knowledge of the point spread function of the microscope. These algorithms generally require a stack of two-dimensional images taken at different focal planes, although the "nearest neighbor deblurring" algorithm requires only one image above and one below the image plane. We have used a modification of this scheme to construct a simple inverse filter, which extracts optical sections comparable to those of the nearest-neighbors scheme but without the need for adjacent image sections. We have used this "no neighbors" processing scheme to deblur images of fura-2-loaded mast cells from beige mice and to generate high-resolution ratiometric Ca2+ images of thin sections through the cell. The shallow depth of field of these images is demonstrated by taking pairs of images at focal planes 0.5 microns apart. The secretory granules, which exclude the fura-2, appear in focus in all sections, and distinct changes in their size and shape can be seen in adjacent sections.
In addition, we show, with the aid of model objects, how the combination of inverse filtering and ratiometric imaging corrects for some of the inherent limitations of using an inverse filter and can be used for quantitative measurements of localized Ca2+ gradients. With this technique, we can observe Ca2+ transients in narrow regions of cytosol between the secretory granules and plasma membrane that can be less than 0.5-microns wide. Moreover, these Ca2+ increases can be seen to coincide with the swelling of the secretory granules that follows exocytotic fusion. PMID:1730775
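The inverse-filter idea underlying the "no neighbors" scheme can be sketched in a few lines: deconvolve in the frequency domain, with a small constant added to the denominator to suppress noise amplification at frequencies the blur removes. This is a generic regularized inverse filter, not the authors' exact kernel; the Gaussian PSF, regularization constant, and toy object are assumptions for illustration.

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Normalized 2-D Gaussian point spread function
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def regularized_inverse_filter(blurred, psf, eps=1e-3):
    # Divide by the blur transfer function H in the frequency domain,
    # damped by eps where |H| is small to avoid amplifying noise.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H)**2 + eps)))

# Toy "granule": a bright square blurred by circular convolution with the PSF
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0
psf = gaussian_psf(64, 2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = regularized_inverse_filter(blurred, psf)
```

The restored image is substantially closer to the original object than the blurred one, at the cost of slight high-frequency attenuation controlled by eps.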
NASA Astrophysics Data System (ADS)
Day-Lewis, F. D.
2014-12-01
Geophysical imaging (e.g., electrical, radar, seismic) can provide valuable information for the characterization of hydrologic properties and monitoring of hydrologic processes, as evidenced in the rapid growth of literature on the subject. Geophysical imaging has been used for monitoring tracer migration and infiltration, mapping zones of focused groundwater/surface-water exchange, and verifying emplacement of amendments for bioremediation. Despite the enormous potential for extraction of hydrologic information from geophysical images, there also is potential for misinterpretation and over-interpretation. These concerns are particularly relevant when geophysical results are used within quantitative frameworks, e.g., conversion to hydrologic properties through petrophysical relations, geostatistical estimation and simulation conditioned to geophysical inversions, and joint inversion. We review pitfalls to interpretation associated with limited image resolution, spatially variable image resolution, incorrect data weighting, errors in the timing of measurements, temporal smearing resulting from changes during data acquisition, support-volume/scale effects, and incorrect assumptions or approximations involved in modeling geophysical or other jointly inverted data. A series of numerical and field-based examples illustrate these potential problems. Our goal in this talk is to raise awareness of common pitfalls and present strategies for recognizing and avoiding them.
Iterative electromagnetic Born inversion applied to earth conductivity imaging
NASA Astrophysics Data System (ADS)
Alumbaugh, D. L.
1993-08-01
This thesis investigates the use of a fast imaging technique to deduce the spatial conductivity distribution in the earth from low frequency (less than 1 MHz), cross well electromagnetic (EM) measurements. The theory embodied in this work is the extension of previous strategies and is based on the Born series approximation to solve both the forward and inverse problem. Nonlinear integral equations are employed to derive the series expansion which accounts for the scattered magnetic fields that are generated by inhomogeneities embedded in either a homogeneous or a layered earth. A sinusoidally oscillating, vertically oriented magnetic dipole is employed as a source, and it is assumed that the scattering bodies are azimuthally symmetric about the source dipole axis. The use of this model geometry reduces the 3-D vector problem to a more manageable 2-D scalar form. The validity of the cross well EM method is tested by applying the imaging scheme to two sets of field data. Images of the data collected at the Devine, Texas test site show excellent correlation with the well logs. Unfortunately there is a drift error present in the data that limits the accuracy of the results. A more complete set of data collected at the Richmond field station in Richmond, California demonstrates that cross well EM can be successfully employed to monitor the position of an injected mass of salt water. Both the data and the resulting images clearly indicate the plume migrates toward the north-northwest. The plausibility of these conclusions is verified by applying the imaging code to synthetic data generated by a 3-D sheet model.
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the solution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the solution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the companion paper. The parallel efficiency and the scalability of the code will also be quantified.
Riverine Bathymetry Imaging with Indirect Observations
NASA Astrophysics Data System (ADS)
Farthing, M.; Lee, J. H.; Ghorbanidehno, H.; Hesser, T.; Darve, E. F.; Kitanidis, P. K.
2017-12-01
Bathymetry (i.e., depth) imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. In particular, the use of surface velocity measurements has been actively investigated recently, since they are easy to acquire at low cost in all river conditions and surface velocities are sensitive to river depth. In this work, we image riverbed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2D shallow water equations (SWE). The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by a low-rank representation of the covariance matrix structure, is presented and applied to two "twin" riverine bathymetry identification problems. To compare efficiency and effectiveness, an ensemble-based approach is also applied to the test problems. Results demonstrate that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy. Notably, the results obtained from PCGA capture small-scale bathymetry features irrespective of the initial guess through successive linearization of the forward model. Analysis of the direct survey data of the riverine bathymetry used in one of the test problems shows an efficient, parsimonious choice of the solution basis in PCGA, such that the number of numerical model runs needed to achieve the inversion results is close to the minimum number that reconstructs the underlying bathymetry.
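The low-rank covariance representation at the heart of PCGA can be illustrated in a few lines: a smooth prior covariance is well approximated by its leading eigenpairs, so inversion only ever touches a tall-skinny factor. The exponential kernel, correlation length, and rank below are assumptions for illustration, not values from the study.

```python
import numpy as np

# Prior covariance on a 1-D grid: exponential kernel with correlation
# length 0.2 (assumed values, chosen only to show the eigenvalue decay).
n = 200
s = np.linspace(0.0, 1.0, n)
Q = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)

# Keep only the K leading eigenpairs: Q ≈ V_K diag(w_K) V_K^T.
# PCGA-type methods work with this factor instead of the full n x n matrix.
K = 20
w, V = np.linalg.eigh(Q)                 # ascending eigenvalues
idx = np.argsort(w)[::-1][:K]            # K largest
Q_lr = (V[:, idx] * w[idx]) @ V[:, idx].T

rel_err = np.linalg.norm(Q - Q_lr) / np.linalg.norm(Q)  # Frobenius, well below 5%
```

Because the eigenvalues of smooth kernels decay quickly, a rank of 20 out of 200 already reproduces the covariance almost exactly, which is what makes the approach scale to large grids.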
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog of the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation, and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter as the time step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms based not only on the Euler method but also on lower-order Runge-Kutta methods applied to discretize the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
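For concreteness, here is a minimal sequential (unblocked) MART iteration of the kind the abstract refers to; in the continuous-time reading, the exponent gamma plays the role of the Euler time step. The 3x3 toy system is an assumption for illustration, not a tomography geometry.

```python
import numpy as np

def mart(A, b, gamma=1.0, n_sweeps=500):
    # Sequential multiplicative ART: each row update multiplies x_j by
    # (b_i / (A x)_i) ** (gamma * a_ij), so x stays strictly positive.
    # For consistent systems with 0 < gamma * a_ij <= 1 this converges
    # to a non-negative solution of A x = b.
    x = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x *= (b[i] / (A[i] @ x)) ** (gamma * A[i])
    return x

# Consistent toy system with a unique positive solution
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x = mart(A, A @ x_true)
```

Shrinking gamma corresponds to a smaller time step of the underlying differential equation: slower but closer to the continuous trajectory, which is exactly the correspondence the paper formalizes.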
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements produces a large Jacobian matrix, which can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse-matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. Converting the Jacobian matrix to a sparse format eliminates the zero elements, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on reconstruction results.
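The two ingredients, thresholding the Jacobian into sparse storage and then a CG solve of the regularized normal equations, can be sketched as follows. The matrix sizes, threshold, and damping are assumptions for illustration; the paper's block-wise parallel CG is not reproduced here.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
J = rng.normal(size=(200, 100))
J[np.abs(J) < 1.0] *= 1e-8          # mimic many near-zero sensitivities

# Step 1: threshold tiny entries to exact zero and store in sparse format,
# so the zero elements cost no memory.
J_thresh = np.where(np.abs(J) < 1e-6, 0.0, J)
J_sp = csr_matrix(J_thresh)

# Step 2: damped normal equations (J^T J + lam I) x = J^T d solved with CG.
x_true = rng.normal(size=100)
d = J_sp @ x_true                    # synthetic, consistent data
lam = 1e-6
JtJ = (J_sp.T @ J_sp).toarray() + lam * np.eye(100)   # small example: dense OK
rhs = J_sp.T @ d
x, info = cg(JtJ, rhs, maxiter=2000)
```

At realistic problem sizes one would keep J^T J implicit (a LinearOperator applying J_sp twice) rather than forming it; the dense Gram matrix here is only for brevity.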
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser-fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g., optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue in which closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) in seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were: mean 1470 m⁻¹, median 1360 m⁻¹, standard deviation 369 m⁻¹, minimum 933 m⁻¹, and maximum 2260 m⁻¹. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures. This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g., stochastic finite element methods.
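The parameter-space search can be illustrated with a deliberately simplified stand-in for the setup: fit an effective attenuation coefficient in a point-source model phi(r) = exp(-mu_eff * r) / r using a gradient-based quasi-Newton method. The model, radii, and noiseless synthetic data are assumptions for illustration; the paper's Green's-function bioheat model and MR temperature data are much richer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model and synthetic "training data"
mu_true = 1470.0                        # m^-1, matching the reported mean
r = np.linspace(0.002, 0.010, 50)       # radii in meters (assumed geometry)
data = np.exp(-mu_true * r) / r         # noiseless synthetic observations

def misfit(p):
    # Sum-of-squares data misfit as a function of mu_eff
    return np.sum((np.exp(-p[0] * r) / r - data) ** 2)

def grad(p):
    # Analytic gradient: d/dmu [exp(-mu r)/r] = -exp(-mu r)
    resid = np.exp(-p[0] * r) / r - data
    return np.array([np.sum(2.0 * resid * (-np.exp(-p[0] * r)))])

# Bounded quasi-Newton search (L-BFGS-B), started away from the optimum
res = minimize(misfit, x0=[1000.0], jac=grad,
               method='L-BFGS-B', bounds=[(100.0, 5000.0)])
mu_est = res.x[0]
```

With noiseless data the optimizer recovers mu_true essentially exactly; with real temperature data the fit would inherit the model bias the abstract discusses.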
Inoue, Yuji; Yoneyama, Masami; Nakamura, Masanobu; Ozaki, Satoshi; Ito, Kenjiro; Hiura, Mikio
2012-01-01
Vulnerable plaque is implicated in the induction of ischemic symptoms, and magnetic resonance imaging of the carotid artery is valuable for detecting such plaque. The magnetization-prepared rapid acquisition with gradient echo (MPRAGE) method can detect hemorrhagic vulnerable plaque as a high-intensity signal; however, blood flow is not sufficiently masked by this method. The contrast for plaque in T
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitanidis, Peter
As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
Saranathan, Manojkumar; Worters, Pauline W; Rettmann, Dan W; Winegar, Blair; Becker, Jennifer
2017-12-01
A pedagogical review of fluid-attenuated inversion recovery (FLAIR) and double inversion recovery (DIR) imaging is conducted in this article. The basics of the two pulse sequences are first described, including the details of the inversion preparation and imaging sequences, with accompanying mathematical formulae for choosing the inversion time in a variety of scenarios on clinical MRI scanners. Magnetization preparation (or T2prep), a strategy for improving image signal-to-noise ratio and contrast and reducing T1 weighting at high field strengths, is also described. Lastly, image artifacts commonly associated with FLAIR and DIR are described with clinical examples, to help avoid misdiagnosis. Level of Evidence: 5. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:1590-1600. © 2017 International Society for Magnetic Resonance in Medicine.
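The standard textbook formula for choosing the inversion time (of the kind the review derives) follows from nulling the longitudinal magnetization of the target tissue. Assuming a single inversion pulse, readout at TI, and relaxation over the rest of TR, the null condition 1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0 gives TI = T1 * ln(2 / (1 + exp(-TR/T1))). The T1 and TR values below are assumed round numbers, not figures from the article.

```python
import numpy as np

def null_ti(t1, tr):
    # Inversion time that nulls a tissue with relaxation time t1 at
    # repetition time tr, from 1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0.
    # As tr -> infinity this reduces to the familiar TI = T1 * ln(2).
    return t1 * np.log(2.0 / (1.0 + np.exp(-tr / t1)))

# FLAIR example: null CSF, taking T1 ~ 4000 ms and TR = 10000 ms (assumed).
ti_csf = null_ti(4000.0, 10000.0)   # falls in the usual FLAIR TI range
```

Note the finite-TR correction matters: at TR = 10 s the nulling TI is several hundred milliseconds shorter than the infinite-TR value T1 * ln(2) ≈ 2773 ms.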
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
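As a point of reference for the Poisson deconvolution problem described above, here is the classical Richardson-Lucy (EM) iteration, the unregularized multiplicative baseline that regularized schemes such as the Hessian Schatten-norm approach improve upon. This is not the paper's algorithm; the toy PSF and object are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=100):
    # Multiplicative EM iteration for Poisson deconvolution:
    #   x <- x * K^T( b / (K x) ),  with K the blur operator.
    # Non-negativity is preserved automatically because every factor is >= 0.
    eps = 1e-12
    x = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]          # adjoint of convolution by psf
    for _ in range(n_iter):
        Kx = fftconvolve(x, psf, mode='same')
        x *= fftconvolve(blurred / (Kx + eps), psf_flip, mode='same')
    return x

# Toy object blurred by a separable Gaussian PSF
obj = np.zeros((48, 48))
obj[20:28, 20:28] = 10.0
g = np.exp(-np.linspace(-2.0, 2.0, 9) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
blurred = np.clip(fftconvolve(obj, psf, mode='same'), 0.0, None)
restored = richardson_lucy(blurred, psf)
```

On noisy data Richardson-Lucy eventually amplifies noise as iterations grow, which is precisely the behavior that regularizers such as Hessian Schatten norms are introduced to control.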
Spectral reflectance inversion with high accuracy on green target
NASA Astrophysics Data System (ADS)
Jiang, Le; Yuan, Jinping; Li, Yong; Bai, Tingzhu; Liu, Shuoqiong; Jin, Jianzhou; Shen, Jiyun
2016-09-01
Using Landsat-7 ETM remote sensing data, the inversion of the spectral reflectance of green wheat in the visible and near-infrared wavebands in Yingke, China is studied. To address the problem of low inversion accuracy, a custom atmospheric-conditions method based on the MODerate resolution atmospheric TRANsmission (MODTRAN) model is put forward, in which real atmospheric parameters are taken into account. The atmospheric radiative transfer theory used to calculate the atmospheric parameters is introduced first, and the inversion process for spectral reflectance is then illustrated in detail. Finally, the inversion result is compared with that of the simulated atmospheric-conditions method widely used by previous researchers. The comparison shows that the inversion accuracy of this paper's method is higher in all inversion bands; the spectral reflectance curve inverted by this paper's method is more similar to the measured reflectance curve of wheat and better reflects the spectral reflectance characteristics of a green plant, which differ markedly from those of a green artificial target. Thus, whether a green target is a plant or an artificial target can be judged by reflectance inversion based on remote sensing imagery. This research is helpful for identifying green artificial targets hidden in greenery, which is of great significance for the precise strike of green-camouflaged weapons in the military field.
Merging information in geophysics: the triumvirate of geology, geophysics, and petrophysics
NASA Astrophysics Data System (ADS)
Revil, A.
2016-12-01
We know that geophysical inversion is non-unique and that many classical regularization techniques are unphysical. Despite this, we like to use them because of their simplicity and because geophysicists are often afraid to bias the inverse problem by introducing too much prior information (in a broad sense). It is also clear that geophysics is done on geological objects that are not random structures. Spending some time with a geologist in the field, before organizing a geophysical field campaign, is always an instructive experience. Finally, the measured properties are connected to physicochemical and textural parameters of the porous media and the interfaces between the various phases of a porous body. Some fundamental parameters may control the geophysical observations or their time variations. If we want to improve our geophysical tomograms, we need to be risk-takers and acknowledge, or rather embrace, the cross-fertilization arising from coupling geology, geophysics, and petrophysics. In this presentation, I will discuss various techniques to do so, including non-stationary geostatistical descriptors, facies deformation, cross-coupled petrophysical properties using petrophysical clustering, and image-guided inversion. I will show applications to a number of relevant cases in hydrogeophysics. From these applications, it may become clear that there are many ways to address inverse or time-lapse inverse problems, and geophysicists have to be pragmatic in choosing methods depending on the degree of available prior information.
Terahertz multistatic reflection imaging.
Dorney, Timothy D; Symes, William W; Baraniuk, Richard G; Mittleman, Daniel M
2002-07-01
We describe a new imaging method using single-cycle pulses of terahertz (THz) radiation. This technique emulates the data collection and image processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a migration procedure to solve the inverse problem; this permits us to reconstruct the location, the shape, and the refractive index of targets. We show examples for both metallic and dielectric model targets, and we perform velocity analysis on dielectric targets to estimate the refractive indices of imaged components. These results broaden the capabilities of THz imaging systems and also demonstrate the viability of the THz system as a test bed for the exploration of new seismic processing methods.
Orthoscopic real-image display of digital holograms.
Makowski, P L; Kozacki, T; Zaperty, W
2017-10-01
We present a practical solution for the long-standing problem of depth inversion in real-image holographic display of digital holograms. It relies on a field lens inserted in front of the spatial light modulator device addressed by a properly processed hologram. The processing algorithm accounts for pixel size and wavelength mismatch between capture and display devices in a way that prevents image deformation. Complete images of large dimensions are observable from one position with a naked eye. We demonstrate the method experimentally on a 10-cm-long 3D object using a single full-HD spatial light modulator, but it can supplement most holographic displays designed to form a real image, including circular wide angle configurations.
Interferometric inversion for passive imaging and navigation
2017-05-01
AFRL-AFOSR-VA-TR-2017-0096. Final report for AFOSR grant FA9550-15-1-0078, "Interferometric inversion for passive imaging and navigation," Laurent Demanet, Massachusetts Institute of Technology; period of performance February 2015 - January 2017.
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
NASA Astrophysics Data System (ADS)
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
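The core construction, l1-regularized least squares with sparsity imposed in a discrete cosine basis, can be sketched with a 1-D toy problem solved by iterative soft thresholding (ISTA). The paper solves the equivalent quadratic program with a primal-dual interior point method; ISTA is substituted here only for brevity, and the sizes, seed, and regularization weight are assumptions for illustration.

```python
import numpy as np
from scipy.fft import idct

# Ground truth that is sparse in the DCT domain (3 active coefficients)
rng = np.random.default_rng(1)
n, m = 64, 48                                   # fewer data than unknowns
c_true = np.zeros(n)
c_true[[2, 7, 15]] = [3.0, -2.0, 1.5]
x_true = idct(c_true, norm='ortho')

# Underdetermined linear forward problem d = A x
A = rng.normal(size=(m, n)) / np.sqrt(m)
d = A @ x_true

# Combined operator acting on DCT coefficients: B c = A * idct(c)
B = A @ idct(np.eye(n), axis=0, norm='ortho')
L = np.linalg.norm(B, 2) ** 2                   # Lipschitz constant of the gradient

# ISTA for min_c 0.5 ||B c - d||^2 + lam ||c||_1
lam = 0.01
c = np.zeros(n)
for _ in range(1000):
    c -= (B.T @ (B @ c - d)) / L                # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # soft threshold
x_rec = idct(c, norm='ortho')
```

Even with only 48 measurements for 64 unknowns, the sparsity prior pins down the three active DCT coefficients, which is the mechanism the abstract credits for preserving sharp resistivity fronts with fewer data.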
Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.
NASA Astrophysics Data System (ADS)
Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.
2016-12-01
Image reconstruction algorithms derived from electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz., the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS proved to reconstruct the sub-surface image effectively at lower computational cost. This was observed by a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
Mathematical Problems in Imaging in Random Media
2015-01-15
of matrix Γ in [1], in the context of intensity-based imaging of remote sources in random waveguides. That work is a direct application of the results … The method determines which modes are incoherent (i.e., for which j we have z > ε⁻²S_j) and filters them out. It images by time-reversing the received wave, weighting the modes based on their coherence … (transport-based inversion), so we regularize to obtain (|ξ̂₁|²/β₁, …, |ξ̂_N|²/β_N)ᵀ ≈ Σ_{j=1}^{J} e^{|Λ_j|Z} (u_jᵀ B Q⁻¹ M) u_j, (31) for J chosen so that …
NASA Astrophysics Data System (ADS)
Davis, A. B.; Bal, G.; Chen, J.
2015-12-01
Operational remote sensing of microphysical and optical cloud properties is invariably predicated on the assumption of plane-parallel slab geometry for the targeted cloud. The sole benefit of this often-questionable assumption about the cloud is that it leads to one-dimensional (1D) radiative transfer (RT)---a textbook, computationally tractable model. We present new results as evidence that, thanks to converging advances in 3D RT, inverse problem theory, algorithm implementation, and computer hardware, we are at the dawn of a new era in cloud remote sensing where we can finally go beyond the plane-parallel paradigm. Granted, the plane-parallel/1D RT assumption is reasonable for spatially extended stratiform cloud layers, as well as smoothly distributed background aerosol layers. However, these 1D RT-friendly scenarios exclude cases that are critically important for climate physics. 1D RT---whence operational cloud remote sensing---fails catastrophically for cumuliform clouds that have fully 3D outer shapes and internal structures driven by shallow or deep convection. For these situations, the first order of business in a robust characterization by remote sensing is to abandon the slab geometry framework and determine the 3D geometry of the cloud, as a first step toward bona fide 3D cloud tomography. With this specific goal in mind, we deliver a proof-of-concept for an entirely new kind of remote sensing applicable to 3D clouds. It is based on highly simplified 3D RT and exploits multi-angular suites of cloud images at high spatial resolution. Airborne sensors like AirMSPI readily acquire such data. The key element of the reconstruction algorithm is a sophisticated solution of the nonlinear inverse problem via linearization of the forward model and an iteration scheme supported, where necessary, by adaptive regularization.
Currently, the demo uses a 2D setting to show how either vertical profiles or horizontal slices of the cloud can be accurately reconstructed. Extension to 3D volumes is straightforward but the next challenge is to accommodate images at lower spatial resolution, e.g., from MISR/Terra. G. Bal, J. Chen, and A.B. Davis (2015). Reconstruction of cloud geometry from multi-angle images, Inverse Problems in Imaging (submitted).
3D optical tomography in the presence of void regions
NASA Astrophysics Data System (ADS)
Riley, J.; Dehghani, Hamid; Schweiger, Martin; Arridge, Simon R.; Ripoll, Jorge; Nieto-Vesperinas, Manuel
2000-12-01
We present an investigation of the effect of a 3D non-scattering gap region on image reconstruction in diffuse optical tomography. The void gap is modelled by the Radiosity-Diffusion method and the inverse problem is solved using the adjoint field method. The case of a sphere with concentric spherical gap is used as an example.
3D optical tomography in the presence of void regions.
Riley, J; Dehghani, H; Schweiger, M; Arridge, S; Ripoll, J; Nieto-Vesperinas, M
2000-12-18
We present an investigation of the effect of a 3D non-scattering gap region on image reconstruction in diffuse optical tomography. The void gap is modelled by the Radiosity-Diffusion method and the inverse problem is solved using the adjoint field method. The case of a sphere with concentric spherical gap is used as an example.
Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.
2011-01-01
Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International © 2011 RAS.
NASA Astrophysics Data System (ADS)
Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal
2012-06-01
Migration is an important issue for seismic imaging in complex structures. In the past decade, depth imaging has become an important tool for producing accurate images, in place of time-domain imaging. The challenge of depth migration methods, however, lies in revealing the complex structure of the subsurface. There are many depth migration methods, each with its advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation. We hope this method can serve as a solution for imaging complex structures in Indonesia, especially in zones rich in thrust faults. In this research, we develop a recent advance in wave-equation migration based on time-domain inverse scattering, which uses a more natural wave propagation through scattered waves. This wave-equation pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true-amplitude recovery, an inverse-of-divergence procedure and recovery of transmission loss are incorporated in the pre-stack migration. Benchmarks of the proposed inverse scattering pre-stack depth migration against other migration methods are also presented, i.e.: wave-equation pre-stack depth migration, wave-equation depth migration, and the pre-stack time migration method. The proposed inverse scattering pre-stack depth migration successfully imaged a fault-rich zone containing extremely steep dips, producing a seismic image of superior quality. The image quality of the inverse scattering migration is much better than that of the other migration methods.
Basis set expansion for inverse problems in plasma diagnostic analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, B.; Ruiz, C. L.
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
Basis set expansion for inverse problems in plasma diagnostic analysis
NASA Astrophysics Data System (ADS)
Jones, B.; Ruiz, C. L.
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)], 10.1063/1.1482156 is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
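A toy, BASEX-style sketch of the basis-set expansion approach described in the two entries above: the unknown radial emissivity is expanded in Gaussian basis functions, each basis function is forward Abel-projected numerically, and the expansion coefficients are fit to the noisy projection by regularized least squares. The grid, basis widths, shell source, and noise level are all illustrative assumptions, not the diagnostic setup of the paper.

```python
import numpy as np

R, n = 1.0, 200
r = np.linspace(0.0, R, n)
y = r.copy()                              # projection (chord) coordinate

def abel_project(f, r, y):
    """Discrete forward Abel transform P(y) = 2 * int_y^R f(r) r / sqrt(r^2 - y^2) dr."""
    P = np.zeros_like(y)
    for i, yi in enumerate(y[:-1]):
        mask = r > yi
        rr, ff = r[mask], f[mask]
        g = ff * rr / np.sqrt(rr**2 - yi**2)
        P[i] = 2.0 * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(rr))  # trapezoid rule
    return P

# Gaussian radial basis and its Abel-projected counterpart
centers = np.linspace(0.0, R, 15)
G = np.exp(-((r[:, None] - centers[None, :]) / 0.06) ** 2)       # n x k basis
GP = np.column_stack([abel_project(G[:, k], r, y) for k in range(G.shape[1])])

# Synthetic annular-shell source, projected and lightly noised
f_true = np.exp(-((r - 0.5) / 0.1) ** 2)
rng = np.random.default_rng(1)
p = abel_project(f_true, r, y) + 0.01 * rng.standard_normal(n)

# Regularized least-squares fit of the coefficients, then reconstruction
lam = 1e-3
a = np.linalg.solve(GP.T @ GP + lam * np.eye(GP.shape[1]), GP.T @ p)
f_rec = G @ a
```

Fitting in the projected basis rather than inverting point-by-point is what gives the method its noise tolerance, at the cost of resolution set by the basis widths.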
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic ones, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
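The one-parameter idea above can be illustrated with a 1D stand-in problem: push a random synthetic model through a forward operator and a regularized inverse, then fit a single resolution length L so that the inverse solution is best explained as the true model smoothed over L. The Gaussian-blur forward operator and Tikhonov inverse here are illustrative substitutes for whatever forward/inversion machinery the real problem uses.

```python
import numpy as np

n = 200
x = np.arange(n)

def gauss_smooth_matrix(L):
    """Row-normalized Gaussian smoothing operator of width L (grid units)."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)
    return K / K.sum(axis=1, keepdims=True)

# Stand-in inverse problem: blurred, noisy observations + Tikhonov inverse
G = gauss_smooth_matrix(4.0)
rng = np.random.default_rng(2)
m_true = rng.standard_normal(n)                  # random synthetic model
d = G @ m_true + 0.02 * rng.standard_normal(n)
lam = 0.05
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

# One-parameter fit: which smoothing length best maps m_true onto m_est?
Ls = np.linspace(0.5, 12.0, 60)
misfit = [np.linalg.norm(m_est - gauss_smooth_matrix(L) @ m_true) for L in Ls]
L_res = Ls[int(np.argmin(misfit))]               # statistical resolution length
```

The fitted L_res sits between zero (perfect resolution) and the forward blur width, reflecting the smoothing the regularized inverse cannot undo.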
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
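A toy sketch of the subspace splitting at the heart of the subspace-based optimization method described above: the "deterministic" part of the contrast source is read off directly from the dominant singular vectors of the radiation operator, with no optimization involved; only the orthogonal complement would need optimizing. The operator here is a randomly built matrix with a decaying spectrum, an illustrative stand-in for the Green's-function-based operator of the paper, and the rank cut-off is an assumed noise threshold.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 40, 100
# Build an operator with a decaying singular spectrum, mimicking the
# ill-posedness of a radiation operator.
U0, _ = np.linalg.qr(rng.standard_normal((m, m)))
V0, _ = np.linalg.qr(rng.standard_normal((n, m)))
sigma = np.exp(-np.arange(m) / 3.0)
Phi = U0 @ np.diag(sigma) @ V0.T

s_true = rng.standard_normal(n)                   # unknown contrast source
d = Phi @ s_true + 1e-4 * rng.standard_normal(m)  # measured field data

# Deterministic part: project the data onto the L dominant singular vectors
U, S, Vt = np.linalg.svd(Phi, full_matrices=False)
L = int(np.sum(S > 1e-3 * S[0]))                  # keep well-resolved components
s_det = Vt[:L].T @ ((U[:, :L].T @ d) / S[:L])     # spectrum part, no optimization
```

The deterministic part already explains essentially all of the data; restricting the remaining search to the complement subspace is what speeds up convergence and adds noise robustness.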
Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).
Goldfarb, James W
2010-04-01
The divided inversion recovery technique is an MRI separation method based on tissue T1 relaxation differences. When tissue T1 relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
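The polarity mechanism described above can be illustrated with a simple inversion-recovery signal model. This is a schematic sketch only, not the pulse-sequence implementation: ideal inversion and mono-exponential recovery are assumed, and the T180 setting and T1 values are illustrative round numbers (roughly 1.5 T magnitudes).

```python
import numpy as np

def mz_before_next_inversion(T180, T1, M0=1.0):
    """Longitudinal magnetization just before the next inversion pulse,
    assuming ideal inversion and simple exponential recovery:
    Mz(t) = M0 * (1 - 2 * exp(-t / T1))."""
    return M0 * (1.0 - 2.0 * np.exp(-T180 / T1))

T180 = 600.0                              # ms between inversion pulses (illustrative)
t1 = {"fat": 260.0, "water": 1000.0}      # rough T1 values in ms

polarity = {k: np.sign(mz_before_next_inversion(T180, v)) for k, v in t1.items()}
# Tissues whose null time T1*ln(2) is shorter than T180 recover to positive
# Mz before the next inversion; long-T1 tissues remain negative.  This sign
# (phase) difference is what the divided inversion recovery idea exploits
# for separation, and tuning T180 moves the boundary between the two groups.
```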
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion, including non-approximated physics and solving for the probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10⁴ to 10⁵ individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Robinson, Katherine M; Ninowski, Jerilyn E
2003-12-01
Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.
NASA Astrophysics Data System (ADS)
Darrh, A.; Downs, C. M.; Poppeliers, C.
2017-12-01
Born Scattering Inversion (BSI) of electromagnetic (EM) data is a geophysical imaging methodology for mapping weak conductivity, permeability, and/or permittivity contrasts in the subsurface. The high computational cost of full waveform inversion is reduced by adopting the First Born Approximation for scattered EM fields. This linearizes the inverse problem in terms of Born scattering amplitudes for a set of effective EM body sources within a 3D imaging volume. Estimation of scatterer amplitudes is subsequently achieved by solving the normal equations. Our present BSI numerical experiments entail Fourier transforming real-valued synthetic EM data to the frequency-domain, and minimizing the L2 residual between complex-valued observed and predicted data. We are testing the ability of BSI to resolve simple scattering models. For our initial experiments, synthetic data are acquired by three-component (3C) electric field receivers distributed on a plane above a single point electric dipole within a homogeneous and isotropic wholespace. To suppress artifacts, candidate Born scatterer locations are confined to a volume beneath the receiver array. Also, we explore two different numerical linear algebra algorithms for solving the normal equations: Damped Least Squares (DLS), and Non-Negative Least Squares (NNLS). Results from NNLS accurately recover the source location only for a large dense 3C receiver array, but fail when the array is decimated, or is restricted to horizontal component data. Using all receiver stations and all components per station, NNLS results are relatively insensitive to a sub-sampled frequency spectrum, suggesting that coarse frequency-domain sampling may be adequate for good target resolution. Results from DLS are insensitive to diminishing array density, but contain spatially oscillatory structure. DLS-generated images are consistently centered at the known point source location, despite an abundance of surrounding structure.
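A small sketch contrasting the two solvers compared above for the linearized scattering normal equations: damped least squares (Tikhonov) versus non-negative least squares. The random matrix G is a stand-in for the Born sensitivity matrix, with a single positive "point scatterer" playing the role of the point source; sizes, noise, and damping are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
m, n = 120, 60
G = rng.standard_normal((m, n))        # stand-in Born sensitivity matrix
v_true = np.zeros(n)
v_true[25] = 1.0                       # one positive point scatterer
d = G @ v_true + 0.05 * rng.standard_normal(m)

# Damped least squares: solve (G^T G + eps I) v = G^T d
eps = 1.0
v_dls = np.linalg.solve(G.T @ G + eps * np.eye(n), G.T @ d)

# Non-negative least squares: min ||G v - d||_2 subject to v >= 0
v_nnls, _ = nnls(G, d)
```

Consistent with the abstract's observations, NNLS returns a sparse non-negative amplitude map, while DLS spreads small oscillatory structure around the scatterer but still peaks at the correct location.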
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
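The alternation described above (separate component solves, then multiplier updates steering the models toward agreement) is the structure of consensus ADMM, a standard instance of the augmented Lagrangian technique. The sketch below decomposes one least-squares problem into two data subsets with the equality constraint m1 = m2; the operators, sizes, and penalty rho are illustrative stand-ins, not the seismic problem itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
A1 = rng.standard_normal((30, n))       # data subset 1 (e.g., body waves)
A2 = rng.standard_normal((30, n))       # data subset 2 (e.g., surface waves)
m_true = rng.standard_normal(n)
d1, d2 = A1 @ m_true, A2 @ m_true       # noiseless synthetic data

rho = 1.0                               # augmented Lagrangian penalty
z = np.zeros(n)                         # consensus (common) model
u1 = np.zeros(n); u2 = np.zeros(n)      # scaled Lagrange multipliers
H1 = A1.T @ A1 + rho * np.eye(n)
H2 = A2.T @ A2 + rho * np.eye(n)
for _ in range(100):
    # Separate component solves, each pulled toward the consensus model
    m1 = np.linalg.solve(H1, A1.T @ d1 + rho * (z - u1))
    m2 = np.linalg.solve(H2, A2.T @ d2 + rho * (z - u2))
    z = 0.5 * (m1 + u1 + m2 + u2)       # merge step
    u1 += m1 - z                        # multiplier (dual) updates steer the
    u2 += m2 - z                        # component models toward agreement
```

After the loop, the consensus model z solves the full joint problem even though each subproblem only ever saw its own data subset.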
Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation
NASA Astrophysics Data System (ADS)
Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra
2017-12-01
Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super-resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super-resolution image. The method also shows its effectiveness for thick biological samples.
Linearized inversion of multiple scattering seismic energy
NASA Astrophysics Data System (ADS)
Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad
2014-05-01
Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. Imaging seismic data under the single-scattering assumption therefore does not locate multiple-bounce events in their actual subsurface positions. However, properly imaging internal multiples has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resulting image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model and is heavily masked by migration artifacts and the source-wavelet fingerprint, which needs to be properly deconvolved. Hence, we propose a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-squares image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. We then pose the problem of imaging double-scattering energy as a least-squares minimization problem that requires solving the normal equations of the following form: GᵀGv = Gᵀd, (1) where G is a linearized forward modeling operator that predicts double-scattered seismic data and Gᵀ is the linearized adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system; hence, we used a quasi-Newton optimization technique to find the least-squares minimizer.
In this approach, an estimate of the Hessian matrix that contains curvature information is modified at every iteration by a low-rank update based on gradient changes at each step. At each iteration, the data residual is imaged using Gᵀ to determine the model update. Application of the linearized inversion to synthetic data for imaging a vertical fault plane demonstrates the effectiveness of this methodology in properly delineating the vertical fault plane and giving better amplitude information than the standard migrated image obtained with the adjoint operator that takes internal multiples into account. Thus, least-squares imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by superposing the least-squares solution based on the single-scattering assumption and the least-squares solution based on the double-scattering assumption.
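The normal-equation solve sketched in equation (1) above, minimized with a quasi-Newton (low-rank Hessian update) scheme, can be illustrated with SciPy's L-BFGS implementation. The dense random G is only a stand-in for the double-scattering Born modeling operator; the objective and gradient mirror the GᵀGv = Gᵀd structure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
m, n = 80, 40
G = rng.standard_normal((m, n))        # stand-in for the linearized modeling operator
v_true = rng.standard_normal(n)        # reflectivity to recover
d = G @ v_true                         # synthetic double-scattered data

def obj(v):
    r = G @ v - d                      # data residual
    # misfit 0.5*||Gv - d||^2 and its gradient G^T r (adjoint imaging of residual)
    return 0.5 * (r @ r), G.T @ r

# L-BFGS-B builds a low-rank quasi-Newton Hessian estimate from gradient changes
res = minimize(obj, np.zeros(n), jac=True, method="L-BFGS-B")
```

Setting the gradient to zero at the minimizer is exactly the normal equation GᵀGv = Gᵀd, so the converged res.x is the least-squares reflectivity.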
Effect of conductor geometry on source localization: Implications for epilepsy studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlitt, H.; Heller, L.; Best, E.
1994-07-01
We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy achieved by replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to those obtained using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.
NASA Astrophysics Data System (ADS)
Kunze, Herb; La Torre, Davide; Lin, Jianyi
2017-01-01
We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the l1-norm instead of the l0-norm. In fact, optimization problems involving the l0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the l1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of the collage error is treated as a multi-criteria problem that includes three different and conflicting criteria, i.e., collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
Imaging using cross-hole seismoelectric tomography
Araji, A.H.; Revil, A.; Jardani, A.; Minsley, B.
2011-01-01
We propose a new cross-hole imaging approach based on seismoelectric conversions associated with the transmission of seismic waves from seismic sources located in one borehole to receiver electrodes located in a second borehole. The seismic-to-electric problem is solved using Biot theory coupled with a generalized Ohm's law with an electrokinetic coupling term. The components of the displacement of the solid phase, the fluid pressure, and the electrical potential are solved using a finite element approach with PML boundary conditions for the seismic waves and boundary conditions mimicking an infinite material for the electrostatic problem. We have developed an inversion algorithm using the electrical disturbances recorded in the second borehole to localize the position of the heterogeneities responsible for the seismoelectric conversions. Because of the ill-posed nature of the inverse problem, regularization is used to constrain the solution at each time in the seismoelectric time window between the time of the seismic shot and the time of the first arrival of the seismic waves in the second borehole. All the inverted volumetric current source densities are stacked to produce an image of the position of the heterogeneities between the two boreholes. Two simple synthetic case studies are presented to test this concept. The first case study corresponds to a vertical discontinuity between two homogeneous sub-domains. The second case study corresponds to a poroelastic inclusion embedded into a homogeneous poroelastic formation. In both cases, the position of the heterogeneity is fairly well recovered using only the electrical disturbances associated with the seismoelectric conversions. © 2011 Society of Exploration Geophysicists.
NASA Astrophysics Data System (ADS)
Jardani, A.; Soueid Ahmed, A.; Revil, A.; Dupont, J.
2013-12-01
Pumping tests are usually employed to estimate the hydraulic conductivity field from the inversion of head measurements. Nevertheless, the inverse problem is strongly underdetermined, and reliable imaging requires a considerable number of wells. We propose to add more information to the inversion of the heads by including (non-intrusive) streaming potential (SP) data. The SP corresponds to perturbations in the local electrical field caused directly by the flow of the ground water. These SP data are obtained with a set of non-polarising electrodes installed at the ground surface. We developed a geostatistical method for the estimation of the hydraulic conductivity field from measurements of hydraulic heads and SP during pumping and injection experiments. We use the adjoint-state method and a recent petrophysical formulation of the streaming potential problem in which the streaming coupling coefficient is derived from the hydraulic conductivity, which reduces the number of unknown parameters. The geostatistical inverse framework is applied to three synthetic case studies with different numbers of wells and electrodes used to measure the hydraulic heads and the streaming potentials. To evaluate the benefits of incorporating the streaming potentials into the hydraulic data, we compared the cases in which the data are coupled or not to map the hydraulic conductivity. The results of the inversion revealed that a dense distribution of electrodes can be used to infer the heterogeneities in the hydraulic conductivity field. Incorporating the streaming potential information into the hydraulic head data improves the estimate of the hydraulic conductivity field, especially when the number of piezometers is limited.
Children's Understanding of the Arithmetic Concepts of Inversion and Associativity
ERIC Educational Resources Information Center
Robinson, Katherine M.; Ninowski, Jerilyn E.; Gray, Melissa L.
2006-01-01
Previous studies have shown that even preschoolers can solve inversion problems of the form a + b - b by using the knowledge that addition and subtraction are inverse operations. In this study, a new type of inversion problem of the form d x e [divided by] e was also examined. Grade 6 and 8 students solved inversion problems of both types as well…
2.5D transient electromagnetic inversion with OCCAM method
NASA Astrophysics Data System (ADS)
Li, R.; Hu, X.
2016-12-01
In applications of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with finite-difference time-domain (FDTD) forward modelling, is mainly implemented with the nonlinear conjugate gradient (NLCG) method. But the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the method may fail to converge. We use the OCCAM inversion method to avoid this weakness; OCCAM inversion is a more stable and reliable method for imaging the subsurface 2.5D electrical conductivity. First, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Second, we use the OCCAM inversion scheme, with an appropriate objective error functional that we establish, to image the 2.5D structure. A data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. The imaging result for the example model shown in Fig. 1 demonstrates that the OCCAM scheme is an efficient inversion method for TEM with the FDTD method, converging in few iterations. Summarizing the imaging process, we draw the following conclusions. First, 2.5D imaging in an FDTD system with OCCAM inversion yields the desired imaging results for a resistivity structure in a homogeneous half-space. Second, the imaging results usually do not depend strongly on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model, so it is better to set the initial model using other geologic information where available. When the background resistivity fits the true model well, imaging the anomalous body needs only a few iterations. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.
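A single linearized Occam-style update can be sketched as below. This is an illustrative version with a hypothetical first-difference roughening matrix, not the DASOCC implementation described above: the smoothest model consistent with the data is found by trading model roughness against data fit through the multiplier mu.

```python
import numpy as np

def occam_step(J, d, mu):
    """One linearized Occam update: minimize roughness ||R m||^2 plus data
    misfit, via the normal equations m = (mu R^T R + J^T J)^{-1} J^T d.
    J is the sensitivity (Jacobian) matrix, d the data, R a first-difference
    roughening operator, and mu the smoothness trade-off that the Occam
    scheme searches over."""
    n = J.shape[1]
    R = np.diff(np.eye(n), axis=0)   # first-difference roughening matrix
    return np.linalg.solve(mu * (R.T @ R) + J.T @ J, J.T @ d)
```

Small mu recovers a model that fits the data closely; large mu drives the model toward the flattest profile the data allow, which is the behaviour the Occam multiplier search exploits.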
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
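A Tikhonov-regularized minimum-norm estimate of the kind discussed above can be sketched as follows; a generic minimum-ℓ2-norm solution for an underdetermined lead-field matrix, with illustrative variable names (this is the textbook form, not any one of the specific methods compared in the paper).

```python
import numpy as np

def tikhonov_min_norm(L, b, lam):
    """Tikhonov-regularized minimum-norm source estimate for an
    underdetermined system L x = b (few sensors, many source voxels):
    x = L^T (L L^T + lam I)^{-1} b.  As lam -> 0 this tends to the
    exact minimum-l2-norm solution; lam > 0 stabilizes against noise."""
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)
```

For a consistent system and very small lam, the estimate reproduces the measurements while placing no activity in the unobserved directions, which is exactly the ambiguity-resolving role of the norm minimization described above.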
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validating the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess its efficiency. The superiority of the wavelet-vaguelette technique is reflected in its preservation of local features, such as blobs and fronts, in the denoised emissivity map.
Three-dimensional facial recognition using passive long-wavelength infrared polarimetric imaging.
Yuffa, Alex J; Gurton, Kristan P; Videen, Gorden
2014-12-20
We use a polarimetric camera to record the Stokes parameters and the degree of linear polarization of long-wavelength infrared radiation emitted by human faces. These Stokes images are combined with Fresnel relations to extract the surface normal at each pixel. Integrating over these surface normals yields a three-dimensional facial image. One major difficulty of this technique is that the normal vectors determined from the polarizations are not unique. We overcome this problem by introducing an additional boundary condition on the subject. The major sources of error in producing inversions are noise in the images caused by scattering of the background signal and the ambiguity in determining the surface normals from the Fresnel coefficients.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
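The GCV criterion for selecting the regularization parameter can be written down directly via an SVD. This is the dense, exact computation; the paper's contribution is approximating precisely this score with Lanczos and Gauss-quadrature techniques, which is not reproduced here.

```python
import numpy as np

def gcv_score(A, b, lam):
    """Generalized cross-validation score for Tikhonov regularization,
    GCV(lam) = ||(I - A A_lam^+) b||^2 / trace(I - A A_lam^+)^2,
    evaluated via the SVD of the blurring matrix A.  The lam minimizing
    this score balances data fit against noise amplification."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    filt = s**2 / (s**2 + lam)            # Tikhonov filter factors
    resid = np.sum(((1.0 - filt) * beta) ** 2)
    trace = A.shape[0] - np.sum(filt)     # trace of the residual operator
    return resid / trace**2
```

In practice one evaluates this score on a grid of lam values (and, in the data-driven setting above, jointly over PSF parameters) and keeps the minimizer.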
Schlagenhauf, Florian; Rapp, Michael A.; Huys, Quentin J. M.; Beck, Anne; Wüstenberg, Torsten; Deserno, Lorenz; Buchholz, Hans-Georg; Kalbitzer, Jan; Buchert, Ralph; Kienast, Thorsten; Cumming, Paul; Plotkin, Michail; Kumakura, Yoshitaka; Grace, Anthony A.; Dolan, Raymond J.; Heinz, Andreas
2013-01-01
Fluid intelligence represents the capacity for flexible problem solving and rapid behavioral adaptation. Rewards drive flexible behavioral adaptation, in part via a teaching signal expressed as reward prediction errors in the ventral striatum, which has been associated with phasic dopamine release in animal studies. We examined a sample of 28 healthy male adults using multimodal imaging and biological parametric mapping with 1) functional magnetic resonance imaging during a reversal learning task and 2) in a subsample of 17 subjects also with positron emission tomography using 6-[18F]fluoro-L-DOPA to assess dopamine synthesis capacity. Fluid intelligence was measured using a battery of nine standard neuropsychological tests. Ventral striatal BOLD correlates of reward prediction errors were positively correlated with fluid intelligence and, in the right ventral striatum, also inversely correlated with dopamine synthesis capacity (FDOPA Kinapp). When exploring aspects of fluid intelligence, we observed that prediction error signaling correlates with complex attention and reasoning. These findings indicate that individual differences in the capacity for flexible problem solving may be driven by ventral striatal activation during reward-related learning, which in turn proved to be inversely associated with ventral striatal dopamine synthesis capacity. PMID:22344813
Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir
2016-08-01
In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
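The iterative reweighted convex surrogate idea can be illustrated in its simplest setting: an identity forward operator, so that each weighted ℓ1 subproblem reduces to soft-thresholding. This is a toy sketch of the reweighting principle behind irMxNE, not the block coordinate descent solver with active sets described above.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the (weighted) l1-norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def irl1_l05(b, lam, n_iter=20, eps=1e-6):
    """Iteratively reweighted l1 surrogate for an l0.5-type penalty
    sum sqrt(|x_k|), shown for an identity forward operator.  Each weighted
    convex subproblem min 0.5||x - b||^2 + lam * sum(w_k |x_k|) has the
    closed-form soft-threshold solution; the weights come from the
    derivative of sqrt at the current iterate."""
    x = b.copy()
    for _ in range(n_iter):
        w = 1.0 / (2.0 * np.sqrt(np.abs(x) + eps))  # d/dt sqrt(t) at |x_k|
        x = soft_threshold(b, lam * w)
    return x
```

Coefficients that shrink toward zero receive ever larger weights and are eliminated, while large coefficients see their weight (and hence their amplitude bias) decrease across iterations, which mirrors the reduced amplitude bias reported above.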
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
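The low-rank step can be sketched as follows: project the dictionary of simulated signal evolutions onto its leading right singular vectors, so the reconstruction works with a handful of singular-value images rather than one image per timepoint. Names and shapes here are illustrative, not the paper's implementation.

```python
import numpy as np

def compress_dictionary(D, rank):
    """Low-rank temporal compression as used conceptually in MR
    fingerprinting reconstruction: D is an (n_atoms x n_timepoints)
    dictionary of signal evolutions.  Projecting onto the leading `rank`
    right singular vectors means only `rank` images must be reconstructed
    instead of one per timepoint, reducing Fourier transform count."""
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    Vr = Vt[:rank]            # temporal subspace basis (rank x n_timepoints)
    return D @ Vr.T, Vr       # compressed coefficients and basis
```

If the signal evolutions truly span a low-dimensional subspace, the compression is lossless: the full dictionary is recovered exactly from the compressed coefficients and the basis.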
NASA Astrophysics Data System (ADS)
Atzberger, C.; Richter, K.
2009-09-01
The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. With this research we further develop our previously published (object-based) inversion approach [Atzberger (2004)]. The object-based RTM inversion takes advantage of the geostatistical fact that the biophysical characteristics of nearby pixels are generally more similar than those at a larger distance. A two-step inversion based on PROSPECT+SAIL generated look-up-tables is presented that can be easily implemented and adapted to other radiative transfer models. The approach takes into account the spectral signatures of neighboring pixels and optimizes a common value of the average leaf angle (ALA) for all pixels of a given image object, such as an agricultural field. Using a large set of leaf area index (LAI) measurements (n = 58) acquired over six different crops at the Barrax test site (Spain), we demonstrate that the proposed geostatistical regularization yields more accurate and spatially consistent results in most cases compared to the traditional (pixel-based) inversion. Pros and cons of the approach are discussed and possible future extensions presented.
MUSIC-type imaging of small perfectly conducting cracks with an unknown frequency
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2015-09-01
MUltiple SIgnal Classification (MUSIC) is a famous non-iterative detection algorithm in inverse scattering problems. However, when the applied frequency is unknown, inaccurate locations are identified via MUSIC. This fact has been confirmed through numerical simulations. However, the reason behind this phenomenon has not been investigated theoretically. Motivated by this fact, we identify the structure of MUSIC-type imaging functionals with unknown frequency, by establishing a relationship with Bessel functions of order zero of the first kind. Through this, we can explain why inaccurate results appear.
Optical restoration of images blurred by atmospheric turbulence using optimum filter theory.
Horner, J L
1970-01-01
The results of optimum filtering from communications theory have been applied to an image restoration problem. Photographic film imagery, degraded by long-term artificial atmospheric turbulence, has been restored by spatial filters placed in the Fourier transform plane. The time-averaged point spread function was measured and used in designing the filters. Both the simple inverse filter and the optimum least-mean-square filter were used in the restoration experiments. The superiority of the latter is conclusively demonstrated. An optical analog processor was used for the restoration.
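The two filters compared above can be sketched digitally in the frequency domain; a minimal 1-D simulation with a hypothetical Gaussian optical transfer function (the original work used a measured time-averaged point spread function and an optical analog processor, not this numerical setup).

```python
import numpy as np

def inverse_filter(G, H, eps=1e-6):
    """Naive inverse filter F_hat = G / H; noise is amplified without
    bound wherever the optical transfer function H is small."""
    Hc = np.where(np.abs(H) > eps, H, eps)
    return G / Hc

def wiener_filter(G, H, nsr):
    """Optimum least-mean-square (Wiener) filter with an assumed constant
    noise-to-signal ratio nsr: F_hat = conj(H) G / (|H|^2 + nsr)."""
    return np.conj(H) * G / (np.abs(H) ** 2 + nsr)

# 1-D demo: blur a bar target with a Gaussian OTF and add noise.
rng = np.random.default_rng(0)
n = 256
f = np.zeros(n)
f[100:140] = 1.0                                  # bar object
H = np.exp(-(np.fft.fftfreq(n) * 40.0) ** 2)      # hypothetical smooth OTF
G = np.fft.fft(f) * H + 0.01 * rng.standard_normal(n)
f_inv = np.real(np.fft.ifft(inverse_filter(G, H)))
f_wie = np.real(np.fft.ifft(wiener_filter(G, H, nsr=1e-3)))
```

At this noise level the Wiener restoration error should be far below that of the simple inverse filter, mirroring the superiority demonstrated in the optical experiments.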
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.
Multiscale image contrast amplification (MUSICA)
NASA Astrophysics Data System (ADS)
Vuylsteke, Pieter; Schoeters, Emile P.
1994-05-01
This article presents a novel approach to the problem of detail contrast enhancement, based on a multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at some specific scale and at a specific position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremities, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial areas.
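The decompose-amplify-resynthesize scheme can be sketched in 1-D as below. This is a toy moving-average pyramid with a power-law amplification curve; the actual MUSICA algorithm uses 2-D basis functions and its own amplification function, so everything here is an illustrative assumption.

```python
import numpy as np

def musica_1d(signal, levels=3, gamma=0.7):
    """Sketch of multiscale detail-contrast amplification on a 1-D signal:
    build a Laplacian-pyramid-like decomposition with moving-average
    smoothing, amplify the detail coefficients with a non-linear power-law
    curve (gamma < 1 boosts low-contrast detail the most), then
    resynthesize by summing the modified layers with the residual."""
    def smooth(x):
        k = np.array([0.25, 0.5, 0.25])
        return np.convolve(x, k, mode="same")

    residual = signal.astype(float)
    details = []
    for _ in range(levels):
        low = smooth(residual)
        details.append(residual - low)   # detail layer at this scale
        residual = low
    amplified = [np.sign(d) * np.abs(d) ** gamma for d in details]
    return residual + sum(amplified)
```

Because |d|^gamma > |d| for small |d| when gamma < 1, faint edges gain contrast relative to the original while strong structures are amplified less, which is the uniform-enhancement behaviour described above.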
3D Compton scattering imaging and contour reconstruction for a class of Radon transforms
NASA Astrophysics Data System (ADS)
Rigaud, Gaël; Hahn, Bernadette N.
2018-07-01
Compton scattering imaging is a nascent concept arising from the current development of highly sensitive energy detectors, and it exploits the scattered radiation to image the electron density of the studied medium. Such detectors are able to collect incoming photons in terms of energy. This paper introduces potential 3D modalities in Compton scattering imaging (CSI). The associated measured data are modeled using a class of generalized Radon transforms. The study of this class of operators leads to a filtered back-projection type algorithm that preserves the contours of the sought-for function and offers a fast approach to partially solving the associated inverse problems. Simulation results including Poisson noise demonstrate the potential of this new imaging concept as well as the proposed image reconstruction approach.
NASA Astrophysics Data System (ADS)
Capdeville, Yann; Métivier, Ludovic
2018-05-01
Seismic imaging is an efficient tool to investigate the Earth's interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how homogenization can help us understand FWI behaviour and improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.
Anisotropic microseismic focal mechanism inversion by waveform imaging matching
NASA Astrophysics Data System (ADS)
Wang, L.; Chang, X.; Wang, Y.; Xue, Z.
2016-12-01
The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a double-couple (DC) source and isotropic media, which is usually not the case for induced-seismicity focal mechanism inversion. For induced seismic events, an inappropriate source or media model in the inversion introduces ambiguity or strong simulation errors and seriously reduces the effectiveness of the inversion. First, the focal mechanism contains a significant non-DC component: in general, the source contains three components, DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the media affects travel times and waveforms and thereby biases the inversion. Focal mechanism inversion is commonly formulated as moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method is applied to achieve moment tensor imaging; it can construct images of the MT elements in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. Alternatively, full waveform inversion can be employed to retrieve the MT; the source position and MT can then be reconstructed simultaneously, but this method requires extensive numerical computation, and the source position and MT also influence each other during the inversion. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also test our method on a microseismic data set recorded with a surface acquisition.
SeisFlows-Flexible waveform inversion software
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Borisov, Dmitry; Lefebvre, Matthieu; Tromp, Jeroen
2018-06-01
SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska, Fairbanks.
The use of the Kalman filter in the automated segmentation of EIT lung images.
Zifan, A; Liatsis, P; Chapman, B E
2013-06-01
In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, therefore the problem is usually linearized, which produces impedance-change images, rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
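The tracking idea can be illustrated with a minimal scalar Kalman filter; a toy predict/update loop on a single slowly varying parameter, with illustrative q and r values (the actual pipeline tracks boundary contour states, not a scalar).

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.04):
    """Minimal scalar Kalman filter of the kind used to track a slowly
    varying conductivity/boundary value across EIT frames.  q is the
    process-noise variance, r the measurement-noise variance; both are
    illustrative.  State model: x_t = x_{t-1} + process noise."""
    x, p = measurements[0], 1.0
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)
```

Fed a noisy sequence around a stable value, the filter output settles near that value with much less frame-to-frame jitter than the raw measurements, which is why it suits tracking lung boundaries through a respiratory cycle.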
Fast downscaled inverses for images compressed with M-channel lapped transforms.
de Queiroz, R L; Eschbach, R
1997-01-01
Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression and rescaling in the space domain is a very expensive method. We studied downscaled inverses where the image is decompressed partially, and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses, for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filterbanks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.
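For a block transform such as the DCT (a member of the block/lapped transform family discussed above), the downscaled-inverse idea can be sketched as follows: reconstruct directly at reduced resolution from only the first few coefficients, rather than fully decompressing and then rescaling. The orthonormal DCT-II and the sqrt(m/n) scaling are illustrative choices, not the paper's filterbank designs.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def downscaled_inverse(coeffs, m):
    """Reconstruct a length-m signal directly from the first m DCT
    coefficients of a length-n block: a reduced inverse transform that
    replaces full decompression followed by spatial rescaling."""
    n = len(coeffs)
    Cm = dct_matrix(m)
    scale = np.sqrt(m / n)               # keep amplitudes comparable
    return Cm.T @ (coeffs[:m] * scale)
```

Only m coefficients are touched and only an m-point inverse transform is applied, which is the source of the savings over decompress-then-rescale.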
Multiple Fan-Beam Optical Tomography: Modelling Techniques
Rahim, Ruzairi Abdul; Chen, Leong Lai; San, Chan Kok; Rahiman, Mohd Hafiz Fazalul; Fea, Pang Jon
2009-01-01
This paper explains in detail the solution to the forward and inverse problems faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed, and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image. PMID:22291523
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
NASA Astrophysics Data System (ADS)
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes; `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. 
In order to let one gain some insight into the nature of inverse problems and the appropriate mode of thought, chapter 1 offers historical vignettes, most of which have played an essential role in the development of natural science. These vignettes cover the first successful application of `non-destructive testing' by Archimedes (page 4) via Newton's laws of motion up to literary tomography, and readers will be able to enjoy a wide overview of inverse problems. Therefore, as the author asks, the reader should not skip this chapter. This should not be hard to do, since the headings of the sections are quite intriguing (`Archimedes' Bath', `Another World', `Got the Time?', `Head Games', etc). The author embarks on the technical approach to inverse problems in chapter 2. He has elegantly designed each section with a guide specifying course level, objective, mathematical and scientific background and appropriate technology (e.g. types of calculators required). The guides are designed such that teachers may be able to construct effective and attractive courses by themselves. The book is not intended to offer one rigidly determined course, but should be used flexibly and independently according to the situation. Moreover, every section closes with activities which can be chosen according to the students' interests and levels of ability. Some of these exercises do not have ready solutions, but require long-term study, so readers are not required to solve all of them. After chapter 5, which contains discrete inverse problems such as the algebraic reconstruction technique and the Backus-Gilbert method, there are answers and commentaries to the activities. Finally, scripts in MATLAB are attached, although they can also be downloaded from the author's web page (http://math.uc.edu/~groetsch/). This book is aimed at students but it will be very valuable to researchers wishing to retain a wide overview of inverse problems in the midst of busy research activities.
A Japanese version was published in 2002.
Numerical reconstruction of tsunami source using combined seismic, satellite and DART data
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor
2014-05-01
Recent tsunamis, for instance, in Japan (2011), in Sumatra (2004), and at the Indian coast (2004) showed that a system of producing exact and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (a tsunami source) are required for the direct simulation of tsunamis. The seismic data about the source are usually obtained within a few tens of minutes after an event has occurred (the seismic wave velocity being about five hundred kilometres per minute, while the velocity of tsunami waves is less than twelve kilometres per minute). The difference in the arrival times of seismic and tsunami waves can be used to operationally refine the tsunami source parameters and model the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate three different inverse problems of determining a tsunami source using three different types of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images and seismic data. These problems are severely ill-posed. To control the degree of ill-posedness, we apply regularization techniques such as Fourier expansion, truncated singular value decomposition and numerical regularization. The algorithm for selecting the truncation number of singular values of the inverse problem operator consistent with the error level in the measured data is described and analyzed. In numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function.
To calculate the gradient of the misfit function, the adjoint problem is solved. The conservative finite-difference schemes for solving the direct and adjoint problems in the approximation of shallow water are constructed. Results of numerical experiments of the tsunami source reconstruction are presented and discussed. We show that using a combination of three different types of data allows one to increase the stability and efficiency of tsunami source reconstruction. Non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction) in collaboration with Informap software development department developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions at wave run-ups and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
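The truncated-SVD selection rule mentioned in this abstract can be sketched on a generic linear ill-posed problem. The following is a minimal, hedged illustration, not the authors' tsunami operator: the truncation level is chosen as the smallest one whose residual matches an assumed data-error level (a discrepancy-principle-style rule); the smoothing kernel, noise level and threshold factor are all invented for the demo.

```python
import numpy as np

def tsvd_solve(A, d, k):
    """Truncated-SVD solution of A m = d, keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ d)[:k] / s[:k]
    return Vt[:k].T @ coeffs

def pick_truncation(A, d, noise_level):
    """Return the smallest truncation number whose residual is consistent
    with the assumed data-error level (discrepancy-principle-style rule)."""
    for k in range(1, min(A.shape) + 1):
        m = tsvd_solve(A, d, k)
        if np.linalg.norm(A @ m - d) <= noise_level:
            return k, m
    return min(A.shape), m

# Toy severely ill-posed problem: a smoothing (Gaussian-kernel) operator
rng = np.random.default_rng(0)
n = 40
x = np.linspace(0, 1, n)
A = np.exp(-100.0 * (x[:, None] - x[None, :]) ** 2)  # smoothing kernel matrix
m_true = np.sin(2 * np.pi * x)                        # smooth "source" to recover
noise = 1e-2 * rng.standard_normal(n)
d = A @ m_true + noise

# Truncation number agreed with the (assumed known) error level in the data
k, m_est = pick_truncation(A, d, 1.2 * np.linalg.norm(noise))
```

Without the truncation, a naive solve of this system amplifies the noise catastrophically because the small singular values of the smoothing operator are inverted directly.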
NASA Astrophysics Data System (ADS)
Custo, Anna; Wells, William M., III; Barnett, Alex H.; Hillman, Elizabeth M. C.; Boas, David A.
2006-07-01
An efficient computation of the time-dependent forward solution for photon transport in a head model is a key capability for performing accurate inversion for functional diffuse optical imaging of the brain. The diffusion approximation to photon transport is much faster to simulate than the physically correct radiative transport equation (RTE); however, it is commonly assumed that scattering lengths must be much smaller than all system dimensions and all absorption lengths for the approximation to be accurate. Neither of these conditions is satisfied in the cerebrospinal fluid (CSF). Since line-of-sight distances in the CSF are small, of the order of a few millimeters, we explore the idea that the CSF scattering coefficient may be modeled by any value from zero up to the order of the typical inverse line-of-sight distance, or approximately 0.3 mm-1, without significantly altering the calculated detector signals or the partial path lengths relevant for functional measurements. We demonstrate this in detail by using a Monte Carlo simulation of the RTE in a three-dimensional head model based on clinical magnetic resonance imaging data, with realistic optode geometries. Our findings lead us to expect that the diffusion approximation will be valid even in the presence of the CSF, with consequences for faster solution of the inverse problem.
Probabilistic numerical methods for PDE-constrained Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark
2017-06-01
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
deconvolution, space surveillance, Gauss-Seidel iteration ... sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF performed with a prescribed number of inner iterative descents from trivial (zero
Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters
2017-03-07
Final Technical Report with SF 298 for Dr. Erin E. Hackett's ONR grant entitled Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters (period of performance Dec 2012 - Dec 2016). This report describes research results related to the development and implementation of an inverse problem approach for
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors differ from algorithm to algorithm. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. In particular, for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also theoretically compare the classification effectiveness of the discriminant functions from F-S, HTC and F-K with those of the LDF and GMF, and the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, each image consisting of 64 x 64 pixels.
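The pseudo-inverse shortcut described above rests on a standard identity: for a data matrix X of p training images with n >> p pixels, the pixel correlation matrix R = X X^T is n x n and singular, but its Moore-Penrose pseudo-inverse can be applied using only the small p x p image correlation matrix, via R+ = X (X^T X)^(-2) X^T. A hedged NumPy sketch, with random data standing in for mean-removed training images:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_images = 4096, 20                      # 64 x 64 images, far fewer images than pixels
X = rng.standard_normal((n_pixels, n_images))      # columns = stand-in training images

# Small image correlation matrix (n_images x n_images) -- cheap to invert
G = X.T @ X
G_inv2 = np.linalg.inv(G) @ np.linalg.inv(G)       # (X^T X)^(-2)

# Apply the pseudo-inverse of the large, rank-deficient pixel correlation
# R = X X^T to a vector v, without ever forming the 4096 x 4096 matrix:
#   R^+ v = X (X^T X)^(-2) X^T v
v = rng.standard_normal(n_pixels)
Rplus_v = X @ (G_inv2 @ (X.T @ v))
```

The identity follows from the economy SVD X = U S V^T: both sides reduce to U S^(-2) U^T, i.e. inversion on the range of R only.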
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sano, Ryuichi; Iwama, Naofumi; Peterson, Byron J.
A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
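The iterative strategy the authors favour can be illustrated in miniature. A PWLS objective (y - x)^T W (y - x) + beta x^T R x leads to the symmetric positive-definite system (W + beta R) x = W y, which a preconditioned conjugate-gradient solver handles without any explicit matrix inversion. The sketch below uses a toy 1D "sinogram" line, a second-difference roughness penalty and a simple Jacobi preconditioner; it is not the paper's implementation, which exploits the specific symmetry and sparsity of the sinogram-restoration problem.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 200
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(n))      # toy noisy "sinogram" line
w = rng.uniform(0.5, 2.0, n)               # toy statistical weights
beta = 5.0

# Sparse tridiagonal second-difference roughness penalty R and weight matrix W
R = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
W = diags(w)
H = (W + beta * R).tocsc()                 # PWLS Hessian: solve (W + beta R) x = W y

# Jacobi (diagonal) preconditioner to speed up CG convergence
M = diags(1.0 / H.diagonal())
x, info = cg(H, W @ y, M=M, maxiter=1000)
```

Because the objective is quadratic, CG minimizes it exactly in at most n steps; the preconditioner is what makes the iteration count practical, which mirrors the preconditioning step the authors report for the conjugate-gradient strategy.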
NASA Astrophysics Data System (ADS)
Derkachov, G.; Jakubczyk, T.; Jakubczyk, D.; Archer, J.; Woźniak, M.
2017-07-01
Utilising the Compute Unified Device Architecture (CUDA) platform for Graphics Processing Units (GPUs) enables a significant reduction of computation time at a moderate cost, by means of parallel computing. In the paper [Jakubczyk et al., Opto-Electron. Rev., 2016] we reported using a GPU for Mie scattering inverse problem solving (up to an 800-fold speed-up). Here we report the development of two subroutines utilising the GPU at the data preprocessing stages of the inversion procedure: (i) a subroutine, based on ray tracing, for finding the spherical aberration correction function and (ii) a subroutine performing the conversion of an image to a 1D distribution of light intensity versus azimuth angle (i.e. a scattering diagram), fed from a movie-reading CPU subroutine running in parallel. All subroutines are incorporated in the PikeReader application, which we make available in a GitHub repository. PikeReader returns a sequence of intensity distributions versus a common azimuth angle vector, corresponding to the recorded movie. We obtained an overall ~400-fold speed-up of calculations at the data preprocessing stages using CUDA code running on a GPU in comparison to single-thread MATLAB-only code running on a CPU.
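As an illustration of the second preprocessing stage, the conversion of an image to intensity versus azimuth angle can be done with plain NumPy binning. This is a CPU-only sketch with the beam center assumed known; PikeReader's actual GPU kernels and aberration correction are not reproduced here.

```python
import numpy as np

def scattering_diagram(image, center, n_bins=360):
    """Bin pixel intensities by azimuth angle around `center`, returning a 1D
    mean-intensity-versus-azimuth distribution (a scattering diagram)."""
    yy, xx = np.indices(image.shape)
    azimuth = np.arctan2(yy - center[0], xx - center[1])      # in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(azimuth.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    angles = 0.5 * (bins[:-1] + bins[1:])                     # bin centers
    return angles, sums / np.maximum(counts, 1)

# Synthetic frame whose intensity depends only on azimuth: I = 1 + cos(phi)
yy, xx = np.indices((201, 201))
phi = np.arctan2(yy - 100, xx - 100)
frame = 1.0 + np.cos(phi)
angles, profile = scattering_diagram(frame, (100, 100))
```

On this synthetic frame the recovered profile reproduces 1 + cos(phi) at the bin centers, which is a convenient self-check before pointing the routine at real movie frames.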
Mirus, B.B.; Perkins, K.S.; Nimmo, J.R.; Singha, K.
2009-01-01
To understand their relation to pedogenic development, soil hydraulic properties in the Mojave Desert were investigated for three deposit types: (i) recently deposited sediments in an active wash, (ii) a soil of early Holocene age, and (iii) a highly developed soil of late Pleistocene age. Effective parameter values were estimated for a simplified model based on Richards' equation using a flow simulator (VS2D), an inverse algorithm (UCODE-2005), and matric pressure and water content data from three ponded infiltration experiments. The inverse problem framework was designed to account for the effects of subsurface lateral spreading of infiltrated water. Although none of the inverse problems converged on a unique, best-fit parameter set, a minimum standard error of regression was reached for each deposit type. Parameter sets from the numerous inversions that reached the minimum error were used to develop probability distributions for each parameter and deposit type. Electrical resistance imaging obtained for two of the three infiltration experiments was used to independently test flow model performance. Simulations for the active wash and Holocene soil successfully depicted the lateral and vertical fluxes. Simulations of the more pedogenically developed Pleistocene soil did not adequately replicate the observed flow processes, which would require a more complex conceptual model to include smaller scale heterogeneities. The inverse-modeling results, however, indicate that with increasing age, the steep slope of the soil water retention curve shifts toward more negative matric pressures. Assigning effective soil hydraulic properties based on soil age provides a promising framework for future development of regional-scale models of soil moisture dynamics in arid environments for land-management applications. © Soil Science Society of America.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms, and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.
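The greedy option can be illustrated with a generic orthogonal matching pursuit on a toy 1D blurred-spike problem. This is a stand-in sketch from the same family of sparse-pursuit algorithms the abstract cites as inspiration, not the authors' algorithm; the Gaussian blur dictionary, spike positions and noise level are invented.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column most
    correlated with the residual, then re-fit all picked columns by least squares."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy "depth profile": a few spikes convolved with a Gaussian blur kernel
rng = np.random.default_rng(3)
n = 128
t = np.arange(n)
A = np.column_stack([np.exp(-0.5 * ((t - c) / 2.0) ** 2) for c in t])  # blur dictionary
x_true = np.zeros(n)
x_true[[30, 60, 90]] = [1.0, 0.8, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = omp(A, y, sparsity=3)
```

Because the least-squares step makes the residual orthogonal to every column already selected, the same atom is never picked twice, and three greedy passes suffice to explain the three blurred spikes.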
Computational optical tomography using 3-D deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Nguyen, Thanh; Bui, Vy; Nehmetallah, George
2018-04-01
Deep convolutional neural networks (DCNNs) have shown outstanding performance in many image processing areas, such as super-resolution, deconvolution, image classification, denoising, and segmentation. Here, we develop for the first time, to our knowledge, a method to perform 3-D computational optical tomography using a 3-D DCNN. A simulated 3-D phantom dataset was first constructed and converted to a dataset of phase objects imaged on a spatial light modulator. For each phase image in the dataset, the corresponding diffracted intensity image was experimentally recorded on a CCD. We then experimentally demonstrate the ability of the developed 3-D DCNN algorithm to solve the inverse problem by reconstructing the 3-D index of refraction distributions of test phantoms in the dataset from their corresponding diffraction patterns.
Image contrast enhancement based on a local standard deviation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Dah-Chung; Wu, Wen-Rong
1996-12-31
The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. But both choices cause problems in practical applications, namely noise overenhancement and ringing artifacts. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
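The structure of such an ACE scheme is easy to sketch. The bump-shaped gain below merely has the qualitative behaviour the abstract argues for (small gain at low LSD to avoid noise over-enhancement, small gain at high LSD to avoid ringing at strong edges); it is not Chang and Wu's actual gain function, and the window size and constants are invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ace_enhance(img, win=7, g_max=3.0, sigma0=10.0):
    """Adaptive contrast enhancement with a nonlinear LSD-dependent gain.
    The gain peaks at LSD = sigma0 and decays toward zero at both extremes
    (illustrative shape only, not the paper's exact formula)."""
    img = img.astype(float)
    mean = uniform_filter(img, win)                 # local mean
    sq_mean = uniform_filter(img ** 2, win)
    lsd = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))  # local standard deviation
    gain = g_max * (lsd / sigma0) * np.exp(1.0 - lsd / sigma0)
    # amplify the high-frequency component (img - mean) by (1 + gain)
    return mean + (1.0 + gain) * (img - mean)

rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[:, 32:] = 100.0                                 # step edge
img += rng.normal(0, 2.0, img.shape)                # mild noise
out = ace_enhance(img)
```

Since the gain is non-negative, the local detail signal is never attenuated, and the nonlinear shape keeps the boost concentrated at moderate-LSD (detail-rich) regions.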
Bayesian Inversion of 2D Models from Airborne Transient EM Data
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Key, K.; Ray, A.
2016-12-01
The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. 
The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.
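The Bayesian machinery behind those credible intervals can be illustrated on a deliberately tiny fixed-dimension example: a two-parameter exponential "sounding" sampled with plain Metropolis-Hastings. This omits the reversible-jump (trans-dimensional) moves, Voronoi parameterization and parallel tempering of the actual method; the forward model d = a exp(-b t), priors, and tuning constants are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "sounding": noisy data from a two-parameter decay model d = a * exp(-b t)
t = np.linspace(0, 1, 30)
a_true, b_true, noise = 2.0, 3.0, 0.05
d_obs = a_true * np.exp(-b_true * t) + rng.normal(0, noise, t.size)

def log_post(theta):
    """Log posterior: Gaussian likelihood with uniform prior bounds."""
    a, b = theta
    if not (0 < a < 10 and 0 < b < 10):
        return -np.inf
    r = d_obs - a * np.exp(-b * t)
    return -0.5 * np.sum(r ** 2) / noise ** 2

# Fixed-dimension random-walk Metropolis-Hastings (no birth/death moves)
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
        theta, lp = prop, lp_prop
    if it > 5000:                                   # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)

post_mean = samples.mean(axis=0)
ci = np.percentile(samples, [2.5, 97.5], axis=0)    # 95% credible intervals
```

The point is the output format: instead of one regularized best-fit model, the chain delivers a population of models from which means and credible intervals are read off directly, which is the uncertainty information the abstract says regularized inversion obscures.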
[EEG source localization using LORETA (low resolution electromagnetic tomography)].
Puskás, Szilvia
2011-03-30
Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical information is presented and the localization properties of the LORETA method are compared with those of other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA, both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Edward T.; Hardcastle, Nicholas; Tome, Wolfgang A.
2012-01-15
Purpose: Deformable image registration (DIR) is necessary for accurate dose accumulation between multiple radiotherapy image sets. DIR algorithms can suffer from inverse and transitivity inconsistencies. When using deformation vector fields (DVFs) that exhibit inverse-inconsistency and are nontransitive, dose accumulation on a given image set via different image pathways will lead to different accumulated doses. The purpose of this study was to investigate the dosimetric effect of and propose a postprocessing solution to reduce inverse consistency and transitivity errors. Methods: Four MVCT images and four phases of a lung 4DCT, each with an associated calculated dose, were selected for analysis. DVFs between all four images in each data set were created using the Fast Symmetric Demons algorithm. Dose was accumulated on the fourth image in each set using DIR via two different image pathways. The two accumulated doses on the fourth image were compared. The inverse consistency and transitivity errors in the DVFs were then reduced. The dose accumulation was repeated using the processed DVFs, the results of which were compared with the accumulated dose from the original DVFs. To evaluate the influence of the postprocessing technique on DVF accuracy, the original and processed DVF accuracy was evaluated on the lung 4DCT data on which anatomical landmarks had been identified by an expert. Results: Dose accumulation to the same image via different image pathways resulted in two different accumulated dose results. After the inverse consistency errors were reduced, the difference between the accumulated doses diminished. The difference was further reduced after reducing the transitivity errors. The postprocessing technique had minimal effect on the accuracy of the DVF for the lung 4DCT images.
Conclusions: This study shows that inverse consistency and transitivity errors in DIR have a significant dosimetric effect on dose accumulation; depending on the image pathway taken to accumulate the dose, different results may be obtained. A postprocessing technique that reduces inverse consistency and transitivity errors is presented, which allows for consistent dose accumulation regardless of the image pathway followed.
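The inverse-consistency error discussed above can be computed directly from a pair of DVFs: compose the backward field with the forward field and measure the residual displacement, which vanishes for perfectly inverse-consistent registrations. A small NumPy/SciPy sketch on synthetic translation fields (the study's real DVFs come from the Fast Symmetric Demons algorithm, not reproduced here):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(dvf_ab, dvf_ba):
    """Residual displacement of x -> x + dvf_ba(x) -> + dvf_ab(x + dvf_ba(x)).
    Fields are shaped (2, ny, nx) with (dy, dx) components; for perfectly
    inverse-consistent fields the residual is zero everywhere."""
    ny, nx = dvf_ab.shape[1:]
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    y1, x1 = yy + dvf_ba[0], xx + dvf_ba[1]        # where each voxel lands
    # forward displacement sampled at the landing points (linear interpolation)
    dy = map_coordinates(dvf_ab[0], [y1, x1], order=1, mode='nearest')
    dx = map_coordinates(dvf_ab[1], [y1, x1], order=1, mode='nearest')
    return np.stack([dvf_ba[0] + dy, dvf_ba[1] + dx])

# Toy example: a 3-voxel shift and a deliberately inconsistent "inverse"
ny, nx = 64, 64
dvf_ab = np.zeros((2, ny, nx)); dvf_ab[1] = 3.0    # forward: +3 voxels in x
dvf_ba = np.zeros((2, ny, nx)); dvf_ba[1] = -2.5   # backward: only -2.5 voxels
res = compose(dvf_ab, dvf_ba)
ice = np.linalg.norm(res, axis=0)                  # inverse consistency error map
```

A postprocessing step of the kind the study proposes would drive this error map toward zero before any dose is accumulated through the DVFs.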
Optical tomographic imaging for breast cancer detection
NASA Astrophysics Data System (ADS)
Cong, Wenxiang; Intes, Xavier; Wang, Ge
2017-09-01
Diffuse optical breast imaging utilizes near-infrared (NIR) light propagation through tissues to assess the optical properties of tissues for the identification of abnormal tissue. This optical imaging approach is sensitive, cost-effective, and does not involve any ionizing radiation. However, the image reconstruction of diffuse optical tomography (DOT) is a nonlinear inverse problem and suffers from severe ill-posedness due to data noise, NIR light scattering, and measurement incompleteness. An image reconstruction method is proposed for the detection of breast cancer. This method splits the image reconstruction problem into the localization of abnormal tissues and the quantification of absorption variations. The localization of abnormal tissues is performed based on a well-posed optimization model, which can be solved via a differential evolution optimization method to achieve a stable reconstruction. The quantification of abnormal absorption is then determined in localized regions of relatively small extent, in which a potential tumor might be. Consequently, the number of unknown absorption variables can be greatly reduced to overcome the underdetermined nature of DOT. Numerical simulation experiments are performed to verify the merits of the proposed method, and the results show that the image reconstruction method is stable and accurate for the identification of abnormal tissues, and robust against measurement noise in the data.
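The localization step can be mimicked with an off-the-shelf differential evolution optimizer on a toy well-posed model: a single absorber whose position is recovered from detector readings. The exponential attenuation model, detector ring and noise level below are all invented for illustration; only the use of differential evolution for a low-dimensional localization problem mirrors the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# 16 detectors on a unit circle record a signal decaying with distance to the absorber
rng = np.random.default_rng(6)
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
detectors = np.column_stack([np.cos(angles), np.sin(angles)])

def forward(pos):
    """Toy forward model: exponential attenuation with absorber-detector distance."""
    d = np.linalg.norm(detectors - pos, axis=1)
    return np.exp(-2.0 * d)

pos_true = np.array([0.3, -0.2])
data = forward(pos_true) + 0.001 * rng.standard_normal(16)

# Global, derivative-free search over the absorber position
misfit = lambda p: np.sum((forward(p) - data) ** 2)
result = differential_evolution(misfit, bounds=[(-1, 1), (-1, 1)], seed=0)
```

Because differential evolution needs only misfit evaluations and simple box bounds, it tolerates the nonconvexity of such localization misfits without gradients, which is presumably why the authors chose it for the well-posed localization subproblem.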
Experimental determination of pore shapes using phase retrieval from q -space NMR diffraction
NASA Astrophysics Data System (ADS)
Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm
2018-05-01
This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q -space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.
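The error-reduction half of the phase-retrieval combination is compact enough to sketch: alternate between enforcing the measured Fourier modulus and the real-space constraints (finite support and positivity). This minimal version omits the hybrid input-output step and the shrinkwrap support update used in the paper, and, unlike the paper, assumes the support is known exactly; the rectangular "pore" is invented.

```python
import numpy as np

def error_reduction(modulus, support, n_iter=500, seed=0):
    """Fienup-style error-reduction iteration: impose the measured Fourier
    modulus, then the known support plus non-negativity in real space."""
    rng = np.random.default_rng(seed)
    g = rng.random(modulus.shape) * support          # random start inside support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = modulus * np.exp(1j * np.angle(G))       # Fourier-modulus constraint
        g = np.real(np.fft.ifft2(G))
        g = np.where(support & (g > 0), g, 0.0)      # support + positivity
    return g

# Toy "pore": a rectangle; the measurement is the Fourier modulus only
img = np.zeros((32, 32)); img[12:20, 10:22] = 1.0
modulus = np.abs(np.fft.fft2(img))                   # q-space "diffraction" data
support = img > 0                                    # assumed-known finite support
rec = error_reduction(modulus, support)
```

Each iteration is a projection onto one constraint set followed by the other, so the Fourier-modulus error is non-increasing; in practice the hybrid input-output step the paper combines with this one is what escapes the stagnation that pure error reduction can suffer.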
Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei
2014-12-01
We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach combines the finite element method, the transfer matrix (T-matrix) method, and a genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e., to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, for which a computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. Damage identifiability was studied by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides a photo-elastic basis for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.
NASA Astrophysics Data System (ADS)
Polydorides, Nick; Lionheart, William R. B.
2002-12-01
The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most commercially available finite element programs are unsuitable for solving these problems because of their conventionally inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist development, we have produced and released a free toolkit of Matlab routines that solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
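Once a dictionary has been learned, the sparse decomposition these models rely on can be illustrated with a simple ISTA solver. This is a generic sketch on a random dictionary; the sizes, regularization weight, and iteration count are illustrative assumptions, and it is not the authors' task-driven formulation:

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=300):
    """Sparse coding by ISTA: minimize 0.5 * ||y - D a||^2 + lam * ||a||_1
    with proximal-gradient (soft-thresholding) iterations."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + D.T @ (y - D @ a) / L        # gradient step on the quadratic term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 80))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
a_true = np.zeros(80)
a_true[[3, 20, 55]] = [2.0, -1.5, 1.0]       # a signal with a 3-atom representation
y = D @ a_true
a_hat = ista(D, y)
```

The soft-thresholding step is what produces exact zeros in the code vector, which is the property the restoration and classification pipelines above exploit.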
Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction
NASA Astrophysics Data System (ADS)
Su, X.
2017-12-01
A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes during cloud motion, such as inversion and deformation, are essentially not considered. It remains a hard task to predict cloud movement promptly and correctly. As deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) with convolutional structures to handle spatiotemporal features and build an end-to-end model to solve this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and the model performs well.
A Forward Glimpse into Inverse Problems through a Geology Example
ERIC Educational Resources Information Center
Winkel, Brian J.
2012-01-01
This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)
Aftermath of Ankle Inversion Injuries: Spectrum of MR Imaging Findings.
Meehan, Timothy M; Martinez-Salazar, Edgar Leonardo; Torriani, Martin
2017-02-01
Acute and chronic ankle inversion injuries are a common source of pain and a diagnostic challenge. Several studies have shown a variety of injury patterns after inversion injury both in acute and chronic settings. Although traditional assessment with clinical examination and radiographs is generally accepted for inversion injuries, MR imaging is a useful tool to detect occult injuries and in patients with chronic symptoms. This article examines a range of MR imaging findings that may be present in patients with lateral ankle pain following an acute or chronic inversion injury. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Roy, Ankita
2007-12-01
This research in hyperspectral imaging involves recognizing targets through spatial and spectral matching, and spectral unmixing of data ranging from remote sensing to medical imaging kernels for clinical studies, based on hyperspectral data-sets generated using the VFTHSI [Visible Fourier Transform Hyperspectral Imager], whose high-resolution Si detector makes the analysis achievable. The research may be broadly classified into (I) a Physically Motivated Correlation Formalism (PMCF), which places both spatial and spectral data on an equivalent mathematical footing in the context of a specific kernel; (II) an application to RF plasma species detection during the carbon nanotube growth process; and (III) hyperspectral analysis for assessing the density and distribution of retinopathies such as age-related macular degeneration (ARMD), with error estimation enabling early recognition of ARMD, which is treated as an ill-conditioned inverse imaging problem. The broad statistical scope of this research is twofold: target recognition problems and spectral unmixing problems. Experimental and computational analysis of hyperspectral data sets is presented, based on the principle of a Sagnac interferometer calibrated to obtain high SNR levels. PMCF computes spectral/spatial/cross moments, answers the question of how the entire hypercube should be sampled optimally, and determines precisely how many spatial-spectral pixels are required for a particular target recognition. Spectral analysis of RF plasma radicals, typically methane and argon plasmas, using the VFTHSI has enabled better process monitoring during growth of vertically aligned multi-walled carbon nanotubes by instant registration of chemical composition or density changes over time, which is key since a significant correlation can be found between plasma state and structural properties.
A vital focus of this dissertation is medical hyperspectral imaging applied to retinopathies such as age-related macular degeneration, with targets taken with a Fundus imager akin to the VFTHSI. Detection of the constituent components in the diseased hyper-pigmentation area is also computed. The target, or reflectance matrix, is treated as a highly ill-conditioned spectral unmixing problem, to which methodologies such as inverse techniques, principal component analysis (PCA), and receiver operating characteristic (ROC) curves are applied for precise spectral recognition of the infected area. The region containing ARMD was easily distinguishable from the spectral mesh plots over the entire band-pass area. Once the location was detected, the PMCF coefficients were calculated by cross-correlating a target of normal oxygenated retina with the deoxygenated one. The ROCs generated using PMCF show 30% higher detection probability, with improved accuracy, than ROCs based on the Spectral Angle Mapper (SAM). By spectral unmixing methods, the important endmembers/carotenoids of the MD pigment were found to be xanthophyll and lutein; beta-carotene, which showed a negative correlation in the unconstrained inverse problem, is a supplement given to ARMD patients to prevent the disease and does not occur in the eye. The literature also shows degeneration of meso-zeaxanthin. Ophthalmologists may assert the presence of ARMD and commence the diagnosis process if the xanthophyll pigment has degenerated by 89.9% while the lutein has decayed by almost 80%, as deduced computationally. This research takes precise investigation to the next level in the continuing process of improving clinical findings by correlating the microanatomy of the diseased fovea, and shows promise for early detection of this disease.
NASA Astrophysics Data System (ADS)
Camporese, M.; Cassiani, G.; Deiana, R.; Salandin, P.
2011-12-01
In recent years geophysical methods have become increasingly popular for hydrological applications. Time-lapse electrical resistivity tomography (ERT) represents a potentially powerful tool for subsurface solute transport characterization since a full picture of the spatiotemporal evolution of the process can be obtained. However, the quantitative interpretation of tracer tests is difficult because of the uncertainty related to the geoelectrical inversion, the constitutive models linking geophysical and hydrological quantities, and the a priori unknown heterogeneous properties of natural formations. Here an approach based on the Lagrangian formulation of transport and the ensemble Kalman filter (EnKF) data assimilation technique is applied to assess the spatial distribution of hydraulic conductivity K by incorporating time-lapse cross-hole ERT data. Electrical data consist of three-dimensional cross-hole ERT images generated for a synthetic tracer test in a heterogeneous aquifer. Under the assumption that the solute spreads as a passive tracer, for high Peclet numbers the spatial moments of the evolving plume are dominated by the spatial distribution of the hydraulic conductivity. The assimilation of the electrical conductivity 4D images allows updating of the hydrological state as well as the spatial distribution of K. Thus, delineation of the tracer plume and estimation of the local aquifer heterogeneity can be achieved at the same time by means of this interpretation of time-lapse electrical images from tracer tests. We assess the impact on the performance of the hydrological inversion of (i) the uncertainty inherently affecting ERT inversions in terms of tracer concentration and (ii) the choice of the prior statistics of K. Our findings show that realistic ERT images can be integrated into a hydrological model even within an uncoupled inverse modeling framework. 
The reconstruction of the hydraulic conductivity spatial distribution is satisfactory in the portion of the domain directly covered by the passage of the tracer. Aside from the issues commonly affecting inverse models, the proposed approach is subject to the problem of filter inbreeding, and the retrieval performance is sensitive to the choice of the prior geostatistical parameters of K.
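The EnKF analysis step used in assimilation schemes like the one above can be sketched generically. This is a toy stochastic-EnKF update on a two-component state with a linear observation operator; all names and sizes are illustrative, not the study's coupled hydrological model:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_err_std, seed=0):
    """Stochastic EnKF analysis step: update an ensemble of state vectors
    (columns) using perturbed observations."""
    rng = np.random.default_rng(seed)
    n_ens = ensemble.shape[1]
    Y = np.column_stack([obs_op(ensemble[:, i]) for i in range(n_ens)])
    Xp = ensemble - ensemble.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (n_ens - 1)                      # state-obs covariance
    Pyy = Yp @ Yp.T / (n_ens - 1) + obs_err_std ** 2 * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                       # Kalman gain
    perturbed = obs[:, None] + obs_err_std * rng.standard_normal((len(obs), n_ens))
    return ensemble + K @ (perturbed - Y)

# toy: estimate a 2-component state from a noisy observation of component 0
true_state = np.array([1.0, -0.5])
rng = np.random.default_rng(1)
prior = rng.standard_normal((2, 500))                  # N(0, I) prior ensemble
obs = np.array([true_state[0]])
post = enkf_update(prior, obs, lambda s: s[:1], obs_err_std=0.1)
```

In the study's setting the "observation operator" is the mapping from hydrological state to ERT-derived electrical conductivity, and the state vector carries both concentrations and the log-conductivity field being updated.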
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; ...
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
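The classical variational baseline this work contrasts with amounts to a regularized least-squares Abel inversion, which can be sketched as follows. The discretization (shell chord lengths on a uniform radial grid) and the regularization scale are illustrative assumptions:

```python
import numpy as np

def abel_matrix(n, dr=1.0):
    """Discrete forward Abel transform: (A @ f)[i] is the line-of-sight
    projection of a radially symmetric profile f at impact parameter
    x_i = (i + 0.5) * dr, built from chord lengths through radial shells."""
    edges = np.arange(n + 1) * dr
    x = (np.arange(n) + 0.5) * dr
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            lo, hi = max(edges[j], x[i]), max(edges[j + 1], x[i])
            A[i, j] = 2.0 * (np.sqrt(hi ** 2 - x[i] ** 2)
                             - np.sqrt(lo ** 2 - x[i] ** 2))
    return A

n = 40
A = abel_matrix(n)
true_f = np.exp(-((np.arange(n) + 0.5) / 10.0) ** 2)   # smooth radial density
proj = A @ true_f                                       # noiseless "radiograph"
lam = 1e-3                                              # Tikhonov scale (ad hoc)
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ proj)
rel_err = np.linalg.norm(f_hat - true_f) / np.linalg.norm(true_f)
```

The sensitivity of `f_hat` to the hand-picked `lam` is exactly the drawback that motivates placing priors on the density profile and scale parameters instead.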
NASA Astrophysics Data System (ADS)
Godano, M.; Regnier, M.; Deschamps, A.; Bardainne, T.
2009-04-01
In recent years, the feasibility of CO2 storage in geological reservoirs has been carefully investigated. Monitoring the seismicity (natural or induced by gas injection) in the reservoir area is crucial for safety concerns. The locations of seismic events provide an image of the active structures, which can be potential leakage paths. In addition, the focal mechanism is another important seismic attribute, providing direct information about rock fracturing and indirect information about the state of stress in the reservoir. We address the problem of focal mechanism determination for micro-earthquakes induced in reservoirs, with a potential application to CO2 storage sites. We developed a nonlinear inversion method based on the amplitudes of direct P, SV, and SH waves. To solve the inverse problem, we implemented our own simulated annealing algorithm. Our method allows simple determination of the fault plane solution (strike, dip, and rake of the fault plane) under a double-couple source assumption. More generally, it also allows determination of the full moment tensor under a non-purely-shear source assumption. We sought to quantify the uncertainty associated with the obtained focal mechanisms, distinguishing three causes. The first is related to the convergence of the inversion, the second to the amplitude-picking error caused by the noise level, and the third to the event location uncertainty. We performed a series of tests on synthetic data generated in a reservoir configuration in order to validate our inversion method.
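A generic simulated annealing loop of the kind described can be sketched on a toy amplitude-misfit function. The two "angle" parameters, the misfit, and all tuning constants are purely illustrative stand-ins, not the authors' P/SV/SH forward model:

```python
import numpy as np

def simulated_annealing(misfit, x0, bounds, n_steps=4000, t0=1.0, seed=0):
    """Generic simulated annealing: random local moves accepted by the
    Metropolis rule under a geometrically cooling temperature."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = np.array(x0, dtype=float)
    fx = misfit(x)
    best, fbest = x.copy(), fx
    for k in range(n_steps):
        temp = t0 * 0.999 ** k                # geometric cooling schedule
        cand = np.clip(x + 0.1 * (hi - lo) * rng.standard_normal(x.size), lo, hi)
        fc = misfit(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc                  # Metropolis acceptance
            if fx < fbest:
                best, fbest = x.copy(), fx    # track the best model seen
    return best, fbest

# toy misfit: recover two angles from "observed" amplitudes (illustrative only)
true_p = np.array([40.0, 60.0])
obs = np.array([np.sin(np.radians(true_p[0])), np.cos(np.radians(true_p[1]))])

def misfit(p):
    pred = np.array([np.sin(np.radians(p[0])), np.cos(np.radians(p[1]))])
    return float(np.sum((pred - obs) ** 2))

best, fbest = simulated_annealing(misfit, [10.0, 10.0], [(0, 90), (0, 90)])
```

In the actual application the candidate vector would carry strike, dip, and rake (or the six moment-tensor components), and the misfit would compare predicted and picked wave amplitudes.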
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.
2016-12-01
Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. 
While mass recovery generally deteriorated with an increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy. POD is also computationally superior, requiring only 2.5 minutes per inversion compared to 3 hours for CTL.
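Building a POD basis from training images and representing a target with a few modes can be sketched in one dimension. The Gaussian "plumes" below stand in for the numerically generated TIs; all sizes and the number of retained modes are assumptions:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)

# training images: Gaussian "plumes" of varying centre and width, mimicking
# the assumed transport process (stand-ins for numerically generated TIs)
tis = np.stack([np.exp(-(x - c) ** 2 / (2.0 * w ** 2))
                for c in np.linspace(0.2, 0.8, 20)
                for w in (0.05, 0.10, 0.15)], axis=1)       # shape (50, 60)

# POD basis: leading left singular vectors of the centred training ensemble
mean_ti = tis.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(tis - mean_ti, full_matrices=False)
k = 10
basis = U[:, :k]                                            # retained modes

# represent an unseen, in-family plume with the few retained modes
target = np.exp(-(x - 0.45) ** 2 / (2.0 * 0.08 ** 2))
coeffs = basis.T @ (target - mean_ti[:, 0])
recon = mean_ti[:, 0] + basis @ coeffs
rel_err = np.linalg.norm(recon - target) / np.linalg.norm(target)
```

The inversion then searches over the `k` coefficients conditioned on the geophysical data rather than over the full pixel grid, which is where the reduction in parameters (and the physics-based constraint) comes from; a bimodal plume outside the training family would be represented poorly, as observed above.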
Greedy algorithms for diffuse optical tomography reconstruction
NASA Astrophysics Data System (ADS)
Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.
2018-03-01
Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive; these methods also fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within a compressive sensing framework. Various greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP), and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We also study conventional DOT methods, the least-squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. The performance metrics mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) are used to evaluate the algorithms considered in this paper.
Extensive simulation results confirm that CS based DOT reconstruction outperforms the conventional DOT imaging methods in terms of computational efficiency. The main advantage of this study is that the forward diffusion equation solver need not be repeatedly solved.
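Of the greedy algorithms listed, OMP is the simplest to sketch. The following toy recovers a synthetic sparse vector from compressed measurements; a random sensing matrix stands in for the DOT sensitivity matrix, and all sizes are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        cols = sorted(set(support))
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef      # residual orthogonal to chosen atoms
    x = np.zeros(A.shape[1])
    x[cols] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 100, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)               # unit-norm sensing columns
x_true = np.zeros(n)
x_true[[7, 33, 71]] = [1.5, -2.0, 1.0]       # sparse "absorption change"
y = A @ x_true                               # noiseless boundary data
x_hat = omp(A, y, k)
```

CoSaMP, StOMP, and ROMP differ mainly in how many atoms they select per iteration and how they prune the support, but share this match-then-refit structure.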
Photofragment image analysis using the Onion-Peeling Algorithm
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Loock, Hans-Peter
2003-07-01
With the growing popularity of the velocity map imaging technique, a need for the analysis of photoion and photoelectron images arose. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the Onion Peeling Algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters.
Program summary
Title of program: Glass Onion
Catalogue identifier: ADRY
Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer: IBM PC
Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT
Programming language used: Delphi 4.0
Memory required to execute with typical data: 18 Mwords
No. of bits in a word: 32
No. of bytes in distributed program, including test data, etc.: 9 911 434
Distribution format: zip file
Keywords: Photofragment image, onion peeling, anisotropy parameters
Nature of physical problem: Information about the velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process rests. Reconstructing the three-dimensional distribution from the photofragment image is the first step; further processing involves angular and radial integration of the inverted image to obtain velocity and angular distributions.
Provisions have to be made to correct for slight distortions of the image and to verify the accuracy of the analysis process. Method of solution: The "Onion Peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion. Restrictions on the complexity of the problem: The maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and centering algorithms to converge. Peaks on the velocity profile separated by less than the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed, but the actual inversion algorithm is stable to distortions of such symmetry in experimental images. Typical running time: The analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approximately as R³, where R is the radius of the region of interest: for R=200 pixels it is less than a minute, and for R=400 pixels less than 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion.
It ranges from less than a second for simple curves to a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R² and depends on the curve profile. It is on the order of a few minutes for images with R=500 pixels. Unusual features of the program: Our centering and image correction algorithm is based on Fourier analysis of the radial distribution to ensure the sharpest velocity profile and is insensitive to an uneven intensity distribution. An angular averaging option is available to stabilize the inversion algorithm without losing resolution.
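The onion-peeling inversion itself reduces to back-substitution on a triangular matrix of shell chord lengths. A minimal sketch for a single radial profile (the geometry is simplified relative to full image analysis, and the grid is an assumption):

```python
import numpy as np

def chord_matrix(n, dr=1.0):
    """W[i, j]: chord length of radial shell j along the line of sight at
    impact parameter x_i = (i + 0.5) * dr; zero for shells inside the line."""
    edges = np.arange(n + 1) * dr
    x = (np.arange(n) + 0.5) * dr
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            lo = max(edges[j], x[i])
            W[i, j] = 2.0 * (np.sqrt(edges[j + 1] ** 2 - x[i] ** 2)
                             - np.sqrt(max(lo ** 2 - x[i] ** 2, 0.0)))
    return W

def onion_peel(projection, dr=1.0):
    """Invert a cylindrically symmetric projection by 'peeling': solve for the
    outermost shell first, subtract its contribution, and move inward."""
    n = len(projection)
    W = chord_matrix(n, dr)
    f = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back-substitution, outside in
        f[i] = (projection[i] - W[i, i + 1:] @ f[i + 1:]) / W[i, i]
    return f

# round trip: project a known radial distribution, then recover it
n = 30
x = np.arange(n) + 0.5
true_f = np.exp(-(x / 8.0) ** 2)
proj = chord_matrix(n) @ true_f
f_rec = onion_peel(proj)
```

For a full image, each row above the symmetry axis is peeled this way; noise amplification near the axis is the algorithm's main weakness, which is where the angular averaging option mentioned above helps.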
Calculating tissue shear modulus and pressure by 2D log-elastographic methods
NASA Astrophysics Data System (ADS)
McLaughlin, Joyce R.; Zhang, Ning; Manduca, Armando
2010-08-01
Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown, and whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇·u, which is present in the plane strain model, cannot be measured; it is unreliably computed from measured data and can be shown to be an order-one quantity in units of kPa. Here we present a 2D log-elastographic inverse algorithm that (1) simultaneously reconstructs the shear modulus, μ, and p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ, (2) controls potential exponential growth in the numerical error, and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the log-elastographic algorithm in Lin et al (2009 Inverse Problems 25), which uses one displacement component, is derived assuming that the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D log-elastographic algorithm is tested on 2D synthetic data and 2D in vivo data from the Mayo Clinic. We also exhibit examples showing that the 2D log-elastographic algorithm improves the quality of the recovered images as compared to the log-elastographic and direct inversion algorithms.
A regularized clustering approach to brain parcellation from functional MRI data
NASA Astrophysics Data System (ADS)
Dillon, Keith; Wang, Yu-Ping
2017-08-01
We consider a data-driven approach for the subdivision of an individual subject's functional Magnetic Resonance Imaging (fMRI) scan into regions of interest, i.e., brain parcellation. The approach is based on a computational technique for calculating resolution from inverse problem theory, which we apply to neighborhood selection for brain connectivity networks. This can be efficiently calculated even for very large images, and explicitly incorporates regularization in the form of spatial smoothing and a noise cutoff. We demonstrate the reproducibility of the method on multiple scans of the same subjects, as well as the variations between subjects.
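The resolution-matrix computation underlying this idea can be sketched for a generic regularized linear model. The operator below is a random stand-in for the fMRI connectivity model, and the regularization strength and neighbourhood threshold are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# a generic linear forward operator stands in for the connectivity model
m, n = 40, 60
A = rng.standard_normal((m, n))
lam = 1.0                                    # Tikhonov regularization strength

# regularized pseudo-inverse and the model resolution matrix R = A^+ A
A_pinv = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
R = A_pinv @ A

# row i of R describes how a point feature at voxel i is blurred in the
# estimate; thresholding each row yields a data-driven neighbourhood per voxel
neighborhood_sizes = (np.abs(R) > 0.1 * np.abs(R).max()).sum(axis=1)
```

The trace of R (always below the number of model parameters for a regularized inverse) measures how many effective degrees of freedom the data resolve, and the spread of each row is what the parcellation uses as a spatial neighbourhood.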
NASA Astrophysics Data System (ADS)
Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team
2017-12-01
The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, together with the reconstruction of recycling fluxes from synthetic observations.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
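A minimal instance of this Bayesian program, for a case where the partition function is tractable and need not be estimated: inferring a single coupling/field parameter h for independent spins by random-walk Metropolis sampling of its posterior. All settings (sample size, step size, burn-in) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# "ensemble data": N independent spins from p(s | h) = exp(h s) / (2 cosh h)
h_true, N = 0.8, 2000
p_up = np.exp(h_true) / (2.0 * np.cosh(h_true))
spins = np.where(rng.random(N) < p_up, 1, -1)

def log_post(h):
    # flat prior; here the partition function 2 cosh(h) is tractable, so it
    # need not be estimated along with the interaction (unlike the general case)
    return h * spins.sum() - N * np.log(2.0 * np.cosh(h))

# random-walk Metropolis draws from the posterior p(h | data)
samples, h, lp = [], 0.0, log_post(0.0)
for _ in range(20000):
    cand = h + 0.05 * rng.standard_normal()
    lp_cand = log_post(cand)
    if np.log(rng.random()) < lp_cand - lp:
        h, lp = cand, lp_cand
    samples.append(h)
posterior_mean = np.mean(samples[5000:])               # discard burn-in
```

In the genuinely intractable settings the article addresses (e.g., the inverse Ising problem), the log-partition function inside `log_post` must itself be estimated, which is what motivates the sequential Monte Carlo scheme.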
Applications of Bayesian spectrum representation in acoustics
NASA Astrophysics Data System (ADS)
Botts, Jonathan M.
This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts an acoustic spectrum to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis yields richer solutions than can be achieved with ad hoc methods. In the filter design problem, the Bayesian inversion produces, within a single framework, globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism, and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform the least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space.
The resulting designs are shown to be comparable to those obtained with global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and to account for the band-limited nature of measured reflection spectra. Finally, the model is modified, through the filter design process, to compensate for dispersive error in the finite difference simulation. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to ensure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array, or spatial intensity map. Probabilistic inversion of this response yields estimates of the number and directions of image-source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively a compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field.
Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.
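The core of the filter-design posterior described above can be sketched in a few lines. This is a minimal numpy illustration, not the dissertation's code: it assumes a Gaussian likelihood on the log-magnitude spectrum of a rational (pole-zero) discrete-time filter, and the "measured" spectrum is generated from hypothetical first-order coefficients.

```python
import numpy as np

w = np.linspace(0.01, np.pi, 256)   # frequency grid (rad/sample)

def log_mag(coeffs, w):
    # log |P(e^{-jw})| for polynomial coefficients [c0, c1, ...] in z^{-1}
    k = np.arange(len(coeffs))
    return np.log(np.abs(np.exp(-1j * np.outer(w, k)) @ np.asarray(coeffs, float)))

# hypothetical "measured" spectrum from a known first-order filter (b, a)
b_true, a_true = [1.0, 0.5], [1.0, -0.6]
target = log_mag(b_true, w) - log_mag(a_true, w)

def loglik(b, a, sigma=0.1):
    # Gaussian log-likelihood of the log-magnitude response; a sampler over
    # coefficients and model order would explore this surface
    r = (log_mag(b, w) - log_mag(a, w)) - target
    return -0.5 * np.sum(r**2) / sigma**2
```

The true coefficients maximize this log-likelihood (value zero); the Bayesian treatment in the dissertation additionally places priors on the coefficients and compares filter orders, which is what yields the filter-length estimate.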
Is 3D true non-linear traveltime tomography reasonable?
NASA Astrophysics Data System (ADS)
Herrero, A.; Virieux, J.
2003-04-01
Data sets requiring 3D analysis tools, whether from seismic exploration (onshore and offshore experiments) or natural seismicity (microseismicity surveys or post-event measurements), are increasingly numerous. Classical linearized tomographies, as well as earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of acquisition layouts is often intrinsically 3D and makes even 2D approaches difficult, especially in natural seismicity cases. The solution thus relies on a true 3D non-linear approach, which allows one to explore the model space and to identify an optimal velocity image. The problem then becomes a practical one, whose feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because inhomogeneous inversion parameters are easier to manage in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.
Single photon emission computed tomography-guided Cerenkov luminescence tomography
NASA Astrophysics Data System (ADS)
Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie
2012-07-01
Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated with experimental reconstructions of an adult athymic nude mouse implanted with a Na131I radioactive source and of an adult athymic nude mouse that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined by the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and can ignore the heterogeneity of tissues, which avoids the organ segmentation procedure.
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as in 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging
NASA Astrophysics Data System (ADS)
Haynes, Mark Spencer
Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties, and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from their traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameter measurements, allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, in which images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts form the foundation of a formulation and solution for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results for simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment, and imaging results for simple targets are presented.
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.
NASA Astrophysics Data System (ADS)
Bubis, E. L.; Lozhrkarev, V. V.; Stepanov, A. N.; Smirnov, A. I.; Martynov, V. O.; Mal'shakova, O. A.; Silin, D. E.; Gusev, S. A.
2017-03-01
We describe the process of adaptive self-inversion of the image (nonlinear switching) of a small-scale opaque object when the amplitude-modulated laser beam illuminating it is focused in a weakly absorbing medium. It is shown that, despite the nonlocal character of the process, which is due to thermal nonlinearity, the brightness-inverted image is characterized by acceptable quality and a high conversion coefficient. It is shown that the coefficient of conversion of the original image to the inverse one depends on the ratio of the object dimensions to the size of the illuminating beam, and decreases sharply for relatively large objects. The obtained experimental data agree with the numerical calculations. Inversion of the images of several model objects and of microdefects in a nonlinear KDP crystal is demonstrated.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Inverse problems in quantum chemistry
NASA Astrophysics Data System (ADS)
Karwowski, Jacek
Inverse problems constitute a branch of applied mathematics with a well-developed methodology and formalism. A broad family of tasks encountered in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences, has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches encountered in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.
NASA Astrophysics Data System (ADS)
Ahmed, A. Soueid; Jardani, A.; Revil, A.; Dupont, J. P.
2016-03-01
Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using spatial geostatistical characteristics of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high-impedance (> 10 MOhm), sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and use the adjoint state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the geostatistical quasi-linear algorithm framework of Kitanidis. When the number of piezometers is small, the record of transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals heterogeneities in some areas of the aquifer that could not be captured by tomography based on the hydraulic heads alone. In our analysis, the improvement in the hydraulic conductivity and specific storage estimations was based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies, and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.
Variable Grid Traveltime Tomography for Near-surface Seismic Imaging
NASA Astrophysics Data System (ADS)
Cai, A.; Zhang, J.
2017-12-01
We present a new traveltime tomography algorithm that images the subsurface with variable grids automatically adapted to geological structures. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach to near-surface imaging. However, model regularization on a regular, even grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, while the details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates the grid cells within them for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion can concentrate on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. With the variable grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, while using fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots at a 10 m spacing and 1735 receivers at a 10 m spacing. The first-arrival traveltimes of the seismograms were picked for tomography; the reciprocal errors of most shots are between 2 ms and 6 ms. Normal tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a well-resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids in small structures.
The variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
Brain functional BOLD perturbation modelling for forward fMRI and inverse mapping
Robinson, Jennifer; Calhoun, Vince
2018-01-01
Purpose: To computationally separate dynamic brain functional BOLD responses from the static background of brain functional activity, for forward fMRI signal analysis and inverse mapping. Methods: Brain functional activity is represented in terms of a magnetic source by a perturbation model: χ = χ0 + δχ, with δχ the BOLD magnetic perturbation and χ0 the background. A brain fMRI experiment produces a time series of complex-valued images (T2* images), from which we extract the BOLD phase signals (denoted δP) by a complex division. By solving an inverse problem, we reconstruct the BOLD δχ dataset from the δP dataset, and the brain χ distribution from an (unwrapped) T2* phase image. Given a 4D dataset of task BOLD fMRI, we implement brain functional mapping by temporal correlation analysis. Results: Through a high-field (7T), high-resolution (0.5 mm in-plane) task fMRI experiment, we demonstrate in detail the BOLD perturbation model for fMRI phase signal separation (P + δP) and for reconstructing the intrinsic brain magnetic source (χ and δχ). We also apply the model to a low-field (3T), low-resolution (2 mm) task fMRI experiment in support of single-subject fMRI study. Our experiments show that the δχ-depicted functional map reveals bidirectional BOLD χ perturbations during task performance. Conclusions: The BOLD perturbation model allows us to separate the fMRI phase signal (by complex division) and to perform inverse mapping for pure BOLD δχ reconstruction and intrinsic functional χ mapping. The full brain χ reconstruction (from the unwrapped fMRI phase) provides a new brain tissue image that allows one to scrutinize brain tissue idiosyncrasy for the pure BOLD δχ response through automatic function/structure co-localization. PMID:29351339
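The complex-division step that isolates δP can be sketched in a few lines of numpy. The magnitudes and phases below are hypothetical synthetic stand-ins for real T2* data, not values from the paper; the point is only that dividing the task frame by a baseline frame cancels the shared background phase.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)   # hypothetical in-plane image size

# synthetic complex-valued T2* images: a shared background phase P,
# plus a small BOLD phase perturbation dP present only in the task frame
mag = 1.0 + 0.1 * rng.random(shape)
P = rng.uniform(-np.pi, np.pi, shape)
dP_true = 0.05 * rng.standard_normal(shape)   # small, so no phase wrapping
baseline = mag * np.exp(1j * P)
task = mag * np.exp(1j * (P + dP_true))

# complex division cancels the background phase, leaving only dP
dP = np.angle(task / baseline)
```

In the paper this δP time series then feeds the inverse problem that reconstructs the BOLD δχ distribution; real data would additionally require unwrapping for the full-brain χ reconstruction.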
3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation
NASA Astrophysics Data System (ADS)
Chen, Z.; Meng, X.; Guo, L.; Liu, G.
2011-12-01
In recent years, large-scale gravity data sets have been collected and employed to enhance gravity problem-solving abilities in tectonics studies in China. Aiming at large-scale data and the requirement of rapid interpretation, previous authors have carried out a great deal of work, including fast gradient-module inversion and Euler deconvolution depth inversion, 3D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. It can thus be said that 3D gravity inversion has been greatly improved in the last decade. Many authors have added various kinds of a priori information and constraints to deal with non-uniqueness, using models composed of a large number of contiguous cells of unknown property, and have obtained good results. However, due to long computation times, instability and other shortcomings, 3D physical property inversion has not yet been widely applied to large-scale data. In order to achieve efficient and precise 3D interpretation of geological and ore bodies and obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell.
Next, we compare the forward gravity data with the real data, and continue to perform 3D correlation imaging on the residual gravity data. After several iterations, we obtain a satisfactory result. Newly developed general-purpose computing technology on Nvidia GPUs (Graphics Processing Units) has been put into practice and received widespread attention in many areas. Based on the GPU programming model and its two levels of parallelism, the five CPU loops of the main 3D correlation imaging computation are converted into three loops in GPU kernel functions, achieving GPU/CPU collaborative computing. The two inner loops are defined as the block dimensions and the three outer loops as the thread dimensions, realizing the double-loop block calculation. Tests on theoretical and real gravity data show that the results are reliable and the computing time is greatly reduced. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
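Franklin's stochastic inverse can be sketched numerically for a discretized underdetermined system (the matrices below are hypothetical random stand-ins, not gross earth data): given a prior model covariance R_x and a noise covariance R_n, the estimate is x̂ = R_x Aᵀ(A R_x Aᵀ + R_n)⁻¹ y, which reduces to the Moore-Penrose solution as the noise vanishes with R_x = I.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 20                       # underdetermined: fewer data than unknowns
A = rng.standard_normal((m, n))    # discretized integral operator (stand-in)
x_true = rng.standard_normal(n)
y = A @ x_true                     # noise-free data for the sanity check

def stochastic_inverse(A, y, Rx, Rn):
    # x_hat = Rx A^T (A Rx A^T + Rn)^{-1} y
    G = A @ Rx @ A.T + Rn
    return Rx @ A.T @ np.linalg.solve(G, y)

Rx = np.eye(n)                     # white prior on the model
x_hat = stochastic_inverse(A, y, Rx, 1e-12 * np.eye(m))

# with Rx = I and vanishing noise this reduces to the Moore-Penrose solution
x_mp = np.linalg.pinv(A) @ y
```

Raising the noise covariance R_n moves the estimate along the Backus-Gilbert-type tradeoff curve mentioned in the abstract, trading data fit for solution stability.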
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance of the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed, and a mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (of up to 256 48-bit words) containing both a data inversion and a random error.
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as petroleum reservoir exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually prohibitive. For shallow subsurface applications, characterization can be very complicated given the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatio-temporal correlations; this is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually, but these images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly.
The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data is available. We applied this approach for deep oil reservoir characterization and for shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
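The memory-function idea above, treating yesterday's posterior as today's prior, can be illustrated with a conjugate-Gaussian toy model (all values below are hypothetical, standing in for a scalar soil/field property updated by successive surveys):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, var = 0.0, 10.0             # broad initial prior on the property
true_val, noise_var = 2.5, 1.0  # hypothetical truth and measurement noise

# three geophysical surveys arriving over time, 20 measurements each
for _ in range(3):
    y = true_val + np.sqrt(noise_var) * rng.standard_normal(20)
    n = y.size
    # conjugate Gaussian update: posterior precision and mean
    post_var = 1.0 / (1.0 / var + n / noise_var)
    post_mu = post_var * (mu / var + y.sum() / noise_var)
    mu, var = post_mu, post_var  # store the posterior as the new prior
```

Each pass shrinks the variance: the final (mu, var) summarizes all data seen so far, which is exactly the role the "memory function" plays for the full multivariate field in the Bayesian inversion.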
Determining biosonar images using sparse representations.
Fontaine, Bertrand; Peremans, Herbert
2009-05-01
Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent further development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance relative to delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.
Translation-aware semantic segmentation via conditional least-square generative adversarial networks
NASA Astrophysics Data System (ADS)
Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min
2017-10-01
Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps to possible source images with a limited number of training images. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing gradient problem. We propose the use of conditional least squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We trained the CLS-GAN network for semantic segmentation to discriminate dense prediction information coming either from training images or from generative networks. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates the vanishing gradient. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps in the learning process. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high quality images from label maps.
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography, such as electrical capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on the unknown pixels that exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
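One regularized-and-constrained iteration of this kind can be sketched as follows. This is a hedged sketch, not the authors' INTAC code: the FEM forward model is replaced by a hypothetical linear sensitivity matrix J, and the two-phase constraint is applied by clipping out-of-range pixels to assumed permittivity bounds.

```python
import numpy as np

def intac_style_step(J, c_meas, c_sim, x, alpha, eps_low, eps_high):
    # Tikhonov-regularized Gauss-Newton-style update on the capacitance
    # residual, followed by enforcing the known two-phase permittivity range
    r = c_meas - c_sim
    H = J.T @ J + alpha * np.eye(J.shape[1])   # regularized normal equations
    x = x + np.linalg.solve(H, J.T @ r)
    return np.clip(x, eps_low, eps_high)       # project out-of-range pixels

# sanity check on a toy linear "forward model" (hypothetical sizes and values)
rng = np.random.default_rng(0)
J = rng.standard_normal((30, 10))              # sensitivity matrix stand-in
x_true = rng.uniform(1.2, 2.8, 10)             # pixel permittivities
c_meas = J @ x_true                            # "measured" capacitances
x = np.full(10, 2.0)                           # initial homogeneous guess
for _ in range(50):
    x = intac_style_step(J, c_meas, J @ x, x, alpha=1e-3,
                         eps_low=1.0, eps_high=3.0)
```

In the real algorithm c_sim comes from a nonlinear FEM solve each iteration, and out-of-range pixels are snapped to the two known phase permittivities; the clip here is a simplified version of that constraint.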
A theoretical formulation of the electrophysiological inverse problem on the sphere
NASA Astrophysics Data System (ADS)
Riera, Jorge J.; Valdés, Pedro A.; Tanabe, Kunio; Kawashima, Ryuta
2006-04-01
The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. Even now there are enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD is described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for this class of inverse solution to be well specified. In order to validate the proposed method, we estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human face recognition experiment.
NASA Astrophysics Data System (ADS)
Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro
2005-01-01
The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, the second conference welcomed a total of 92 participants, and various talks were arranged, ranging from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems.
Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in atmospheric sciences and oceanography. Last but not least, we express our gratitude. As editors we would like to offer our sincere thanks to all the plenary and invited speakers, and to the members of the International Scientific Committee and the Advisory Board, for the success of the conference, which has given rise to this present volume of selected papers. We would also like to thank Mr Wang Yanbo, Miss Wan Xiqiong and the graduate students at Fudan University for their effective work in making this conference a success. The conference was financially supported by the NSF of China, the Mathematical Center of the Ministry of Education of China, E-Institutes of Shanghai Municipal Education Commission (No E03004) and Fudan University, Grant 15340027 from the Japan Society for the Promotion of Science, and Grant 15654015 from the Ministry of Education, Culture, Sports, Science and Technology.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
NASA Astrophysics Data System (ADS)
Jaiswal, Priyank; Dasgupta, Rahul
2010-05-01
We demonstrate that imaging of 2-D multichannel land seismic data can be effectively accomplished by a combination of reflection traveltime tomography and pre-stack depth migration (PSDM); we refer to the combined process as "the unified imaging". The unified imaging comprises cyclic runs of joint reflection and direct arrival inversion and pre-stack depth migration. From one cycle to another, both the inversion and the migration provide mutual feedback guided by the geological interpretation. The unified imaging is implemented in two broad stages. The first stage is similar to conventional imaging except that it makes significant use of the velocity model from the inversion of the direct arrivals for both datuming and stacking velocity analysis. The first stage ends with an initial interval velocity model (from the stacking velocity analysis) and a corresponding depth migrated image. The second stage updates the velocity model and the depth image from the first stage in a cyclic manner; a single cycle comprises a single run of reflection traveltime inversion followed by PSDM. Interfaces used in the inversion are interpretations of the PSDM image from the previous cycle, and the velocity model used in PSDM is from the joint inversion in the current cycle. Additionally, in every cycle, interpreted horizons in the stacked data are inverted as zero-offset reflections to constrain the interfaces; the velocity model is held stationary for the zero-offset inversion. A congruency factor, j, which measures the discrepancy between interfaces from the interpretation of the PSDM image and their corresponding counterparts from the inversion of the zero-offset reflections within assigned uncertainties, is computed in every cycle. A value of unity for j indicates that images from both the inversion and the migration are equivalent; at this point the unified imaging is said to have converged and is halted.
We apply the unified imaging to 2-D multichannel seismic data from the Naga Thrust and Fold Belt (NTFB), India, where several exploratory wells targeting sub-thrust leads in the footwall have failed in the last decade. This failure is speculated to be due to incorrect depth images, which are in turn attributed to incorrect velocity models developed using conventional methods. The 2-D seismic data in this study were acquired perpendicular to the trend of the NTFB, where the outcropping hanging wall has a topographic culmination. The acquisition style is split-spread with 30 m shot and receiver spacing and a nominal fold of 90. The data are recorded with a sample interval of 2 ms. Overall the data have a moderate signal-to-noise ratio and a broad frequency bandwidth of 8-80 Hz. The seismic line contains the failed exploratory well in its central part. The final results from the unified imaging (both the depth image and the corresponding velocity model) suggest the presence of a triangle zone, which was previously undiscovered. Conventional imaging had falsely portrayed the triangle zone as a structural high, which was interpreted as an anticline. As a result, the exploratory well, meant to target the anticline, met with pressure changes that were neither expected nor explained. The unified imaging results not only explain the observations in the well but also reveal new leads in the region. The velocity model from the unified imaging was also found to be adequate for frequency-domain full-waveform imaging of the hanging wall. Results from waveform inversion are further corroborated by the geological interpretation of the exploratory well.
Model-based image analysis of a tethered Brownian fibre for shear stress sensing
2017-01-01
The measurement of fluid dynamic shear stress acting on a biologically relevant surface is a challenging problem, particularly in the complex environment of, for example, the vasculature. While an experimental method for the direct detection of wall shear stress via the imaging of a synthetic biology nanorod has recently been developed, the data interpretation so far has been limited to phenomenological random walk modelling, small-angle approximation, and image analysis techniques which do not take into account the production of an image from a three-dimensional subject. In this report, we develop a mathematical and statistical framework to estimate shear stress from rapid imaging sequences based firstly on stochastic modelling of the dynamics of a tethered Brownian fibre in shear flow, and secondly on a novel model-based image analysis, which reconstructs fibre positions by solving the inverse problem of image formation. This framework is tested on experimental data, providing the first mechanistically rational analysis of the novel assay. What follows further develops the established theory for an untethered particle in a semi-dilute suspension, which is of relevance to, for example, the study of Brownian nanowires without flow, and presents new ideas in the field of multi-disciplinary image analysis. PMID:29212755
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2018-04-01
We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing full probabilistic results. Our mathematical model treats the seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.
NASA Astrophysics Data System (ADS)
Jiang, Peng; Peng, Lihui; Xiao, Deyun
2007-06-01
This paper presents a regularization method for electrical capacitance tomography (ECT) image reconstruction that uses different window functions as regularizers. Image reconstruction for ECT is a typical ill-posed inverse problem: because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. The window functions, such as the Hanning window and the cosine window, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours are clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
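The spectral-filtering idea can be made concrete with the SVD. Below is a minimal sketch on a synthetic ill-conditioned system (our own toy example, not an actual ECT sensitivity matrix): the window supplies filter factors, one per singular value, that pass the well-determined directions and damp the noise-amplifying small singular values.

```python
import numpy as np

def window_filtered_solution(A, b, window):
    """Regularized solution of A x = b by SVD spectral filtering.

    `window` holds filter factors in [0, 1], one per singular value
    (e.g. the decaying half of a Hann window), so directions with large
    singular values pass nearly unchanged while the small,
    noise-amplifying ones are suppressed.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (window * (U.T @ b) / s)

# Illustrative ill-posed system with a rapidly decaying spectrum.
rng = np.random.default_rng(3)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -10, n)                    # condition number ~1e10
A = U @ np.diag(s) @ V.T
x_true = V[:, 0]                              # lies in a stable direction
b = A @ x_true + 1e-6 * U[:, -1]              # noise in the worst direction

w = np.hanning(2 * n)[n:]                     # Hann filter factors, ~1 -> 0
x_naive = np.linalg.solve(A, b)               # noise amplified by ~1e10
x_reg = window_filtered_solution(A, b, w)
```

The unregularized solve amplifies the noise by the reciprocal of the smallest singular value, while the windowed solution stays close to the true model.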
Homogenization of Electromagnetic and Seismic Wavefields for Joint Inverse Modeling
NASA Astrophysics Data System (ADS)
Newman, G. A.; Commer, M.; Petrov, P.; Um, E. S.
2011-12-01
A significant obstacle in developing a robust joint imaging technology exploiting seismic and electromagnetic (EM) wave fields is the resolution at which these different geophysical measurements sense the subsurface. Imaging of seismic reflection data is an order of magnitude finer in resolution and scale compared to images produced with EM data. A consistent joint image of the subsurface geophysical attributes (velocity, electrical conductivity) requires that the different geophysical data types be similar in their resolution of the subsurface. The superior resolution of seismic data results from the fact that the energy propagates as a wave, while propagation of EM energy is diffusive and attenuates with distance. On the other hand, the complexity of the seismic wave field can be a significant problem due to high reflectivity of the subsurface and the generation of multiple scattering events. While seismic wave fields have been very useful in mapping the subsurface for energy resources, too much scattering and too many reflections can lead to difficulties in imaging and interpreting seismic data. To overcome these obstacles a formulation for joint imaging of seismic and EM wave fields is introduced, where each data type is matched in resolution. In order to accomplish this, seismic data are first transformed into the Laplace-Fourier domain, which changes the modeling of the seismic wave field from wave propagation to diffusion.
Though high-frequency information (reflectivity) is lost with this transformation, several benefits follow: (1) seismic and EM data can be easily matched in resolution, governed by the same physics of diffusion; (2) standard least-squares inversion works well with diffusive-type problems, including both transformed seismic and EM; (3) joint imaging of seismic and EM data may produce better starting velocity models, critical for successful reverse time migration or full-waveform imaging of (non-transformed) seismic data; and (4) the possibility of imaging across multiple scale lengths, incorporating different types of geophysical data and attributes in the process. Important numerical details of 3D seismic wave field simulation in the Laplace-Fourier domain for both acoustic and elastic cases will also be discussed.
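The key transformation is easy to demonstrate on a 1-D trace. The sketch below (our illustration, with arbitrary pulse and damping parameters) evaluates the Laplace-Fourier transform F(s) = ∫ f(t) e^(-st) dt with s = σ + iω; a positive σ turns a later (more distant) arrival into an exponentially weaker contribution, mimicking the diffusive decay-with-distance of EM fields.

```python
import numpy as np

def laplace_fourier(signal, dt, sigma, omega):
    """Discrete Laplace-Fourier transform: sum f(t) e^{-(sigma+i omega) t} dt."""
    t = np.arange(len(signal)) * dt
    return np.sum(signal * np.exp(-(sigma + 1j * omega) * t)) * dt

# Two identical wavelets, one arriving later (a longer travel path).
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
pulse = lambda t0: np.exp(-((t - t0) / 0.01) ** 2)
early, late = pulse(0.1), pulse(0.5)

# With sigma > 0 the later arrival is damped by exp(-sigma * delay),
# so the transformed wavefield behaves diffusively.
F_early = laplace_fourier(early, dt, sigma=10.0, omega=2 * np.pi * 5)
F_late = laplace_fourier(late, dt, sigma=10.0, omega=2 * np.pi * 5)
```

For a pure time shift of 0.4 s and σ = 10, the magnitude ratio is e^(-4), which is the wave-to-diffusion damping the formulation exploits.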
The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.
Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.
Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco
2017-10-27
Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still a subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires satisfying both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. The simultaneous linear equations for the model parameters are solved using the singular value decomposition method. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation, and numerical examples implementing the newly developed approach for symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs, are presented. Excellent numerical results for both kinds of problem are achieved in situations that have either unique or infinitely many solutions.
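For the symmetric Toeplitz case, the conversion described above can be sketched directly: with one prescribed eigenpair (λ, x), the constraint T(t)x = λx is linear in the Toeplitz parameters t, and the resulting system can be solved via the SVD. The toy implementation below is our own hedged illustration, not the paper's full procedure.

```python
import numpy as np

def toeplitz_sym(t):
    """Symmetric Toeplitz matrix whose first row is t[0..n-1]."""
    n = len(t)
    return np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

def pdiep_toeplitz(x, lam):
    """Recover Toeplitz parameters t such that T(t) x = lam * x.

    T(t) x is linear in t, so the spectral constraint becomes the
    linear system M t = lam * x with M[i, k] = sum_{j:|i-j|=k} x[j],
    solved here with the SVD-based pseudo-inverse.
    """
    n = len(x)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M[i, abs(i - j)] += x[j]
    return np.linalg.pinv(M) @ (lam * x)

# Round trip: take one eigenpair of a known matrix, then reconstruct.
t_true = np.array([2.0, 1.0, 0.5, 0.25])
w, V = np.linalg.eigh(toeplitz_sym(t_true))
x, lam = V[:, -1], w[-1]
t_hat = pdiep_toeplitz(x, lam)
```

The recovered parameters satisfy the prescribed eigenpair by construction; when M is rank-deficient, the pseudo-inverse returns the minimum-norm member of the solution family, consistent with the "infinitely many solutions" situation mentioned above.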
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
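Chahine's relaxation updates each unknown multiplicatively by the ratio of the measured to the computed signal. A minimal sketch on a lower-triangular kernel with nonzero diagonal, the structure the convergence argument relies on (the numbers are our own illustrative choices, not the paper's limb-viewing kernels):

```python
import numpy as np

def chahine_relaxation(K, y, x0, n_iter=200):
    """Chahine's nonlinear relaxation for K x = y (all quantities positive).

    Each unknown is paired with one measurement and updated
    multiplicatively by the ratio of observed to computed signal:
        x_i <- x_i * y_i / (K x)_i
    """
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x *= y / (K @ x)
    return x

# Lower-triangular kernel with nonzero diagonal, mimicking the
# limb-viewing geometry where each ray only senses levels above it.
K = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.2, 1.0, 0.0, 0.0],
              [0.2, 0.2, 1.0, 0.0],
              [0.2, 0.2, 0.2, 1.0]])
x_true = np.array([1.0, 2.0, 3.0, 4.0])
y = K @ x_true
x_rec = chahine_relaxation(K, y, np.ones(4))
```

With a triangular kernel the convergence cascades: the first unknown locks in after one update (its row involves no other unknowns), and each subsequent unknown converges geometrically once the ones above it have settled.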
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The Forward Imaging Technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for a radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.
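The flavor of forward-model-based density reconstruction can be sketched with a simpler algorithm. The projected-gradient (Landweber) iteration below is a stand-in for the paper's Constrained Conjugate Gradient, and the 1-D blur kernel is a hypothetical surrogate for areal source and detector blur; both choices are ours, for illustration only.

```python
import numpy as np

def projected_landweber(forward, adjoint, data, x0, step, n_iter=500):
    """Nonnegativity-constrained reconstruction of x from data = F x.

    Gradient steps on ||F x - data||^2 followed by projection onto
    x >= 0 (a simpler stand-in for constrained conjugate gradient).
    """
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x -= step * adjoint(forward(x) - data)
        np.maximum(x, 0.0, out=x)      # densities cannot be negative
    return x

# Hypothetical blurred "radiograph" of a 1-D slab density profile.
kernel = np.array([0.25, 0.5, 0.25])         # toy source-blur point spread
forward = lambda x: np.convolve(x, kernel, mode="same")
adjoint = forward                            # symmetric kernel: F = F^T
x_true = np.zeros(64)
x_true[20:40] = 1.0                          # slab of unit density
data = forward(x_true)
x_rec = projected_landweber(forward, adjoint, data, np.zeros(64), step=0.5)
```

The step size 0.5 is safely below 2/L for this kernel (whose operator norm is at most 1), so the data misfit and the model error both decrease monotonically.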
Vorticity field measurement using digital inline holography
NASA Astrophysics Data System (ADS)
Mallery, Kevin; Hong, Jiarong
2017-11-01
We demonstrate the direct measurement of a 3D vorticity field using digital inline holographic microscopy. Microfiber tracer particles are illuminated with a 532 nm continuous diode laser and imaged using a single CCD camera. The recorded holographic images are processed using a GPU-accelerated inverse problem approach to reconstruct the 3D structure of each microfiber in the imaged volume. The translation and rotation of each microfiber are measured using a time-resolved image sequence - yielding velocity and vorticity point measurements. The accuracy and limitations of this method are investigated using synthetic holograms. Measurements of solid body rotational flow are used to validate the accuracy of the technique under known flow conditions. The technique is further applied to a practical turbulent flow case for investigating its 3D velocity field and vorticity distribution.
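The solid-body validation rests on the identity ω = 2Ω: a fiber in solid-body rotation turns at the local fluid angular velocity, so vorticity is twice the measured rotation rate. A simplified 2-D sketch of that estimate (our illustration; the paper reconstructs full 3-D fiber poses from holograms):

```python
import numpy as np

def vorticity_from_orientations(angles, dt):
    """Vorticity from a time series of fiber orientation angles (radians).

    In solid-body rotation a fiber turns at the local fluid angular
    velocity Omega, and the vorticity is omega = 2 * Omega, so the
    estimate is twice the finite-difference rotation rate.
    """
    rate = np.gradient(np.unwrap(angles), dt)
    return 2.0 * rate.mean()

# Solid-body rotation at Omega = 3 rad/s sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
omega_est = vorticity_from_orientations(3.0 * t, dt)   # expect 6.0
```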
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing the inverse heat-source method into the parameter identification of fatigue dissipation energy models offers a new way to compute this energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process; it then discusses the prospects of applying inverse heat-source methods in the fatigue damage field, laying a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to solve the problem by suppressing noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimal regional regularizer, a priori knowledge of the local degree of nonlinearity of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.
Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-05-01
We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. IDL is proposed to overcome the drawback of traditional dictionary learning algorithms, which lose partial texture information. First, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from the sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.
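The coherency that IDL minimizes is the largest correlation between distinct dictionary atoms; lower coherence gives better-conditioned sparse coding. A minimal sketch of measuring it (our own, not the paper's IDL update), evaluated on the classic spikes-plus-Hadamard dictionary, whose mutual coherence is exactly 1/sqrt(d):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between distinct unit-norm atoms of D."""
    G = D.T @ D
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

# Spikes + normalized Hadamard columns: a classic low-coherence pair.
I4 = np.eye(4)
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]]) / 2.0       # unit-norm columns
D = np.hstack([I4, H4])
mu = mutual_coherence(D)                     # 1/sqrt(4) = 0.5
```

Spike-spike and Hadamard-Hadamard inner products vanish, so the coherence is set entirely by the cross terms of magnitude 1/2; an IDL-style learner drives a trained dictionary's coherence down toward such a floor.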
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
Direct T2 Quantification of Myocardial Edema in Acute Ischemic Injury
Verhaert, David; Thavendiranathan, Paaladinesh; Giri, Shivraman; Mihai, Georgeta; Rajagopalan, Sanjay; Simonetti, Orlando P.; Raman, Subha V.
2014-01-01
OBJECTIVES To evaluate the utility of rapid, quantitative T2 mapping compared with conventional T2-weighted imaging in patients presenting with various forms of acute myocardial infarction. BACKGROUND T2-weighted cardiac magnetic resonance (CMR) identifies myocardial edema before the onset of irreversible ischemic injury and has shown value in risk-stratifying patients with chest pain. Clinical acceptance of T2-weighted CMR has, however, been limited by well-known technical problems associated with existing techniques. T2 quantification has recently been shown to overcome these problems; we hypothesized that T2 measurement in infarcted myocardium versus remote regions versus zones of microvascular obstruction in acute myocardial infarction patients could help reduce uncertainty in interpretation of T2-weighted images. METHODS T2 values using a novel mapping technique were prospectively recorded in 16 myocardial segments in 27 patients admitted with acute myocardial infarction. Regional T2 values were averaged in the infarct zone and remote myocardium, both defined by a reviewer blinded to the results of T2 mapping. Myocardial T2 was also measured in a group of 21 healthy volunteers. RESULTS T2 of the infarct zone was 69 ± 6 ms compared with 56 ± 3.4 ms for remote myocardium (p < 0.0001). No difference in T2 was observed between remote myocardium and myocardium of healthy volunteers (56 ± 3.4 ms and 55.5 ± 2.3 ms, respectively, p = NS). T2 mapping allowed for the detection of edematous myocardium in 26 of 27 patients; by comparison, segmented breath-hold T2-weighted short tau inversion recovery images were negative in 7 and uninterpretable in another 2 due to breathing artifacts. Within the infarct zone, areas of microvascular obstruction were characterized by a lower T2 value (59 ± 6 ms) compared with areas with no microvascular obstruction (71.6 ± 10 ms, p < 0.0001). 
T2 mapping provided consistent high-quality results in patients unable to breath-hold and in those with irregular heart rhythms, in whom short tau inversion recovery often yielded inadequate imaging. CONCLUSIONS Quantitative T2 mapping reliably identifies myocardial edema without the limitations encountered by T2-weighted short tau inversion recovery imaging, and may therefore be clinically more robust in showing acute ischemic injury. PMID:21414575
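The quantification step fits a mono-exponential decay S(TE) = S0 exp(-TE/T2) to the signal at several echo times. The sketch below uses a simple log-linear least-squares fit, with assumed echo times and the paper's reported mean T2 values as synthetic ground truth; clinical mapping sequences and their noise handling are more sophisticated than this.

```python
import numpy as np

def fit_t2(echo_times, signals):
    """Pixel-wise mono-exponential fit S(TE) = S0 * exp(-TE / T2).

    Log-linear least squares over `echo_times`; `signals` has shape
    (n_echoes, n_pixels). A simplified textbook fit, not the paper's
    T2-prepared mapping sequence.
    """
    te = np.asarray(echo_times, float)
    logs = np.log(np.asarray(signals, float).reshape(len(te), -1))
    # linear model: log S = log S0 - TE / T2
    A = np.stack([np.ones_like(te), -te], axis=1)
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    return np.exp(coef[0]), 1.0 / coef[1]      # (S0, T2) per pixel

# Two synthetic pixels: infarct-like T2 ~ 69 ms, remote-like ~ 56 ms.
te = np.array([10.0, 25.0, 50.0, 75.0])        # echo times in ms (assumed)
t2_true = np.array([69.0, 56.0])
signals = 1000.0 * np.exp(-te[:, None] / t2_true[None, :])
s0_est, t2_est = fit_t2(te, signals)
```

On noise-free data the fit recovers T2 exactly, which is why per-pixel T2 values separate edematous from remote myocardium more robustly than qualitative T2-weighted contrast.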
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.
2018-06-01
We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
NASA Astrophysics Data System (ADS)
Luo, H.; Zhang, H.; Gao, J.
2016-12-01
Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary, and their integration helps determine the resistivity and velocity models of the target region more reliably. Because of the difficulty of finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems together along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing seismic and MT inversions as well as cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems but suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems and at the same time enforces the balance between data fitting and the structure-consistency constraint. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures.
We will also show results applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photo-acoustic tomography require reconstructing the initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free-space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists of solving backward in time on the interval [0, T] the initial/boundary value problem for the wave equation, with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large, one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
Children's Understanding of the Inverse Relation between Multiplication and Division
ERIC Educational Resources Information Center
Robinson, Katherine M.; Dube, Adam K.
2009-01-01
Children's understanding of the inversion concept in multiplication and division problems (i.e., that on problems of the form "d multiplied by e/e" no calculations are required) was investigated. Children in Grades 6, 7, and 8 completed an inversion problem-solving task, an assessment of procedures task, and a factual knowledge task of simple…
Space-Based Remote Sensing of Atmospheric Aerosols: The Multi-Angle Spectro-Polarimetric Frontier
NASA Technical Reports Server (NTRS)
Kokhanovsky, A. A.; Davis, A. B.; Cairns, B.; Dubovik, O.; Hasekamp, O. P.; Sano, I.; Mukai, S.; Rozanov, V. V.; Litvinov, P.; Lapyonok, T.;
2015-01-01
A review of optical instrumentation, forward modeling, and inverse-problem solution for polarimetric aerosol remote sensing from space is presented. Special emphasis is given to the description of current airborne and satellite imaging polarimeters and to modern satellite aerosol retrieval algorithms based on measurements of the Stokes vector of reflected solar light as detected at the satellite. Various underlying surface reflectance models are discussed and evaluated.
Inverse Problems and Imaging (Pitman Research Notes in Mathematics Series Number 245)
1991-01-01
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Saillard, Marc
2005-12-01
This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarization measurements have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance.
Contributions A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method on TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is incorporated. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM). It suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from the adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of the Green's function, even though the media involved are lossless. A Fourier-Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both real and imaginary parts of the permittivity if no prior information is included.
M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, an adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any other technique devoted to improving resolution remains to be studied. As done by some other contributors, the model of the incident field is chosen to fit the Fourier-Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. It is suggested to minimize a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods, based on the second-order Born approximation, which is nonlinear in terms of contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost.
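The inner Landweber loop mentioned for the inexact-Newton contribution can be sketched for a generic linearized system Ax = b; the tiny dense system and the step size in the usage below are illustrative assumptions (Landweber converges for tau < 2/||A||^2):

```python
def landweber(A, b, tau, iters):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k)
    for a small dense linear system; A is given as a list of rows."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step along A^T r
        x = [x[j] + tau * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
    return x
```

For example, with A = [[2, 0], [0, 1]], b = [2, 3] and tau = 0.4, the iterates converge to the solution [1, 3].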
O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior law distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to acquire the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. Reliability of the data is demonstrated through comparisons between measurements and computed scattered fields in both fundamental polarizations. In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level-set representation. It is assumed that the constitutive materials of the obstacles under test are known, and the shape is retrieved. Two approaches are reported. In the first one, the obstacles of different constitutive materials are represented by a single level set, while in the second approach several level sets are combined.
The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum-phase functions to 2D problems. In the kind of inverse problems we are concerned with, it consists of separating the contributions of the field and of the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA-CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors exploit the fast convergence of the DTA method to generate an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. Conclusion In this special section various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from 'aspect-limited' configurations.
Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem. Acknowledgments The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.
A Volunteer Computing Project for Solving Geoacoustic Inversion Problems
NASA Astrophysics Data System (ADS)
Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya
2017-12-01
A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. The problem is well suited to volunteer computing because it can be easily decomposed into independent, simpler subproblems.
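The decomposition into independent subproblems can be illustrated as a grid search in which every candidate sound-speed profile is scored on its own; the `misfit` below is a hypothetical stand-in for the dispersion-based misfit actually used, and in the project each evaluation would run on a separate volunteer host:

```python
from itertools import product

def misfit(profile, observed):
    """Hypothetical stand-in for the dispersion-based misfit: compare a
    candidate profile against observed values point by point."""
    return sum((p - o) ** 2 for p, o in zip(profile, observed))

def grid_search(grids, observed):
    """Each candidate profile on the discretization grid is an
    independent subproblem, so the evaluations inside min() could be
    distributed across hosts with no communication between them."""
    return min(product(*grids), key=lambda prof: misfit(prof, observed))
```

For instance, with grids [(1480, 1490, 1500), (1500, 1510)] (sound speeds in m/s at two depth nodes) the search returns the candidate closest to the observed values.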
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium (IPDO-2007), April 16-18, Miami (FL), USA, vol 1, p.
354-359.
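The delta-rule update that adjusts the connection weights can be sketched for a single linear unit; the paper trains a full multi-layer perceptron, so this single-neuron version only illustrates the update w <- w + eta * (target - output) * input:

```python
def train_delta_rule(samples, eta=0.1, epochs=200):
    """samples: list of (inputs, target) pairs. Learns the weights of a
    single linear neuron y = w . x with the delta rule: for each sample
    the weights move proportionally to the output error times the input."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            err = t - y
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    return w
```

Trained on samples of the mapping y = 2x, the learned weight converges to 2; in the paper the analogous network maps measured concentrations to area-source strengths.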
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in the Earth sciences. Here, we give a description, from a user's and a programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
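The file-only coupling between the steps can be illustrated schematically; the file names, the toy "forward solver" and the toy damped update rule below are all hypothetical stand-ins for the independent executables that ASKI actually orchestrates:

```python
import json
import os

def forward_solver(model_file, data_file):
    """Toy stand-in for the forward step: read the model from file,
    write a synthetic 'traveltime' to file (assumed path length 100)."""
    with open(model_file) as f:
        slowness = json.load(f)["slowness"]
    with open(data_file, "w") as f:
        json.dump({"traveltime": 100.0 * slowness}, f)

def model_update(data_file, observed, model_file):
    """Toy stand-in for the update step: damped correction of the model
    from the data residual, again communicating only through files."""
    with open(data_file) as f:
        synthetic = json.load(f)["traveltime"]
    with open(model_file) as f:
        slowness = json.load(f)["slowness"]
    with open(model_file, "w") as f:
        json.dump({"slowness": slowness + 0.5 * (observed - synthetic) / 100.0}, f)

def run_iteration(workdir, observed):
    """One outer iteration: forward solve, then model update; the two
    'programs' share nothing except the files in workdir."""
    model_file = os.path.join(workdir, "model.json")
    data_file = os.path.join(workdir, "synthetics.json")
    if not os.path.exists(model_file):
        with open(model_file, "w") as f:
            json.dump({"slowness": 1.0}, f)
    forward_solver(model_file, data_file)
    model_update(data_file, observed, model_file)
    with open(model_file) as f:
        return json.load(f)["slowness"]
```

Because each step reads and writes only files, any stage can be swapped out, or repeated with different settings, without touching the others, which is the design point the abstract emphasizes.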
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess.
Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined, decoupled meshes.
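The refinement criterion (refine where the imaged parameter varies most) can be sketched in one dimension; this is a minimal illustration of the marking step only, not the paper's finite-element implementation:

```python
def mark_cells_for_refinement(values, fraction=0.3):
    """values: imaged parameter value per cell along a 1-D mesh.
    Returns the indices of cell interfaces with the largest jumps
    |m[i+1] - m[i]|; refining the mesh around these interfaces mimics
    'refine in the neighborhoods of the largest spatial variations'."""
    jumps = [abs(values[i + 1] - values[i]) for i in range(len(values) - 1)]
    k = max(1, int(fraction * len(jumps)))
    ranked = sorted(range(len(jumps)), key=lambda i: jumps[i], reverse=True)
    return sorted(ranked[:k])
```

A marker like this drives each adaptive pass: cells near the returned interfaces are split, the inversion is rerun, and the cycle repeats until the variations even out.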
Fast reconstruction of optical properties for complex segmentations in near infrared imaging
NASA Astrophysics Data System (ADS)
Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador
2017-04-01
The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed by making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.
Image deblurring using a joint entropy prior in x-ray luminescence computed tomography
NASA Astrophysics Data System (ADS)
Su, Chang; Dutta, Joyita; Zhang, Hui; El Fakhri, Georges; Li, Quanzheng
2017-03-01
X-ray luminescence computed tomography (XLCT) is an emerging hybrid imaging modality that can provide functional and anatomical images at the same time. Traditional narrow-beam XLCT can achieve high spatial resolution as well as high sensitivity. However, by treating the CCD camera as a single-pixel detector, this kind of scheme resembles first-generation CT scanners, which results in a long scanning time and a high radiation dose. Although cone-beam or fan-beam XLCT can mitigate this problem once an optical propagation model is introduced, image quality suffers because the inverse problem is ill-conditioned. Much effort has been made to improve image quality through hardware improvements or by developing new reconstruction techniques for XLCT. The objective of this work is to further enhance the already reconstructed image by introducing anatomical information through retrospective processing. The deblurring process used a spatially variant point spread function (PSF) model and a joint entropy based anatomical prior derived from a CT image acquired with the same XLCT system. A numerical experiment was conducted with a real mouse CT image from the Digimouse phantom used as the anatomical prior. The resultant images of bone and lung regions showed sharp edges and good consistency with the CT image. Activity error was reduced by 52.3% even for a nanophosphor lesion as small as 0.8 mm.
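A PSF-based deblurring step of this kind can be sketched with classical Richardson-Lucy iterations in 1-D. This is a hedged illustration only: the paper's method uses a spatially variant PSF and a joint-entropy anatomical prior, both of which are omitted here:

```python
def convolve(x, psf):
    """'Same'-size 1-D convolution with a short kernel (zero padding)."""
    n, k = len(x), len(psf)
    h = k // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(k):
            idx = i + j - h
            if 0 <= idx < n:
                s += x[idx] * psf[j]
        out.append(s)
    return out

def richardson_lucy(blurred, psf, iters=50):
    """Richardson-Lucy deconvolution: x <- x * (psf_adj conv (b / (psf conv x))).
    For a symmetric, normalized PSF the adjoint (flipped) kernel is psf itself."""
    x = [1.0] * len(blurred)  # flat nonnegative initial estimate
    for _ in range(iters):
        est = convolve(x, psf)
        ratio = [b / e if e > 1e-12 else 0.0 for b, e in zip(blurred, est)]
        corr = convolve(ratio, psf)
        x = [xi * c for xi, c in zip(x, corr)]
    return x
```

Starting from a flat estimate, the iterations concentrate the blurred energy back toward the underlying source, which is the basic mechanism the paper's prior then regularizes.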
DAMIT: a database of asteroid models
NASA Astrophysics Data System (ADS)
Durech, J.; Sidorin, V.; Kaasalainen, M.
2010-04-01
Context. Apart from a few targets that were directly imaged by spacecraft, remote sensing techniques are the main source of information about the basic physical properties of asteroids, such as the size, the spin state, or the spectral type. The most widely used observing technique - time-resolved photometry - provides us with data that can be used for deriving asteroid shapes and spin states. In the past decade, inversion of asteroid lightcurves has led to more than a hundred asteroid models. In the next decade, when data from all-sky surveys are available, the number of asteroid models will increase. Combining photometry with, e.g., adaptive optics data produces more detailed models. Aims: We created the Database of Asteroid Models from Inversion Techniques (DAMIT) with the aim of providing the astronomical community access to reliable and up-to-date physical models of asteroids - i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects, as well as for statistical studies of the whole set. Methods: Most DAMIT models were derived from photometric data by the lightcurve inversion method. Some of them have been further refined or scaled using adaptive optics images, infrared observations, or occultation data. A substantial number of the models were also derived using sparse photometric data from astrometric databases. Results: At present, the database contains models of more than one hundred asteroids. For each asteroid, DAMIT provides the polyhedral shape model, the sidereal rotation period, the spin axis direction, and the photometric data used for the inversion. The database is updated when new models are available or when already published models are updated or refined. We have also released the C source code for the lightcurve inversion and for the direct problem (updates and extensions will follow).
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is illustrated via numerical simulations.
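The MVDR technique that the paper treats as a special case can be sketched for a two-element array; the covariance model in the usage below (one plane wave plus white noise) is an illustrative assumption:

```python
import cmath
import math

def steering(theta, m, d=0.5):
    """Steering vector of an m-element uniform line array, element
    spacing d in wavelengths, arrival angle theta in radians."""
    return [cmath.exp(-2j * math.pi * d * k * math.sin(theta)) for k in range(m)]

def inv2(R):
    """Closed-form inverse of a 2x2 complex matrix (rows as lists)."""
    (a, b), (c, d) = R
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mvdr_power(theta, R):
    """MVDR spatial spectrum P(theta) = 1 / (a^H R^{-1} a) for a
    2-element array; R is the 2x2 sample covariance matrix."""
    a = steering(theta, 2)
    Ri = inv2(R)
    q = sum(a[i].conjugate() * Ri[i][j] * a[j] for i in range(2) for j in range(2))
    return (1.0 / q).real
```

Scanning theta and plotting P(theta) peaks at the true arrival angle, which is the behavior the DYED framework generalizes with its additional regularization levels.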
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory's INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE: the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus the time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
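The Gelman-Rubin diagnostic itself is simple to state: with m chains of length n, it compares between-chain and within-chain variance. A minimal sketch of the standard formula follows; the wiring into INVERSE's transport loop is not shown:

```python
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of length n.
    Values near 1 indicate the chains have mixed, which serves as the
    stopping signal in place of a fixed iteration budget."""
    m = len(chains)
    n = len(chains[0])
    chain_means = [mean(c) for c in chains]
    grand = mean(chain_means)
    # between-chain variance B and mean within-chain variance W
    B = n / (m - 1) * sum((cm - grand) ** 2 for cm in chain_means)
    W = mean(variance(c) for c in chains)
    # pooled posterior variance estimate
    var_plus = (n - 1) / n * W + B / n
    return (var_plus / W) ** 0.5
```

Chains sampling the same distribution give R-hat near 1; chains stuck in different regions give a large R-hat, signalling that more transport calculations are needed.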
Larkin, Kieran G; Fletcher, Peter A
2014-03-01
X-ray Talbot moiré interferometers can now simultaneously generate two differential phase images of a specimen. The conventional approach to integrating differential phase is unstable and often leads to images with loss of visible detail. We propose a new reconstruction method based on the inverse Riesz transform. The Riesz approach is stable and the final image retains visibility of high resolution detail without directional bias. The outline Riesz theory is developed and an experimentally acquired X-ray differential phase data set is presented for qualitative visual appraisal. The inverse Riesz phase image is compared with two alternatives: the integrated (quantitative) phase and the modulus of the gradient of the phase. The inverse Riesz transform has the computational advantages of a unitary linear operator, and is implemented directly as a complex multiplication in the Fourier domain also known as the spiral phase transform.
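The "complex multiplication in the Fourier domain" can be illustrated in one dimension, where the Riesz transform reduces to the Hilbert transform with multiplier -i*sign(frequency); the 2-D spiral phase multiplier (u + iv)/|u + iv| used in the paper generalizes this. A naive DFT keeps the sketch self-contained:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def riesz_1d(x):
    """1-D Riesz (= Hilbert) transform as a pure Fourier-domain
    multiplication by -i*sign(frequency). In 2-D the multiplier becomes
    the spiral phase (u + iv)/|u + iv| of the paper."""
    n = len(x)
    X = dft(x)
    for j in range(n):
        freq = j if j <= n // 2 else j - n  # signed frequency
        if j == 0 or (n % 2 == 0 and j == n // 2):
            X[j] = 0.0  # sign(0) = 0 kills DC (and Nyquist for even n)
        elif freq > 0:
            X[j] *= -1j
        else:
            X[j] *= 1j
    return [v.real for v in idft(X)]
```

As a sanity check, the transform maps a sampled cosine to the corresponding sine, exactly the quadrature behavior that makes the inverse Riesz reconstruction a unitary operation.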
NASA Astrophysics Data System (ADS)
Li, Yi-Ling; Liu, Zhen-Bo; Ma, Qing-Yu; Guo, Xia-Sheng; Zhang, Dong
2010-08-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) has shown potential applications in imaging the electrical impedance of biological tissues. We present a novel methodology for solving the inverse problem of 2-D Lorentz force distribution reconstruction based on the acoustic straight-line propagation theory. The magnetic induction and acoustic generation as well as acoustic detection are theoretically described by explicit formulae and validated by numerical simulations for a multilayered cylindrical phantom model. The reconstructed 2-D Lorentz force distribution reveals not only the conductivity configuration in terms of shape and size but also the amplitude of the Lorentz force in the examined layer. This study provides a basis for further study of conductivity distribution reconstruction with MAT-MI in medical imaging.
Applications of Electrical Impedance Tomography (EIT): A Short Review
NASA Astrophysics Data System (ADS)
Kanti Bera, Tushar
2018-03-01
Electrical Impedance Tomography (EIT) is a tomographic imaging method which solves an ill-posed inverse problem using the boundary voltage-current data collected from the surface of the object under test. Though its spatial resolution is low compared to that of conventional tomographic imaging modalities, EIT offers several advantages and has been studied for a number of applications such as medical imaging, material engineering, civil engineering, biotechnology, chemical engineering, MEMS, and other fields of engineering and applied sciences. In this paper, the applications of EIT are reviewed and presented as a short summary. The working principle, instrumentation, and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology, and applied sciences.
Body image and sexual function in women after treatment for anal and rectal cancer.
Benedict, Catherine; Philip, Errol J; Baser, Raymond E; Carter, Jeanne; Schuler, Tammy A; Jandorf, Lina; DuHamel, Katherine; Nelson, Christian
2016-03-01
Treatment for anal and rectal cancer (ARCa) often results in side effects that directly impact sexual functioning; however, ARCa survivors are an understudied group, and factors contributing to the sexual sequelae are not well understood. Body image problems are distressing and may further exacerbate sexual difficulties, particularly for women. This preliminary study sought to (1) describe body image problems, including sociodemographic and disease/treatment correlates, and (2) examine relations between body image and sexual function. For the baseline assessment of a larger study, 70 women completed the European Organization for Research and Treatment of Cancer Core Quality of Life Questionnaire and Colorectal Cancer-specific Module, including the Body Image subscale, and Female Sexual Function Index. Pearson's correlation and multiple regression evaluated correlates of body image. Among sexually active women (n = 41), hierarchical regression examined relations between body image and sexual function domains. Women were on average 55 years old (standard deviation = 11.6), non-Hispanic White (79%), married (57%), and employed (47%). The majority (86%) reported at least one body image problem. Younger age, lower global health status, and greater severity of symptoms related to poorer body image (p's < 0.05). Poor body image was inversely related to all aspects of sexual function (β range 0.50-0.70, p's < 0.05), except pain. The strongest association was with Female Sexual Function Index Sexual/Relationship Satisfaction. These preliminary findings suggest the importance of assessing body image as a potentially modifiable target to address sexual difficulties in this understudied group. Further longitudinal research is needed to inform the development and implementation of effective interventions to improve the sexual health and well-being of female ARCa survivors. Copyright © 2015 John Wiley & Sons, Ltd.
Improving waveform inversion using modified interferometric imaging condition
NASA Astrophysics Data System (ADS)
Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen
2017-12-01
Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced the direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting masks (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms has been developed to solve the inverse lithography problem; however, these are designed only for nominal imaging parameters, without giving sufficient attention to the process variations due to aberrations, defocus, and dose variation. The effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) is exploited during the optimization procedure.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can improve FMT reconstruction substantially. We have developed a kernel method to introduce this anatomical guidance into FMT robustly and easily. The kernel method, borrowed from machine learning for pattern analysis, is an efficient way to represent anatomical features. For the finite-element-method-based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and we convert the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
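The kernel-matrix construction described above can be sketched as follows. This is a minimal illustration in the spirit of kernelized image reconstruction: a Gaussian kernel over per-node anatomical feature vectors, truncated to k nearest neighbours. The kernel choice, `sigma`, and `knn` are assumptions, not the authors' exact parameters.

```python
import numpy as np

def kernel_matrix(features, sigma=0.5, knn=4):
    """Build a kernel matrix K from per-node anatomical feature vectors
    (e.g., micro-CT intensities around each finite element node). The
    concentration is then modeled as x = K @ alpha, and the guided system
    matrix is A @ K."""
    n = features.shape[0]
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian similarity
    for i in range(n):
        idx = np.argsort(K[i])[:-knn]             # drop all but knn largest
        K[i, idx] = 0.0
    # row-normalise so x = K @ alpha preserves overall scale
    return K / K.sum(axis=1, keepdims=True)
```

With this K, the unknown becomes the coefficient vector alpha in y = (A @ K) @ alpha, reconstructed by any standard iterative solver, and the concentration is recovered as x = K @ alpha.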
Review on solving the forward problem in EEG source analysis
Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace
2007-01-01
Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem, and it is intended for newcomers to this research field. Results It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus then shifts to the use of reciprocity in EEG source localization, which is introduced to speed up the forward calculations: these are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation with FEM or FDM corresponds to solving a large sparse linear system.
Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. Conclusion Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
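The FDM-plus-iterative-solver pipeline discussed above can be illustrated on a toy problem: a 5-point finite-difference discretization of Poisson's equation solved with conjugate gradients. This sketch assumes a homogeneous, isotropic medium and Dirichlet boundaries for simplicity (a real EEG head model uses heterogeneous, possibly anisotropic conductivities and Neumann conditions on the scalp).

```python
import numpy as np
from scipy.sparse import diags, kron, identity
from scipy.sparse.linalg import cg

def poisson_2d(n, rhs):
    """Assemble the sparse 5-point Laplacian on an n x n interior grid of
    the unit square (homogeneous Dirichlet boundaries) and solve
    A u = f with the conjugate-gradient method."""
    h = 1.0 / (n + 1)
    T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (kron(identity(n), T) + kron(T, identity(n))) / h**2   # sparse SPD
    u, info = cg(A, rhs.ravel())
    assert info == 0, "CG did not converge"
    return u.reshape(n, n)
```

The system matrix is symmetric positive definite, which is exactly the setting in which conjugate gradients (and multigrid preconditioners) are the standard choice.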
Multiresolution MR elastography using nonlinear inversion
McGarry, M. D. J.; Van Houten, E. E. W.; Johnson, C. L.; Georgiadis, J. G.; Sutton, B. P.; Weaver, J. B.; Paulsen, K. D.
2012-01-01
Purpose: Nonlinear inversion (NLI) in MR elastography requires discretization of the displacement field for a finite element (FE) solution of the “forward problem”, and discretization of the unknown mechanical property field for the iterative solution of the “inverse problem”. The resolution requirements for these two discretizations are different: the forward problem requires sufficient resolution of the displacement FE mesh to ensure convergence, whereas lowering the mechanical property resolution in the inverse problem stabilizes the mechanical property estimates in the presence of measurement noise. Previous NLI implementations use the same FE mesh to support the displacement and property fields, requiring a trade-off between the competing resolution requirements. Methods: This work implements and evaluates multiresolution FE meshes for NLI elastography, allowing independent discretizations of the displacements and each mechanical property parameter to be estimated. The displacement resolution can then be selected to ensure mesh convergence, and the resolution of the property meshes can be independently manipulated to control the stability of the inversion. Results: Phantom experiments indicate that eight nodes per wavelength (NPW) are sufficient for accurate mechanical property recovery, whereas mechanical property estimation from 50 Hz in vivo brain data stabilizes once the displacement resolution reaches 1.7 mm (approximately 19 NPW). Viscoelastic mechanical property estimates of in vivo brain tissue show that subsampling the loss modulus while holding the storage modulus resolution constant does not substantially alter the storage modulus images. Controlling the ratio of the number of measurements to unknown mechanical properties by subsampling the mechanical property distributions (relative to the data resolution) improves the repeatability of the property estimates, at a cost of modestly decreased spatial resolution. 
Conclusions: Multiresolution NLI elastography provides a more flexible framework for mechanical property estimation compared to previous single mesh implementations. PMID:23039674
3D near-infrared imaging based on a single-photon avalanche diode array sensor
NASA Astrophysics Data System (ADS)
Mata Pavia, Juan; Charbon, Edoardo; Wolf, Martin
2011-07-01
An imager for optical tomography was designed based on a detector with 128×128 single-photon pixels that includes a bank of 32 time-to-digital converters. Owing to the high spatial resolution and the possibility of performing time-resolved measurements, a new contact-less setup was conceived in which scanning of the object is not necessary. This enables high-resolution optical tomography at a much higher acquisition rate, which is fundamental in clinical applications. The setup has a timing resolution of 97 ps and operates with a laser source with an average power of 3 mW. This new imaging system generates a large volume of data that could not be processed by established methods, so new concepts and algorithms were developed to take full advantage of it. Images were generated using a new reconstruction algorithm that combines general inverse problem methods with Fourier transforms in order to reduce the complexity of the problem. Simulations show that the potential resolution of the new setup is on the order of millimeters, and experiments have been performed to confirm this potential. Images derived from the measurements demonstrate that we have already reached a resolution of 5 mm.
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches.
It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
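The non-negativity-constrained stagewise matching pursuit described above can be sketched with a toy analogue: at each stage, all columns whose (one-sided) correlation with the residual exceeds a threshold join the support, followed by a least-squares refit clipped to be non-negative. The threshold rule and refit here are simplifications, not the authors' exact StOMP modification.

```python
import numpy as np

def nonneg_stagewise_mp(A, y, n_stages=10, thresh=2.0):
    """Stagewise matching pursuit with a non-negativity constraint.
    A has unit-norm columns; y = A @ x for a sparse non-negative x."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    for _ in range(n_stages):
        r = y - A @ x
        if np.linalg.norm(r) < 1e-8 * np.linalg.norm(y):
            break                              # residual fully explained
        c = A.T @ r                            # correlations with residual
        sigma = np.linalg.norm(r) / np.sqrt(m) # noise-level proxy
        new = c > thresh * sigma               # one-sided: fluxes are >= 0
        if not new.any():
            break
        support |= new
        # least-squares refit on the support, clipped to non-negative values
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = np.clip(xs, 0.0, None)
    return x
```

Because the fit stays linear and non-negativity is imposed by clipping within the stagewise loop, the scheme avoids the strong nonlinearity of log-transformed fields mentioned in the abstract.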
An evolutive real-time source inversion based on a linear inverse formulation
NASA Astrophysics Data System (ADS)
Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.
2016-12-01
Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used for ground-motion prediction, and its results shed light on the seismic cycle for a better tectonic understanding; however, it is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. Most of these techniques, though, are a posteriori procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source model while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, in which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later on as attributes from the slip-rate inversion we perform.
Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al. 2011). A real-case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
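The linear relationship between slip rate and seismograms described above can be illustrated with a single-subfault toy problem: the matrix G holds delayed copies of a stored Green's function, and a Tikhonov-regularised least-squares solve recovers the slip-rate history. The discretisation and damping below are illustrative choices, not the authors' causal progressive build-up or adjoint-state machinery.

```python
import numpy as np

def build_G(green, nt_src):
    """Convolution matrix: each column is the stored Green's function
    delayed to one source-time sample (one subfault, toy discretisation)."""
    nt = len(green)
    G = np.zeros((nt, nt_src))
    for k in range(nt_src):
        G[k:, k] = green[:nt - k]
    return G

def invert_slip_rate(G, d, lam=1e-6):
    """Tikhonov-regularised solution of the linear system d = G m."""
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)
```

Because the unknown is the slip rate itself, the forward map stays linear, and attributes such as rise time can be read off the recovered slip-rate function afterwards.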
Comparing implementations of penalized weighted least-squares sinogram restoration
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-01-01
Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration remains relevant in the standard-dose regime since it can outperform standard approaches and allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) a direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem.
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
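The PWLS objective is quadratic, so its minimizer solves a single symmetric positive-definite linear system, which is exactly where conjugate gradients applies. The sketch below is a generic 1-D illustration, assuming a diagonal weight matrix of inverse variances and a first-difference roughness penalty; it is not the authors' optimised, symmetry-exploiting implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

def pwls_restore(B, y, w, beta=1e-4):
    """Penalized weighted least-squares restoration: minimise
    (y - Bx)' W (y - Bx) + beta * x' R x by solving
    (B' W B + beta R) x = B' W y with conjugate gradients.
    B is the degradation (blur) model, w the diagonal of W."""
    n = B.shape[1]
    D = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)     # first differences
    H = B.T @ (w[:, None] * B) + beta * (D.T @ D)    # SPD system matrix
    b = B.T @ (w * y)
    x, info = cg(H, b)
    assert info == 0, "CG did not converge"
    return x
```

As in the paper's comparison, the same system could instead be solved by a direct (closed-form) matrix inversion; the iterative route avoids forming and inverting the full matrix when the operators are large and sparse.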
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need for a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion and to develop and test novel source inversion approaches. We review the current status of the SIV project and report the findings and conclusions of the recent workshops. We briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault embedded in a layered crustal velocity-density structure.
NASA Astrophysics Data System (ADS)
Zhang, Junwei
I built parts-based and manifold-based mathematical learning models for the geophysical inverse problem and applied this approach to two problems. The first is related to the detection of the oil-water encroachment front during the water flooding of an oil reservoir. In this application, I propose a new 4D inversion approach based on the Gauss-Newton method to invert time-lapse cross-well resistance data. The goal of this study is to image the position of the oil-water encroachment front in a heterogeneous clayey sand reservoir. This approach is based on explicitly connecting the change of resistivity to the petrophysical properties controlling the position of the front (porosity and permeability) and to the saturation of the water phase through a petrophysical resistivity model accounting for bulk and surface conductivity contributions and saturation. The distributions of the permeability and porosity are also inverted using the time-lapse resistivity data in order to better reconstruct the position of the front. In our synthetic test case, we obtain a better position of the front, with the by-products of porosity and permeability inferences near the flow trajectory and close to the wells. The numerical simulations show that the position of the front is recovered well, but the distribution of the recovered porosity and permeability is only fair. A comparison with a commercial code based on a classical Gauss-Newton approach with no information provided by the two-phase flow model fails to recover the position of the front. The new approach could also be used for the time-lapse monitoring of various processes in both geothermal fields and oil and gas reservoirs using a combination of geophysical methods. A paper has been published in Geophysical Journal International on this topic, and I am the first author of this paper.
The second application is related to the detection of geological facies boundaries and their deformation to satisfy geophysical data and prior distributions. We pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case study, performing a joint inversion of gravity and galvanometric resistivity data with the stations all located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to deform the facies boundaries while preserving prior topological properties of the facies throughout the inversion. With the additional help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The results of the inversion technique are encouraging when applied to a second synthetic case study, showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries. A paper has been submitted to Geophysics on this topic and I am the first author of this paper. During this thesis, I also worked on the time-lapse inversion problem of gravity data in collaboration with Marios Karaoulis, and a paper was published in Geophysical Journal International on this topic.
I also worked on the time-lapse inversion of cross-well geophysical data (seismic and resistivity) using both a structural approach named the cross-gradient approach and a petrophysical approach. A paper was published in Geophysics on this topic.
NASA Astrophysics Data System (ADS)
Guseinov, I. M.; Khanmamedov, A. Kh.; Mamedova, A. F.
2018-04-01
We consider the Schrödinger equation with an additional quadratic potential on the entire axis and use the transformation operator method to study the direct and inverse problems of the scattering theory. We obtain the main integral equations of the inverse problem and prove that the basic equations are uniquely solvable.
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, these methods extend ideas which have proven fruitful in treating linear inverse problems.
Phase-sensitive dual-inversion recovery for accelerated carotid vessel wall imaging.
Bonanno, Gabriele; Brotman, David; Stuber, Matthias
2015-03-01
Dual-inversion recovery (DIR) is widely used for magnetic resonance vessel wall imaging. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scanning times. Therefore, an extension of phase-sensitive (PS) DIR is proposed for carotid vessel wall imaging. The statistical distribution of the phase signal after DIR is probed to segment carotid lumens and suppress their residual blood signal. The proposed PS-DIR technique was characterized over a broad range of inversion times. Multislice imaging was then implemented by interleaving the acquisition of 3 slices after DIR. Quantitative evaluation was then performed in healthy adult subjects and compared with conventional DIR imaging. Single-slice PS-DIR provided effective blood-signal suppression over a wide range of inversion times, enhancing wall-lumen contrast and vessel wall conspicuity for carotid arteries. Multislice PS-DIR imaging with effective blood-signal suppression is enabled. A variant of the PS-DIR method has successfully been implemented and tested for carotid vessel wall imaging. This technique removes timing constraints related to inversion recovery, enhances wall-lumen contrast, and enables a 3-fold increase in volumetric coverage at no extra cost in scanning time.
Reconstructed imaging of acoustic cloak using time-lapse reversal method
NASA Astrophysics Data System (ADS)
Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun
2014-08-01
We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology that makes cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information, such as shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.
Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms
2004-08-01
inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, the Jacobian matrix relating the power density change to the change in conductivity is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
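As a toy illustration of the linearized reconstruction described above, the sketch below back-projects simulated power-density data through the transpose of a Jacobian. The matrix, sizes, and single-pixel perturbation are hypothetical stand-ins, not the paper's UMEIT model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 40, 25
J = rng.standard_normal((n_meas, n_pix))   # hypothetical sensitivity (Jacobian) matrix
delta_sigma = np.zeros(n_pix)
delta_sigma[12] = 1.0                      # single-pixel conductivity perturbation
delta_p = J @ delta_sigma                  # linearised power-density data

x_lbp = J.T @ delta_p                      # back-project the data through J^T
x_lbp /= np.abs(x_lbp).max()               # normalise to [-1, 1]

print(int(np.argmax(x_lbp)))               # the peak localises the perturbation
```

The back-projection is not an inverse, only an adjoint, so the image is blurred; the peak nonetheless sits at the perturbed pixel.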
Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; van de Kamp, Thomas; dos Santos Rolo, Tomy; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer
2015-03-09
High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
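The minimisation the abstract describes (data fidelity plus a total-variation penalty with a Lagrangian weight) can be sketched on a deliberately small 1-D stand-in; the sampling operator, phantom, and parameter values below are hypothetical, and plain subgradient descent replaces the paper's algebraic technique and L-curve/conjugate-gradient parameter selection:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x_true = np.zeros(n)
x_true[15:35] = 1.0                              # piecewise-constant phantom
A = rng.standard_normal((30, n)) / np.sqrt(30)   # under-determined sampling operator
b = A @ x_true                                   # noiseless "projection" data

lam, step = 0.02, 0.05
x = np.zeros(n)
for _ in range(3000):
    grad_fid = 2 * A.T @ (A @ x - b)             # gradient of ||Ax - b||^2
    sub = np.sign(np.diff(x))                    # subgradient of TV(x) = sum |x_{i+1} - x_i|
    grad_tv = np.zeros(n)
    grad_tv[:-1] -= sub
    grad_tv[1:] += sub
    x -= step * (grad_fid + lam * grad_tv)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Even with fewer measurements than unknowns, the TV prior drives the iterate toward the blocky phantom, which is the compressive-sampling rationale for reducing the number of projections.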
Lensfree diffractive tomography for the imaging of 3D cell cultures
NASA Astrophysics Data System (ADS)
Berdeu, Anthony; Momey, Fabien; Dinten, Jean-Marc; Gidrol, Xavier; Picollet-D'hahan, Nathalie; Allier, Cédric
2017-02-01
New microscopes are needed to help reach the full potential of 3D organoid culture studies by gathering large quantitative and systematic data over extended periods of time while preserving the integrity of the living sample. In order to reconstruct large volumes while preserving the ability to catch every single cell, we propose new imaging platforms based on lens-free microscopy, a technique that addresses these needs in the context of 2D cell culture, providing label-free and non-phototoxic acquisition of large datasets. We built lens-free diffractive tomography setups performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed dedicated 3D holographic reconstruction algorithms based on the Fourier diffraction theorem. Nonetheless, holographic setups do not record the phase of the incident wave front, and the biological samples in a Petri dish strongly limit the angular coverage. These limitations introduce numerous artefacts in the sample reconstruction. We developed several methods to overcome them, such as multi-wavelength imaging or iterative phase retrieval. The most promising technique currently developed is based on a regularised inverse problem approach directly applied on the 3D volume to reconstruct. 3D reconstructions were performed on several complex samples such as 3D networks or spheroids embedded in capsules, with large reconstructed volumes up to 25 mm³ while still being able to identify single cells. To our knowledge, this is the first time that such an inverse problem approach is implemented in the context of lens-free diffractive tomography, enabling the reconstruction of large, fully 3D volumes of unstained biological samples.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. 
In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness typical of minimization problems are defined, with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel times once the model parameters are given. Novel software is described for (b) and (c), and some ideas are given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
A fixed energy fixed angle inverse scattering in interior transmission problem
NASA Astrophysics Data System (ADS)
Chen, Lung-Hui
2017-06-01
We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction in an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem to the interior transmission problem in the study of the Helmholtz equation. We find an inverse uniqueness on the scatterer with a knowledge of a fixed interior transmission eigenvalue. By examining the solution in a series of spherical harmonics in the far field, we can determine uniquely the perturbation source for the radially symmetric perturbations.
NASA Astrophysics Data System (ADS)
Yang, Qingsong; Cong, Wenxiang; Wang, Ge
2016-10-01
X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle to this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation, which is a new type of interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in that the implementation of the system matrix requires the differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.
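A building block of the described system matrix, the Hilbert transform, is easy to sketch with the FFT. The sample signal below is an assumption for illustration; the paper applies the transform to spline basis functions:

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform of a real periodic signal via the FFT."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    freqs = np.fft.fftfreq(n)
    h[freqs > 0] = -1.0          # multiply positive frequencies by -i
    h[freqs < 0] = 1.0           # and negative frequencies by +i
    return np.real(np.fft.ifft(1j * h * X))

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# With this sign convention, H[cos] = sin:
err = np.max(np.abs(hilbert_transform(np.cos(t)) - np.sin(t)))
print(err < 1e-10)
```

The same frequency-domain multiplier, evaluated on each spline basis function, would yield one column of the kind of system matrix the abstract describes.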
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
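The structure of the BOS iteration (an outer Bregman update that adds the residual back, wrapped around inner proximal forward-backward steps, with no matrix inversion anywhere) can be sketched on an l1 toy problem, with soft-thresholding standing in for the TV proximal operator; all sizes and step parameters below are hypothetical:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
m, n = 30, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)   # under-determined forward operator
x_true = np.zeros(n)
x_true[[5, 40, 70]] = [1.0, -2.0, 1.5]         # blocky/sparse model (toy)
b = A @ x_true

lam, tau = 0.1, 0.1                            # inner l1 weight and gradient step
x = np.zeros(n)
bk = b.copy()
for _ in range(50):                            # outer Bregman iterations
    for _ in range(50):                        # inner forward-backward (prox-gradient) steps
        x = soft(x - tau * (A.T @ (A @ x - bk)), tau * lam)
    bk = bk + (b - A @ x)                      # Bregman update: add the residual back

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Only matrix-vector products with A and its transpose appear, which is the property that makes the scheme attractive for large-scale problems; in practice the outer loop would stop by the discrepancy criterion rather than a fixed count.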
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
Accelerated gradient methods for the x-ray imaging of solar flares
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2014-05-01
In this paper we present new optimization strategies for the reconstruction of x-ray images of solar flares by means of the data collected by the Reuven Ramaty high energy solar spectroscopic imager. The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade, greater attention has been devoted to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced through either an early stopping of the iterative procedure, or a Tikhonov term added to the discrepancy function by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
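A minimal sketch of the class of methods discussed (accelerated projected-gradient iteration with a non-negativity constraint) on a toy least-squares problem; the matrix and data are hypothetical, and the Poisson likelihood and Tikhonov/early-stopping regularization of the paper are not modelled:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 40
A = rng.standard_normal((m, n))                    # toy stand-in for the count-profile model
x_true = np.maximum(rng.standard_normal(n), 0.0)   # non-negative "flux map"
b = A @ x_true                                     # noiseless data (no Poisson noise here)

L = 2 * np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x, y, t = np.zeros(n), np.zeros(n), 1.0
for _ in range(3000):                              # with noisy data, early stopping regularises
    grad = 2 * A.T @ (A @ y - b)
    x_new = np.maximum(y - grad / L, 0.0)          # gradient step + projection onto x >= 0
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)      # Nesterov extrapolation (acceleration)
    x, t = x_new, t_new

res = np.linalg.norm(A @ x - b)
```

The momentum term is what distinguishes accelerated schemes from plain projected gradient descent, improving the worst-case rate from O(1/k) to O(1/k²).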
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H-infinity approaches have proved to be robust methods for PET image reconstruction where, however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem where a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H-infinity filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
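The efficiency argument for a steady-state filter can be sketched on a scalar toy model: iterate the Riccati recursion offline to a fixed-point gain, then run every frame with that constant gain and no per-frame matrix inversion. The sketch below uses a standard Kalman recursion on a hypothetical scalar decay model, not the paper's H-infinity formulation:

```python
import numpy as np

a, q, r = 0.95, 0.01, 0.5              # decay factor, process and measurement noise (toy)
p = 1.0
for _ in range(1000):                  # offline Riccati fixed-point iteration
    p_pred = a * p * a + q
    k = p_pred / (p_pred + r)          # converges to the steady-state gain
    p = (1 - k) * p_pred

rng = np.random.default_rng(4)
x_true, x_est, errs = 10.0, 8.0, []
for _ in range(200):                   # online: constant gain, no per-frame inversion
    x_true *= a                        # radioisotope decay as temporal constraint
    y = x_true + rng.normal(0, np.sqrt(r))
    x_est = a * x_est                  # predict with the decay model
    x_est += k * (y - x_est)           # correct with the precomputed gain
    errs.append(abs(x_est - x_true))

print(np.mean(errs[100:]) < np.mean(errs[:5]))
```

In the matrix-valued case the same trick replaces a per-frame Riccati solve (and its matrix inversion) with a single precomputed gain, which is the source of the speedup the abstract claims.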
Peel, Sarah A; Hussain, Tarique; Cecelja, Marina; Abbas, Abeera; Greil, Gerald F; Chowienczyk, Philip; Spector, Tim; Smith, Alberto; Waltham, Matthew; Botnar, Rene M
2011-11-01
To accelerate and optimize black blood properties of the quadruple inversion recovery (QIR) technique for imaging the abdominal aortic wall. QIR inversion delays were optimized for different heart rates in simulations and phantom studies by minimizing the steady state magnetization of blood for T1 = 100-1400 ms. To accelerate and improve black blood properties of aortic vessel wall imaging, the QIR prepulse was combined with zoom imaging and (a) "traditional" and (b) "trailing" electrocardiogram (ECG) triggering. Ten volunteers were imaged pre- and post-contrast administration using a conventional ECG-triggered double inversion recovery (DIR) and the two QIR implementations in combination with a zoom-TSE readout. The QIR implemented with "trailing" ECG-triggering resulted in consistently good blood suppression as the second inversion delay was timed during maximum systolic flow in the aorta. The blood signal-to-noise ratio and vessel wall to blood contrast-to-noise ratio, vessel wall sharpness, and image quality scores showed a statistically significant improvement compared with the traditional QIR implementation with and without ECG-triggering. We demonstrate that aortic vessel wall imaging can be accelerated with zoom imaging and that "trailing" ECG-triggering improves black blood properties of the aorta which is subject to motion and variable blood flow during the cardiac cycle. Copyright © 2011 Wiley Periodicals, Inc.
MToS: A Tree of Shapes for Multivariate Images.
Carlinet, Edwin; Géraud, Thierry
2015-12-01
The topographic map of a gray-level image, also called tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has been proved very useful to achieve many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images, or more generally, multivariate images. Common workarounds, such as marginal processing, or imposing a total order on data, are not satisfactory and yield many problems. This paper presents a method to build a tree-based representation of multivariate images, which features marginally the same properties of the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but we only rely on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast invariant and self-dual representation of multivariate image is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
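The hybrid idea (a 2D Fourier inversion transversely combined with a small 1D matrix inversion in the remaining dimension) can be sketched with a toy separable operator; the depth-coupling matrix and transverse PSF below are hypothetical, not the cumulant forward model of the patent:

```python
import numpy as np

rng = np.random.default_rng(5)
nz, nx, ny = 4, 16, 16
M = np.eye(nz) + 0.2 * rng.standard_normal((nz, nz))   # depth coupling (toy, invertible)
h = np.zeros((nx, ny))
h[0, 0], h[1, 0], h[0, 1] = 0.6, 0.2, 0.2              # transverse PSF (toy)

x = rng.standard_normal((nz, nx, ny))                  # unknown 3D image
H = np.fft.fft2(h)                                     # PSF in the transverse Fourier domain
# Forward model: transverse convolution per slice, then depth mixing by M.
y = np.einsum('ij,jxy->ixy', M, np.real(np.fft.ifft2(H * np.fft.fft2(x))))

# Inverse model: 2D Fourier inversion transversely, 1D matrix inversion in depth.
y_hat = np.fft.fft2(y)
x_hat = np.linalg.solve(M, y_hat.reshape(nz, -1)) / H.reshape(1, -1)
x_rec = np.real(np.fft.ifft2(x_hat.reshape(nz, nx, ny)))

print(np.allclose(x_rec, x, atol=1e-8))
```

Because the transverse part is diagonalised by the 2D FFT, the only dense inversion left is the small nz-by-nz solve per transverse mode, which is what makes the hybrid scheme fast.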
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on a neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters on the order of n × 10^3. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
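The core idea of approximating the inverse operator from simulated forward-model pairs can be sketched with a linear "single-layer" stand-in fit by ridge regression; the forward operator, sizes, and noise level below are hypothetical, and the paper's actual method uses neural networks on a regularized parameterization grid:

```python
import numpy as np

rng = np.random.default_rng(6)
n_param, n_data, n_train = 10, 30, 2000
G = rng.standard_normal((n_data, n_param))           # toy forward operator
m_train = rng.standard_normal((n_train, n_param))    # sampled medium models
d_train = m_train @ G.T + 0.01 * rng.standard_normal((n_train, n_data))

# Fit W so that W d approximates m on the training set (ridge-regularised least squares).
lam = 1e-3
W = np.linalg.solve(d_train.T @ d_train + lam * np.eye(n_data),
                    d_train.T @ m_train).T

m_new = rng.standard_normal(n_param)                 # unseen model
d_new = G @ m_new                                    # its synthetic data
m_rec = W @ d_new                                    # fast approximate inversion
```

Once trained, applying the learned operator costs one matrix-vector product per dataset, which is the appeal of approximating the inverse operator offline.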
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple-frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication with the stable convergence of the simultaneous multiple-frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples have been demonstrated to validate the effectiveness and capacity of the simultaneous multiple-frequency CSI method for a limited array view in VEP.
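The FFT acceleration of the matrix-vector multiplication can be sketched in 1-D: a circulant (convolution) operator applied densely in O(N²) agrees with the O(N log N) FFT route. The kernel below is a random stand-in for the layered-medium Green's function:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 128
g = rng.standard_normal(n)                  # convolution kernel (Green's-function toy)
x = rng.standard_normal(n)                  # contrast source vector (toy)

# Dense operator: the circulant matrix built from g, applied in O(N^2).
C = np.array([np.roll(g, i) for i in range(n)]).T
y_dense = C @ x

# FFT-based matvec: the same circular convolution in O(N log N).
y_fft = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))

print(np.allclose(y_dense, y_fft))
```

In a volume-integral-equation solver this matvec sits inside every BCGS iteration, so replacing the dense product with the FFT form is what makes large-scale inversion tractable.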
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques which include the inverse problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and Independent Component Analysis (ICA) methods. A key problem with the inverse problem, MUSIC and ICA methods is that the number of sources must be detected a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources, however, is finally made by subjective thresholding. In practice expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into the signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis, hence, is an entirely data-led operation. It possesses clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and the results obtained by using the real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
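The eigenstructure partition described above is classically implemented with an information-theoretic criterion such as MDL (Wax-Kailath); the sketch below applies it to a hypothetical toy sensor array rather than real MEG data:

```python
import numpy as np

rng = np.random.default_rng(8)
n_sensors, n_sources, n_snap = 12, 3, 2000
A = rng.standard_normal((n_sensors, n_sources))      # mixing matrix (lead-field toy)
S = rng.standard_normal((n_sources, n_snap))         # source time courses
X = A @ S + 0.1 * rng.standard_normal((n_sensors, n_snap))   # sensor data + noise

# Eigenvalues of the sample covariance, sorted in descending order.
lams = np.sort(np.linalg.eigvalsh(X @ X.T / n_snap))[::-1]

def mdl(k, lams, N):
    """MDL cost for k sources given sorted eigenvalues and N snapshots."""
    p = len(lams)
    noise = lams[k:]                                 # candidate noise subspace
    ratio = np.exp(np.mean(np.log(noise))) / np.mean(noise)  # geometric/arithmetic mean
    return -N * (p - k) * np.log(ratio) + 0.5 * k * (2 * p - k) * np.log(N)

k_hat = min(range(n_sensors), key=lambda k: mdl(k, lams, n_snap))
print(k_hat)
```

The criterion balances how equal the presumed noise eigenvalues are (the data term) against model complexity (the penalty), yielding an estimate of the signal-subspace order without any subjective threshold.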
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
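The covariance-eigendecomposition idea can be sketched numerically. In this minimal 1-D illustration, the cell centres, the exponential correlation model and the correlation length are all made-up stand-ins, not the authors' setup; the point is only that an eigendecomposition of the prior covariance yields an operator whose quadratic penalty equals the inverse-covariance (geostatistical) penalty:

```python
import numpy as np

# Hypothetical cell centres of a small irregular 1-D mesh (positions made up).
centres = np.array([0.0, 0.7, 1.1, 2.4, 3.0, 4.2])
corr_len = 1.5   # assumed correlation length encoding the geological prior

# Exponential correlation model between all cell pairs.
d = np.abs(centres[:, None] - centres[None, :])
C = np.exp(-d / corr_len)

# Eigendecomposition C = Q diag(lam) Q^T gives the operator
# W = diag(lam^-1/2) Q^T, which satisfies W^T W = C^-1, so the penalty
# ||W m||^2 = m^T C^-1 m can replace nearest-neighbour smoothing.
lam, Q = np.linalg.eigh(C)
W = (Q / np.sqrt(lam)).T

err = np.max(np.abs(W.T @ W - np.linalg.inv(C)))
```

The same construction applies on a 2-D or 3-D irregular mesh once `d` holds the inter-cell distances.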
Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan
2009-01-01
Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils is combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension(s), SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
Simulating contrast inversion in atomic force microscopy imaging with real-space pseudopotentials
NASA Astrophysics Data System (ADS)
Lee, Alex J.; Sakai, Yuki; Chelikowsky, James R.
2017-02-01
Atomic force microscopy (AFM) measurements have reported contrast inversions for systems such as Cu2N and graphene that can hamper image interpretation and characterization. Here, we apply a simulation method based on ab initio real-space pseudopotentials to gain an understanding of the tip-sample interactions that influence the inversion. We find that chemically reactive tips induce an attractive binding force that results in the contrast inversion. We find that the inversion is tip height dependent and not observed when using less reactive CO-functionalized tips.
Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.
2011-01-01
Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are highly sensitive to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that S-wave velocities can be recovered successfully with errors of no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity.
Towards the Early Detection of Breast Cancer in Young Women
2006-10-01
approach. 4. Poroelastic model for tissue deformation: We have implemented the model of Netti et al. in a finite element program in order to simulate...changes would not be expected. 5. Conclusions: A poroelastic model that includes the effects of fluid flow and the possibility of...images to produce a displacement field. Using this displacement field, and an assumed linear elastic model for the tissue, an inverse problem is solved
A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.
Ahn, C B; Cho, Z H
1987-01-01
A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all the correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
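The two estimation steps can be sketched on a toy 1-D signal. All parameters below are made up, and a circular mean stands in for the paper's histogram-peak step; the point is that the lag-1 autocorrelation phase recovers the linear (first-order) error, after which the constant (zero-order) error is read off the residual phase:

```python
import numpy as np

# Toy 1-D demonstration (illustrative parameters, not an NMR acquisition).
rng = np.random.default_rng(0)
f = rng.random(256) + 0.5                  # real, nonnegative "magnitude image"
a, b = 0.05, 0.8                           # first- and zero-order phase errors
xs = np.arange(f.size)
g = f * np.exp(1j * (a * xs + b))          # phase-distorted complex image

# First order: phase of the lag-1 autocorrelation of the distorted image.
a_est = np.angle(np.sum(g[1:] * np.conj(g[:-1])))

# Zero order: residual constant phase of the first-order-corrected image
# (a circular mean here; the paper uses a histogram of the phase values).
g1 = g * np.exp(-1j * a_est * xs)
b_est = np.angle(np.sum(g1))

corrected = g1 * np.exp(-1j * b_est)       # back to an (almost) real image
```

Because the underlying image is real and positive, both phases are recovered essentially exactly in this noise-free sketch.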
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yu; Gao, Kai; Huang, Lianjie
Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties of fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to 2-D line seismic data acquired at Eleven-Mile Canyon, located in the Southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Multiscale solvers and systematic upscaling in computational physics
NASA Astrophysics Data System (ADS)
Brandt, A.
2005-07-01
Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
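The multigrid solver mentioned first in this list is the prototype of these multiscale techniques. The following minimal two-grid correction cycle for the 1-D Poisson equation (Dirichlet boundaries, even number of intervals; a teaching sketch, not production multigrid) shows the key ingredients: cheap smoothing on the fine scale and a coarse-scale solve of the residual equation:

```python
import numpy as np

def two_grid(u, f, h, n_smooth=3):
    """One two-grid cycle for -u'' = f on [0,1] with Dirichlet BCs: smooth on
    the fine grid, solve the residual equation on a grid twice as coarse,
    interpolate the correction back, and smooth again."""
    n = u.size - 1                        # number of intervals (even)
    def smooth(u):
        for _ in range(n_smooth):         # weighted Jacobi, omega = 2/3
            u[1:-1] += (2.0/3.0) * 0.5 * (u[:-2] + u[2:] - 2*u[1:-1] + h*h*f[1:-1])
    smooth(u)
    r = np.zeros_like(u)                  # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)
    rc = np.zeros(n//2 + 1)               # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
    nc = n // 2                           # exact coarse solve (small system)
    A = (2*np.eye(nc-1) - np.eye(nc-1, k=1) - np.eye(nc-1, k=-1)) / (2*h)**2
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)                  # linear interpolation to fine grid
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    smooth(u)
    return u

# Usage: -u'' = pi^2 sin(pi x) has exact solution u = sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    two_grid(u, f, 1.0 / n)
```

Recursing on the coarse solve instead of solving it exactly turns this two-grid cycle into the V-cycle of a full multigrid solver, with cost linear in the number of unknowns.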
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem onto a Krylov subspace so that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
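The subspace-recycling idea can be illustrated on a dense toy problem. This sketch (not the authors' parallel algorithm; the test matrices are random) builds one Krylov basis from the gradient and reuses it to approximate the Levenberg-Marquardt step for several damping parameters, so each extra damping value costs only a small k-by-k solve:

```python
import numpy as np

def krylov_lm_steps(J, r, lambdas, k):
    """Approximate LM steps (J^T J + lam*I) d = -J^T r for several damping
    values lam, reusing a single Krylov subspace built from g = J^T r.
    Illustrative sketch of the subspace-recycling idea."""
    A = J.T @ J
    g = J.T @ r
    n = g.size
    Q = np.zeros((n, k))                  # orthonormal Krylov basis (Arnoldi)
    H = np.zeros((k, k))                  # projected operator Q^T A Q
    Q[:, 0] = g / np.linalg.norm(g)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        if j + 1 < k:
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
    gk = Q.T @ g
    # Each damping parameter now needs only a small k x k solve.
    return [Q @ np.linalg.solve(H + lam * np.eye(k), -gk) for lam in lambdas]

# Usage on a random dense test problem (50 residuals, 20 parameters).
rng = np.random.default_rng(1)
J = rng.standard_normal((50, 20))
res = rng.standard_normal(50)
steps = krylov_lm_steps(J, res, [1e-2, 1.0], k=20)
```

With k equal to the parameter count the projected solve is exact; in practice k is taken much smaller than the model dimension, which is where the savings come from.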
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. Starting from a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem within the ADMM framework. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments give visually better reconstructions than a general direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
Characterisation of a resolution enhancing image inversion interferometer.
Wicker, Kai; Sindbert, Simon; Heintzmann, Rainer
2009-08-31
Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.
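The off-axis signal reduction can be reproduced with a few lines of numerics. In this 1-D sketch the coherent point spread function is a made-up Gaussian: a source at offset a interferes with its spatially inverted copy, and the integrated intensity drops to half of the on-axis value once the two copies no longer overlap:

```python
import numpy as np

# 1-D toy model: coherent PSF h(x - a) of a source at offset a interferes
# with its inverted copy h(-x - a) = h(x + a); the detector integrates the
# intensity of the balanced superposition.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
h = lambda c: np.exp(-(x - c)**2)          # made-up Gaussian coherent PSF

def signal(a):
    field = 0.5 * (h(a) + h(-a))           # self-interference with inverted copy
    return np.sum(np.abs(field)**2) * dx

on_axis, off_axis = signal(0.0), signal(3.0)
```

For well-separated copies the cross term vanishes, so `off_axis / on_axis` approaches 0.5, which is the discrimination between on-axis and off-axis sources that the resolution enhancement exploits.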
2-D traveltime and waveform inversion for improved seismic imaging: Naga Thrust and Fold Belt, India
NASA Astrophysics Data System (ADS)
Jaiswal, Priyank; Zelt, Colin A.; Bally, Albert W.; Dasgupta, Rahul
2008-05-01
Exploration along the Naga Thrust and Fold Belt in the Assam province of Northeast India encounters geological as well as logistic challenges. Drilling for hydrocarbons, traditionally guided by surface manifestations of the Naga thrust fault, faces additional challenges in the northeast, where the thrust fault gradually deepens, leaving subtle surface expressions. In such an area, multichannel 2-D seismic data were collected along a line perpendicular to the trend of the thrust belt. The data have a moderate signal-to-noise ratio and suffer from ground roll and other acquisition-related noise. In addition to data quality, the complex geology of the thrust belt limits the ability of conventional seismic processing to yield a reliable velocity model, which in turn leads to a poor subsurface image. In this paper, we demonstrate the application of traveltime and waveform inversion as supplements to conventional seismic imaging and interpretation processes. Both traveltime and waveform inversion utilize the first arrivals that are typically discarded during conventional seismic processing. As a first step, a smooth velocity model with the long-wavelength characteristics of the subsurface is estimated through inversion of the first-arrival traveltimes. This velocity model is then used to obtain a Kirchhoff pre-stack depth-migrated image, which in turn is used for the interpretation of the fault. Waveform inversion is applied to the central part of the seismic line to a depth of ~1 km, where the quality of the migrated image is poor. Waveform inversion is performed in the frequency domain over a series of iterations, proceeding from low to high frequency (11-19 Hz), using the velocity model from traveltime inversion as the starting model. In the end, the pre-stack depth-migrated image and the waveform inversion model are jointly interpreted.
This study demonstrates that a combination of traveltime and waveform inversion with Kirchhoff pre-stack depth migration is a promising approach for the interpretation of geological structures in a thrust belt.
Spatiotemporal matrix image formation for programmable ultrasound scanners
NASA Astrophysics Data System (ADS)
Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean
2018-02-01
As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
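The matrix-based formation pipeline can be sketched with a toy forward model. Here a Gaussian pulse at a per-channel delay stands in for a full ultrasound simulator, and every number (pixel count, delays, noise level) is illustrative; the essential structure is that column p of the matrix holds the simulated multichannel response of a point target at pixel p, after which image formation reduces to a standard linear inversion:

```python
import numpy as np

# Toy spatiotemporal matrix: columns are simulated point-target responses.
n_pix, n_samp, n_chan = 30, 200, 8
t = np.arange(n_samp)
rng = np.random.default_rng(2)
delays = rng.uniform(20.0, 150.0, size=(n_chan, n_pix))      # in samples
pulse = lambda tau: np.exp(-0.5 * ((t - tau) / 2.0)**2)

A = np.zeros((n_chan * n_samp, n_pix))
for p in range(n_pix):
    A[:, p] = np.concatenate([pulse(delays[c, p]) for c in range(n_chan)])

# Once A exists, forming the image is a standard least-squares solve.
x_true = np.zeros(n_pix)
x_true[[5, 17]] = 1.0                                        # two reflectors
y = A @ x_true + 0.01 * rng.standard_normal(n_chan * n_samp)
x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In a real implementation the columns would come from a validated simulator for the actual probe geometry, which is exactly what makes the approach portable across non-conventional probe designs.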
Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong
2017-01-01
Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
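The NLM filter itself is compact. This minimal 2-D sketch (test image, patch sizes and filtering parameter are made up; LDCT variants add modality-specific weighting) shows the core idea of averaging pixels whose surrounding patches look alike:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.3):
    """Minimal nonlocal means sketch: replace each pixel by a weighted mean
    of nearby pixels, weighted by the similarity of surrounding patches."""
    pad, s = patch // 2, search // 2
    padded = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ref = padded[i:i + patch, j:j + patch]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        q = padded[ii:ii + patch, jj:jj + patch]
                        w = np.exp(-np.sum((ref - q)**2) / h**2)
                        num += w * img[ii, jj]
                        den += w
            out[i, j] = num / den
    return out

# Usage: piecewise-constant image with additive Gaussian noise.
rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = nlm_denoise(noisy)
```

Because patches that straddle the edge get near-zero weights, the noise is averaged away while the edge survives, which is the edge-preserving behaviour the review highlights.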
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in the accurate prediction of oncoming strata, as well as in providing reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve: it requires finding an optimal solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structures. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from being trapped in local minima. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
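The sampler class used here can be illustrated generically. The sketch below is a textbook Hybrid (Hamiltonian) Monte Carlo loop on a standard Gaussian target, not the geosteering implementation; step size, trajectory length and target are all illustrative:

```python
import numpy as np

def hmc(logp_and_grad, x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    """Minimal Hybrid Monte Carlo: leapfrog proposals driven by the gradient
    of the log-posterior, plus a Metropolis accept/reject correction."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp, g = logp_and_grad(x)
    out = []
    for _ in range(n_samples):
        p0 = rng.standard_normal(x.size)     # resample momentum
        xn, p, gn = x.copy(), p0.copy(), g
        p = p + 0.5 * eps * gn               # leapfrog integration
        for step in range(n_leap):
            xn = xn + eps * p
            lpn, gn = logp_and_grad(xn)
            if step < n_leap - 1:
                p = p + eps * gn
        p = p + 0.5 * eps * gn
        # Metropolis correction for the integration error.
        log_accept = (lpn - lp) - 0.5 * (p @ p - p0 @ p0)
        if np.log(rng.random()) < log_accept:
            x, lp, g = xn, lpn, gn
        out.append(x.copy())
    return np.array(out)

# Usage: sample a standard 2-D Gaussian, logp(x) = -|x|^2 / 2.
target = lambda x: (-0.5 * x @ x, -x)
samples = hmc(target, np.zeros(2))
```

The gradient-guided proposals are what let HMC traverse large, correlated parameter spaces more efficiently than random-walk samplers, which is the advantage claimed for the geosteering setting.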
High resolution seismic tomography imaging of Ireland with quarry blast data
NASA Astrophysics Data System (ADS)
Arroucau, P.; Lebedev, S.; Bean, C. J.; Grannell, J.
2017-12-01
Local earthquake tomography is a well established tool to image geological structure at depth. That technique, however, is difficult to apply in slowly deforming regions, where local earthquakes are typically rare and of small magnitude, resulting in sparse data sampling. The natural earthquake seismicity of Ireland is very low. That due to quarry and mining blasts, on the other hand, is high and homogeneously distributed. As a consequence, and thanks to the dense and nearly uniform coverage achieved in the past ten years by temporary and permanent broadband seismological stations, the quarry blasts offer an alternative approach for high resolution seismic imaging of the crust and uppermost mantle beneath Ireland. We detected about 1,500 quarry blasts in Ireland and Northern Ireland between 2011 and 2014, for which we manually picked more than 15,000 P- and 20,000 S-wave first arrival times. The anthropogenic, explosive origin of those events was unambiguously assessed based on location, occurrence time and waveform characteristics. Here, we present a preliminary 3D tomographic model obtained from the inversion of 3,800 P-wave arrival times associated with a subset of 500 events observed in 2011, using FMTOMO tomographic code. Forward modeling is performed with the Fast Marching Method (FMM) and the inverse problem is solved iteratively using a gradient-based subspace inversion scheme after careful selection of damping and smoothing regularization parameters. The results illuminate the geological structure of Ireland from deposit to crustal scale in unprecedented detail, as demonstrated by sensitivity analysis, source relocation with the 3D velocity model and comparisons with surface geology.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
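The multistart-plus-local-search structure can be sketched on a toy objective. Below, a tiny coordinate poll search stands in for the MADS local phase and random restarts stand in for the MLSL global phase; the objective, with two equally good minima, is made up to mimic a non-unique inverse problem:

```python
import numpy as np

def pattern_search(f, x0, step=0.5, tol=1e-6):
    """Tiny derivative-free local search (a stand-in for MADS): poll +/- step
    along each coordinate and shrink the step when no poll point improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x

# Made-up misfit with two solutions at (+1, 0) and (-1, 0).
misfit = lambda x: (x[0]**2 - 1.0)**2 + x[1]**2

rng = np.random.default_rng(4)
found = []
for _ in range(20):                        # global (multistart) phase
    sol = pattern_search(misfit, rng.uniform(-2.0, 2.0, size=2))
    if not any(np.linalg.norm(sol - s) < 0.1 for s in found):
        found.append(sol)
```

Clustering the local-search results (here a simple distance test) is what lets the method report the multiple distinct solutions rather than a single optimum.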
Frnakenstein: multiple target inverse RNA folding.
Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun
2012-10-09
RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three-dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem has an elegant and efficient algorithmic solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solve the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods on several data sets totalling 769 real and predicted single structure targets, and on 292 two structure targets. It performed as well as or better than all existing methods at finding sequences which folded in silico into the target structure, without the heavy bias towards CG base pairs that was observed for all other top performing methods. On the two structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions.
The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.
PMID:23043260
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Chen, Ting; Tan, Sirui
Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, no existing techniques are available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation of the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results from the 3D surface seismic data of the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada at Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
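The core mechanism of the paper, representing the unknown in a small, slowly growing basis so that truncation itself regularizes, can be sketched in a few lines of NumPy. The smoothing operator, the fixed cosine basis, the mode count, and the noise level below are all illustrative stand-ins; the paper adapts its eigenfunction basis during the optimization, which this sketch does not do:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
# Smooth "wave speed" profile to recover (illustrative, not the paper's setup).
x_true = 1.0 + 0.3 * np.sin(2 * np.pi * t) + 0.1 * np.cos(6 * np.pi * t)

# Ill-posed forward operator: row-normalized Gaussian smoothing.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)
y = A @ x_true + 0.01 * rng.standard_normal(n)

def cosine_basis(n, k):
    # First k cosine modes: a fixed stand-in for the adaptive eigenspace basis.
    j = np.arange(n)
    return np.stack([np.cos(np.pi * m * (j + 0.5) / n) for m in range(k)], axis=1)

def solve_truncated(k):
    # Least squares in the truncated basis: the truncation itself regularizes.
    B = cosine_basis(n, k)
    coef, *_ = np.linalg.lstsq(A @ B, y, rcond=None)
    return B @ coef

x_trunc = solve_truncated(8)                           # few modes: stable
x_naive = np.linalg.solve(A + 1e-12 * np.eye(n), y)    # near-direct inversion

err_trunc = np.linalg.norm(x_trunc - x_true) / np.linalg.norm(x_true)
err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
```

The truncated solve recovers the smooth profile accurately, while the near-direct inversion amplifies the noise catastrophically; growing the basis slowly trades stability for resolution.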
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings is an ill-posed problem in the sense of Hadamard, and the traditional way to overcome this difficulty is to regularize the problem by adding a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al. 1998; Schou et al. 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such high-gradient regions by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
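The difference between a global smoothness penalty and an edge-preserving one is easy to demonstrate on a one-dimensional profile with a sharp transition, a toy stand-in for the tachocline. The lagged-diffusivity iteration below is one standard way to minimize a smoothed total-variation penalty; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x_true = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # sharp "tachocline-like" jump
y = x_true + 0.05 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)                       # first-difference operator

# Global smoothness (Tikhonov): closed-form minimizer of
# 0.5*||x - y||^2 + 0.5*lam*||Dx||^2 smears the jump.
lam_tik = 10.0
x_tik = np.linalg.solve(np.eye(n) + lam_tik * D.T @ D, y)

# Edge-preserving (smoothed TV) via lagged-diffusivity fixed-point iterations:
# minimize 0.5*||x - y||^2 + lam * sum(sqrt((Dx)^2 + eps^2)).
lam_tv, eps = 0.5, 1e-3
x_tv = y.copy()
for _ in range(30):
    w = 1.0 / np.sqrt((D @ x_tv) ** 2 + eps ** 2)    # small weight at big jumps
    x_tv = np.linalg.solve(np.eye(n) + lam_tv * D.T @ (w[:, None] * D), y)

# TV keeps the gradient concentrated at the jump; Tikhonov spreads it out.
jump_tv = np.max(np.abs(np.diff(x_tv)))
jump_tik = np.max(np.abs(np.diff(x_tik)))
```

The edge-preserving penalty removes the noise on the flat segments while leaving the transition nearly intact, which is exactly the behavior wanted for high-gradient zones.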
Direct EIT reconstructions of complex admittivities on a chest-shaped domain in 2-D.
Hamilton, Sarah J; Mueller, Jennifer L
2013-04-01
Electrical impedance tomography (EIT) is a medical imaging technique in which current is applied on electrodes on the surface of the body, the resulting voltage is measured, and an inverse problem is solved to recover the conductivity and/or permittivity in the interior. Images are then formed from the reconstructed conductivity and permittivity distributions. In the 2-D geometry, EIT is clinically useful for chest imaging. In this work, an implementation of a D-bar method for complex admittivities on a general 2-D domain is presented. In particular, reconstructions are computed on a chest-shaped domain for several realistic phantoms including a simulated pneumothorax, hyperinflation, and pleural effusion. The method demonstrates robustness in the presence of noise. Reconstructions from trigonometric and pairwise current injection patterns are included.
PREFACE: Inverse Problems in Applied Sciences—towards breakthrough
NASA Astrophysics Data System (ADS)
Cheng, Jin; Iso, Yuusuke; Nakamura, Gen; Yamamoto, Masahiro
2007-06-01
These are the proceedings of the international conference `Inverse Problems in Applied Sciences—towards breakthrough' which was held at Hokkaido University, Sapporo, Japan on 3-7 July 2006 (http://coe.math.sci.hokudai.ac.jp/sympo/inverse/). There were 88 presentations and more than 100 participants, and we are proud to say that the conference was very successful. Nowadays, many new activities on inverse problems are flourishing at many centers of research around the world, and the conference has successfully gathered a world-wide variety of researchers. We believe that this volume contains not only main papers, but also conveys the general status of current research into inverse problems. This conference was the third biennial international conference on inverse problems, the core of which is the Pan-Pacific Asian area. The purpose of this series of conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries, and to lead the organization of activities concerning inverse problems centered in East Asia. The first conference was held at City University of Hong Kong in January 2002 and the second was held at Fudan University in June 2004. Following the preceding two successes, the third conference was organized in order to extend the scope of activities and build useful bridges to the next conference in Seoul in 2008. Therefore this third biennial conference was intended not only to establish collaboration and links between researchers in Asia and leading researchers worldwide in inverse problems but also to nurture interdisciplinary collaboration in theoretical fields such as mathematics, applied fields and evolving aspects of inverse problems. For these purposes, we organized tutorial lectures, serial lectures and a panel discussion as well as conference research presentations. This volume contains three lecture notes from the tutorial and serial lectures, and 22 papers. 
Especially at this flourishing time, it is necessary to carefully analyse the current status of inverse problems for further development. Thus we have opened with the panel discussion entitled `Future of Inverse Problems' with panelists: Professors J Cheng, H W Engl, V Isakov, R Kress, J-K Seo, G Uhlmann and the commentator: Elaine Longden-Chapman from IOP Publishing. The aims of the panel discussion were to examine the current research status from various viewpoints, to discuss how we can overcome any difficulties and how we can promote young researchers and open new possibilities for inverse problems such as industrial linkages. As one output, the panel discussion has triggered the organization of the Inverse Problems International Association (IPIA) which has led to its first international congress in the summer of 2007. Another remarkable outcome of the conference is, of course, the present volume: this is the very high quality online proceedings volume of Journal of Physics: Conference Series. Readers can see in these proceedings very well written tutorial lecture notes, and very high quality original research and review papers all of which show what was achieved by the time the conference was held. The electronic publication of the proceedings is a new way of publicizing the achievement of the conference. It has the advantage of wide circulation and cost reduction. We believe this is a most efficient method for our needs and purposes. We would like to take this opportunity to acknowledge all the people who helped to organize the conference. Guest Editors Jin Cheng, Fudan University, Shanghai, China Yuusuke Iso, Kyoto University, Kyoto, Japan Gen Nakamura, Hokkaido University, Sapporo, Japan Masahiro Yamamoto, University of Tokyo, Tokyo, Japan
Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions
NASA Astrophysics Data System (ADS)
Gutiérrez, David; Nehorai, Arye; Preissl, Hubert
2005-05-01
Fetal magnetoencephalography (fMEG) is a non-invasive technique where measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus' neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relationship with the localization and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. Then, we use this model and the maximum likelihood technique to solve the inverse problem assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxes lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors, working with either random or restricted sampling scenarios.
In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples that minimize the representation error with a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors, along with the coefficients, are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
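The data-driven threshold selection in the first part can be illustrated with a generalized cross validation (GCV) rule for soft thresholding. The sketch below applies the common thresholding variant of GCV to plain noisy coefficients rather than an actual shearlet decomposition, and all sizes and amplitudes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
theta = np.zeros(n)
theta[rng.choice(n, 12, replace=False)] = 5.0      # sparse "transform" coefficients
y = theta + rng.standard_normal(n)                 # unit-variance noise

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gcv(t):
    # GCV for soft thresholding: residual norm over the squared
    # fraction of coefficients that were zeroed out.
    est = soft(y, t)
    n_zero = np.count_nonzero(est == 0.0)
    return (np.sum((y - est) ** 2) / n) / (n_zero / n) ** 2

ts = np.linspace(0.5, 6.0, 56)
t_gcv = ts[np.argmin([gcv(t) for t in ts])]        # threshold chosen from data only

err_denoised = np.linalg.norm(soft(y, t_gcv) - theta)
err_noisy = np.linalg.norm(y - theta)
```

No knowledge of the noise variance is used anywhere: the GCV curve alone selects a threshold that suppresses the noise while retaining the few large coefficients.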
Solvability of the electrocardiology inverse problem for a moving dipole.
Tolkachev, V; Bershadsky, B; Nemirko, A
1993-01-01
New formulations of the direct and inverse problems for the moving dipole are offered. It is suggested to limit the study to a small area on the chest surface; this lowers the role of the medium inhomogeneity. When formulating the direct problem, irregular components are considered. An algorithm for the simultaneous determination of the dipole and regular noise parameters is described and analytically investigated. It is shown that temporal overdetermination of the equations yields a unique solution of the inverse problem for the four leads.
NASA Astrophysics Data System (ADS)
Gosselin, J.; Audet, P.; Schaeffer, A. J.
2017-12-01
The seismic velocity structure in the forearc of subduction zones provides important constraints on material properties, with implications for seismogenesis. In Cascadia, previous studies have imaged a downgoing low-velocity zone (LVZ) characterized by an elevated P-to-S velocity ratio (Vp/Vs) down to 45 km depth, near the intersection with the mantle wedge corner, beyond which the signature of the LVZ disappears. These results, combined with the absence of a "normal" continental Moho, indicate that the down-going oceanic crust likely carries large amounts of overpressured free fluids that are released downdip at the onset of crustal eclogitization, and are further stored in the mantle wedge as serpentinite. These overpressured free fluids affect the stability of the plate interface and facilitate slow slip. These results are based on the inversion and migration of scattered teleseismic data for individual layer properties; a methodology which suffers from regularization and smoothing, non-uniqueness, and does not consider model uncertainty. This study instead applies trans-dimensional Bayesian inversion of teleseismic data collected in the forearc of northern Cascadia (the CAFÉ experiment in northern Washington) to provide rigorous, quantitative estimates of local velocity structure, and associated uncertainties (particularly Vp/Vs structure and depth to the plate interface). Trans-dimensional inversion is a generalization of fixed-dimensional inversion that includes the number (and type) of parameters required to describe the velocity model (or data error model) as unknown in the problem. This allows model complexity to be inherently determined by data information content, not by subjective regularization. The inversion is implemented here using the reversible-jump Markov chain Monte Carlo algorithm. The result is an ensemble set of candidate velocity-structure models which approximate the posterior probability density (PPD) of the model parameters. 
The solution to the inverse problem, and associated uncertainties, are described by properties of the PPD. The results obtained here will eventually be integrated with teleseismic data from OBS stations from the Cascadia Initiative to provide constraints across the entire seismogenic portion of the plate interface.
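A fixed-dimension Metropolis-Hastings sampler illustrates the ensemble idea on a toy two-parameter problem; the reversible-jump machinery that also samples over the number of parameters is substantially more involved and is not attempted here. The model, noise level, and step sizes below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "forward model": a straight line; data with known Gaussian noise.
a_true, b_true, sigma = 2.0, 1.0, 0.2
x = np.linspace(0.0, 1.0, 100)
d_obs = a_true * x + b_true + sigma * rng.standard_normal(x.size)

def log_post(a, b):
    # Flat priors, Gaussian likelihood -> log posterior up to a constant.
    r = d_obs - (a * x + b)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

samples = []
a, b = 0.0, 0.0
lp = log_post(a, b)
for _ in range(30000):
    a_new = a + 0.05 * rng.standard_normal()
    b_new = b + 0.05 * rng.standard_normal()
    lp_new = log_post(a_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:        # Metropolis accept/reject
        a, b, lp = a_new, b_new, lp_new
    samples.append((a, b))

post = np.array(samples[5000:])                    # discard burn-in
a_mean, b_mean = post.mean(axis=0)
a_std, b_std = post.std(axis=0)
```

The retained ensemble approximates the posterior probability density, so parameter estimates and their uncertainties come out of the same set of samples, as in the study's PPD-based interpretation.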
The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.
NASA Astrophysics Data System (ADS)
Thompson, T. B.; Meade, B. J.
2017-12-01
While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the Mw 7.3 Landers and Mw 8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat-Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive for practical problems. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
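The subspace-recycling trick relies on the fact that the Krylov subspace of JᵀJ generated from the gradient is invariant under the diagonal shift λI, so one basis can serve every damping parameter. A minimal NumPy sketch on a random linear test problem (not the MADS/Julia implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 200, 80, 25
J = rng.standard_normal((m, n))              # Jacobian of a linear test problem
r = rng.standard_normal(m)                   # current residual vector
g = J.T @ r                                  # gradient of 0.5*||r||^2

# One orthonormal Krylov basis V for span{g, (J^T J)g, ...}; the subspace is
# invariant under the shift lam*I, so it is recycled for every damping value.
V = np.zeros((n, k))
V[:, 0] = g / np.linalg.norm(g)
for j in range(1, k):
    w = J.T @ (J @ V[:, j - 1])
    w -= V[:, :j] @ (V[:, :j].T @ w)         # full re-orthogonalization
    V[:, j] = w / np.linalg.norm(w)

def lm_step_projected(lam):
    # Damped normal equations projected onto the k-dimensional subspace.
    H = V.T @ (J.T @ (J @ V)) + lam * np.eye(k)
    return V @ np.linalg.solve(H, -(V.T @ g))

def lm_step_full(lam):
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -g)

lams = (0.01, 1.0, 100.0)
rel_errs = [np.linalg.norm(lm_step_projected(l) - lm_step_full(l))
            / np.linalg.norm(lm_step_full(l)) for l in lams]
res_norms = [np.linalg.norm(r + J @ lm_step_projected(l)) for l in lams]
```

Each projected step requires only a k-by-k solve, yet across all three damping parameters it reproduces the full damped step closely and decreases the linearized residual, using the single recycled basis.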
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet (DUV) light is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CD), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures for evaluating the quality of the manufacturing process. To measure these quantities, a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite dimensional nonlinear operator equation. Reformulating the problem as an optimization problem, a vast number of numerical schemes can be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, using a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed using light in the DUV range. We then determine the expected uncertainties of the geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of the input data. In the last step, we use the measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm. The results are compared to the outcome of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
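The reduction to a finite-dimensional nonlinear operator equation can be sketched with a plain damped Gauss-Newton iteration, a simpler relative of the SQP variant used in the paper. The two-parameter forward model below is invented for illustration and stands in for a FEM Helmholtz solve:

```python
import numpy as np

rng = np.random.default_rng(8)

angles = np.linspace(0.0, 1.0, 9)           # toy "measurement configurations"

def model(p):
    # Invented smooth forward model: two geometry parameters mapped to
    # simulated "diffraction efficiencies" (stand-in for a FEM solve).
    h, w = p
    return h * np.exp(-w * angles)

p_true = np.array([50.0, 2.0])
data = model(p_true) * (1.0 + 0.001 * rng.standard_normal(angles.size))

def jacobian(p, step=1e-6):
    # Forward-difference Jacobian, with a per-parameter step scale.
    J = np.zeros((angles.size, p.size))
    for i in range(p.size):
        dp = np.zeros(p.size)
        dp[i] = step * max(1.0, abs(p[i]))
        J[:, i] = (model(p + dp) - model(p)) / dp[i]
    return J

# Damped Gauss-Newton iteration from a rough starting guess.
p = np.array([40.0, 1.5])
for _ in range(50):
    res = model(p) - data
    J = jacobian(p)
    p = p + np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), -J.T @ res)
```

Because the measurement noise is small, the iteration converges to within the noise-limited uncertainty of the true parameters, which is the regime the uncertainty analysis in the paper quantifies.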
MR Imaging with Metal-suppression Sequences for Evaluation of Total Joint Arthroplasty.
Talbot, Brett S; Weinberg, Eric P
2016-01-01
Metallic artifact at orthopedic magnetic resonance (MR) imaging continues to be an important problem, particularly in the realm of total joint arthroplasty. Complications often follow total joint arthroplasty and can be expected for a small percentage of all implanted devices. Postoperative complications involve not only osseous structures but also adjacent soft tissues, a highly problematic area at MR imaging because of artifacts from metallic prostheses. Without special considerations, susceptibility artifacts from ferromagnetic implants can unacceptably degrade image quality. Common artifacts include in-plane distortions (signal loss and signal pileup), poor or absent fat suppression, geometric distortion, and through-section distortion. Basic methods to reduce metallic artifacts include use of spin-echo or fast spin-echo sequences with long echo train lengths, short inversion time inversion-recovery (STIR) sequences for fat suppression, a high bandwidth, thin section selection, and an increased matrix. With care and attention to the alloy type (eg, titanium, cobalt-chromium, stainless steel), orientation of the implant, and magnetic field strength, as well as use of proprietary and nonproprietary metal-suppression techniques, previously nondiagnostic studies can yield key diagnostic information. Specifically, sequences such as the metal artifact reduction sequence (MARS), WARP (Siemens Healthcare, Munich, Germany), slice encoding for metal artifact correction (SEMAC), and multiacquisition with variable-resonance image combination (MAVRIC) can be optimized to reveal pathologic conditions previously hidden by periprosthetic artifacts. Complications of total joint arthroplasty that can be evaluated by using MR imaging with metal-suppression sequences include pseudotumoral conditions such as metallosis and particle disease, infection, aseptic prosthesis loosening, tendon injury, and muscle injury. ©RSNA, 2015.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such a solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed, and their effectiveness is illustrated via numerical simulations. PMID:22163859
MAP Estimators for Piecewise Continuous Inversion
2016-08-08
Dunlop, M M; Stuart, A M (Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK)
Published 8 August 2016
We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua ... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP ...
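For linear forward operators with Gaussian noise and a Gaussian prior, the MAP estimator coincides with a Tikhonov-regularized least-squares solution; the piecewise-continuous priors of the paper are more delicate, but the Gaussian case is the standard reference point. A quick numerical check of the equivalence, with an arbitrary random operator:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 60, 40
A = rng.standard_normal((m, n))            # linear forward operator
x_true = rng.standard_normal(n)
sigma, tau = 0.1, 1.0                      # noise std, prior std
y = A @ x_true + sigma * rng.standard_normal(m)

# MAP estimate from the Gaussian posterior precision:
#   x_map = (A^T A / sigma^2 + I / tau^2)^{-1} A^T y / sigma^2
P = A.T @ A / sigma**2 + np.eye(n) / tau**2
x_map = np.linalg.solve(P, A.T @ y / sigma**2)

# The same point minimizes ||Ax - y||^2 + (sigma/tau)^2 ||x||^2,
# computed here as a stacked least-squares problem.
mu = sigma / tau
A_aug = np.vstack([A, mu * np.eye(n)])
y_aug = np.concatenate([y, np.zeros(n)])
x_tik, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
```

Multiplying the posterior optimality condition by sigma squared shows term by term that both solves target (AᵀA + (σ/τ)²I)x = Aᵀy, which is why the two vectors agree to machine precision.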
Online quantitative analysis of multispectral images of human body tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.
2013-08-01
A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, normalisation of the image to one of its spectral layers and determination of the unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. The regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method over a wide variation of model parameters. The correctness of the model calculations is confirmed by good agreement with experimental data. The error of the method is estimated under conditions of general variability of the structural and morphological parameters of the tissue. The method developed is compared with traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.
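The regression idea, simulating reflectance over a wide parameter range, normalising to a reference spectral band, then learning a direct map from normalised spectra to the parameter, can be mimicked with a deliberately simple analytic "tissue" model in place of Monte Carlo photon transport. The model, band weights, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented per-band absorption weights (a stand-in for chromophore spectra).
eps = np.array([0.2, 0.6, 1.0, 1.5, 2.2])
ref = 0                                               # normalize to the first band

def simulate(c, noise=0.01):
    # Toy Beer-Lambert-like "reflectance" in place of Monte Carlo transport.
    s = np.exp(-c[:, None] * eps[None, :])
    return s * (1.0 + noise * rng.standard_normal(s.shape))

c_train = rng.uniform(0.5, 2.0, 500)                  # wide parameter variation
S_train = simulate(c_train)
F_train = np.log(S_train / S_train[:, [ref]])         # normalized spectral features

# Linear regression from normalized features to the tissue parameter.
X = np.column_stack([F_train, np.ones(c_train.size)])
coef, *_ = np.linalg.lstsq(X, c_train, rcond=None)

# Evaluate on fresh "measurements": one matrix product per pixel batch.
c_test = rng.uniform(0.5, 2.0, 200)
S_test = simulate(c_test)
F_test = np.log(S_test / S_test[:, [ref]])
c_pred = np.column_stack([F_test, np.ones(c_test.size)]) @ coef
rms = np.sqrt(np.mean((c_pred - c_test) ** 2))
```

Once the regression coefficients are fitted offline, evaluating new pixels is a single matrix product, which is what makes this approach suitable for online monitoring compared with solving an inverse problem per pixel.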
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphor-based drugs in vivo. Single-view data reconstruction, as a key issue of CB-XLCT imaging, promotes the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of reconstruction. The strategy is based on the third-order simplified spherical harmonics approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both localization and quantitative recovery, with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. To avoid phase wrapping, we use the exponential phase combined with a damping method, which yields an instantaneous-phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous-phase-based inversion are compared in numerical examples, which indicate that, when the seismic data lack low-frequency information, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
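The quantities involved can be computed directly with an FFT-based analytic signal (equivalent to the Hilbert-transform construction): the wrapped phase, the wrap-free exponential phase exp(iφ) used in the objective, and the instantaneous frequency. The single-frequency toy trace and misfit form below are illustrative, not the paper's seismic data or exact objective:

```python
import numpy as np

fs, f0 = 500.0, 5.0                        # sampling rate and trace frequency (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
trace = np.cos(2 * np.pi * f0 * t)

def analytic_signal(x):
    # FFT construction of the analytic signal (Hilbert transform in the kernel).
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0                 # double positive frequencies
    h[len(x) // 2] = 1.0                   # even-length convention
    return np.fft.ifft(X * h)

z = analytic_signal(trace)
phase = np.angle(z)                        # wrapped instantaneous phase
exp_phase = np.exp(1j * phase)             # wrap-free quantity for the misfit

# Instantaneous frequency from the unwrapped phase (interior samples only,
# to stay clear of any edge effects).
inst_freq = np.gradient(np.unwrap(phase)) * fs / (2 * np.pi)
f_est = np.median(inst_freq[50:-50])

def phase_misfit(obs, syn):
    # Exponential-phase misfit: insensitive to 2*pi wrapping by construction.
    zo, zs = analytic_signal(obs), analytic_signal(syn)
    d = np.exp(1j * np.angle(zo)) - np.exp(1j * np.angle(zs))
    return 0.5 * np.sum(np.abs(d) ** 2)
```

Because the misfit compares exp(iφ) rather than φ itself, a 2π jump in the wrapped phase changes nothing, which is the property that makes the objective usable without explicit unwrapping.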
Kida, Hirotaka; Satoh, Masayuki; Ii, Yuichiro; Fukuyama, Hidenao; Maeda, Masayuki; Tomimoto, Hidekazu
2017-01-01
The patient was an 81-year-old man who had been treated for hypertension for several decades. In 2012, he developed gait disturbance and mild amnesia. One year later, his gait disturbance worsened, and he developed urinary incontinence. Conventional brain magnetic resonance imaging using T2-weighted images and fluid-attenuated inversion recovery showed multiple lacunar infarctions. These findings fulfilled the diagnostic criteria for subcortical ischaemic vascular dementia. However, susceptibility-weighted imaging showed multiple lobar microbleeds in the bilateral occipitoparietal lobes, and double inversion recovery and 3-D fluid-attenuated inversion recovery images on 3-T magnetic resonance imaging revealed cortical microinfarctions in the left parieto-temporo-occipital region. Pittsburgh compound B positron emission tomography revealed diffuse uptake in the cerebral cortex. Therefore, we diagnosed the patient with subcortical ischaemic vascular dementia associated with Alzheimer's disease. The use of double inversion recovery and susceptibility-weighted imaging on 3-T magnetic resonance imaging may be a supplemental strategy for diagnosing cerebral amyloid angiopathy, which is closely associated with Alzheimer's disease. © 2016 The Authors. Psychogeriatrics © 2016 Japanese Psychogeriatric Society.
Solutions to inverse plume in a crosswind problem using a predictor-corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume from a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
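The two-stage predict/correct idea can be illustrated with a hypothetical toy forward model (a Gaussian sensor response standing in for the paper's crosswind convection simulations; the sensor positions and plume parameters below are invented for illustration):

```python
import numpy as np

# Hypothetical toy forward model: the reading of a sensor at position y
# for a plume of strength Q centred at y0.
def sensor_reading(Q, y0, y):
    return Q * np.exp(-(y - y0) ** 2)

y1, y2 = 0.5, 1.5                  # two downstream sampling points
Q_true, y0_true = 3.0, 1.2         # unknowns to recover
d1 = sensor_reading(Q_true, y0_true, y1)
d2 = sensor_reading(Q_true, y0_true, y2)

# Predictor: the ratio of the two readings eliminates Q, giving the
# location in closed form for this toy model ...
y0_est = ((y2 ** 2 - y1 ** 2) - np.log(d1 / d2)) / (2.0 * (y2 - y1))
# ... Corrector: with the location fixed, either sensor yields the strength.
Q_est = d1 / np.exp(-(y1 - y0_est) ** 2)
```

In the paper the forward model is a numerical convection simulation rather than a closed-form Gaussian, so the elimination step is replaced by the two inverse interpolation functions built from the reference simulations.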
NASA Astrophysics Data System (ADS)
Shang, Ruibo; Archibald, Richard; Gelb, Anne; Luke, Geoffrey P.
2018-02-01
In photoacoustic (PA) imaging, the optical absorption can be acquired from the initial pressure distribution (IPD), so an accurate reconstruction of the IPD is very helpful for reconstructing the optical absorption. However, the image quality of PA imaging in scattering media is degraded by acoustic diffraction, imaging artifacts, and weak PA signals. In this paper, we propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging. A linear imaging forward model was set up based on the time-and-delay method, under the assumption that the point spread function (PSF) is spatially invariant. An optimization problem with a regularization term enforcing the sparsity of the IPD in a certain domain was then formulated to solve this inverse problem. As a proof of principle, the approach was applied to reconstructing point objects and blood vessel phantoms. The resolution and signal-to-noise ratio (SNR) were compared between conventional back-projection and our proposed approach. Overall, these results show that computational imaging can leverage the sparsity of PA images to improve the estimation of the IPD.
Photon migration in non-scattering tissue and the effects on image reconstruction
NASA Astrophysics Data System (ADS)
Dehghani, H.; Delpy, D. T.; Arridge, S. R.
1999-12-01
Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.
Accurate image-charge method by the use of the residue theorem for core-shell dielectric sphere
NASA Astrophysics Data System (ADS)
Fu, Jing; Xu, Zhenli
2018-02-01
An accurate image-charge method (ICM) is developed for ionic interactions outside a core-shell structured dielectric sphere. Core-shell particles have wide applications for which the theoretical investigation requires efficient methods for the Green's function used to calculate pairwise interactions of ions. The ICM is based on an inverse Mellin transform from the coefficients of spherical harmonic series of the Green's function such that the polarization charge due to dielectric boundaries is represented by a series of image point charges and an image line charge. The residue theorem is used to accurately calculate the density of the line charge. Numerical results show that the ICM is promising in fast evaluation of the Green's function, and thus it is useful for theoretical investigations of core-shell particles. This routine can also be applicable for solving other problems with spherical dielectric interfaces such as multilayered media and Debye-Hückel equations.
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when they are used independently. To improve the performance, a nonlocal supervised coding strategy-based NLT for image restoration is proposed in this paper. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates the supervised weight learning and the nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.
Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho
2017-10-01
This paper describes a fast forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers: on the same PC, the CST solver takes ~45 min, while the FFS takes ~1 s of effective simulation time for the same EM model of a numerical breast phantom.
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
Total variation based image deconvolution for extended depth-of-field microscopy images
NASA Astrophysics Data System (ADS)
Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.
2015-03-01
One approach for a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a high lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF-microscopy) has been applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging has been proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images seem blurred in a first step. An image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images over an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and since the PSF used for deconvolution exactly fits only one plane within the object. We use non-linear Total Variation based image restoration techniques, where different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.
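A minimal 1D sketch of Total Variation based restoration, using a smoothed TV penalty and plain gradient descent (a denoising toy, far simpler than the paper's 3D deconvolution; all parameter values below are illustrative):

```python
import numpy as np

def tv_denoise_1d(f, lam=0.2, eps=1e-2, step=0.05, n_iter=2000):
    """Gradient descent on the smoothed-TV objective
    0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)."""
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)            # d(TV)/d(du)
        # adjoint of the forward difference (a discrete divergence)
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        u -= step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant edge
noisy = truth + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

The TV term suppresses oscillatory noise while largely preserving the jump, which is the property that makes TV attractive for restoring sharp structures.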
General phase regularized reconstruction using phase cycling.
Ong, Frank; Cheng, Joseph Y; Lustig, Michael
2018-07-01
To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed a reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system in terms of one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Seismic velocity estimation from time migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, Maria Kourkina
2007-01-01
This work is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates.
The construction of substructure models from these data is less and less reliable as the earth becomes horizontally nonconstant. Even mild lateral velocity variations can significantly distort subsurface structures on the time migrated images. Conversely, depth migration provides the potential for more accurate reconstructions, since it can handle significant lateral variations. However, this approach requires good input data, known as a 'velocity model'. We address the problem of estimating seismic velocities inside the earth, i.e., the problem of constructing a velocity model, which is necessary for obtaining seismic images in regular Cartesian coordinates. The main goals are to develop algorithms to convert time-migration velocities to true seismic velocities, and to convert time-migrated images to depth images in regular Cartesian coordinates. Our main results are three-fold. First, we establish a theoretical relation between the true seismic velocities and the 'time migration velocities' using the paraxial ray tracing. Second, we formulate an appropriate inverse problem describing the relation between time migration velocities and depth velocities, and show that this problem is mathematically ill-posed, i.e., unstable to small perturbations. Third, we develop numerical algorithms to solve regularized versions of these equations which can be used to recover smoothed velocity variations. Our algorithms consist of efficient time-to-depth conversion algorithms, based on Dijkstra-like Fast Marching Methods, as well as level set and ray tracing algorithms for transforming Dix velocities into seismic velocities. Our algorithms are applied to both two-dimensional and three-dimensional problems, and we test them on a collection of both synthetic examples and field data.
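The Dix-type conversion from RMS (time-migration) velocities to interval velocities that this work builds on can be sketched directly (the classic Dix formula; the two-layer values below are hypothetical):

```python
import numpy as np

def dix_interval_velocities(t, v_rms):
    """Classic Dix inversion: interval velocities from RMS velocities.

    t     : two-way times to layer bottoms, increasing (s)
    v_rms : RMS (stacking) velocities down to each time (m/s)
    """
    t = np.asarray(t, float)
    v = np.asarray(v_rms, float)
    num = np.diff(t * v ** 2, prepend=0.0)
    den = np.diff(t, prepend=0.0)
    v_int2 = num / den
    if np.any(v_int2 <= 0):
        # the differencing amplifies noise, which is one face of the
        # ill-posedness discussed above
        raise ValueError("Dix inversion unstable: non-physical interval velocity")
    return np.sqrt(v_int2)

# hypothetical two-layer model: 2000 m/s over 3000 m/s, 1 s each
t = np.array([1.0, 2.0])
v_rms = np.array([2000.0, np.sqrt(6.5e6)])
v_int = dix_interval_velocities(t, v_rms)
```

The noise amplification in the time-differencing is exactly why the authors solve regularized versions of the conversion equations rather than applying the raw formula.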
Reconstruction From Multiple Particles for 3D Isotropic Resolution in Fluorescence Microscopy.
Fortun, Denis; Guichard, Paul; Hamel, Virginie; Sorzano, Carlos Oscar S; Banterle, Niccolo; Gonczy, Pierre; Unser, Michael
2018-05-01
The imaging of proteins within macromolecular complexes has been limited by the low axial resolution of optical microscopes. To overcome this problem, we propose a novel computational reconstruction method that yields isotropic resolution in fluorescence imaging. The guiding principle is to reconstruct a single volume from the observations of multiple rotated particles. Our new operational framework detects particles, estimates their orientation, and reconstructs the final volume. The main challenge comes from the absence of initial template and a priori knowledge about the orientations. We formulate the estimation as a blind inverse problem, and propose a block-coordinate stochastic approach to solve the associated non-convex optimization problem. The reconstruction is performed jointly in multiple channels. We demonstrate that our method is able to reconstruct volumes with 3D isotropic resolution on simulated data. We also perform isotropic reconstructions from real experimental data of doubly labeled purified human centrioles. Our approach revealed the precise localization of the centriolar protein Cep63 around the centriole microtubule barrel. Overall, our method offers new perspectives for applications in biology that require the isotropic mapping of proteins within macromolecular assemblies.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescence molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Group refractive index reconstruction with broadband interferometric confocal microscopy
Marks, Daniel L.; Schlachter, Simon C.; Zysk, Adam M.; Boppart, Stephen A.
2010-01-01
We propose a novel method of measuring the group refractive index of biological tissues at the micrometer scale. The technique utilizes a broadband confocal microscope embedded into a Mach–Zehnder interferometer, with which spectral interferograms are measured as the sample is translated through the focus of the beam. The method does not require phase unwrapping and is insensitive to vibrations in the sample and reference arms. High measurement stability is achieved because a single spectral interferogram contains all the information necessary to compute the optical path delay of the beam transmitted through the sample. Included are a physical framework defining the forward problem, linear solutions to the inverse problem, and simulated images of biologically relevant phantoms. PMID:18451922
NASA Astrophysics Data System (ADS)
Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas
2017-04-01
We develop a Markov Chain Monte Carlo inversion of fundamental and higher mode phase-velocity curves for the radially and azimuthally anisotropic structure of the crust and upper mantle. In the inversions of Rayleigh- and Love-wave dispersion curves for radially anisotropic structure, we obtain probabilistic 1D radially anisotropic shear-velocity profiles of the isotropic average Vs and anisotropy (or Vsv and Vsh) as functions of depth. In the inversions for azimuthal anisotropy, Rayleigh-wave dispersion curves at different azimuths are inverted for the vertically polarized shear-velocity structure (Vsv) and the 2-phi component of azimuthal anisotropy. The strength and originality of the method lie in its fully non-linear approach: each model realization is computed using exact forward calculations, and the uncertainty of the models is part of the output. In the inversions for azimuthal anisotropy, in particular, the computation of the forward problem is performed separately at different azimuths, with no linear approximations on the relation of the Earth's elastic parameters to surface wave phase velocities. The computations are performed in parallel in order to reduce the computing time. We compare inversions of the fundamental mode phase-velocity curves alone with inversions that also include overtones. The addition of higher modes enhances the resolving power of the anisotropic structure of the deep upper mantle. We apply the inversion method to phase-velocity curves in a few regions, including the Hangai dome region in Mongolia. Our models provide constraints on the Moho depth, the Lithosphere-Asthenosphere Boundary, and the alignment of the anisotropic fabric with the direction of current and past flow, from the crust down to the deep asthenosphere.
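The fully non-linear sampling idea can be illustrated with a random-walk Metropolis sampler on a hypothetical toy forward model (a linear stand-in for the exact surface-wave computation; all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy forward model: three "phase-velocity observables" that
# depend on the model vector m = (shear velocity, anisotropy strength).
G = np.array([[1.0, 0.3],
              [1.0, -0.3],
              [1.0, 0.0]])
m_true = np.array([4.5, 0.1])
data = G @ m_true
sigma = 0.01                       # assumed observational noise level

def log_likelihood(m):
    r = G @ m - data
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: every proposal is evaluated with the full
# forward model, so no linearization of the misfit is needed, and the
# retained chain directly characterizes the model uncertainty.
m = np.array([4.0, 0.0])           # deliberately poor starting model
chain = []
for _ in range(20000):
    proposal = m + rng.normal(0.0, 0.02, size=2)
    if np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(m):
        m = proposal
    chain.append(m.copy())
posterior = np.array(chain[10000:])    # discard burn-in
```

In the real inversion the matrix product would be replaced by an exact (and expensive) dispersion computation, which is why the authors parallelize across azimuths.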
A method of gravity and seismic sequential inversion and its GPU implementation
NASA Astrophysics Data System (ADS)
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a sequential gravity and seismic inversion method that inverts for density and velocity together. For the gravity inversion we use an iterative method based on a correlation imaging algorithm; for the seismic inversion we use full waveform inversion. The link between density and velocity is an empirical relation, the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. The gravity inversion proceeds iteratively: first we compute the correlation image of the observed gravity anomaly, whose values lie between -1 and +1, and multiply it by a small density increment to obtain the initial density model. We then compute the forward response of this model, form the correlation image of the misfit between the observed and forward data, multiply it by a small density increment, and add the result to the current model. Repeating this procedure yields the final inverted density model. For the seismic inversion we use a method based on the linearized acoustic wave equation written in the frequency domain; starting from an initial velocity model, we can obtain a good velocity result. The sequential inversion of gravity and seismic data requires a formula linking density and velocity; in our method, we use the Gardner equation. Driven by the market demand for real-time, high-definition 3D graphics, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor to the CPU for high-performance computing. The Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to make general-purpose GPU computing accessible while maintaining a low learning curve for programmers familiar with standard languages such as C.
In our processing, we use the GPU to accelerate both the gravity and the seismic inversion. Taking the gravity inversion as an example, its computational kernels are the gravity forward simulation and the correlation imaging; after parallelization on the GPU in the 3D case, the original five CPU loops are reduced to three in the inversion module and to two in the forward module. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
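The Gardner relation used above as the density-velocity link can be written out directly (the standard coefficients 0.31 and 0.25, with velocity in m/s and density in g/cm^3; the 3000 m/s example is illustrative):

```python
def gardner_density(vp):
    """Gardner's relation rho = 0.31 * Vp^0.25 (Vp in m/s, rho in g/cm^3)."""
    return 0.31 * vp ** 0.25

def gardner_velocity(rho):
    """Inverse relation, used to map density updates back to velocity."""
    return (rho / 0.31) ** 4.0

rho = gardner_density(3000.0)   # density of a hypothetical 3000 m/s layer
```

The exact round trip between the two functions is what allows the sequential scheme to pass updates back and forth between the gravity (density) and seismic (velocity) inversions.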
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
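As a toy illustration of the nonlinear least-squares regression idea described in this abstract (not the paper's actual model), the sketch below calibrates a single transmissivity-like parameter of a hypothetical 1-D head profile by Gauss-Newton iteration; all names and values are invented for the example.

```python
# Toy calibration sketch (hypothetical model, not the paper's): estimate a
# transmissivity-like parameter T in a simple 1-D head profile
# h(x) = h0 - q*x/T by Gauss-Newton nonlinear least squares.

def head(T, x, h0=100.0, q=10.0):
    """Forward model: hydraulic head at position x for parameter T."""
    return h0 - q * x / T

def gauss_newton(obs, xs, T0, iters=15):
    """Fit T by minimizing the sum of squared head residuals."""
    T = T0
    for _ in range(iters):
        r = [o - head(T, x) for o, x in zip(obs, xs)]   # residuals
        J = [10.0 * x / T**2 for x in xs]               # d(head)/dT
        T += sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return T

xs = [1.0, 2.0, 3.0, 4.0]
obs = [head(50.0, x) for x in xs]       # synthetic, noise-free data
T_est = gauss_newton(obs, xs, T0=20.0)  # converges to the true T = 50
```

With noise-free synthetic data the iteration converges rapidly to the generating value; in a real calibration one would also quantify confidence limits on the estimate, as the abstract emphasizes.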
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
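The paper's KKT-derived algorithms for the exponential (nonlinear) model are not reproduced here; as a simplified analogue only, the following sketch shows the classic multiplicative ML-EM update for a *linear* Poisson model, which illustrates how a multiplicative iteration keeps the solution non-negative while fitting Poisson-distributed data.

```python
# Simplified analogue (NOT the paper's algorithm): for a linear Poisson
# model y ~ Poisson(A x) with x >= 0, the multiplicative ML-EM update
#     x <- x * A^T(y / Ax) / (A^T 1)
# preserves non-negativity and increases the Poisson likelihood.
import numpy as np

def mlem(A, y, iters=5000):
    x = np.ones(A.shape[1])           # strictly positive start
    colsums = A.T @ np.ones(A.shape[0])
    for _ in range(iters):
        x *= A.T @ (y / (A @ x)) / colsums
    return x

A = np.array([[1.0, 0.5], [0.5, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                        # noise-free synthetic counts
x = mlem(A, y)                        # stays non-negative throughout
```

For the lidar problem the forward map is the exponential of a linear operator, so the paper's updates differ in detail, but the same KKT reasoning underlies both.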
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
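A minimal 1-D sketch of this idea, under invented assumptions (not the paper's traveltime setup): a pretend-expensive forward model is evaluated once on a coarse grid, a cheap piecewise-linear surrogate replaces it inside a Metropolis loop, and the surrogate's training error is folded into the likelihood variance in the spirit of quantifying the modeling error probabilistically.

```python
# Surrogate-accelerated Metropolis sampling (toy, 1-D; all names invented).
import math, random, bisect

def g(m):                      # "expensive" forward model (stand-in)
    return m ** 3

grid = [i * 0.25 - 2.0 for i in range(17)]   # 17 training runs on [-2, 2]
vals = [g(m) for m in grid]

def surrogate(m):              # fast piecewise-linear emulator
    i = min(max(bisect.bisect_right(grid, m) - 1, 0), len(grid) - 2)
    t = (m - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - t) * vals[i] + t * vals[i + 1]

# modeling-error scale estimated at the grid midpoints
tau = max(abs(g((a + b) / 2) - surrogate((a + b) / 2))
          for a, b in zip(grid, grid[1:]))
sigma, d_obs = 0.1, g(1.0)     # data noise level; noise-free datum at m = 1
var = sigma ** 2 + tau ** 2    # inflate variance by the modeling error

def logpost(m):
    if not -2.0 <= m <= 2.0:   # uniform prior on [-2, 2]
        return -math.inf
    return -(d_obs - surrogate(m)) ** 2 / (2 * var)

random.seed(0)
m, samples = 0.0, []
for k in range(30000):         # Metropolis random walk on the surrogate
    prop = m + random.gauss(0.0, 0.2)
    if math.log(random.random()) < logpost(prop) - logpost(m):
        m = prop
    if k >= 5000:              # discard burn-in
        samples.append(m)
mean = sum(samples) / len(samples)
```

Every posterior evaluation here costs an interpolation rather than a forward simulation, which is the source of the speed-up the abstract reports.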
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided for the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.
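The incomplete-inverse machinery itself is not reproduced here; the sketch below only illustrates the underlying observation that the augmented normal matrix carries all the least-squares quantities: solving the normal equations yields the estimate, and the residual sum of squares falls out of the augmented block.

```python
# Augmented normal matrix N = [[A^T A, A^T b], [b^T A, b^T b]] for a
# least-squares fit; the parameter estimate and the residual sum of
# squares are both recoverable from N alone.
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # fit y = c + m*t
b = np.array([2.0, 4.0, 6.0])                       # exact data: c=0, m=2

N = np.block([[A.T @ A, (A.T @ b)[:, None]],
              [(b @ A)[None, :], np.array([[b @ b]])]])

x_hat = np.linalg.solve(N[:2, :2], N[:2, 2])        # normal equations
sse = N[2, 2] - N[2, :2] @ x_hat                    # residual sum of squares
```

With exact data the fitted coefficients are (0, 2) and the residual sum of squares vanishes.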
Analytic semigroups: Applications to inverse problems for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rebnord, D. A.
1990-01-01
Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.
Development of a sensor coordinated kinematic model for neural network controller training
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics of mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube with arm links of 0.6 and 0.4 units, respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to higher-degree-of-freedom manipulators is outlined.
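The benchmark's exact joint arrangement is not reproduced here; assuming one plausible reading for illustration only (a revolute joint rotating about the vertical axis at the base, a horizontal prismatic extension, and a vertical prismatic lift, i.e. a cylindrical-coordinate manipulator), both kinematic directions are closed-form:

```python
# Illustrative kinematics under an ASSUMED cylindrical-joint arrangement
# (revolute theta at the base, horizontal prismatic r, vertical prismatic z).
import math

def forward(theta, r, z):
    """Joint space (theta, r, z) -> end-effector position (x, y, z)."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def inverse(x, y, z):
    """End-effector position -> joint commands (theta, r, z)."""
    return (math.atan2(y, x), math.hypot(x, y), z)

# Round trip: inverse kinematics followed by forward kinematics
# reproduces the target position inside the unit-cube workspace.
target = (0.3, 0.4, 0.5)
joints = inverse(*target)
recovered = forward(*joints)
```

Pairs of (position, joint) samples generated this way are exactly the kind of training data a neural network controller for this benchmark would consume.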
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the flocking behavior of birds and fish. In this paper we introduce and use this method in the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
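A minimal, generic PSO sketch (not the paper's fault parameterization): particles minimize a test objective here, whereas for the gravity problem the objective would instead be the misfit between observed and modeled anomalies. The inertia and acceleration constants are common textbook choices, assumed for illustration.

```python
# Minimal particle swarm optimizer on a test objective.
import random

def pso(f, dim=2, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]           # per-particle best positions
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

best, val = pso(lambda p: sum(x * x for x in p))  # sphere test function
```

Because PSO needs only objective evaluations and no gradients, it suits shape-inversion misfit functions that are nonsmooth or multimodal.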
The Inverse Problem in Jet Acoustics
NASA Technical Reports Server (NTRS)
Wooddruff, S. L.; Hussaini, M. Y.
2001-01-01
The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin
2015-01-01
High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation. (C) 2015 Optical Society of America
Geodesic active fields--a geometric framework for image registration.
Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe
2011-05-01
In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embed the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is actually the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions, the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework involves important contributions. Firstly, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically being accounted for. 
Secondly, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Thirdly, the multiplicative coupling between the registration term, i.e. the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
Numerical modeling of Harmonic Imaging and Pulse Inversion fields
NASA Astrophysics Data System (ADS)
Humphrey, Victor F.; Duncan, Tracy M.; Duck, Francis
2003-10-01
Tissue Harmonic Imaging (THI) and Pulse Inversion (PI) Harmonic Imaging exploit the harmonics generated as a result of nonlinear propagation through tissue to improve the performance of imaging systems. A 3D finite difference model, which solves the KZK equation in the frequency domain, is used to investigate the finite amplitude fields produced by rectangular transducers driven with short pulses and their inverses, in water and homogeneous tissue. This enables the characteristics of the fields and the effective PI field to be calculated. The suppression of the fundamental field in PI is monitored, and the suppression of side lobes and a reduction in the effective beamwidth for each field are calculated. In addition, the differences between the pulse and inverse pulse spectra resulting from the use of very short pulses are noted, and the differences in the location of the fundamental and second harmonic spectral peaks are observed.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. Here, the inverse problem is formulated using Bayesian inference, and Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian basin. The model of interpretation is a right vertical cylinder. The parameters of the model are obtained by solving the minimization problem with the Simplex method.
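A sketch of these ingredients on a toy problem (not the CHAMP cylinder model; the linear forward model and all numbers are hypothetical): a Gaussian likelihood and a Gaussian prior combine into a posterior whose negative log is minimized with the Nelder-Mead simplex method.

```python
# Toy MAP estimation: Gaussian likelihood x Gaussian prior, minimized
# with the (derivative-free) Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

def fwd(p):                         # hypothetical stand-in forward model
    return np.array([p[0] + 2 * p[1], 3 * p[0] - p[1]])

d_obs = fwd(np.array([2.0, 3.0]))   # synthetic data from p_true = (2, 3)
sigma, prior_mu, prior_sd = 0.1, np.array([1.0, 1.0]), 10.0

def neg_log_post(p):
    misfit = np.sum((fwd(p) - d_obs) ** 2) / (2 * sigma ** 2)
    prior = np.sum((p - prior_mu) ** 2) / (2 * prior_sd ** 2)
    return misfit + prior

res = minimize(neg_log_post, x0=[0.0, 0.0], method="Nelder-Mead")
```

With a broad prior the maximum a posteriori estimate essentially recovers the generating parameters; a tighter prior would pull the solution toward `prior_mu`, which is the Bayesian trade-off the abstract invokes.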
EDITORIAL: Inverse Problems in Engineering
NASA Astrophysics Data System (ADS)
West, Robert M.; Lesnic, Daniel
2007-01-01
Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.
Inverse problem for multispecies ferromagneticlike mean-field models in phase space with many states
NASA Astrophysics Data System (ADS)
Fedele, Micaela; Vernia, Cecilia
2017-10-01
In this paper we solve the inverse problem for the Curie-Weiss model and its multispecies version when multiple thermodynamic states are present as in the low temperature phase where the phase space is clustered. The inverse problem consists of reconstructing the model parameters starting from configuration data generated according to the distribution of the model. We demonstrate that, without taking into account the presence of many states, the application of the inversion procedure produces very poor inference results. To overcome this problem, we use the clustering algorithm. When the system has two symmetric states of positive and negative magnetizations, the parameter reconstruction can also be obtained with smaller computational effort simply by flipping the sign of the magnetizations from positive to negative (or vice versa). The parameter reconstruction fails when the system undergoes a phase transition: In that case we give the correct inversion formulas for the Curie-Weiss model and we show that they can be used to measure how close the system gets to being critical.
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
Phase in Optical Image Processing
NASA Astrophysics Data System (ADS)
Naughton, Thomas J.
2010-04-01
The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant mitigates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.
Computational structures for robotic computations
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chang, P. R.
1987-01-01
The computational problem of inverse kinematics and inverse dynamics of robot manipulators, taking advantage of parallelism and pipelining architectures, is discussed. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm to overcome the recurrence problem of the Newton-Euler equations of motion and achieve the time lower bound of O(log₂ n) has also been developed.
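The pipelined hardware design itself is not reproduced here; the following scalar sketch shows the CORDIC micro-rotation that such architectures pipeline, computing sine and cosine with only shifts, adds, and a small arctangent table (emulated here in floating point).

```python
# Rotation-mode CORDIC: drive the residual angle z to zero with
# micro-rotations by atan(2^-i); starting the vector at (K, 0), where K
# is the inverse of the accumulated CORDIC gain, yields (cos t, sin t).
# Valid for |t| <= sum_i atan(2^-i) ~= 1.743 rad.
import math

def cordic_sincos(theta, n=24):
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y                              # (cos theta, sin theta)

c, s = cordic_sincos(0.5)
```

Each iteration uses only a shift-and-add per coordinate, which is why the stages map so naturally onto a hardware pipeline.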
Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S
2012-01-01
Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems. PMID:22254301
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations, and their observation-time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
Dushaw, Brian D; Sagen, Hanne
2017-12-01
Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow-zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than being a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that match the observations. The approach is not ad hoc but universal, and it has allowed inverse estimates of ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems, where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem-dimension reduction strategies.
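A minimal sketch of the Pareto idea behind PMOGO (generic; the actual global optimizers are far more involved): from a set of candidate models, keep only those not dominated in all objectives, e.g. data misfit and a regularization term. The objective pairs below are hypothetical.

```python
# Extract the non-dominated (Pareto) subset of candidate models, where
# each model is scored by a tuple of objectives to be minimized.

def pareto_front(points):
    """Return the points not dominated by any other point (minimization)."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# hypothetical (misfit, regularization) scores for five candidate models
models = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.9)]
front = pareto_front(models)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

Presenting the whole front to the interpreter, rather than one weighted-sum minimizer, is exactly the "suite of models" assessment the abstract advocates.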
NASA Astrophysics Data System (ADS)
Dokht, R.; Gu, Y. J.; Sacchi, M. D.
2016-12-01
Seismic velocities and the topography of mantle discontinuities are crucial for the understanding of mantle structure, dynamics and mineralogy. While these two observables are closely linked, the vast majority of high-resolution seismic images are retrieved under the assumption of horizontally stratified mantle interfaces. This conventional correction-based process could lead to considerable errors due to the inherent trade-off between velocity and discontinuity depth. In this study, we introduce a nonlinear joint waveform inversion method that simultaneously recovers discontinuity depths and seismic velocities using the waveforms of SS precursors. Our target region is the upper mantle and transition zone beneath Northeast Asia. In this region, the inversion outcomes clearly delineate a westward-dipping high-velocity structure in association with the subducting Pacific plate. Above the flat part of the slab west of the Japan Sea, our results show a shear wave velocity reduction of 1.5% in the upper mantle and a 10-15 km depression of the 410 km discontinuity beneath the Changbaishan volcanic field. We also identify the maximum correlation between shear velocity and transition zone thickness at an approximate slab dip of 30 degrees, which is consistent with previously reported values in this region. To validate the results of the 1D waveform inversion of SS precursors, we discretize the mantle beneath the study region and conduct a 2D waveform tomographic survey using the same nonlinear approach. The problem is simplified by adopting the discontinuity depths from the 1D inversion and solving only for perturbations in shear velocities. The resulting models obtained from the 1D and 2D approaches are self-consistent. Low velocities beneath the Changbai intraplate volcano likely persist to a depth of 500 km.
Collectively, our seismic observations suggest that the active volcanoes in eastern China may be fueled by a hot thermal anomaly originating from the mantle transition zone.
Ground-penetrating radar research in Belgium: from developments to applications
NASA Astrophysics Data System (ADS)
Lambot, Sébastien; Van Meirvenne, Marc; Craeye, Christophe
2014-05-01
Ground-penetrating radar research in Belgium spans a series of developments and applications, including mainly ultra-wideband radar antenna design and optimization, non-destructive testing for the characterization of the electrical properties of soils and materials, and high-resolution subsurface imaging in agricultural engineering, archeology and transport infrastructures (e.g., road inspection and pipe detection). Security applications have also been the topic of active research for several years (i.e., landmine detection) and developments in forestry have recently been initiated (i.e., for root zone and tree trunk imaging and characterization). In particular, longstanding research has been devoted to the intrinsic modeling of antenna-medium systems for full-wave inversion, thereby providing an effective way to retrieve the electrical properties of soils and materials. Full-wave modeling is a prerequisite for benefiting from the full information contained in the radar data and is necessary to provide robust and accurate estimates of the properties of interest. Nevertheless, this has remained a major challenge in geophysics and electromagnetics for many years, mainly due to the complex interactions between the antennas and the media as well as to the significant computing resources that are usually required. Efforts have also been dedicated to the development of specific inversion strategies to cope with the complexity of the inverse problems usually dealt with, as well as with ill-posedness issues that arise from a lack of information in the radar data. To circumvent this last limitation, antenna arrays have been developed and modeled in order to provide additional information. Moreover, data-fusion approaches have been investigated, mainly combining GPR data with complementary electromagnetic induction information in joint interpretation analyses and inversion procedures.
Finally, inversions have been regularized by combining electromagnetic models with soil hydrodynamic models in mechanistic data assimilation frameworks, thereby exploiting process knowledge as an additional source of information. Acknowledgement: GPR research in Belgium benefits from networking activities carried out within the EU-funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".
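The full-wave inversion idea in the abstract above can be reduced to a toy illustration: fit a forward electromagnetic model to the measured radar response and pick the medium properties that minimize the misfit. The sketch below uses a deliberately minimal forward model (normal-incidence reflection at a single air/soil interface) with hypothetical values; it is not the intrinsic antenna-medium model developed in the work itself.

```python
import numpy as np

def reflection_coeff(eps_r):
    # Normal-incidence reflection at an air/soil interface:
    # r = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r))
    n = np.sqrt(eps_r)
    return (1.0 - n) / (1.0 + n)

def invert_permittivity(r_obs, eps_grid):
    # Full-wave inversion in its simplest form: choose the relative
    # permittivity whose modeled response best fits the observation.
    misfit = (reflection_coeff(eps_grid) - r_obs) ** 2
    return eps_grid[np.argmin(misfit)]

# Synthetic "measurement" for soil with eps_r = 9 (n = 3, so r = -0.5)
r_obs = reflection_coeff(9.0)
eps_grid = np.linspace(1.0, 30.0, 2901)   # 0.01 spacing
eps_est = invert_permittivity(r_obs, eps_grid)
```

In practice the forward model is a full antenna-medium Green's function and the search is a proper optimization, but the structure (forward model inside a misfit minimization) is the same.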
NASA Astrophysics Data System (ADS)
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the electrical conductivity structure of the Earth's interior. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite-difference inversion package, we have encountered problems inverting seafloor stations in particular, due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This appears to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions of the interpolants to better enforce current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, in which the eight nearest electric-field estimates are combined in a weighted average with weights determined by the spline. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which are used to generate a new forward model during each iteration of the inversion.
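The current-continuity idea behind the modified interpolation weights can be shown in a one-dimensional toy: at a conductivity contrast the tangential current density J = σE is continuous while E jumps, so interpolating J and dividing by a blended conductivity respects the physics where plain linear interpolation of E does not. This is only a 1-D analogue of the 3-D trilinear scheme described above, with a hypothetical conductivity blend.

```python
def interp_E_continuous_current(E_left, E_right, sigma_left, sigma_right, t):
    # Interpolate the current density J = sigma * E (which is continuous
    # across the cell boundary) rather than E itself (which jumps at a
    # conductivity contrast), then recover E from a blended conductivity.
    J_left = sigma_left * E_left
    J_right = sigma_right * E_right
    J = (1 - t) * J_left + t * J_right            # smooth quantity
    sigma = (1 - t) * sigma_left + t * sigma_right  # hypothetical blend
    return J / sigma

# Contrast of 100x in conductivity: E drops from 1.0 to 0.01 so that
# J = sigma * E stays equal to 1.0 on both sides.
E_mid = interp_E_continuous_current(1.0, 0.01, 1.0, 100.0, 0.5)
```

Plain linear interpolation of E at the midpoint would give 0.505; the current-weighted value is about 0.0198, consistent with a continuous J divided by the midpoint conductivity.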
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics, even in the presence of disturbances.
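The core of the inverse dynamics approach is that, given a prescribed trajectory, the control is computed directly by inverting the equations of motion rather than by searching for it. A minimal sketch with a double-integrator plant and a hypothetical reference profile h(t) = sin(t) (nothing like the actual aerospace-plane dynamics) shows the idea:

```python
import math

def inverse_dynamics_control(t):
    # For double-integrator dynamics h'' = u and a prescribed
    # trajectory h(t) = sin(t), inverting the dynamics gives the
    # control analytically: u(t) = h''(t) = -sin(t).
    return -math.sin(t)

def simulate(T=2.0, dt=1e-4):
    # Forward-Euler simulation of h'' = u starting on the reference
    # (h(0) = sin(0) = 0, h'(0) = cos(0) = 1).
    h, v, t = 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        u = inverse_dynamics_control(t)
        h += v * dt
        v += u * dt
        t += dt
    return h

h_final = simulate()   # should track sin(2.0) closely
```

In the paper's setting the same inversion is wrapped in feedback compensation so that disturbances do not accumulate; here the open-loop control already tracks because the model is exact.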
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, D.W.; Ferretti, Alessandro; Novali, Fabrizio
2008-05-01
Transient pressure variations within a reservoir can be treated as a propagating front and analyzed using an asymptotic formulation. From this perspective one can define a pressure 'arrival time' and formulate solutions along trajectories, in the manner of ray theory. We combine this methodology with a technique for mapping overburden deformation into reservoir volume change as a means to estimate reservoir flow properties, such as permeability. Given the entire 'travel time' or phase field, obtained from the deformation data, we can construct the trajectories directly, thereby linearizing the inverse problem. A numerical study indicates that, using this approach, we can infer large-scale variations in flow properties. In an application to Interferometric Synthetic Aperture Radar (InSAR) observations associated with a CO2 injection at the Krechba field, Algeria, we image pressure propagation to the northwest. An inversion for flow properties indicates a linear trend of high permeability. The high permeability correlates with a northwest-trending fault on the flank of the anticline which defines the field.
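The pressure 'arrival time' concept can be illustrated in a homogeneous medium, where the 3-D diffusion Green's function peaks at t = r²/(6D); picking the peak time and inverting that relation recovers the diffusivity. All values below are hypothetical, and this toy skips the paper's trajectory-based formulation for heterogeneous media.

```python
import math

def pressure_impulse_3d(r, t, D):
    # Time dependence of the 3-D diffusion Green's function for a
    # point injection in a homogeneous medium (constant factors omitted).
    return t ** -1.5 * math.exp(-r * r / (4.0 * D * t))

def arrival_time(r, D, t_grid):
    # Define the "arrival time" as the time at which the pressure
    # pulse peaks; for this Green's function t_peak = r^2 / (6 D).
    best_t, best_p = t_grid[0], -1.0
    for t in t_grid:
        p = pressure_impulse_3d(r, t, D)
        if p > best_p:
            best_t, best_p = t, p
    return best_t

r, D_true = 100.0, 0.5                       # m and m^2/s, hypothetical
t_grid = [1.0 + 0.5 * k for k in range(20000)]
t_peak = arrival_time(r, D_true, t_grid)
D_est = r * r / (6.0 * t_peak)               # invert the travel-time relation
```

The inversion in the paper works with the full phase field from deformation data rather than a single peak pick, but the same travel-time relation is what linearizes the problem.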
Johnson, Tim; Versteeg, Roelof; Thomle, Jon; ...
2015-08-01
Our paper describes and demonstrates two methods of providing a priori information to the surface-based time-lapse three-dimensional electrical resistivity tomography (ERT) problem for monitoring stage-driven or tide-driven surface water intrusion into aquifers. First, a mesh boundary is implemented that conforms to the known location of the water table through time, thereby enabling the inversion to place a sharp bulk conductivity contrast at that boundary without penalty. Moreover, a nonlinear inequality constraint is used to allow only positive or negative transient changes in EC to occur within the saturated zone, dependent on the relative contrast in fluid electrical conductivity between surface water and groundwater. A 3-D field experiment demonstrates that time-lapse imaging results using traditional smoothness constraints are unable to delineate river water intrusion. The water table and inequality constraints provide the inversion with the additional information necessary to resolve the spatial extent of river water intrusion through time.
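The sign constraint on transient EC changes can be sketched as a projection step inside an iterative inversion: after each gradient update, changes of the disallowed sign inside the saturated zone are clipped to zero. The two-cell example below is a hypothetical toy, not the actual ERT inversion.

```python
def project_constraint(dsigma, saturated):
    # Assuming the intruding river water is more conductive than the
    # groundwater, transient EC changes inside the saturated zone are
    # forced to be non-negative (the sign flips for the opposite contrast).
    return [max(d, 0.0) if sat else d for d, sat in zip(dsigma, saturated)]

def invert_change(d_obs, G, saturated, n_iter=500, step=0.1):
    # Tiny projected-gradient solver for G @ dsigma ~= d_obs.
    n_data, n_model = len(G), len(G[0])
    x = [0.0] * n_model
    for _ in range(n_iter):
        r = [sum(G[i][j] * x[j] for j in range(n_model)) - d_obs[i]
             for i in range(n_data)]
        grad = [sum(G[i][j] * r[i] for i in range(n_data))
                for j in range(n_model)]
        x = project_constraint(
            [xj - step * gj for xj, gj in zip(x, grad)], saturated)
    return x

# Two-cell toy: the data pull the second cell negative, but the
# inequality constraint keeps it at zero.
x = invert_change(d_obs=[1.0, -0.5],
                  G=[[1.0, 0.0], [0.0, 1.0]],
                  saturated=[True, True])
```

This is the standard projected-gradient way of enforcing inequality constraints; the paper's inversion embeds the same kind of constraint in a far larger regularized problem.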
Time-reversal and Bayesian inversion
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibit its use in large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous effort to make large inverse tasks such as these manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problem at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
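The time-reversal idea behind such location algorithms can be illustrated in a delay-and-sum form: back-propagate each recorded arrival at the medium velocity and look for the point where the reversed wavefronts focus, i.e. where the back-projected origin times agree. The 1-D constant-velocity example below is a hypothetical toy and not the TRMLOC algorithm itself.

```python
def time_reversal_locate(receivers, arrivals, c, x_grid):
    # For each candidate source position, back-propagate every arrival
    # time and measure how tightly the implied origin times cluster;
    # the true source is where the reversed wavefronts focus.
    best_x, best_spread = None, float("inf")
    for x in x_grid:
        origins = [t - abs(x - r) / c for r, t in zip(receivers, arrivals)]
        spread = max(origins) - min(origins)
        if spread < best_spread:
            best_x, best_spread = x, spread
    return best_x

c = 3000.0                        # m/s, hypothetical medium velocity
receivers = [0.0, 400.0, 1000.0]  # receiver positions, m
x_true, t0 = 250.0, 0.0
arrivals = [t0 + abs(x_true - r) / c for r in receivers]
x_grid = [k * 5.0 for k in range(201)]     # candidate positions, 0-1000 m
x_est = time_reversal_locate(receivers, arrivals, c, x_grid)
```

In a probabilistic setting the spread statistic would be replaced by a likelihood over origin time and position, but the time-reversal symmetry is what makes the back-propagation step valid.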
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
2007-05-01
An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
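The key step above, constraining the unknown permittivity to one of two values, turns an ill-determined continuous inverse problem into a finite search over binary assignments. The sketch below uses a made-up linearized sensitivity matrix for a few voxels and self-capacitance channels; it is not the actual sensor model or reconstruction algorithm of the paper.

```python
from itertools import product

def reconstruct_binary(c_meas, S, eps_low, eps_high):
    # With each voxel restricted to {eps_low, eps_high}, enumerate all
    # binary assignments and keep the one that best fits the measured
    # self-capacitances under the linear model c = S @ eps.
    n = len(S[0])
    best, best_err = None, float("inf")
    for bits in product([eps_low, eps_high], repeat=n):
        err = sum((sum(Si[j] * bits[j] for j in range(n)) - cm) ** 2
                  for Si, cm in zip(S, c_meas))
        if err < best_err:
            best, best_err = list(bits), err
    return best

# Hypothetical sensitivity of 3 self-capacitance channels to 4 voxels
# (illustrative numbers only, not the hardware's sensitivity map):
S = [[1.0, 0.5, 0.0, 0.0],
     [0.0, 1.0, 0.5, 0.0],
     [0.0, 0.0, 1.0, 0.5]]
eps_true = [1.0, 4.0, 1.0, 4.0]           # air vs. a solid insert
c_meas = [sum(Si[j] * eps_true[j] for j in range(4)) for Si in S]
eps_rec = reconstruct_binary(c_meas, S, 1.0, 4.0)
```

Exhaustive enumeration scales as 2^n and is only viable for tiny grids; practical systems replace it with structured search, but the binary constraint is what restores well-posedness, as the abstract notes.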
Electrical capacitance volume tomography of high contrast dielectrics using a cuboid geometry
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.