Sample records for inversion methods utilizing

  1. Nanoindentation study of electrodeposited Ag thin coating: An inverse calculation of anisotropic elastic-plastic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Guang; Sun, Xin; Wang, Yuxin

    A new inverse method was proposed to calculate the anisotropic elastic-plastic properties (flow stress) of a thin electrodeposited Ag coating utilizing nanoindentation tests, a previously reported inverse method for isotropic materials, and three-dimensional (3-D) finite element analyses (FEA). Indentation depth was ~4% of coating thickness (~10 μm) to avoid substrate effects, and different indentation responses were observed in the longitudinal (L) and the transverse (T) directions. The estimated elastic-plastic properties were obtained with the newly developed inverse method by matching the predicted indentation responses in the L and T directions with experimental measurements considering the indentation size effect (ISE). The results were validated with tensile flow curves measured from free-standing (FS) Ag film. The current method can be utilized to characterize the anisotropic elastic-plastic properties of coatings and to provide the constitutive properties for coating performance evaluations.

  2. High resolution tsunami inversion for 2010 Chile earthquake

    NASA Astrophysics Data System (ADS)

    Wu, T.-R.; Ho, T.-C.

    2011-12-01

    We investigate the feasibility of inverting high-resolution vertical seafloor displacement from tsunami waveforms. An inversion method named "SUTIM" (small unit tsunami inversion method) is developed to meet this goal. In addition to utilizing the conventional least-square inversion, this paper also enhances the inversion resolution by Grid-Shifting method. A smooth constraint is adopted to gain stability. After a series of validation and performance tests, SUTIM is used to study the 2010 Chile earthquake. Based upon data quality and azimuthal distribution, we select tsunami waveforms from 6 GLOSS stations and 1 DART buoy record. In total, 157 sub-faults are utilized for the high-resolution inversion. The resolution reaches 10 sub-faults per wavelength. The result is compared with the distribution of the aftershocks and waveforms at each gauge location with very good agreement. The inversion result shows that the source profile features a non-uniform distribution of the seafloor displacement. The highly elevated vertical seafloor is mainly concentrated in two areas: one is located in the northern part of the epicentre, between 34° S and 36° S; the other is in the southern part, between 37° S and 38° S.
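
    The least-squares core of such a source inversion can be sketched compactly. The snippet below is a minimal, hypothetical illustration in Python (not SUTIM itself): it stacks a first-difference roughness operator under the Green's-function matrix, mirroring the smooth constraint mentioned above, and solves the damped system with NumPy; all names and sizes are invented.

    ```python
    import numpy as np

    def smoothed_least_squares(G, d, lam=1.0):
        """Solve min ||G m - d||^2 + lam^2 ||L m||^2, where L is a
        first-difference smoothing operator (a common stabilizer in
        least-squares slip/seafloor-displacement inversions)."""
        n = G.shape[1]
        L = -np.eye(n - 1, n) + np.eye(n - 1, n, k=1)   # first differences
        A = np.vstack([G, lam * L])
        b = np.concatenate([d, np.zeros(n - 1)])
        m, *_ = np.linalg.lstsq(A, b, rcond=None)
        return m

    # Toy example: 5 sub-fault unit sources observed at 8 gauges.
    rng = np.random.default_rng(0)
    G = rng.normal(size=(8, 5))                 # Green's functions (gauge x sub-fault)
    m_true = np.array([0.0, 1.0, 2.0, 1.5, 0.5])
    d = G @ m_true + 0.01 * rng.normal(size=8)
    print(smoothed_least_squares(G, d, lam=0.1))
    ```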

  3. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly-implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, but provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-square solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
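
    One building block the abstract points to is randomized matrix algebra for exploiting low-rank structure. The sketch below is an illustrative Halko-style randomized SVD applied to a smooth toy covariance matrix; it is not the QLGA/MADS implementation, and every name and size is an assumption.

    ```python
    import numpy as np

    def randomized_svd(A, rank, oversample=5, seed=None):
        """Randomized SVD: probe the range of A with random vectors,
        orthonormalize, then take an exact SVD of the small projected matrix."""
        rng = np.random.default_rng(seed)
        Omega = rng.normal(size=(A.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for range(A)
        Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

    # Toy low-rank target: an exponential covariance on a 1-D grid.
    x = np.linspace(0.0, 1.0, 200)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
    U, s, Vt = randomized_svd(C, rank=10, seed=0)
    print(np.linalg.norm(C - (U * s) @ Vt) / np.linalg.norm(C))   # relative error
    ```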

  4. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, K. D.

    1985-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  5. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  6. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
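
    The record does not spell out how the hydrologic problem is mapped onto the annealer, so the sketch below is only a heavily simplified illustration of the general idea: a binary (low/high permeability) inverse problem whose least-squares misfit is rewritten as a QUBO, the problem class a quantum annealer minimizes, and solved here by brute force on a classical machine. Every matrix and size is hypothetical.

    ```python
    import numpy as np
    from itertools import product

    # Toy 1D problem: infer a binary permeability indicator x at n cells
    # from noisy linear observations d = A x + noise.  For binary x,
    # ||A x - d||^2 = x^T Q x + const, because x_i^2 = x_i, so the misfit
    # is exactly a QUBO (quadratic unconstrained binary optimization).
    rng = np.random.default_rng(1)
    n = 8
    A = rng.normal(size=(12, n))
    x_true = rng.integers(0, 2, size=n)
    d = A @ x_true + 0.05 * rng.normal(size=12)

    Q = A.T @ A + np.diag(-2.0 * (A.T @ d))     # linear term folded into the diagonal

    # Stand-in "annealer": enumerate all 2^n binary states (fine for n = 8).
    best = min(product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print("true     :", x_true)
    print("recovered:", np.array(best))
    ```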

  7. An approach to quantum-computational hydrologic inverse analysis.

    PubMed

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  8. An approach to quantum-computational hydrologic inverse analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  9. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate the FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Generally, inversion processing is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest descent gradient algorithm, and the conjugate gradient algorithm, there are clear advantages of the non-monotone iterative gradient-descent algorithm. Simulated and field FTG data were applied to show the application value of this new fast inversion method.
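
    The specific non-monotone scheme is not reproduced in this record; as a hedged stand-in, the sketch below applies Barzilai-Borwein steps, a classic non-monotone gradient method, to a Tikhonov-regularized linear misfit of the kind that arises in gravity-gradient inversion. All operators and sizes are toy placeholders.

    ```python
    import numpy as np

    def bb_gradient_descent(A, b, lam=1e-2, iters=100):
        """Minimize f(m) = 0.5*||A m - b||^2 + 0.5*lam*||m||^2 with
        Barzilai-Borwein step lengths.  The objective is allowed to increase
        on some iterations (non-monotone), which typically converges much
        faster than classical steepest descent."""
        m = np.zeros(A.shape[1])
        g = A.T @ (A @ m - b) + lam * m
        step = 1e-3
        for _ in range(iters):
            m_new = m - step * g
            g_new = A.T @ (A @ m_new - b) + lam * m_new
            s, y = m_new - m, g_new - g
            step = (s @ s) / (s @ y) if s @ y > 0 else 1e-3   # BB1 step length
            m, g = m_new, g_new
        return m

    rng = np.random.default_rng(2)
    A = rng.normal(size=(50, 20))
    m_true = rng.normal(size=20)
    b = A @ m_true
    print(np.linalg.norm(bb_gradient_descent(A, b) - m_true))   # small residual error
    ```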

  10. Studies of Trace Gas Chemical Cycles Using Observations, Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    2001-01-01

    For interpreting observational data, and in particular for use in inverse methods, accurate and realistic chemical transport models are essential. Toward this end we have, in recent years, helped develop and utilize a number of three-dimensional models including the Model for Atmospheric Transport and Chemistry (MATCH).

  11. Inverse transonic airfoil design methods including boundary layer and viscous interaction effects

    NASA Technical Reports Server (NTRS)

    Carlson, L. A.

    1979-01-01

    The development and incorporation into TRANDES of a fully conservative analysis method utilizing the artificial compressibility approach is described. The method allows for lifting cases and finite thickness airfoils and utilizes a stretched coordinate system. Wave drag and massive separation studies are also discussed.

  12. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
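
    A minimal sketch of a Chahine-type relaxation on a toy lower-triangular ("onion-peeling") kernel is shown below: each unknown is updated by the ratio of the observed to the computed measurement that is most sensitive to it, and the iteration converges when the kernel's diagonal elements are nonzero, which is the condition highlighted above. The kernel and profile are invented for illustration.

    ```python
    import numpy as np

    def chahine_relaxation(K, y_obs, x0, iters=50):
        """Chahine's nonlinear relaxation: multiplicative ratio update of each
        retrieved value using the measurement whose kernel peaks at that layer.
        For a triangular kernel, measurement i peaks at layer i."""
        x = x0.copy()
        for _ in range(iters):
            x *= y_obs / (K @ x)      # layer-by-layer ratio update
        return x

    # Toy limb-viewing kernel: lower triangular path-length matrix,
    # with a nonzero (dominant) diagonal.
    n = 6
    K = np.tril(np.full((n, n), 0.2)) + np.eye(n)
    x_true = np.linspace(1.0, 2.0, n)
    y_obs = K @ x_true
    print(chahine_relaxation(K, y_obs, x0=np.ones(n)))   # approaches x_true
    ```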

  13. Identification of polymorphic inversions from genotypes

    PubMed Central

    2012-01-01

    Background Polymorphic inversions are a source of genetic variability with a direct impact on recombination frequencies. Given the difficulty of their experimental study, computational methods have been developed to infer their existence in a large number of individuals using genome-wide data of nucleotide variation. Methods based on haplotype tagging of known inversions attempt to classify individuals as having a normal or inverted allele. Other methods that measure differences in linkage disequilibrium attempt to identify regions with inversions but are unable to classify subjects accurately, an essential requirement for association studies. Results We present a novel method to both identify polymorphic inversions from genome-wide genotype data and classify individuals as containing a normal or inverted allele. Our method, a generalization of a published method for haplotype data [1], utilizes linkage between groups of SNPs to partition a set of individuals into normal and inverted subpopulations. We employ a sliding window scan to identify regions likely to have an inversion, and accumulation of evidence from neighboring SNPs is used to accurately determine the inversion status of each subject. Further, our approach detects inversions directly from genotype data, thus increasing its usability in current genome-wide association studies (GWAS). Conclusions We demonstrate the accuracy of our method to detect inversions and classify individuals on principled-simulated genotypes, produced by the evolution of an inversion event within a coalescent model [2]. We applied our method to real genotype data from HapMap Phase III to characterize the inversion status of two known inversions within the regions 17q21 and 8p23 across 1184 individuals. Finally, we scan the full genomes of the European Origin (CEU) and Yoruba (YRI) HapMap samples. We find population-based evidence for 9 out of 15 well-established autosomal inversions, and for 52 regions previously predicted by independent experimental methods in ten (9+1) individuals [3,4]. We provide efficient implementations of both genotype and haplotype methods as a unified R package inveRsion. PMID:22321652

  14. Electrical resistance tomography using steel cased boreholes as electrodes

    DOEpatents

    Daily, W.D.; Ramirez, A.L.

    1999-06-22

    An electrical resistance tomography method is described which uses steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, which data is then used in an inverse calculation, a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data is obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models. 2 figs.

  15. Electrical resistance tomography using steel cased boreholes as electrodes

    DOEpatents

    Daily, William D.; Ramirez, Abelardo L.

    1999-01-01

    An electrical resistance tomography method using steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, which data is then used in an inverse calculation, a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data is obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models.

  16. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  17. On the value of incorporating spatial statistics in large-scale geophysical inversions: the SABRe case

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.

    2010-12-01

    Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.

  18. Improved hydrogeophysical characterization and monitoring through parallel modeling and inversion of time-domain resistivity and induced-polarization data

    USGS Publications Warehouse

    Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André

    2010-01-01

    Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.

  19. Solutions to inverse plume in a crosswind problem using a predictor - corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

    Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions with the corrections from the plume strength are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.

  20. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy to provide a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often obtains sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-searching methods, which are efficient but inclined to obtain local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed; however, its efficiency must be improved to be suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration and a robust gradient projection method for solving the sub-problem is utilized. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.
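
    The trust-region algorithm itself is not reproduced here; as a simpler, hedged stand-in for the same sparsity-promoting objective, the sketch below runs ISTA (iterative shrinkage-thresholding) on an L1-regularized toy restoration problem, with a random matrix standing in for the transform-domain sampling operator. All names and sizes are assumptions.

    ```python
    import numpy as np

    def ista(A, d, lam=0.02, iters=300):
        """Minimize 0.5*||A x - d||^2 + lam*||x||_1 by iterative
        shrinkage-thresholding: a gradient step on the misfit followed by
        a soft-threshold, the proximal operator of the L1 norm."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - d) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    # Toy restoration: recover a sparse spike train from few random samples.
    rng = np.random.default_rng(3)
    A = rng.normal(size=(60, 200)) / np.sqrt(60)
    x_true = np.zeros(200)
    x_true[[20, 75, 150]] = [1.0, -0.8, 0.5]
    d = A @ x_true
    print(np.round(ista(A, d)[[20, 75, 150]], 2))   # spike amplitudes recovered
    ```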

  1. Iterative Inverse Modeling for Reconciliation of Emission Inventories during the 2006 TexAQS Intensive Field Campaign

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Cohan, D. S.

    2009-12-01

    Substantial uncertainties in current emission inventories have been detected by the Texas Air Quality Study 2006 (TexAQS 2006) intensive field program. These emission uncertainties have caused large inaccuracies in model simulations of air quality and its responses to management strategies. To improve the quantitative understanding of the temporal, spatial, and categorized distributions of primary pollutant emissions by utilizing the corresponding measurements collected during TexAQS 2006, we implemented both the recursive Kalman filter and a batch matrix inversion 4-D data assimilation (FDDA) method in an iterative inverse modeling framework of the CMAQ-DDM model. Equipped with the decoupled direct method, CMAQ-DDM enables simultaneous calculation of the sensitivity coefficients of pollutant concentrations to emissions to be used in the inversions. Primary pollutant concentrations measured by the multiple platforms (TCEQ ground-based, NOAA WP-3D aircraft and Ronald H. Brown vessel, and UH Moody Tower) during TexAQS 2006 have been integrated for use in the inverse modeling. First, pseudo-data analyses were conducted to assess the two methods, using a coarse-spatial-resolution emission inventory as a test case. Model base-case concentrations of isoprene and ozone at arbitrarily selected ground grid cells were perturbed to generate pseudo measurements with different assumed Gaussian uncertainties expressed by 1-sigma standard deviations. Single-species inversions were conducted with both methods for isoprene and NOx surface emissions from eight states in the Southeastern United States by using the pseudo measurements of isoprene and ozone, respectively. Utilization of ozone pseudo data to invert for NOx emissions serves only for the purpose of method assessment. Both the Kalman filter and FDDA methods show good performance in tuning arbitrarily shifted a priori emissions to the base case "true" values within 3-4 iterations, even for the nonlinear responses of ozone to NOx emissions. While the Kalman filter performs better under very large observational uncertainties, the batch matrix FDDA method is better suited for incorporating temporally and spatially irregular data such as those measured by the NOAA aircraft and ship. After validating the methods with the pseudo data, the inverse technique is applied to improve emission estimates of NOx from different source sectors and regions in the Houston metropolitan area by using NOx measurements during TexAQS 2006. EPA NEI2005-based and Texas-specified Emission Inventories for 2006 are used as the a priori emission estimates before optimization. The inversion results will be presented and discussed. Future work will conduct inverse modeling for additional species, and then perform a multi-species inversion for emissions consistency and reconciliation with secondary pollutants such as ozone.
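
    A single linear Kalman-filter measurement update for emission scaling factors looks like the sketch below; this is a toy illustration, not the CMAQ-DDM setup, and the sensitivity matrix, monitors, and source regions are hypothetical. In the study above the update is iterated (3-4 times) because ozone responds nonlinearly to NOx emissions.

    ```python
    import numpy as np

    def kalman_update(x, P, y, H, R):
        """One Kalman measurement update.  x: prior emission scaling factors,
        P: their covariance, y: observed concentrations, H: sensitivity of
        concentrations to emissions, R: observation error covariance."""
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_post = x + K @ (y - H @ x)
        P_post = (np.eye(len(x)) - K @ H) @ P
        return x_post, P_post

    # Toy setup: 3 source regions observed at 5 monitors.
    rng = np.random.default_rng(4)
    H = rng.uniform(0.1, 1.0, size=(5, 3))
    x_true = np.array([1.4, 0.7, 1.1])          # "true" scaling of the inventory
    y = H @ x_true + 0.01 * rng.normal(size=5)
    x0, P0 = np.ones(3), np.eye(3)              # a priori: unscaled inventory
    x1, P1 = kalman_update(x0, P0, y, H, 0.01**2 * np.eye(5))
    print(np.round(x1, 2))
    ```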

  2. Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1997-01-01

    This is a theoretical research project aimed at: (1) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (2) utilization of inverse methods to determine these source/sink strengths which use the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) which is based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), (3) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and, (4) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and hydrohalocarbons used as alternatives to the restricted halocarbons.

  3. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has meshing difficulties, and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
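
    The inverse step described here, an iteratively regularized Gauss-Newton update, can be sketched generically as below; the forward model and Jacobian are toy stand-ins (a two-parameter exponential decay), not the FE-EFG complete-electrode solver.

    ```python
    import numpy as np

    def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-2, iters=15):
        """Regularized Gauss-Newton: at each iteration solve
        (J^T J + lam I) dm = J^T (d_obs - F(m)) and update m."""
        m = m0.copy()
        for _ in range(iters):
            r = d_obs - forward(m)
            J = jacobian(m)
            dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
            m = m + dm
        return m

    # Toy forward model d_i = m0 * exp(-m1 * t_i) with analytic Jacobian.
    t = np.linspace(0.1, 2.0, 20)
    forward = lambda m: m[0] * np.exp(-m[1] * t)
    jacobian = lambda m: np.column_stack([np.exp(-m[1] * t),
                                          -m[0] * t * np.exp(-m[1] * t)])
    m_true = np.array([2.0, 1.5])
    d_obs = forward(m_true)
    print(np.round(gauss_newton(forward, jacobian, d_obs, np.array([1.0, 1.0])), 3))
    ```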

  4. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally-efficient Levenberg-Marquardt method for solving large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.

  5. Inversion of Robin coefficient by a spectral stochastic finite element approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Bangti; Zou Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  6. Inverse Abbe-method for observing small refractive index changes in liquids.

    PubMed

    Räty, Jukka; Peiponen, Kai-Erik

    2015-05-01

    This study concerns an optical method for the detection of minuscule refractive index changes in the liquid phase. The proposed method reverses the operation of the traditional Abbe refractometer and thus utilizes the light dispersion properties of materials, i.e. the dependence of the refractive index on the light wavelength. In practice, the method involves the detection of light reflection spectra in the visible spectral range. This inverse Abbe method is suitable for liquid quality studies, e.g. for monitoring water purity. Tests have shown that the method can detect NaCl or ethanol concentrations in water of less than one per mil.

  7. Eigenvectors phase correction in inverse modal problem

    NASA Astrophysics Data System (ADS)

    Qiao, Guandong; Rahmatalla, Salam

    2017-12-01

    The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems is heavily dependent on the quality of the modal parameters obtained from the experiments. Since experimental and environmental noise will always exist during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive definiteness or positive semi-definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing noisy eigenvectors with augmented Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.

  8. Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application

    NASA Astrophysics Data System (ADS)

    Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing

    2018-06-01

    We propose a straightforward attenuation inversion method utilizing the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation equation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived. We then use the presented equation to implement a Bayesian linear AVF inversion. The inversion result includes not only P-wave and S-wave velocities and densities, but also P-wave and S-wave quality factors. Synthetic tests show that the AVF inversion surpasses the AVA inversion for quality factor estimation. However, a higher signal-to-noise ratio (SNR) of the data is necessary for the AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight gas reservoir data from the Sichuan Basin in China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimates could serve as a useful complement to the AVA inversion results for the detection of tight gas reservoirs.

  9. Indium oxide inverse opal films synthesized by structure replication method

    NASA Astrophysics Data System (ADS)

    Amrehn, Sabrina; Berghoff, Daniel; Nikitin, Andreas; Reichelt, Matthias; Wu, Xia; Meier, Torsten; Wagner, Thorsten

    2016-04-01

    We present the synthesis of indium oxide (In2O3) inverse opal films with photonic stop bands in the visible range by a structure replication method. Artificial opal films made of poly(methyl methacrylate) (PMMA) spheres are utilized as template. The opal films are deposited via sedimentation facilitated by ultrasonication, and then impregnated by indium nitrate solution, which is thermally converted to In2O3 after drying. The quality of the resulting inverse opal film depends on many parameters; in this study the water content of the indium nitrate/PMMA composite after drying is investigated. Comparison of the reflectance spectra recorded by vis-spectroscopy with simulated data shows a good agreement between the peak position and calculated stop band positions for the inverse opals. This synthesis is less complex and highly efficient compared to most other techniques and is suitable for use in many applications.

  10. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

    The problem of deriving tidal fields from observations, owing to the incompleteness and imperfection of every practically available data set, has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis functions expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.

  11. Studies of Trace Gas Chemical Cycles Using Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    2003-01-01

    We report progress in the first year, and summarize proposed work for the second year of the three-year dynamical-chemical modeling project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (b) utilization of inverse methods to determine these source/sink strengths using either MATCH (Model for Atmospheric Transport and Chemistry) which is based on analyzed observed wind fields or back-trajectories computed from these wind fields, (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important goals include determination of regional source strengths of methane, nitrous oxide, methyl bromide, and other climatically and chemically important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal protocol and its follow-on agreements and hydrohalocarbons now used as alternatives to the restricted halocarbons.

  12. Utility of inverse probability weighting in molecular pathological epidemiology.

    PubMed

    Liu, Li; Nevo, Daniel; Nishihara, Reiko; Cao, Yin; Song, Mingyang; Twombly, Tyler S; Chan, Andrew T; Giovannucci, Edward L; VanderWeele, Tyler J; Wang, Molin; Ogino, Shuji

    2018-04-01

    As a causal inference methodology, the inverse probability weighting (IPW) method has been utilized to address confounding and to account for missing data when subjects with missing data cannot be included in a primary analysis. The transdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathological and epidemiological methods, and takes advantage of improved understanding of pathogenesis to generate stronger biological evidence of causality and optimize strategies for precision medicine and prevention. Disease subtyping based on biomarker analysis of biospecimens is essential in MPE research. However, there are nearly always cases that lack subtype information due to the unavailability or insufficiency of biospecimens. To address this missing subtype data issue, we incorporated inverse probability weights into Cox proportional cause-specific hazards regression. The weight was the inverse of the probability of biomarker data availability, estimated based on a model for biomarker data availability status. The strategy was illustrated in two example studies; each assessed alcohol intake or family history of colorectal cancer in relation to the risk of developing colorectal carcinoma subtypes classified by tumor microsatellite instability (MSI) status, using a prospective cohort study, the Nurses' Health Study. Logistic regression was used to estimate the probability of MSI data availability for each cancer case, with covariates of clinical features and family history of colorectal cancer. This application of IPW can reduce selection bias caused by nonrandom variation in biospecimen data availability. The integration of causal inference methods into the MPE approach will likely have substantial potential to advance the field of epidemiology.
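
    A minimal sketch of the weighting step, under assumed covariates and a simulated availability mechanism, is shown below: a logistic model for the probability that subtype data (e.g. MSI status) are available yields inverse probability weights for the cases with available data. The subsequent weighted Cox regression is only indicated in a comment.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 2000
    # Hypothetical covariates: age, family history flag, disease stage.
    X = np.column_stack([rng.normal(60, 10, n),
                         rng.integers(0, 2, n),
                         rng.integers(1, 5, n)])
    # Simulated mechanism: biomarker availability depends on the covariates,
    # which is exactly the nonrandom selection IPW is meant to correct.
    logit = -2.0 + 0.02 * X[:, 0] + 0.5 * X[:, 1] - 0.2 * X[:, 2]
    available = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    # Model P(biomarker data available | covariates) and invert it as a weight.
    p_hat = LogisticRegression(max_iter=1000).fit(X, available).predict_proba(X)[:, 1]
    weights = np.where(available, 1.0 / p_hat, 0.0)
    print(np.round(weights[available][:5], 2))
    # The weights would then enter a weighted Cox proportional cause-specific
    # hazards regression (e.g. a weighted fit in lifelines or R's survival).
    ```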

  13. Full waveform inversion using a decomposed single frequency component from a spectrogram

    NASA Astrophysics Data System (ADS)

    Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon

    2018-06-01

    Although many full waveform inversion methods have been developed to construct velocity models of the subsurface, various approaches have been presented to obtain an inversion result with long-wavelength features even when seismic data lacking low-frequency components are used. In this study, a new full waveform inversion algorithm is proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of the seismic data using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the decomposed signal from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with several decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A′) line to demonstrate the feasibility of the proposed inversion algorithm. This demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data with a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
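
    The decomposition of a single frequency component from a trace can be sketched with a dependency-free short-time Fourier (spectrogram) analysis, shown below; the study above uses a wavelet-transform spectrogram, so this is only a hedged stand-in, and the trace and frequencies are synthetic.

    ```python
    import numpy as np

    def single_frequency_component(trace, dt, f0, win_len=64):
        """Sliding-window Fourier analysis that keeps only the DFT bin
        closest to f0, giving a time series of complex coefficients for
        that single frequency component of the trace."""
        hop, win = win_len // 2, np.hanning(win_len)
        k = int(round(f0 * win_len * dt))          # DFT bin nearest to f0
        coeffs = []
        for start in range(0, len(trace) - win_len, hop):
            seg = trace[start:start + win_len] * win
            coeffs.append(np.fft.rfft(seg)[k])
        return np.array(coeffs)

    # Synthetic trace: a 5 Hz arrival after 1 s plus a 20 Hz arrival after 2 s.
    dt = 0.004
    t = np.arange(0.0, 4.0, dt)
    trace = np.sin(2 * np.pi * 5 * t) * (t > 1) + 0.5 * np.sin(2 * np.pi * 20 * t) * (t > 2)
    amp = np.abs(single_frequency_component(trace, dt, f0=5.0))
    print(np.round(amp[::5], 1))   # amplitude of the 5 Hz component vs. time
    ```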

  14. Analysis and Inverse Design of the HSR Arrow Wing Configuration with Fuselage, Wing, and Flow Through Nacelles

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Bauer, Steven X. S.

    1999-01-01

    The design process for developing the natural flow wing design on the HSR arrow wing configuration utilized several design tools and analysis methods. Initial fuselage/wing designs were generated with inviscid analysis and optimization methods in conjunction with the natural flow wing design philosophy. A number of designs were generated, satisfying different system constraints. Of the three natural flow wing designs developed, the NFWAc2 configuration is the design which satisfies the constraints utilized by McDonnell Douglas Aerospace (MDA) in developing a series of optimized configurations; a wind tunnel model of the MDA-designed OPT5 configuration was constructed and tested. The present paper is concerned with the viscous analysis and inverse design of the arrow wing configurations, including the effects of the installed diverters/nacelles. Analyses were conducted with OVERFLOW, a Navier-Stokes flow solver for overset grids. Inverse designs were conducted with OVERDISC, which couples OVERFLOW with the CDISC inverse design method. An initial system of overset grids was generated for the OPT5 configuration with installed diverters/nacelles. An automated regridding process was then developed to use the OPT5 component grids to create grids for the natural flow wing designs. The inverse design process was initiated using the NFWAc2 configuration as a starting point, eventually culminating in the NFWAc4 design, for which a wind tunnel model was constructed. Due to the time constraints on the design effort, initial analyses and designs were conducted with a fairly coarse grid; subsequent analyses have been conducted on a refined system of grids. Comparisons of the computational results to experiment are provided at the end of this paper.

  15. A comparative study of surface waves inversion techniques at strong motion recording sites in Greece

    USGS Publications Warehouse

    Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.

    2015-01-01

    The surface wave method was used for the estimation of the Vs vs. depth profile at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν=0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of the SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles, respectively, whilst the average difference of Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.
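
    The Vs30 values compared above are time-averaged shear-wave velocities over the top 30 m of the profile; a minimal helper for computing Vs30 from a layered Vs model is sketched below with a hypothetical four-layer profile.

    ```python
    def vs30(thicknesses_m, velocities_ms):
        """Time-averaged shear wave velocity of the top 30 m:
        Vs30 = 30 / sum_i(h_i / Vs_i), truncating the profile at 30 m depth."""
        depth, travel_time = 0.0, 0.0
        for h, vs in zip(thicknesses_m, velocities_ms):
            h_used = min(h, 30.0 - depth)
            travel_time += h_used / vs
            depth += h_used
            if depth >= 30.0:
                break
        return 30.0 / travel_time

    # Hypothetical 4-layer profile from a surface wave inversion (m, m/s).
    print(round(vs30([3, 7, 15, 20], [180, 260, 420, 700]), 1))
    ```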

  16. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal the different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion could also recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram was utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  17. Force sensing using 3D displacement measurements in linear elastic bodies

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hui, Chung-Yuen

    2016-07-01

    In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves upon the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution of the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
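
    The structure of this constrained inverse problem, regularized least squares subject to an equilibrium (force-balance) constraint on the traction, can be sketched with a small KKT system as below; the operator is a random toy, not the elastic Green's-function matrix, and a single sum-to-zero constraint merely stands in for the full force and moment balance.

    ```python
    import numpy as np

    def constrained_tikhonov(G, u, C, b, lam=1e-2):
        """Minimize ||G t - u||^2 + lam*||t||^2 subject to C t = b, solved
        through the KKT system of the constrained normal equations.
        Here t plays the role of the unknown traction, u the measured
        displacements, and C t = 0 the cell's equilibrium condition."""
        n, k = G.shape[1], C.shape[0]
        KKT = np.block([[G.T @ G + lam * np.eye(n), C.T],
                        [C, np.zeros((k, k))]])
        rhs = np.concatenate([G.T @ u, b])
        return np.linalg.solve(KKT, rhs)[:n]    # drop the Lagrange multipliers

    rng = np.random.default_rng(6)
    G = rng.normal(size=(40, 10))               # toy displacement-traction operator
    t_true = rng.normal(size=10)
    t_true -= t_true.mean()                     # make the true traction balanced
    u = G @ t_true + 0.01 * rng.normal(size=40)
    C = np.ones((1, 10))                        # equilibrium: tractions sum to zero
    print(np.round(constrained_tikhonov(G, u, C, np.zeros(1)), 2))
    ```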

  18. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.

  19. Site Classification using Multichannel Analysis of Surface Wave (MASW) method on Soft and Hard Ground

    NASA Astrophysics Data System (ADS)

    Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.

    2018-04-01

    Site classification utilizing the average shear wave velocity over the top 30 meters of depth (Vs(30)) is a typical approach. Numerous geophysical methods have been proposed for the estimation of shear wave velocity, utilizing an assortment of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is practiced by numerous specialists and professionals in geotechnical engineering for local site characterization and classification. This study aims to determine the site classification on soft and hard ground using the MASW method. The subsurface classification was made utilizing the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen for the shear wave velocity acquisition: one on soft soil in the state of Pulau Pinang and one on hard rock in Perlis. Results suggest that the MASW technique can be utilized to map the spatial distribution of shear wave velocity (Vs(30)) in soil and rock in order to characterize areas.
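
    A small helper that maps Vs(30) to a site class using the commonly cited NEHRP velocity boundaries is sketched below; the exact boundaries and class descriptors should be checked against the code edition actually used.

    ```python
    def nehrp_site_class(vs30_ms):
        """Classify a site from Vs(30) in m/s using the commonly cited NEHRP
        boundaries: A > 1500, B 760-1500, C 360-760, D 180-360, E < 180.
        (Class F, requiring site-specific evaluation, is not keyed to Vs30.)"""
        if vs30_ms > 1500:
            return "A (hard rock)"
        if vs30_ms > 760:
            return "B (rock)"
        if vs30_ms > 360:
            return "C (very dense soil / soft rock)"
        if vs30_ms > 180:
            return "D (stiff soil)"
        return "E (soft soil)"

    for vs30 in (150, 250, 500, 900):           # e.g. soft-soil vs. rock sites
        print(vs30, "->", nehrp_site_class(vs30))
    ```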

  20. Retrieving rupture history using waveform inversions in time sequence

    NASA Astrophysics Data System (ADS)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of large earthquakes is generally reconstructed by waveform inversion of seismological waveform records. In waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, these forward waveforms from the fault plane are summed into the synthetic record after aligning the arrival times. The slip history is then retrieved by matching the superposition of all forward waveforms to each corresponding seismological record. Apart from the isolation of the forward waveforms generated by each sub-fault, we also recognize that these waveforms are gradually and sequentially superimposed in the recorded waveforms. Thus we propose the idea that the rupture model can be separated into sequential rupture times. According to the constrained waveform length method emphasized in our previous work, the length of the inverted waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the rupture duration, meaning that the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane, and we have designed a synthetic inversion to test the feasibility of the method. Our test result shows the promise of this idea, which requires further investigation.
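
    The forward step described above, convolving each sub-fault's triangular source time function with its Green's function and summing the delayed contributions, can be sketched as below; the Green's functions, delays, and slips are synthetic toys.

    ```python
    import numpy as np

    def triangle_stf(duration, dt):
        """Unit-area triangular source time function of the given duration."""
        t = np.arange(0.0, duration + dt, dt)
        stf = 1.0 - np.abs(2.0 * t / duration - 1.0)
        return stf / (stf.sum() * dt)

    def forward_waveform(slips, greens, delays, dt, rise_time):
        """Superposition-principle forward model: convolve slip * STF with
        each sub-fault's Green's function, shift by its rupture delay, and
        sum all contributions into one synthetic seismogram."""
        stf = triangle_stf(rise_time, dt)
        n = len(greens[0]) + int(max(delays) / dt) + len(stf)
        seis = np.zeros(n)
        for slip, g, delay in zip(slips, greens, delays):
            contrib = slip * np.convolve(g, stf)
            i0 = int(delay / dt)
            seis[i0:i0 + len(contrib)] += contrib
        return seis

    # Toy example: 3 sub-faults with decaying-sinusoid Green's functions.
    dt = 0.1
    t = np.arange(0.0, 20.0, dt)
    greens = [np.exp(-0.3 * t) * np.sin(2 * np.pi * f * t) for f in (0.5, 0.7, 0.9)]
    seis = forward_waveform([1.0, 2.0, 0.5], greens, [0.0, 1.5, 3.0], dt, rise_time=2.0)
    print(len(seis), np.round(seis.max(), 3))
    ```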

  1. Control Theory based Shape Design for the Incompressible Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Cowles, G.; Martinelli, L.

    2003-12-01

    A design method for shape optimization in incompressible turbulent viscous flow has been developed and validated for inverse design. The gradient information is determined using a control theory based algorithm. With such an approach, the cost of computing the gradient is negligible. An additional adjoint system must be solved which requires the cost of a single steady state flow solution. Thus, this method has an enormous advantage over traditional finite-difference based algorithms. The method of artificial compressibility is utilized to solve both the flow and adjoint systems. An algebraic turbulence model is used to compute the eddy viscosity. The method is validated using several inverse wing design test cases. In each case, the program must modify the shape of the initial wing such that its pressure distribution matches that of the target wing. Results are shown for the inversion of both finite thickness wings as well as zero thickness wings which can be considered a model of yacht sails.

  2. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.

  3. A new inversion algorithm for HF sky-wave backscatter ionograms

    NASA Astrophysics Data System (ADS)

    Feng, Jing; Ni, Binbin; Lou, Peng; Wei, Na; Yang, Longquan; Liu, Wen; Zhao, Zhengyu; Li, Xue

    2018-05-01

    The HF sky-wave backscatter sounding system is capable of measuring large-scale, two-dimensional (2-D) distributions of ionospheric electron density. The leading edge (LE) of a backscatter ionogram (BSI) is widely used for ionospheric inversion since it is hardly affected by any factor other than ionospheric electron density. Traditional BSI inversion methods have failed to distinguish LEs associated with different ionospheric layers and simply utilize the minimum group path at each operating frequency, which generally corresponds to the LE associated with the F2 layer. Consequently, while the inversion results can provide accurate profiles of the F region below the F2 peak, the diagnostics may not be as effective for other ionospheric layers. To resolve this issue, we present a new BSI inversion method using LEs associated with different layers, which further improves the accuracy of the electron density distribution, especially the profiles of the ionospheric layers below the F2 region. The efficiency of the algorithm is evaluated by computing the mean and standard deviation of the differences between inverted parameter values and true values obtained from both vertical and oblique incidence sounding. Test results clearly show that the developed method outputs more accurate electron density profiles owing to improved recovery of the layers below the F2 region. Our study can further improve current BSI inversion methods for reconstructing the 2-D electron density distribution in a vertical plane aligned with the sounding direction.

  4. Identification and Control of Non-Linear Time-Varying Dynamical Systems Using Artificial Neural Networks

    DTIC Science & Technology

    1992-09-01

    finding an inverse plant such as was done by Bertrand [BD91] and by Levin, Gewirtzman and Inbar in a binary-type inverse controller [LGI91], to self tuning...gain robust control. 2) Self oscillating adaptive controller. 3) Gain scheduling. 4) Self tuning. 5) Model-reference adaptive systems. Although the...of multidimensional systems [CS88] as well as aircraft [HG90]. The self oscillating method is also a feedback based mechanism, utilizing a relay in the

  5. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

    NASA Technical Reports Server (NTRS)

    Berger, B. S.; Duangudom, S.

    1973-01-01

    A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.

  6. Three-dimensional magnetotelluric axial anisotropic forward modeling and inversion

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Wang, Kunpeng; Wang, Tao; Hua, Boguang

    2018-06-01

    Magnetotelluric (MT) data have been widely used to image underground electrical structure. However, how significant axial resistivity anisotropy influences three-dimensional MT data has not yet been clearly resolved. Here we propose a scheme for three-dimensional modeling of MT data in the presence of axial resistivity anisotropy, in which the electromagnetic fields are decomposed into primary and secondary components. A 3D staggered-grid finite difference method is then used to solve the resulting 3D governing equations. Numerical tests were completed to validate the correctness and accuracy of the present algorithm. A limited-memory Broyden-Fletcher-Goldfarb-Shanno method is then utilized to realize the 3D MT axial anisotropic inversion. The test results show that, compared to isotropic resistivity inversion, taking the axial anisotropy into account considerably improves the inverted results.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  8. 2D Unstructured Grid Based Constrained Inversion of Magnetic Data Using Fuzzy C Means Clustering and Lithology Classification

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Singh, A.; Sharma, S. P.

    2016-12-01

    Regular grid discretization is often utilized to define complex geological models. However, this subdivision strategy represents the topographic observation surface with lower precision. We have developed a new 2D unstructured-grid-based inversion of magnetic data for models including topography. It consolidates prior parametric information into a deterministic inversion scheme to enhance the boundaries between different lithologies based on the magnetic susceptibility distribution recovered from the inversion. The resulting susceptibility model satisfies both the observed magnetic data and the parametric information and can therefore represent the earth better than geophysical inversion models that honor only the observed magnetic data. Geophysical inversion and lithology classification are generally treated as two autonomous methodologies connected in a serial manner; the presented strategy integrates them into a unified scheme. To reduce storage space and computation time, the conjugate gradient method is used, which makes the imaging inversion of magnetic data feasible and practical for a large number of triangular grid cells. The efficacy of the presented inversion is demonstrated using two synthetic examples and one field data example.

  9. An inverse finance problem for estimation of the volatility

    NASA Astrophysics Data System (ADS)

    Neisy, A.; Salmani, K.

    2013-01-01

    The Black-Scholes model, as a base model for pricing in derivatives markets, has some deficiencies, such as ignoring market jumps and treating market volatility as a constant. In this article, we introduce a pricing model for European options under a jump-diffusion underlying asset. Then, using appropriate numerical methods, we solve this model, which contains an integral term as well as derivative terms. Finally, treating volatility as an unknown parameter, we estimate it using the proposed model. For this purpose we formulate an inverse problem in which the forward model is first defined and the volatility is then estimated by minimizing a misfit function with Tikhonov regularization.
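
    As a schematic illustration of the Tikhonov-regularized estimation step (with a toy forward pricer standing in for the jump-diffusion model; all names and values are hypothetical):

      import numpy as np
      from scipy.optimize import minimize

      # Toy forward pricer: maps a volatility to option prices over a strike grid.
      def price_options(sigma, strikes):
          return np.maximum(100.0 - strikes, 0.0) + 40.0 * sigma / np.sqrt(strikes)

      strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
      observed = price_options(0.25, strikes)        # synthetic "market" prices
      sigma_prior, alpha = 0.2, 1e-2                 # prior guess and Tikhonov weight

      def cost(x):
          misfit = price_options(x[0], strikes) - observed
          # data misfit plus Tikhonov penalty pulling the estimate toward the prior
          return np.sum(misfit ** 2) + alpha * (x[0] - sigma_prior) ** 2

      result = minimize(cost, x0=[0.3], method="Nelder-Mead")
      print("estimated volatility:", result.x[0])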

  10. Procedures utilized for obtaining direct and remote atmospheric carbon monoxide measurements over the lower Lake Michigan Basin in August of 1976

    NASA Technical Reports Server (NTRS)

    Casas, J. C.; Condon, E.; Campbell, S. A.

    1978-01-01

    In order to establish the applicability of a gas filter correlation radiometer, GFCR, to remote carbon monoxide, CO, measurements on a regional and worldwide basis, Old Dominion University has been engaged in the development of accurate and cost effective techniques for inversion of GFCR CO data and in the development of an independent gas chromatographic technique for measuring CO. This independent method is used to verify the results and the associated inversion method obtained from the GFCR. A description of both methods (direct and remote) will be presented. Data obtained by both techniques during a flight test over the lower Lake Michigan Basin in August of 1976 will also be discussed.

  11. Reconciling ocean mass content change based on direct and inverse approaches by utilizing data from GRACE, altimetry and Swarm

    NASA Astrophysics Data System (ADS)

    Rietbroek, R.; Uebbing, B.; Lück, C.; Kusche, J.

    2017-12-01

    Ocean mass content (OMC) change due to the melting of the ice-sheets in Greenland and Antarctica, melting of glaciers and changes in terrestrial hydrology is a major contributor to present-day sea level rise. Since 2002, the GRACE satellite mission serves as a valuable tool for directly measuring the variations in OMC. As GRACE has almost reached the end of its lifetime, efforts are being made to utilize the Swarm mission for the recovery of low degree time-variable gravity fields to bridge a possible gap until the GRACE-FO mission and to fill periods where GRACE data were not available. To this end we compute Swarm monthly normal equations and spherical harmonics that are found to be competitive with other solutions. In addition to directly measuring the OMC, combination of GRACE gravity data with altimetry data in a global inversion approach allows the total sea level change to be separated into individual mass-driven and steric contributions. However, published estimates of OMC from the direct and inverse methods differ not only depending on the time window, but are also influenced by numerous post-processing choices. Here, we will look into sources of such differences between the direct and inverse approaches and evaluate the capabilities of Swarm to derive OMC. Deriving time series of OMC requires several processing steps: choosing a GRACE (and altimetry) product; selecting data coverage, masks and filters to be applied in either the spatial or spectral domain; and applying corrections related to spatial leakage, GIA and geocenter motion. In this study, we compare and quantify the effects of the different processing choices of the direct and inverse methods. Our preliminary results point to the GIA correction as the major source of difference between the two approaches.

  12. [Global Atmospheric Chemistry/Transport Modeling and Data-Analysis

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1999-01-01

    This grant supported a global atmospheric chemistry/transport modeling and data-analysis project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for trace gases; (b) utilization of these inverse methods which use either the Model for Atmospheric Chemistry and Transport (MATCH) which is based on analyzed observed winds or back-trajectories calculated from these same winds for determining regional and global source and sink strengths for long-lived trace gases important in ozone depletion and the greenhouse effect; (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple "titrating" gases; and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important ultimate goals included determination of regional source strengths of important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements, and hydrohalocarbons now used as alternatives to the above restricted halocarbons.

  13. Photoelectrodes based upon Mo:BiVO4 inverse opals for photoelectrochemical water splitting.

    PubMed

    Zhou, Min; Bao, Jian; Xu, Yang; Zhang, Jiajia; Xie, Junfeng; Guan, Meili; Wang, Chengliang; Wen, Liaoyong; Lei, Yong; Xie, Yi

    2014-07-22

    BiVO4 has been regarded as a promising material for photoelectrochemical water splitting, but it faces a major challenge in charge collection and utilization. To meet this challenge, we design a nanoengineered three-dimensional (3D) ordered macro-mesoporous architecture (a kind of inverse opal) of Mo:BiVO4 through a controllable colloidal crystal template method, aided by a sandwich solution infiltration method and adjustable post-heating time. As expected, a superior photocurrent density is achieved with this design. This enhancement originates primarily from effective charge collection and utilization, as indicated by electrochemical impedance spectroscopy and related analyses. All the results highlight the significance of the 3D ordered macro-mesoporous architecture as a promising photoelectrode model for solar conversion applications. The cooperative amplification effects of nanoengineering, combining composition regulation and morphology innovation, are helpful for creating purpose-designed photoelectrodes with highly efficient performance.

  14. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about geologic structure. Its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large-scale problems, its storage and update demand excessive memory and computational resources. In this study, we propose a new efficient scheme for its storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how many entries to retain based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated directly; the off-diagonal terms are recalculated from shortened correlation scales using a pre-assigned exponential model. The correlation scales are shortened by a coefficient (e.g., 0.95) every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. The new scheme is first tested with 1D examples, and the estimated results and uncertainty are compared with the traditional full-storage method. Finally, a large-scale numerical model is utilized to validate the new scheme.
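
    A minimal sketch of the storage idea (not the authors' implementation): build an exponential covariance in CSC format, keep only entries within a few correlation scales, and rebuild the off-diagonal terms with a shortened scale after each iteration.

      import numpy as np
      from scipy.sparse import csc_matrix

      def sparse_exp_covariance(x, variance, corr_scale, cutoff=3.0):
          # Exponential covariance in CSC format; entries farther than `cutoff`
          # correlation scales apart are simply not stored.
          rows, cols, vals = [], [], []
          n = len(x)
          for j in range(n):
              for i in range(n):
                  d = abs(x[i] - x[j])
                  if d <= cutoff * corr_scale:
                      rows.append(i)
                      cols.append(j)
                      vals.append(variance * np.exp(-d / corr_scale))
          return csc_matrix((vals, (rows, cols)), shape=(n, n))

      x = np.linspace(0.0, 100.0, 51)              # 1D grid coordinates
      Q = sparse_exp_covariance(x, variance=1.0, corr_scale=10.0)
      print(Q.nnz, "stored entries out of", Q.shape[0] * Q.shape[1])

      # Shortening the correlation scale (e.g., by 0.95) and rebuilding the
      # off-diagonal terms mimics the decreasing-uncertainty update.
      Q_next = sparse_exp_covariance(x, variance=1.0, corr_scale=0.95 * 10.0)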

  15. An efficient sequential strategy for realizing cross-gradient joint inversion: method and its application to 2-D cross borehole seismic traveltime and DC resistivity tomography

    NASA Astrophysics Data System (ADS)

    Gao, Ji; Zhang, Haijiang

    2018-05-01

    Cross-gradient joint inversion that enforces structural similarity between different models has been widely utilized in jointly inverting different geophysical data types. However, it is a challenge to combine different geophysical inversion systems with the cross-gradient structural constraint into one joint inversion system because they may differ greatly in the model representation, forward modelling and inversion algorithm. Here we propose a new joint inversion strategy that can avoid this issue. Different models are separately inverted using the existing inversion packages and model structure similarity is only enforced through cross-gradient minimization between two models after each iteration. Although the data fitting and structural similarity enforcing processes are decoupled, our proposed strategy is still able to choose appropriate models to balance the trade-off between geophysical data fitting and structural similarity. This is realized by using model perturbations from separate data inversions to constrain the cross-gradient minimization process. We have tested this new strategy on 2-D cross borehole synthetic seismic traveltime and DC resistivity data sets. Compared to separate geophysical inversions, our proposed joint inversion strategy fits the separate data sets at comparable levels while at the same time resulting in a higher structural similarity between the velocity and resistivity models.
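
    The structural coupling term itself is simple to evaluate. A small sketch of the cross-gradient function for two co-registered 2-D models (hypothetical arrays; the separate inversion packages and the minimization step are not shown):

      import numpy as np

      def cross_gradient(m1, m2, dx=1.0, dz=1.0):
          # t = grad(m1) x grad(m2); structural similarity means driving t toward zero
          g1z, g1x = np.gradient(m1, dz, dx)
          g2z, g2x = np.gradient(m2, dz, dx)
          return g1x * g2z - g1z * g2x      # out-of-plane component of the cross product

      vel = np.random.rand(20, 30)          # hypothetical velocity model
      res = np.random.rand(20, 30)          # hypothetical resistivity model
      t = cross_gradient(vel, res)
      misfit = np.sum(t ** 2)               # quantity minimized between separate inversions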

  16. Volume effect of non-polar solvent towards the synthesis of hydrophilic polymer nanoparticles prepares via inverse miniemulsion polymerization

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Nur Nasyita; Kassim, Syara; Harun, Noor Aniza

    2017-09-01

    Polymeric nanoparticles have drawn tremendous attention from researchers and have been utilized in diverse fields, especially biomedical applications. Nevertheless, questions have been raised about the safety and hydrophilicity of nanoparticles intended for medical and biological applications. One promising solution to this problem is to develop biodegradable polymeric nanoparticles with improved hydrophilicity. This study focuses on developing safer and "greener" polymeric nanoparticles via inverse miniemulsion polymerization, a robust and convenient method for producing water-soluble polymer nanoparticles. Acrylamide (Am), acrylic acid (AA) and methacrylic acid (MAA) monomers were chosen, as they are biocompatible, non-toxic and environmentally benign. The effects of different volumes of cyclohexane on the formation, particle size, particle size distribution and morphology of the polymer nanoparticles are investigated. The formation and morphology of the polymer nanoparticles are determined using FTIR and SEM, respectively. The mean diameters of the polymer nanoparticles were in the range of 80-250 nm, with broad particle size distributions as determined by dynamic light scattering (DLS). Hydrophilic polyacrylamide (pAm), poly(acrylic acid) (pAA) and poly(methacrylic acid) (pMAA) nanoparticles were successfully obtained by inverse miniemulsion polymerization and have the potential to be further utilized in the fabrication of hybrid polymer composite nanoparticles, especially in biological and medical applications.

  17. Azimuthal Seismic Amplitude Variation with Offset and Azimuth Inversion in Weakly Anisotropic Media with Orthorhombic Symmetry

    NASA Astrophysics Data System (ADS)

    Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao

    2018-01-01

    Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool utilized to estimate fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered as an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach to describe the orthorhombic anisotropy utilizing the observable wide-azimuth seismic reflection data in a fractured reservoir with the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and the linear-slip model, we first derive a perturbation in the stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for the linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions for Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are estimated using a Cauchy a priori probability distribution and smooth initial models of the model parameters to enhance the inversion resolution, together with a nonlinear iteratively reweighted least-squares strategy. Synthetic examples containing moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and real data illustrate the inversion stability of orthorhombic anisotropy in a fractured reservoir.

  18. (125)I-Tetrazines and Inverse-Electron-Demand Diels-Alder Chemistry: A Convenient Radioiodination Strategy for Biomolecule Labeling, Screening, and Biodistribution Studies.

    PubMed

    Albu, Silvia A; Al-Karmi, Salma A; Vito, Alyssa; Dzandzi, James P K; Zlitni, Aimen; Beckford-Vera, Denis; Blacker, Megan; Janzen, Nancy; Patel, Ramesh M; Capretta, Alfredo; Valliant, John F

    2016-01-20

    A convenient method to prepare radioiodinated tetrazines was developed, such that a bioorthogonal inverse electron demand Diels-Alder reaction can be used to label biomolecules with iodine-125 for in vitro screening and in vivo biodistribution studies. The tetrazine was prepared by employing a high-yielding oxidative halo destannylation reaction that concomitantly oxidized the dihydrotetrazine precursor. The product reacts quickly and efficiently with trans-cyclooctene derivatives. Utility was demonstrated through antibody and hormone labeling experiments and by evaluating products using standard analytical methods, in vitro assays, and quantitative biodistribution studies where the latter was performed in direct comparison to Bolton-Hunter and direct iodination methods. The approach described provides a convenient and advantageous alternative to conventional protein iodination methods that can expedite preclinical development and evaluation of biotherapeutics.

  19. Charge Stabilized Crystalline Colloidal Arrays As Templates For Fabrication of Non-Close-Packed Inverted Photonic Crystals

    PubMed Central

    Bohn, Justin J.; Ben-Moshe, Matti; Tikhonov, Alexander; Qu, Dan; Lamont, Daniel N.

    2010-01-01

    We developed a straightforward method to form non-close-packed, highly ordered fcc direct and inverse opal silica photonic crystals. We utilize an electrostatically self-assembled crystalline colloidal array (CCA) template formed by monodisperse, highly charged polystyrene particles. We then polymerize a hydrogel around the CCA (PCCA) and condense the silica to form a highly ordered silica-impregnated (siPCCA) photonic crystal. Heating at 450 °C removes the organic polymer, leaving a silica inverse opal structure. By altering the colloidal particle concentration we independently control the particle spacing and the wall thickness of the inverse opal photonic crystals. This allows us to control the optical dielectric constant modulation in order to optimize the diffraction; the dielectric constant modulation is controlled independently of the photonic crystal periodicity. These fcc photonic crystals are better ordered than typical close-packed photonic crystals because their self-assembly utilizes soft electrostatic repulsive potentials. We show that colloidal particle size and charge polydispersity have a modest impact on ordering, in contrast to that for close-packed crystals. PMID:20163800

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  1. Simulation studies of phase inversion in agitated vessels using a Monte Carlo technique.

    PubMed

    Yeo, Leslie Y; Matar, Omar K; Perez de Ortiz, E Susana; Hewitt, Geoffrey F

    2002-04-15

    A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed.

  2. An ionospheric occultation inversion technique based on epoch difference

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun

    2013-09-01

    Of the ionospheric radio occultation (IRO) electron density profile (EDP) retrieval techniques, the Abel-based calibrated TEC inversion (CTI) is the most widely used. In order to eliminate the contribution from altitudes above the RO satellite, it is necessary to utilize the calibrated TEC to retrieve the EDP, which introduces error due to the coplanar assumption. In this paper, a new technique based on epoch difference inversion (EDI) is proposed to eliminate this error. Comparisons between CTI and EDI were carried out using simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation side measurements and shows better performance than the CTI method, especially for lower-orbit missions; regardless of which technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.

  3. Estimation of the zeta potential and the dielectric constant using velocity measurements in the electroosmotic flows.

    PubMed

    Park, H M; Hong, S M

    2006-12-15

    In this paper we develop a method for determining the zeta potential ζ and the dielectric constant ε by exploiting velocity measurements of electroosmotic flow in microchannels. The inverse problem is solved through the minimization of a performance function utilizing the conjugate gradient method. The present method is found to estimate ζ and ε with reasonable accuracy even with noisy velocity measurements.
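
    A rough sketch of the estimation idea, using a toy Debye-Hückel electroosmotic velocity profile in place of the full forward solver (all constants, names, and values are illustrative assumptions, not the authors' model):

      import numpy as np
      from scipy.optimize import minimize

      eps0, kT, e, n0, mu, E = 8.854e-12, 4.11e-21, 1.602e-19, 1e23, 1e-3, 1e4
      y = np.linspace(0.0, 100e-9, 60)               # distance from the wall [m]

      def eo_profile(zeta, eps_r):
          # Toy near-wall electroosmotic velocity profile (Debye-Hückel limit)
          lam = np.sqrt(eps_r * eps0 * kT / (2.0 * n0 * e ** 2))   # Debye length
          return -(eps_r * eps0 * zeta * E / mu) * (1.0 - np.exp(-y / lam))

      rng = np.random.default_rng(0)
      data = eo_profile(-0.05, 78.0) + rng.normal(0.0, 2e-5, y.size)  # noisy "measurements"

      def performance(p):
          return np.sum((eo_profile(p[0], p[1]) - data) ** 2)

      est = minimize(performance, x0=[-0.02, 50.0], method="CG")      # conjugate gradient
      print("estimated zeta [V] and relative permittivity:", est.x)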

  4. Verifying entanglement in the Hong-Ou-Mandel dip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Megan R.; Enk, S. J. van

    2011-04-15

    The Hong-Ou-Mandel interference dip is caused by an entangled state, a delocalized biphoton state. We propose a method of detecting this entanglement by utilizing inverse Hong-Ou-Mandel interference, while taking into account vacuum and multiphoton contaminations, phase noise, and other imperfections. The method uses just linear optics and photodetectors, and for single-mode photodetectors we find a lower bound on the amount of entanglement.

  5. Modeling Drinking Behavior Progression in Youth: a Non-identified Probability Discrete Event System Using Cross-sectional Data

    PubMed Central

    Hu, Xingdi; Chen, Xinguang; Cook, Robert L.; Chen, Ding-Geng; Okafor, Chukwuemeka

    2016-01-01

    Background The probabilistic discrete event systems (PDES) method provides a promising approach to study dynamics of underage drinking using cross-sectional data. However, the utility of this approach is often limited because the constructed PDES model is often non-identifiable. The purpose of the current study is to attempt a new method to solve the model. Methods A PDES-based model of alcohol use behavior was developed with four progression stages (never-drinkers [ND], light/moderate-drinker [LMD], heavy-drinker [HD], and ex-drinker [XD]) linked with 13 possible transition paths. We tested the proposed model with data for participants aged 12–21 from the 2012 National Survey on Drug Use and Health (NSDUH). The Moore-Penrose (M-P) generalized inverse matrix method was applied to solve the proposed model. Results Annual transitional probabilities by age groups for the 13 drinking progression pathways were successfully estimated with the M-P generalized inverse matrix approach. Result from our analysis indicates an inverse “J” shape curve characterizing pattern of experimental use of alcohol from adolescence to young adulthood. We also observed a dramatic increase for the initiation of LMD and HD after age 18 and a sharp decline in quitting light and heavy drinking. Conclusion Our findings are consistent with the developmental perspective regarding the dynamics of underage drinking, demonstrating the utility of the M-P method in obtaining a unique solution for the partially-observed PDES drinking behavior model. The M-P approach we tested in this study will facilitate the use of the PDES approach to examine many health behaviors with the widely available cross-sectional data. PMID:26511344
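
    The key computational step is the minimum-norm solution of a rank-deficient linear system via the Moore-Penrose generalized inverse. A self-contained toy example (the matrix below is random and only illustrates the mechanics, not the actual PDES model):

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.random((8, 13))            # hypothetical PDES coefficient matrix:
      p_true = rng.random(13)            # 13 transition paths, fewer equations
      b = A @ p_true                     # cross-sectional "observations"

      # The Moore-Penrose generalized inverse yields the minimum-norm least-squares
      # solution, giving a unique estimate for the otherwise non-identified system.
      p_hat = np.linalg.pinv(A) @ b
      print(np.allclose(A @ p_hat, b))   # a consistent system is fit exactly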

  6. Method and apparatus for producing laser radiation following two-photon excitation of a gaseous medium

    DOEpatents

    Bischel, William K. [Menlo Park, CA; Jacobs, Ralph R. [Livermore, CA; Prosnitz, Donald [Hamden, CT; Rhodes, Charles K. [Palo Alto, CA; Kelly, Patrick J. [Fort Lewis, WA

    1979-02-20

    Method and apparatus for producing laser radiation by two-photon optical pumping of an atomic or molecular gaseous medium and subsequent lasing action. A population inversion is created as a result of two-photon absorption of the gaseous species. Stark tuning is utilized, if necessary, in order to tune the two-photon transition into exact resonance. In particular, gaseous ammonia (NH3) or methyl fluoride (CH3F) is optically pumped by a pair of CO2 lasers to create a population inversion resulting from simultaneous two-photon excitation of a high-lying vibrational state, and laser radiation is produced by stimulated emission of coherent radiation from the inverted level.

  7. Method and apparatus for producing laser radiation following two-photon excitation of a gaseous medium

    DOEpatents

    Bischel, W.K.; Jacobs, R.R.; Prosnitz, D.P.; Rhodes, C.K.; Kelly, P.J.

    1979-02-20

    Method and apparatus are disclosed for producing laser radiation by two-photon optical pumping of an atomic or molecular gaseous medium and subsequent lasing action. A population inversion is created as a result of two-photon absorption of the gaseous species. Stark tuning is utilized, if necessary, in order to tune the two-photon transition into exact resonance. In particular, gaseous ammonia (NH3) or methyl fluoride (CH3F) is optically pumped by a pair of CO2 lasers to create a population inversion resulting from simultaneous two-photon excitation of a high-lying vibrational state, and laser radiation is produced by stimulated emission of coherent radiation from the inverted level. 3 figs.

  8. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  9. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  10. A quantum retrograde canon: complete population inversion in n²-state systems

    NASA Astrophysics Data System (ADS)

    Padan, Alon; Suchowski, Haim

    2018-04-01

    We present a novel approach for analytically reducing a family of time-dependent multi-state quantum control problems to two-state systems. The presented method translates between SU(2) × SU(2)-related n²-state systems and two-state systems, such that the former undergo complete population inversion (CPI) if and only if the latter reach specific states. For even n, the method translates any two-state CPI scheme to a family of CPI schemes in n²-state systems. In particular, facilitating CPI in a four-state system via real time-dependent nearest-neighbor couplings is reduced to facilitating CPI in a two-level system. Furthermore, we show that the method can be used for operator control, and provide conditions for producing several universal gates for quantum computation as an example. In addition, we indicate a basis for utilizing the method in optimal control problems.

  11. Analysis of harmonic spline gravity models for Venus and Mars

    NASA Technical Reports Server (NTRS)

    Bowin, Carl

    1986-01-01

    Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. A problem with the matrix inversion routine was solved, improving inversion of the data matrices used in the Optimum Estimate program for global Earth studies. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.

  12. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
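
    As a simplified stand-in for the proposed corrections (not the sambia implementation, which also resembles the covariance structure), inverse-probability resampling redraws the stratified sample with weights 1/p so that the training data approximate the population distribution:

      import numpy as np

      def ip_resample(X, y, sampling_prob, seed=None):
          # Units sampled into the two-phase study with probability p are redrawn
          # with weight 1/p, approximately restoring the population composition.
          rng = np.random.default_rng(seed)
          w = 1.0 / sampling_prob
          idx = rng.choice(len(y), size=len(y), replace=True, p=w / w.sum())
          return X[idx], y[idx]

      # Hypothetical case-control data: cases fully sampled (p=1), controls subsampled (p=0.1)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      y = np.r_[np.ones(100), np.zeros(100)]
      p = np.where(y == 1, 1.0, 0.1)
      X_rs, y_rs = ip_resample(X, y, p, seed=1)
      print("case fraction after resampling:", y_rs.mean())   # moves toward the population rate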

  13. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
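
    For reference, the simulated annealing component can be sketched as follows; restarting the search repeatedly from the best model found so far gives the iterated flavor described above. The misfit here is a toy quadratic, not the volcanic deformation source model:

      import numpy as np

      def simulated_annealing(cost, x0, step, n_iter=5000, t0=1.0, seed=0):
          # Plain simulated annealing with a linear cooling schedule (illustrative only)
          rng = np.random.default_rng(seed)
          x = np.array(x0, dtype=float)
          c = cost(x)
          best_x, best_c = x.copy(), c
          for k in range(n_iter):
              T = t0 * (1.0 - k / n_iter) + 1e-9
              cand = x + rng.normal(scale=step, size=x.size)
              cc = cost(cand)
              if cc < c or rng.random() < np.exp(-(cc - c) / T):
                  x, c = cand, cc
                  if c < best_c:
                      best_x, best_c = x.copy(), c
          return best_x, best_c

      # Toy misfit: recover two source parameters (e.g., depth and volume change)
      target = np.array([3.0, 0.05])
      cost = lambda p: float(np.sum((np.asarray(p) - target) ** 2))
      print(simulated_annealing(cost, x0=[1.0, 0.5], step=0.1))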

  14. FACTORS ASSOCIATED WITH HEALTHCARE UTILIZATION AMONG ARAB IMMIGRANTS AND REFUGEES

    PubMed Central

    2015-01-01

    Background Arab migrants are exposed to pre- and post-migration stressors that increase their risk for health problems. However, little is known regarding healthcare utilization rates or factors associated with healthcare utilization among Arab immigrants and refugees. Methods 590 participants were interviewed 1 year post-migration to the United States. Factors associated with healthcare utilization, including stress coping mechanisms, were examined using binary logistic regressions. Results Compared to national healthcare utilization data, immigrants had significantly lower and refugees had significantly higher rates. Being a refugee, married, and having health insurance were significantly associated with medical service utilization. None of the immigrants in this study had utilized psychological services. Among refugees, the use of medications and having strategies for dealing with stress were inversely associated with utilization of psychological services. Discussion (Conclusion) Healthcare utilization was significantly higher among refugees, who also reported a greater need for services than immigrants. PMID:25331684

  15. Tuning Fractures With Dynamic Data

    NASA Astrophysics Data System (ADS)

    Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao

    2018-02-01

    Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may help constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus rendering desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized as a structured grid while the fractures, handled as planes, are inserted into the matrix grid. The combination of Hough representation of fractures with the EDFM makes it possible to tune the fractures (through updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating. Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.

  16. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
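
    The forward kernel such a sampler evaluates at every draw is the standard 1-D layered-earth impedance recursion. A compact sketch (independent of the authors' code; layer values are arbitrary):

      import numpy as np

      MU0 = 4e-7 * np.pi

      def mt1d_forward(resistivities, thicknesses, freqs):
          # Apparent resistivity and phase for a 1-D layered earth (impedance recursion).
          rho_a, phase = [], []
          for f in freqs:
              w = 2.0 * np.pi * f
              Z = np.sqrt(1j * w * MU0 * resistivities[-1])      # basal half-space
              for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
                  k = np.sqrt(1j * w * MU0 / rho)                # recurse upward
                  Z0 = 1j * w * MU0 / k
                  t = np.tanh(k * h)
                  Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
              rho_a.append(abs(Z) ** 2 / (w * MU0))
              phase.append(np.degrees(np.angle(Z)))
          return np.array(rho_a), np.array(phase)

      # Three-layer example: 100, 10, 1000 ohm-m with 500 m and 1000 m upper layers
      ra, ph = mt1d_forward([100.0, 10.0, 1000.0], [500.0, 1000.0], np.logspace(-3, 2, 20))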

  17. Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties.

    PubMed

    Ohyu, Shigeharu; Okamoto, Yoshiwo; Kuriki, Shinya

    2002-06-01

    A novel magnetocardiographic inverse method for reconstructing the action potential amplitude (APA) and the activation time (AT) on the ventricular myocardium is proposed. This method is based on the propagated excitation model, in which the excitation is propagated through the ventricle with nonuniform height of action potential. Assumption of stepwise waveform on the transmembrane potential was introduced in the model. Spatial gradient of transmembrane potential, which is defined by APA and AT distributed in the ventricular wall, is used for the computation of a current source distribution. Based on this source model, the distributions of APA and AT are inversely reconstructed from the QRS interval of magnetocardiogram (MCG) utilizing a maximum a posteriori approach. The proposed reconstruction method was tested through computer simulations. Stability of the methods with respect to measurement noise was demonstrated. When reference APA was provided as a uniform distribution, root-mean-square errors of estimated APA were below 10 mV for MCG signal-to-noise ratios greater than, or equal to, 20 dB. Low-amplitude regions located at several sites in reference APA distributions were correctly reproduced in reconstructed APA distributions. The goal of our study is to develop a method for detecting myocardial ischemia through the depression of reconstructed APA distributions.

  18. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problem, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  19. A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.

    PubMed

    Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing

    2007-01-01

    Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising ways for noninvasive molecular-based imaging. Many reconstruction approaches to it utilize iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method requires only one matrix-vector multiplication online, as the iteration process is pushed offline. In the preiteration process, a second-order iterative scheme is employed to exponentially accelerate convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and that reconstruction speed is remarkably increased.
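
    The essence of the preiteration idea can be sketched with a second-order (Newton-Schulz) iteration run offline to approximate the generalized inverse, leaving a single matrix-vector product online (the sensitivity matrix below is random and purely illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(120, 400))              # hypothetical sensitivity matrix

      # Offline preiteration: X converges quadratically to pinv(A)
      X = A.T / np.linalg.norm(A, 2) ** 2          # starting guess ensuring convergence
      for _ in range(30):
          X = X @ (2.0 * np.eye(A.shape[0]) - A @ X)

      measurements = rng.normal(size=120)
      yield_estimate = X @ measurements            # the only online step
      print(np.allclose(X, np.linalg.pinv(A), atol=1e-6))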

  20. People learn other people's preferences through inverse decision-making.

    PubMed

    Jern, Alan; Lucas, Christopher G; Kemp, Charles

    2017-11-01

    People are capable of learning other people's preferences by observing the choices they make. We propose that this learning relies on inverse decision-making-inverting a decision-making model to infer the preferences that led to an observed choice. In Experiment 1, participants observed 47 choices made by others and ranked them by how strongly each choice suggested that the decision maker had a preference for a specific item. An inverse decision-making model generated predictions that were in accordance with participants' inferences. Experiment 2 replicated and extended a previous study by Newtson (1974) in which participants observed pairs of choices and made judgments about which choice provided stronger evidence for a preference. Inverse decision-making again predicted the results, including a result that previous accounts could not explain. Experiment 3 used the same method as Experiment 2 and found that participants did not expect decision makers to be perfect utility-maximizers. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  2. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters, namely the number of deformation-function modes and their coefficients. Global Positioning System (GPS) data from the Japanese GPS Earth Observation Network (GEONET) covering the three years 1997-1999 were used to estimate the interseismic back-slip distribution in this region. The estimated maximum back-slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
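
    As a hedged illustration of the general idea (not the authors' specific parameterization), the sketch below inverts a synthetic Green's-function matrix by spectral (singular value) decomposition, keeping only a chosen number of modes; selecting that number and the associated coefficients is the model-selection step discussed above. All arrays are made-up stand-ins.

        import numpy as np

        def slip_from_modes(G, d, n_modes):
            """Estimate slip from surface displacements d = G @ slip by keeping
            only the first n_modes terms of the spectral (SVD) expansion of G."""
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            coeffs = (U[:, :n_modes].T @ d) / s[:n_modes]   # modal coefficients
            return Vt[:n_modes].T @ coeffs                  # back-projected slip

        rng = np.random.default_rng(1)
        G = rng.standard_normal((200, 50))                 # stand-in Green's functions (stations x subfaults)
        true_slip = np.exp(-np.linspace(-2, 2, 50) ** 2)   # smooth synthetic back slip
        d = G @ true_slip + 0.01 * rng.standard_normal(200)  # noisy GPS-like data

        for k in (5, 20, 50):
            est = slip_from_modes(G, d, k)
            print(k, np.linalg.norm(est - true_slip))      # trade-off between resolution and noise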

  3. Population Genomics of Inversion Polymorphisms in Drosophila melanogaster

    PubMed Central

    Corbett-Detig, Russell B.; Hartl, Daniel L.

    2012-01-01

    Chromosomal inversions have been an enduring interest of population geneticists since their discovery in Drosophila melanogaster. Numerous lines of evidence suggest powerful selective pressures govern the distributions of polymorphic inversions, and these observations have spurred the development of many explanatory models. However, due to a paucity of nucleotide data, little progress has been made towards investigating selective hypotheses or towards inferring the genealogical histories of inversions, which can inform models of inversion evolution and suggest selective mechanisms. Here, we utilize population genomic data to address persisting gaps in our knowledge of D. melanogaster's inversions. We develop a method, termed Reference-Assisted Reassembly, to assemble unbiased, highly accurate sequences near inversion breakpoints, which we use to estimate the age and the geographic origins of polymorphic inversions. We find that inversions are young, and most are African in origin, which is consistent with the demography of the species. The data suggest that inversions interact with polymorphism not only in breakpoint regions but also chromosome-wide. Inversions remain differentiated at low levels from standard haplotypes even in regions that are distant from breakpoints. Although genetic exchange appears fairly extensive, we identify numerous regions that are qualitatively consistent with selective hypotheses. Finally, we show that In(1)Be, which we estimate to be ∼60 years old (95% CI 5.9 to 372.8 years), has likely achieved high frequency via sex-ratio segregation distortion in males. With deeper sampling, it will be possible to build on our inferences of inversion histories to rigorously test selective models—particularly those that postulate that inversions achieve a selective advantage through the maintenance of co-adapted allele complexes. PMID:23284285

  4. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, is high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
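
    A minimal sketch of the matrix-free ingredient, under simplifying assumptions: a generic nonlinear forward model h(s), a low-rank factor Z of the prior covariance with K columns, and one-sided finite differences standing in for exact Jacobian-vector products. It is meant only to show why roughly K forward runs per iteration suffice, not to reproduce the published algorithm.

        import numpy as np

        def pcga_like_step(h, s, y, Z, sigma2=1e-4, delta=1e-6):
            """One Gauss-Newton-type update needing only K + 1 forward runs,
            where K = Z.shape[1]; the prior covariance is approximated as Q ~= Z Z^T."""
            h0 = h(s)
            # Directional derivatives J @ z_k by one-sided finite differences.
            JZ = np.column_stack([(h(s + delta * Z[:, k]) - h0) / delta
                                  for k in range(Z.shape[1])])
            r = y - h0
            A = JZ.T @ JZ + sigma2 * np.eye(Z.shape[1])   # small K x K system, not m x m
            beta = np.linalg.solve(A, JZ.T @ r)
            return s + Z @ beta

        # Toy example: mildly nonlinear forward map, m = 500 unknowns, n = 60 data.
        rng = np.random.default_rng(2)
        m, n, K = 500, 60, 10
        H = rng.standard_normal((n, m)) / np.sqrt(m)
        h = lambda s: H @ s + 0.05 * (H @ s) ** 2
        Z = rng.standard_normal((m, K)) / np.sqrt(K)      # stand-in low-rank prior factor
        s_true = Z @ rng.standard_normal(K)
        y = h(s_true) + 1e-3 * rng.standard_normal(n)

        s = np.zeros(m)
        for _ in range(5):
            s = pcga_like_step(h, s, y, Z)
        print(np.linalg.norm(h(s) - y))                   # data misfit after a few cheap iterations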

  5. Principal Component Geostatistical Approach for large-dimensional inverse problems.

    PubMed

    Kitanidis, P K; Lee, J

    2014-07-01

    The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, is high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best.

  6. Time-Lapse Acoustic Impedance Inversion in CO2 Sequestration Study (Weyburn Field, Canada)

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Morozov, I. B.

    2016-12-01

    Acoustic-impedance (AI) pseudo-logs are useful for characterising subtle variations of fluid content during seismic monitoring of reservoirs undergoing enhanced oil recovery and/or geologic CO2 sequestration. However, highly accurate AI images are required for time-lapse analysis, which may be difficult to achieve with conventional inversion approaches. In this study, two enhancements of time-lapse AI analysis are proposed. First, a well-known uncertainty of AI inversion is caused by the lack of low-frequency signal in reflection seismic data. To resolve this difficulty, we utilize an integrated AI inversion approach combining seismic data, acoustic well logs and seismic-processing velocities. The use of well logs helps stabilize the recursive AI inversion, and seismic-processing velocities are used to complement the low-frequency information in seismic records. To derive the low-frequency AI from seismic-processing velocity data, an empirical relation is determined by using the available acoustic logs. This method is simple and does not require subjective choices of parameters and regularization schemes as in the more sophisticated joint inversion methods. The second improvement to accurate time-lapse AI imaging consists in time-variant calibration of reflectivity. Calibration corrections consist of time shifts, amplitude corrections, spectral shaping and phase rotations. Following the calibration, average and differential reflection amplitudes are calculated, from which the average and differential AI are obtained. The approaches are applied to a time-lapse 3-D 3-C dataset from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High quality time-lapse AI volumes are obtained. Comparisons with traditional recursive and colored AI inversions (obtained without using seismic-processing velocities) show that the new method gives a better representation of spatial AI variations. Although only early stages of monitoring seismic data are available, time-lapse AI variations mapped within and near the reservoir zone suggest correlations with CO2 injection. By extending this procedure to elastic impedances, additional constraints on the variations of physical properties within the reservoir can be obtained.
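
    To make the combination of band-limited reflectivity and an external low-frequency trend concrete, here is a hedged sketch of the classical recursive impedance relation Z_{i+1} = Z_i (1 + r_i) / (1 - r_i), with a simple moving-average split standing in for the empirical velocity-to-impedance low-frequency model described above. The filter length and input arrays are illustrative only.

        import numpy as np

        def recursive_impedance(reflectivity, z0):
            """Band-limited acoustic impedance from a reflectivity series."""
            z = np.empty(len(reflectivity) + 1)
            z[0] = z0
            for i, r in enumerate(reflectivity):
                z[i + 1] = z[i] * (1.0 + r) / (1.0 - r)
            return z

        def merge_low_frequency(z_band_limited, z_trend, win=51):
            """Replace the (missing) low-frequency part of the recursive result with a
            smooth trend, e.g. one derived from seismic-processing velocities."""
            kernel = np.ones(win) / win
            low_part = np.convolve(z_band_limited, kernel, mode="same")
            return z_band_limited - low_part + np.convolve(z_trend, kernel, mode="same")

        # Illustrative inputs
        r = 0.02 * np.sin(np.linspace(0, 20 * np.pi, 500))   # stand-in reflectivity series
        z_rec = recursive_impedance(r, z0=6.0e6)             # impedance in kg m^-2 s^-1
        z_vel_trend = np.linspace(5.8e6, 7.2e6, 501)         # stand-in velocity-derived trend
        z_final = merge_low_frequency(z_rec, z_vel_trend)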

  7. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a gridwork of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
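
    For intuition about how the two search components can be combined (this is not a reconstruction of the authors' algorithm), the sketch below anneals over random model perturbations and periodically refines the current best model with a Nelder-Mead (downhill simplex) step via scipy; the objective and its dimensionality are placeholders for a real waveform misfit over slip amplitudes and rupture times.

        import numpy as np
        from scipy.optimize import minimize

        def hybrid_search(misfit, x0, n_iter=2000, step=0.1, t0=1.0, refine_every=250, seed=0):
            rng = np.random.default_rng(seed)
            x_best = x_cur = np.asarray(x0, dtype=float)
            f_best = f_cur = misfit(x_cur)
            for k in range(n_iter):
                temp = t0 * (1.0 - k / n_iter) + 1e-6            # linear cooling schedule
                x_try = x_cur + step * rng.standard_normal(x_cur.size)
                f_try = misfit(x_try)
                # Metropolis acceptance lets the search escape local minima.
                if f_try < f_cur or rng.random() < np.exp((f_cur - f_try) / temp):
                    x_cur, f_cur = x_try, f_try
                if f_cur < f_best:
                    x_best, f_best = x_cur, f_cur
                if (k + 1) % refine_every == 0:                  # downhill simplex refinement
                    res = minimize(misfit, x_best, method="Nelder-Mead")
                    if res.fun < f_best:
                        x_best, f_best = res.x, res.fun
                        x_cur, f_cur = res.x, res.fun
            return x_best, f_best

        # Toy misfit with many local minima (stands in for a waveform misfit).
        misfit = lambda x: np.sum(x ** 2) + 0.5 * np.sum(1.0 - np.cos(5.0 * x))
        x_opt, f_opt = hybrid_search(misfit, x0=np.full(6, 2.0))
        print(x_opt, f_opt)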

  8. Sustained hole inversion layer in a wide-bandgap metal-oxide semiconductor with enhanced tunnel current

    NASA Astrophysics Data System (ADS)

    Shoute, Gem; Afshar, Amir; Muneshwar, Triratna; Cadien, Kenneth; Barlage, Douglas

    2016-02-01

    Wide-bandgap, metal-oxide thin-film transistors have been limited to low-power, n-type electronic applications because of the unipolar nature of these devices. Variations from the n-type field-effect transistor architecture have not been widely investigated as a result of the lack of available p-type wide-bandgap inorganic semiconductors. Here, we present a wide-bandgap metal-oxide n-type semiconductor that is able to sustain a strong p-type inversion layer using a high-dielectric-constant barrier dielectric when sourced with a heterogeneous p-type material. A demonstration of the utility of the inversion layer was also investigated and utilized as the controlling element in a unique tunnelling junction transistor. The resulting electrical performance of this prototype device exhibited among the highest reported current, power and transconductance densities. Further utilization of the p-type inversion layer is critical to unlocking the previously unexplored capability of metal-oxide thin-film transistors in applications such as next-generation display switches, sensors, radio frequency circuits and power converters.

  9. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, like in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface-collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisitions, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  10. Non-cavitating propeller noise modeling and inversion

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Lee, Keunhwa; Seong, Woojae

    2014-12-01

    The marine propeller is the dominant exciter of the hull surface above it, causing high levels of noise and vibration in the ship structure. Recent successful developments have led to non-cavitating propeller designs, and thus the present focus is on the non-cavitating characteristics of the propeller such as hydrodynamic noise and its induced hull excitation. In this paper, an analytic source model of propeller non-cavitating noise, described by longitudinal quadrupoles and dipoles, is suggested based on the propeller hydrodynamics. To find the unknown source parameters, a multi-parameter inversion technique is adopted using the pressure data obtained from the model scale experiment and pressure field replicas calculated by the boundary element method. The inversion results show that the proposed source model is appropriate for modeling non-cavitating propeller noise. The result of this study can be utilized in the prediction of propeller non-cavitating noise and hull excitation at various stages in design and analysis.

  11. Two-dimensional inverse opal hydrogel for pH sensing.

    PubMed

    Xue, Fei; Meng, Zihui; Qi, Fenglian; Xue, Min; Wang, Fengyan; Chen, Wei; Yan, Zequn

    2014-12-07

    A novel hydrogel film with a highly ordered macropore monolayer on its surface was prepared by templated photo-polymerization of hydrogel monomers on a two-dimensional (2D) polystyrene colloidal array. The 2D inverse opal hydrogel has prominent advantages over traditional three-dimensional (3D) inverse opal hydrogels. First, the formation of the 2D array template through a self-assembly method is considerably faster and simpler. Second, the stable ordering structure of the 2D array template makes it easier to introduce the polymerization solution into the template. Third, a simple measurement, a Debye diffraction ring, is utilized to characterize the neighboring pore spacing of the 2D inverse opal hydrogel. Acrylic acid was copolymerized into the hydrogel; thus, the hydrogel responded to pH through volume change, which resulted from the formation of the Donnan potential. The 2D inverse opal hydrogel showed that the neighboring pore spacing increased by about 150 nm and the diffracted color red-shifted from blue to red as the pH increased from pH 2 to 7. In addition, the pH response kinetics and ionic strength effect of this 2D mesoporous polymer film were also investigated.

  12. A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.

    2013-04-15

    Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D_10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
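
    For reference, the generalized equivalent uniform dose is commonly defined as gEUD = (1/N Σ d_i^a)^(1/a). The short hedged sketch below computes it from a sampled dose distribution and shows the kind of one-sided penalty an inverse planner might minimize; the parameter values and dose samples are illustrative, not the study's optimization protocols.

        import numpy as np

        def geud(doses, a):
            """Generalized equivalent uniform dose of a sampled dose distribution.
            Large positive a emphasises hot spots (serial organs such as the urethra);
            a = 1 gives the mean dose; large negative a emphasises cold spots (targets)."""
            doses = np.asarray(doses, dtype=float)
            return np.mean(doses ** a) ** (1.0 / a)

        def geud_penalty(doses, a, limit, weight=1.0):
            """One-sided quadratic penalty term used inside a gEUD-based objective."""
            return weight * max(0.0, geud(doses, a) - limit) ** 2

        # Illustrative dose samples (Gy) for an organ at risk
        oar_doses = np.random.default_rng(3).normal(8.0, 1.5, size=2000).clip(min=0)
        print(geud(oar_doses, a=10), geud_penalty(oar_doses, a=10, limit=8.0))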

  13. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

    Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described as a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power-law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with the approach.
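
    The wavelength-dependent parameterization described above can be written compactly. The hedged sketch below builds the absorption model mu_a(lambda) = sum_k c_k * eps_k(lambda) and the power-law scattering model mu_s'(lambda) = a * lambda^(-b) that the Bayesian inversion would estimate; the spectra, wavelengths, and constants are made-up placeholders, not tissue data.

        import numpy as np

        wavelengths = np.array([700.0, 750.0, 800.0, 850.0])   # nm, illustrative

        # Stand-in chromophore extinction spectra; columns = chromophores (e.g. HbO2, Hb)
        epsilon = np.array([[0.29, 1.70],
                            [0.52, 1.40],
                            [0.82, 0.76],
                            [1.06, 0.69]])

        def absorption(concentrations):
            """mu_a(lambda) as a concentration-weighted sum of known spectra."""
            return epsilon @ np.asarray(concentrations)

        def reduced_scattering(a, b, ref=500.0):
            """Mie-type power law mu_s'(lambda) = a * (lambda / ref)^(-b)."""
            return a * (wavelengths / ref) ** (-b)

        mu_a = absorption([0.6, 0.4])           # illustrative concentrations
        mu_s = reduced_scattering(a=2.0, b=1.2)
        print(mu_a, mu_s)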

  14. An Adaptive Model of Student Performance Using Inverse Bayes

    ERIC Educational Resources Information Center

    Lang, Charles

    2014-01-01

    This article proposes a coherent framework for the use of Inverse Bayesian estimation to summarize and make predictions about student behaviour in adaptive educational settings. The Inverse Bayes Filter utilizes Bayes theorem to estimate the relative impact of contextual factors and internal student factors on student performance using time series…

  15. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.

  16. Imaging performance of a hybrid x-ray computed tomography-fluorescence molecular tomography system using priors.

    PubMed

    Ale, Angelique; Schulz, Ralf B; Sarantopoulos, Athanasios; Ntziachristos, Vasilis

    2010-05-01

    We study the performance of two newly introduced and previously suggested methods that incorporate priors into inversion schemes associated with data from a recently developed hybrid x-ray computed tomography and fluorescence molecular tomography system, the latter based on CCD camera photon detection. The unique data set studied contains accurately registered, highly spatially sampled photon fields propagating through tissue along 360° projections. Approaches that incorporate structural prior information were included in the inverse problem by adding a penalty term to the minimization function utilized for image reconstructions. Results were compared as to their performance with simulated and experimental data from a lung inflammation animal model and against the inversions achieved when not using priors. The importance of using priors over stand-alone inversions is also showcased with high spatial sampling simulated and experimental data. The approach with optimal performance in resolving fluorescence biodistribution in small animals is also discussed. Inclusion of prior information from x-ray CT data in the reconstruction of the fluorescence biodistribution leads to improved agreement between the reconstruction and validation images for both simulated and experimental data.

  17. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.

  18. Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization

    PubMed Central

    Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun

    2015-01-01

    The tissue inclusion parameter estimation method is proposed to measure the stiffness as well as geometric parameters. The estimation is performed based on the tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data based on the mechanical properties of tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that will be used to extract the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize the artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated by a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed estimation method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. The obtained results prove that the proposed method has potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306

  19. Sustained hole inversion layer in a wide-bandgap metal-oxide semiconductor with enhanced tunnel current

    PubMed Central

    Shoute, Gem; Afshar, Amir; Muneshwar, Triratna; Cadien, Kenneth; Barlage, Douglas

    2016-01-01

    Wide-bandgap, metal-oxide thin-film transistors have been limited to low-power, n-type electronic applications because of the unipolar nature of these devices. Variations from the n-type field-effect transistor architecture have not been widely investigated as a result of the lack of available p-type wide-bandgap inorganic semiconductors. Here, we present a wide-bandgap metal-oxide n-type semiconductor that is able to sustain a strong p-type inversion layer using a high-dielectric-constant barrier dielectric when sourced with a heterogeneous p-type material. A demonstration of the utility of the inversion layer was also investigated and utilized as the controlling element in a unique tunnelling junction transistor. The resulting electrical performance of this prototype device exhibited among the highest reported current, power and transconductance densities. Further utilization of the p-type inversion layer is critical to unlocking the previously unexplored capability of metal-oxide thin-film transistors in applications such as next-generation display switches, sensors, radio frequency circuits and power converters. PMID:26842997

  20. Model based approach to UXO imaging using the time domain electromagnetic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavely, E.M.

    1999-04-01

    Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model-based imaging capability, i.e., the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.

  1. "Utilizing" signal detection theory.

    PubMed

    Lynn, Spencer K; Barrett, Lisa Feldman

    2014-09-01

    What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. © The Author(s) 2014.

  2. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  3. An Inverse Interpolation Method Utilizing In-Flight Strain Measurements for Determining Loads and Structural Response of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Shkarayev, S.; Krashantisa, R.; Tessler, A.

    2004-01-01

    An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
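
    A hedged, toy-scale sketch of the least-squares step at the core of such an approach: assemble a strain-influence matrix from unit-load "direct" analyses, then recover the load-approximation coefficients from measured surface strains. The matrices here are random stand-ins for finite element results, not the authors' wing-box model.

        import numpy as np

        def recover_load_coefficients(strain_influence, measured_strains):
            """Least-squares fit of load-parameterization coefficients c such that
            strain_influence @ c best reproduces the measured surface strains."""
            c, *_ = np.linalg.lstsq(strain_influence, measured_strains, rcond=None)
            return c

        rng = np.random.default_rng(4)
        n_gauges, n_load_terms = 40, 6
        B = rng.standard_normal((n_gauges, n_load_terms))    # stand-in for FE-derived influences
        c_true = np.array([1.0, -0.4, 0.2, 0.0, 0.05, -0.1])
        strains = B @ c_true + 1e-3 * rng.standard_normal(n_gauges)  # simulated gauge readings

        c_est = recover_load_coefficients(B, strains)
        print(np.round(c_est, 3))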

  4. Quantitative phase microscopy via optimized inversion of the phase optical transfer function.

    PubMed

    Jenkins, Micah H; Gaylord, Thomas K

    2015-10-01

    Although the field of quantitative phase imaging (QPI) has wide-ranging biomedical applicability, many QPI methods are not well-suited for such applications due to their reliance on coherent illumination and specialized hardware. By contrast, methods utilizing partially coherent illumination have the potential to promote the widespread adoption of QPI due to their compatibility with microscopy, which is ubiquitous in the biomedical community. Described herein is a new defocus-based reconstruction method that utilizes a small number of efficiently sampled micrographs to optimally invert the partially coherent phase optical transfer function under assumptions of weak absorption and slowly varying phase. Simulation results are provided that compare the performance of this method with similar algorithms and demonstrate compatibility with large phase objects. The accuracy of the method is validated experimentally using a microlens array as a test phase object. Lastly, time-lapse images of live adherent cells are obtained with an off-the-shelf microscope, thus demonstrating the new method's potential for extending QPI capability widely in the biomedical community.
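
    To show the flavor of a defocus-based phase reconstruction, here is a hedged Wiener-style sketch under the weak-object assumption: a small set of defocused intensities is combined in the Fourier domain by dividing through the summed transfer functions with a regularizer. The simple sinusoidal transfer-function model is an illustrative stand-in, not the paper's optimized partially coherent phase optical transfer function.

        import numpy as np

        def phase_from_defocus_stack(images, transfer_funcs, eps=1e-3):
            """Regularized (Wiener-like) inversion of a linearized phase transfer model,
            I_j(f) ~ H_j(f) * Phi(f) for each defocus j, assuming weak absorption/phase."""
            num = np.zeros_like(transfer_funcs[0], dtype=complex)
            den = np.zeros_like(transfer_funcs[0], dtype=float)
            for img, H in zip(images, transfer_funcs):
                spec = np.fft.fft2(img - img.mean())       # remove the DC (background) term
                num += np.conj(H) * spec
                den += np.abs(H) ** 2
            return np.real(np.fft.ifft2(num / (den + eps)))

        # Illustrative 64x64 example with two defocus planes and toy transfer functions.
        n = 64
        fx = np.fft.fftfreq(n)
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        H1 = np.sin(np.pi * 5.0 * (FX ** 2 + FY ** 2))     # toy transfer function, defocus +z
        H2 = np.sin(np.pi * 9.0 * (FX ** 2 + FY ** 2))     # toy transfer function, defocus -z

        phi_true = np.zeros((n, n))
        phi_true[24:40, 24:40] = 1.0                        # synthetic phase object
        images = [np.real(np.fft.ifft2(H * np.fft.fft2(phi_true))) + 1.0 for H in (H1, H2)]
        phi_est = phase_from_defocus_stack(images, [H1, H2])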

  5. A comparison of techniques for inversion of radio-ray phase data in presence of ray bending

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Grossi, M. D.

    1972-01-01

    Derivations are presented of the straight-line Abel transform and the seismological Herglotz-Wiechert transform (which takes ray bending into account) that are used in the reconstruction of refractivity profiles from radio-wave phase data. Profile inversions utilizing these approaches, performed in computer-simulated experiments, are compared for cases of positive, zero, and negative ray bending. For thin atmospheres and ionospheres, such as the Martian atmosphere and ionosphere, radio wave signals are shown to be inverted accurately with both methods. For dense media, such as the solar corona or the lower Venus atmosphere, the refractivity profiles recovered by the seismological Herglotz-Wiechert transform provide a significant improvement over those from the straight-line Abel transform.

  6. Wavelength modulation spectroscopy--digital detection of gas absorption harmonics based on Fourier analysis.

    PubMed

    Mei, Liang; Svanberg, Sune

    2015-03-20

    This work presents a detailed study of the theoretical aspects of the Fourier analysis method, which has been utilized for gas absorption harmonic detection in wavelength modulation spectroscopy (WMS). The lock-in detection of the harmonic signal is accomplished by studying the phase term of the inverse Fourier transform of the Fourier spectrum that corresponds to the harmonic signal. The mathematics and the corresponding simulation results are given for each procedure when applying the Fourier analysis method. The present work provides a detailed view of the WMS technique when applying the Fourier analysis method.
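
    A hedged digital sketch of this kind of Fourier-domain lock-in: transform the detector record, retain only the spectral band around the n-th multiple of the modulation frequency, and inverse-transform to recover the harmonic waveform. The simulated detector signal, sampling rate, and bandwidth below are purely illustrative.

        import numpy as np

        def extract_harmonic(signal, fs, f_mod, n_harm, half_width=5.0):
            """Isolate the n-th harmonic of the modulation frequency by masking the
            Fourier spectrum and inverse transforming (digital lock-in detection)."""
            spec = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            mask = np.abs(freqs - n_harm * f_mod) < half_width
            return np.fft.irfft(np.where(mask, spec, 0.0), n=len(signal))

        # Simulated WMS-like detector record: strong 1f residual plus a weak 2f absorption term.
        fs, f_mod = 50_000.0, 1_000.0
        t = np.arange(0, 0.05, 1.0 / fs)
        sig = 1.0 * np.sin(2 * np.pi * f_mod * t) + 0.05 * np.cos(2 * 2 * np.pi * f_mod * t)
        sig += 0.01 * np.random.default_rng(5).standard_normal(t.size)
        h2 = extract_harmonic(sig, fs, f_mod, n_harm=2)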

  7. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allow for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  8. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allow for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  9. Variable-permittivity linear inverse problem for the H(sub z)-polarized case

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.; Chew, W. C.

    1993-01-01

    The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.

  10. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurements is the task of estimating the location of the source micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, the conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only entails manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
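
    The convolution trick can be illustrated with a hedged one-trace sketch: convolving observed data with a modeled reference trace and modeled data with an observed reference trace gives two series whose difference no longer depends on the unknown ignition time or source wavelet. The traces below are synthetic stand-ins, not the method's actual wavefields.

        import numpy as np

        def source_independent_misfit(d_obs, d_syn, ref_obs, ref_syn):
            """Misfit of convolved traces: conv(d_obs, ref_syn) vs conv(d_syn, ref_obs).
            The unknown source function cancels because it appears on both sides."""
            a = np.convolve(d_obs, ref_syn)
            b = np.convolve(d_syn, ref_obs)
            return 0.5 * np.sum((a - b) ** 2)

        # Synthetic check: identical Green's functions, different (unknown) source wavelets.
        rng = np.random.default_rng(6)
        g_main, g_ref = rng.standard_normal(100), rng.standard_normal(100)   # stand-in Green's functions
        w_true, w_model = rng.standard_normal(20), rng.standard_normal(20)   # different wavelets
        d_obs, ref_obs = np.convolve(g_main, w_true), np.convolve(g_ref, w_true)
        d_syn, ref_syn = np.convolve(g_main, w_model), np.convolve(g_ref, w_model)
        print(source_independent_misfit(d_obs, d_syn, ref_obs, ref_syn))     # ~0 up to round-off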

  11. Towards an Effective Theory of Reformulation. Part 1; Semantics

    NASA Technical Reports Server (NTRS)

    Benjamin, D. Paul

    1992-01-01

    This paper describes an investigation into the structure of representations of sets of actions, utilizing semigroup theory. The goals of this project are twofold: to shed light on the relationship between tasks and representations, leading to a classification of tasks according to the representations they admit; and to develop techniques for automatically transforming representations so as to improve problem-solving performance. A method is demonstrated for automatically generating serial algorithms for representations whose actions form a finite group. This method is then extended to representations whose actions form a finite inverse semigroup.

  12. Optimal utilization of total elastic scattering cross section data for the determination of interatomic potentials

    NASA Technical Reports Server (NTRS)

    Bernstein, R. B.; Labudde, R. A.

    1972-01-01

    The problem of inversion is considered in relation to absolute total cross sections Q(v) for atom-atom collisions and their velocity dependence, and the glory undulations and the transition to high velocity behavior. There is a limit to the amount of information available from Q(v) even when observations of good accuracy (e.g., ±0.25%) are in hand over an extended energy range (from thermal energies upward by a factor of greater than 1000 in relative kinetic energy). Methods were developed for data utilization, which take full advantage of the accuracy of the experimental Q(v) measurements.

  13. Estimation of surface area concentration of workplace incidental nanoparticles based on number and mass concentrations

    NASA Astrophysics Data System (ADS)

    Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.

    2011-10-01

    Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
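
    The simpler inversion described last can be sketched directly with the Hatch-Choate relations for a single lognormal mode: given a number concentration, a PM1-type mass concentration, an assumed geometric standard deviation and an assumed particle density, one solves for the count median diameter and then for the surface area concentration. All numbers below are illustrative assumptions, not measurements from the study.

        import numpy as np

        def surface_area_from_n_and_mass(number_conc, mass_conc, gsd=1.8, density=1500.0):
            """Hatch-Choate estimate of surface area concentration (m^2 per m^3 of air)
            from number (particles/m^3) and mass (kg/m^3) concentrations, assuming one
            lognormal mode with known geometric standard deviation and particle density."""
            ln2 = np.log(gsd) ** 2
            # Mass: M = N * (pi/6) * rho * CMD^3 * exp(4.5 ln^2 GSD)  ->  solve for CMD
            cmd = (6.0 * mass_conc
                   / (np.pi * density * number_conc * np.exp(4.5 * ln2))) ** (1.0 / 3.0)
            # Surface: S = N * pi * CMD^2 * exp(2 ln^2 GSD)
            surface = number_conc * np.pi * cmd ** 2 * np.exp(2.0 * ln2)
            return surface, cmd

        # Illustrative workplace values: 5e10 particles/m^3 and 50 ug/m^3 of PM1 mass.
        surface, cmd = surface_area_from_n_and_mass(5e10, 50e-9)
        print(cmd * 1e9, "nm count median diameter;", surface * 1e6, "um^2/cm^3")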

  14. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).

  15. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
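
    A hedged miniature of the spatial-domain formulation (not the TNT algorithm itself): build a forward matrix mapping non-negative dipole moments to the measured field component and solve with an off-the-shelf active-set non-negative least squares routine. The 1-D geometry and vertical-dipole kernel are simplifications for illustration only.

        import numpy as np
        from scipy.optimize import nnls

        MU0 = 4e-7 * np.pi

        def bz_kernel(x_obs, x_src, height):
            """Vertical field of a unit vertical dipole, sensor at constant height (1-D profile)."""
            dx = x_obs[:, None] - x_src[None, :]
            r2 = dx ** 2 + height ** 2
            return (MU0 / (4 * np.pi)) * (2 * height ** 2 - dx ** 2) / r2 ** 2.5

        x_src = np.linspace(0.0, 1e-3, 80)          # source grid (m)
        x_obs = np.linspace(-2e-4, 1.2e-3, 120)     # scan positions (m)
        A = bz_kernel(x_obs, x_src, height=1e-4)    # forward (sensitivity) matrix

        m_true = np.zeros(80)
        m_true[30:40] = 1e-12                        # compact magnetized patch (A m^2)
        noise = 1e-13 * np.abs(A).max() * np.random.default_rng(7).standard_normal(120)
        b = A @ m_true + noise

        m_est, residual = nnls(A, b)                 # non-negative least squares in the spatial domain
        print(residual)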

  16. Stability and Physical Accuracy Analysis of the Numerical Solutions to Wigner-Poisson Modeling of Resonant Tunneling Diodes

    DTIC Science & Technology

    2013-03-22

    discrete Wigner function is periodic in momentum space. The periodicity follows from the Fourier transform of the density matrix. The inverse ... resonant-tunneling diode. The Green function method has been one of the alternatives. Another alternative was to utilize the Wigner function. The Wigner ... function approach to the simulation of a resonant-tunneling diode offers many advantages. In the limit of the classical physics the Wigner equation

  17. Measuring the electrical properties of soil using a calibrated ground-coupled GPR system

    USGS Publications Warehouse

    Oden, C.P.; Olhoeft, G.R.; Wright, D.L.; Powers, M.H.

    2008-01-01

    Traditional methods for estimating vadose zone soil properties using ground penetrating radar (GPR) include measuring travel time, fitting diffraction hyperbolae, and other methods exploiting geometry. Additional processing techniques for estimating soil properties are possible with properly calibrated GPR systems. Such calibration using ground-coupled antennas must account for the effects of the shallow soil on the antenna's response, because changing soil properties result in a changing antenna response. A prototype GPR system using ground-coupled antennas was calibrated using laboratory measurements and numerical simulations of the GPR components. Two methods for estimating subsurface properties that utilize the calibrated response were developed. First, a new nonlinear inversion algorithm to estimate shallow soil properties under ground-coupled antennas was evaluated. Tests with synthetic data showed that the inversion algorithm is well behaved across the allowed range of soil properties. A preliminary field test gave encouraging results, with estimated soil property uncertainties of ±1.9 for the relative dielectric permittivity and ±4.4 mS/m for the electrical conductivity. Next, a deconvolution method for estimating the properties of subsurface reflectors with known shapes (e.g., pipes or planar interfaces) was developed. This method uses scattering matrices to account for the response of subsurface reflectors. The deconvolution method was evaluated for use with noisy data using synthetic data. Results indicate that the deconvolution method requires reflected waves with a signal/noise ratio of about 10:1 or greater. When applied to field data with a signal/noise ratio of 2:1, the method was able to estimate the reflection coefficient and relative permittivity, but the large uncertainty in this estimate precluded inversion for conductivity. © Soil Science Society of America.

  18. Shear Wave Splitting Inversion in a Complex Crust

    NASA Astrophysics Data System (ADS)

    Lucas, A.

    2015-12-01

    Shear wave splitting (SWS) inversion presents a method whereby the upper crust can be interrogated for fracture density. SWS occurs when a shear wave traverses an area of anisotropy and splits in two, with each wave experiencing a different velocity, resulting in an observable separation in arrival times. A SWS observation consists of the first-arrival polarization direction and the time delay. Given the large amount of data common in SWS studies, manual inspection for polarization and time delay is considered prohibitively time intensive. The automated techniques in use can produce many observations falsely interpreted as SWS, thus introducing error into the interpretation. The technique often used for removing these false observations is to manually inspect all SWS observations defined as high quality by the automated routine and remove false identifications. We investigate the nature of events falsely identified compared to those correctly identified. Once this identification is complete, we conduct an inversion for crack density from SWS time delay. The current body of work on linear SWS inversion utilizes an equation that defines the time delay between arriving shear waves with respect to fracture density. This equation assumes that no fluid flow occurs as a result of the passing shear wave, a phenomenon called squirt flow. We show that this assumption is not applicable in all geological situations, and when it does not hold, its use in an inversion produces a result that is negatively affected. This is shown to be the case for a test set of 6894 SWS observations gathered in a small area at Puna geothermal field, Hawaii. To rectify this situation, a series of new time delay formulae, applicable to linear inversion, are derived from velocity equations presented in the literature. The new formulae use a 'fluid influence parameter' which indicates the degree to which squirt flow is influencing the SWS. It is found that accounting for squirt flow fits the data better and is more widely applicable. The fluid influence factor that best describes the data can be identified prior to solving the inversion. Implementing this formula in a linear inversion yields a significantly improved fit to the time delay observations compared with current methods.
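
    A hedged sketch of the type of linear inversion referred to above: observed delay times are modeled as path-length-weighted sums of per-cell crack densities, dt ≈ L c, and solved by damped least squares. The matrix L, the damping value, and the direct proportionality are assumptions for illustration; the squirt-flow correction discussed in the abstract is not included.

      import numpy as np

      rng = np.random.default_rng(1)
      n_obs, n_cells = 300, 50

      # L[i, j]: path length of ray i in cell j (placeholder geometry).
      L = rng.uniform(0.0, 1.0, size=(n_obs, n_cells))
      c_true = rng.uniform(0.0, 0.05, size=n_cells)      # crack density per cell
      dt_obs = L @ c_true + 0.001 * rng.normal(size=n_obs)

      # Damped least squares: minimize ||L c - dt||^2 + alpha^2 ||c||^2.
      alpha = 0.1
      A = np.vstack([L, alpha * np.eye(n_cells)])
      d = np.concatenate([dt_obs, np.zeros(n_cells)])
      c_est, *_ = np.linalg.lstsq(A, d, rcond=None)
      print("max crack-density error:", np.max(np.abs(c_est - c_true)))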

  19. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
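
    A minimal illustration of the Hessian-vector product that techniques like RBackprop provide without forming the Hessian; here it is approximated with a finite difference of gradients on a toy quadratic loss. The loss, the function names, and the finite-difference approximation are illustrative assumptions, not the paper's exact algorithm.

      import numpy as np

      def loss_grad(w, A, b):
          # Gradient of the toy loss 0.5 * ||A w - b||^2.
          return A.T @ (A @ w - b)

      def hessian_vector_product(w, v, A, b, eps=1e-6):
          # Finite-difference approximation: H v ~ (g(w + eps v) - g(w)) / eps.
          return (loss_grad(w + eps * v, A, b) - loss_grad(w, A, b)) / eps

      rng = np.random.default_rng(0)
      A = rng.normal(size=(20, 5))
      b = rng.normal(size=20)
      w = rng.normal(size=5)
      v = rng.normal(size=5)

      hv = hessian_vector_product(w, v, A, b)
      print(np.allclose(hv, A.T @ A @ v, atol=1e-3))   # exact Hessian is A^T A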

  20. Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah

    2017-04-01

    Inversion of surface wave dispersion curves, with its highly nonlinear nature, presents some difficulties for traditional linear inverse methods due to the need for, and strong dependence on, an initial model, the possibility of trapping in local minima, and the evaluation of partial derivatives. There are modern global optimization methods to overcome these difficulties in surface wave analysis, such as the genetic algorithm (GA) and particle swarm optimization (PSO). GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error and optimum computation cost, which are important for modelling studies. Even though the PSO and GA processes look similar, the cross-over operation of GA is not used in PSO, and in GA the mutation operation is a stochastic process for changing the genes within chromosomes. Unlike GA, the particles in the PSO algorithm change their positions with logical velocities according to the particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate S wave velocities and thicknesses of a layered earth model by using the Rayleigh wave dispersion curve, compared these results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies considering its rapid convergence, low misfit error and computation cost.
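
    A compact sketch of the PSO update rule referred to above, applied to a generic misfit function rather than an actual dispersion-curve forward model; the inertia and acceleration coefficients are common textbook defaults and are assumptions here.

      import numpy as np

      def misfit(x):
          # Placeholder objective; a real application would compare a modeled
          # Rayleigh-wave dispersion curve against the observed one.
          return np.sum((x - 1.5) ** 2, axis=-1)

      rng = np.random.default_rng(0)
      n_particles, n_dim, n_iter = 30, 4, 200
      w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights

      x = rng.uniform(0.0, 3.0, size=(n_particles, n_dim))   # positions (model parameters)
      v = np.zeros_like(x)                                    # velocities
      pbest, pbest_val = x.copy(), misfit(x)
      gbest = pbest[np.argmin(pbest_val)]

      for _ in range(n_iter):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          val = misfit(x)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], val[improved]
          gbest = pbest[np.argmin(pbest_val)]

      print("best model:", gbest)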

  1. A hydraulic tomography approach coupling travel time inversion with steady shape analysis based on aquifer analogue study in coarsely clastic fluvial glacial deposit

    NASA Astrophysics Data System (ADS)

    Hu, R.; Brauchler, R.; Herold, M.; Bayer, P.; Sauter, M.

    2009-04-01

    Rarely is it possible to draw a significant conclusion about the geometry and the properties of subsurface geological structures using the information typically obtained from boreholes, since soil exploration is only representative of the position where the soil sample is taken. Conventional aquifer investigation methods like pumping tests can provide hydraulic properties of a larger area; however, they yield only integral information. This information is insufficient to develop groundwater models, especially contaminant transport models, which require information about the spatial distribution of the hydraulic properties of the subsurface. Hydraulic tomography is an innovative method which has the potential to spatially resolve three-dimensional structures of natural aquifer bodies. The method employs hydraulic short-term tests performed between two or more wells, whereby the pumped intervals (sources) and the observation points (receivers) are separated by double packer systems. In order to optimize the computationally intensive tomographic inversion of transient hydraulic data, we couple two inversion approaches: (a) hydraulic travel time inversion and (b) steady shape inversion. (a) Hydraulic travel time inversion is based on the solution of the travel time integral, which describes the relationship between the travel time of the maximum signal variation of a transient hydraulic signal and the diffusivity between source and receiver. The travel time inversion is computationally extremely effective and robust; however, it is limited to the determination of diffusivity. In order to overcome this shortcoming we use the estimated diffusivity distribution as the starting model for the steady shape inversion, with the goal of separating the estimated diffusivity distribution into its components, hydraulic conductivity and specific storage. (b) The steady shape inversion utilizes the fact that at steady shape conditions, drawdown varies with time but the hydraulic gradient does not. By this trick, transient data can be analyzed with the computational efficiency of a steady state model, which proceeds hundreds of times faster than transient models. Finally, a specific storage distribution can be calculated from the diffusivity and hydraulic conductivity reconstructions derived from travel time and steady shape inversion. The groundwork of this study is the aquifer-analogue study of Bayer (1999), in which six parallel profiles of a natural sedimentary body with a size of 16 m x 10 m x 7 m were mapped in high resolution with respect to structural and hydraulic parameters. Based on these results and using geostatistical interpolation methods, Maji (2005) designed a three-dimensional hydraulic model with a resolution of 5 cm x 5 cm x 5 cm. This hydraulic model was used to simulate a large number of short-term pumping tests in a tomographical array. The high-resolution parameter reconstructions gained from the inversion of simulated pumping test data demonstrate that the proposed inversion scheme allows reconstructing the individual architectural elements and their hydraulic properties with a higher resolution compared to conventional hydraulic and geological investigation methods. Bayer P (1999) Aquifer-Analog-Studium in grobklastischen braided river Ablagerungen: Sedimentäre/hydrogeologische Wandkartierung und Kalibrierung von Georadarmessungen, Diplomkartierung am Lehrstuhl für Angewandte Geologie, Universität Tübingen, 25 pp. Maji, R. (2005) Conditional Stochastic Modelling of DNAPL Migration and Dissolution in a High-resolution Aquifer Analog, Ph.D. thesis at the University of Waterloo, 187 pp.

  2. Inverse design engineering of all-silicon polarization beam splitters

    NASA Astrophysics Data System (ADS)

    Frandsen, Lars H.; Sigmund, Ole

    2016-03-01

    Utilizing the inverse design engineering method of topology optimization, we have realized high-performing all-silicon ultra-compact polarization beam splitters. We show that the device footprint of the polarization beam splitter can be as compact as ~2 μm² while performing experimentally with a polarization splitting loss lower than ~0.82 dB and an extinction ratio larger than ~15 dB in the C-band. We investigate the device performance as a function of device length and find a lower bound on the length above which the performance increases only incrementally. Imposing a minimum feature size constraint in the optimization is shown to affect the performance negatively and reveals the necessity for light to scatter on a sub-wavelength scale to obtain functionalities in compact photonic devices.

  3. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover well the velocity structure of the upper mantle in the depth range between 70-200 km. For a successful inversion, we have to constrain the crustal structure and assess the impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved, and thus their interpretation has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.

  4. A robust bi-orthogonal/dynamically-orthogonal method using the covariance pseudo-inverse with application to stochastic flow problems

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em

    2017-09-01

    We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where for large stochastic forcing thirteen DO/BO modes are active.
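
    A short sketch of the covariance pseudo-inversion idea, using a truncated SVD with a relative singular-value cutoff; the threshold value and the rank-deficient test matrix are illustrative assumptions.

      import numpy as np

      def pseudo_inverse(C, rel_tol=1e-8):
          # Invert a (possibly singular) covariance matrix by truncating
          # singular values below rel_tol * largest singular value.
          U, s, Vt = np.linalg.svd(C, hermitian=True)
          keep = s > rel_tol * s[0]
          s_inv = np.zeros_like(s)
          s_inv[keep] = 1.0 / s[keep]
          return (Vt.T * s_inv) @ U.T

      # Rank-deficient covariance built from too few samples.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(3, 6))                 # 3 samples, 6 variables
      C = np.cov(X, rowvar=False)
      C_pinv = pseudo_inverse(C)
      print(np.allclose(C @ C_pinv @ C, C, atol=1e-8))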

  5. Single-shot coherent diffraction imaging of microbunched relativistic electron beams for free-electron laser applications.

    PubMed

    Marinelli, A; Dunning, M; Weathersby, S; Hemsing, E; Xiang, D; Andonian, G; O'Shea, F; Miao, Jianwei; Hast, C; Rosenzweig, J B

    2013-03-01

    With the advent of coherent x rays provided by the x-ray free-electron laser (FEL), strong interest has been kindled in sophisticated diffraction imaging techniques. In this Letter, we exploit such techniques for the diagnosis of the density distribution of the intense electron beams typically utilized in an x-ray FEL itself. We have implemented this method by analyzing the far-field coherent transition radiation emitted by an inverse-FEL microbunched electron beam. This analysis utilizes an oversampling phase retrieval method on the transition radiation angular spectrum to reconstruct the transverse spatial distribution of the electron beam. This application of diffraction imaging represents a significant advance in electron beam physics, having critical applications to the diagnosis of high-brightness beams, as well as the collective microbunching instabilities afflicting these systems.

  6. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite system of equations arising from the mixed finite element method are investigated in this paper with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented. These are: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  7. A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering

    NASA Astrophysics Data System (ADS)

    Griesmaier, Roland; Schmiedecke, Christian

    2017-03-01

    We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far field data-sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to wavelength we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and the positions of an ensemble of scatterers from the given measurements. Furthermore we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potentials and limitations of this approach.
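
    A generic single-frequency MUSIC sketch in the spirit of the subspace idea described above, using a far-field steering model for point sources on a line array; the array geometry, snapshot count, and noise level are illustrative assumptions, and the multifrequency and sparse-receiver aspects of the paper are not reproduced.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(0)
      n_sensors, n_sources, n_snapshots = 8, 2, 200
      positions = np.arange(n_sensors)            # line array, half-wavelength spacing
      true_angles = np.deg2rad([20.0, 60.0])

      def steering(theta):
          # Plane-wave steering vector for the assumed array geometry.
          return np.exp(1j * np.pi * positions * np.cos(theta))

      A = np.stack([steering(t) for t in true_angles], axis=1)
      S = rng.normal(size=(n_sources, n_snapshots)) + 1j * rng.normal(size=(n_sources, n_snapshots))
      noise = 0.1 * (rng.normal(size=(n_sensors, n_snapshots))
                     + 1j * rng.normal(size=(n_sensors, n_snapshots)))
      X = A @ S + noise

      # Sample covariance and its noise subspace (smallest eigenvalues).
      R = X @ X.conj().T / n_snapshots
      eigvals, eigvecs = np.linalg.eigh(R)
      En = eigvecs[:, : n_sensors - n_sources]

      # MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En.
      grid = np.deg2rad(np.linspace(1.0, 179.0, 500))
      spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
      peaks, _ = find_peaks(spectrum)
      best = peaks[np.argsort(spectrum[peaks])[-n_sources:]]
      print("estimated angles (deg):", np.sort(np.rad2deg(grid[best])))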

  8. Using an SLR inversion to measure the mass balance of Greenland before and during GRACE

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer

    2016-04-01

    The GRACE mission has done an admirable job of measuring large-scale mass changes over Greenland since its launch in 2002. Before that time, however, measurements of large-scale ice mass balance were few and far between, leading to a lack of baseline knowledge. High-quality Satellite Laser Ranging (SLR) data existed a decade earlier, but its spatial resolution is normally too low to be used for this purpose. I demonstrate that a least squares inversion technique can reconstitute the SLR data and use it to measure ice loss over Greenland. To do so, I first simulate the problem by degrading today's GRACE data to a level comparable with SLR, then demonstrate that the inversion can re-localize Greenland's contribution to the low-resolution signal, giving an accurate time series of mass change over all of Greenland which compares well with the full-resolution GRACE estimates. I then utilize that method on the actual SLR data, resulting in an independent 1994-2014 time series of mass change over Greenland. I find favorable agreement between the pure-SLR inverted results and the 2012 Ice-sheet Mass Balance Inter-comparison Exercise (IMBIE) results, which are largely based on the "input-output" modeling method before GRACE's launch.

  9. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    NASA Astrophysics Data System (ADS)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
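
    A bare-bones sketch of the HVSR computation underlying the single-station estimates mentioned above, assuming three-component noise records and a simple smoothed-free amplitude-spectrum ratio; windowing, instrument response, and the Konno-Ohmachi smoothing typically used in practice are omitted, and the white-noise input here carries no real resonance peak.

      import numpy as np

      def hvsr(north, east, vertical, dt):
          # Amplitude spectra of the three components.
          freqs = np.fft.rfftfreq(len(vertical), dt)
          n_spec = np.abs(np.fft.rfft(north))
          e_spec = np.abs(np.fft.rfft(east))
          v_spec = np.abs(np.fft.rfft(vertical))
          # Combine horizontals (quadratic mean) and take the H/V ratio.
          h_spec = np.sqrt(0.5 * (n_spec ** 2 + e_spec ** 2))
          return freqs, h_spec / v_spec

      rng = np.random.default_rng(0)
      dt, n = 0.01, 2 ** 14
      n_rec, e_rec, v_rec = (rng.normal(size=n) for _ in range(3))
      freqs, ratio = hvsr(n_rec, e_rec, v_rec, dt)
      f0 = freqs[1:][np.argmax(ratio[1:])]        # skip the zero-frequency bin
      print("peak frequency (Hz):", f0)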

  10. Required Accuracy of Structural Constraints in the Inversion of Electrical Resistivity Data for Improved Water Content Estimation

    NASA Astrophysics Data System (ADS)

    Heinze, T.; Budler, J.; Weigand, M.; Kemna, A.

    2017-12-01

    Water content distribution in the ground is essential for hazard analysis during monitoring of landslide-prone hills. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil physical relationships between bulk electrical resistivity and water content. However, more dominant electrical contrasts due to lithological structures often overprint these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change smoothly in space. While this applies in many scenarios, sharp lithological layers with strongly divergent hydrological parameters, as often found in landslide-prone hillslopes, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach for improving water content estimation in landslide-prone hills by including a priori information about lithological layers. The smoothness constraint is reduced along layer boundaries identified using seismic data. This approach significantly improves water content estimates, because in landslide-prone hills a layer of rather high hydraulic conductivity is often underlain by a hydraulic barrier such as clay-rich soil, causing higher pore pressures. One saturated layer and one almost drained layer typically also result in a sharp contrast in electrical resistivity, assuming that the surface conductivity of the soil does not change to a similar degree. Using synthetic data, we study the influence of uncertainties in the a priori information on the inverted resistivity and the estimated water content distribution. We find a similar behavior over a broad range of models and depths. Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally constrained inversion approach.
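
    A small sketch of the constraint-weighting idea for a 1-D model: a first-difference smoothness operator is built, and its weight is reduced across a boundary known from seismic a priori information. The weight value, model size, and log-resistivity values are illustrative assumptions, not the authors' implementation.

      import numpy as np

      n_cells = 20
      boundary_index = 9          # interface location from seismic a priori information
      reduced_weight = 0.05       # relax smoothness across the known boundary

      # First-difference smoothness operator between vertically adjacent cells.
      W = np.zeros((n_cells - 1, n_cells))
      for i in range(n_cells - 1):
          weight = reduced_weight if i == boundary_index else 1.0
          W[i, i], W[i, i + 1] = -weight, weight

      # The regularization term ||W m||^2 penalizes roughness everywhere except
      # across the seismically identified layer boundary.
      m = np.concatenate([np.full(boundary_index + 1, 2.0),                 # log-resistivity, layer 1
                          np.full(n_cells - boundary_index - 1, 3.5)])      # layer 2
      print("roughness penalty:", np.sum((W @ m) ** 2))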

  11. Investigation of inversion polymorphisms in the human genome using principal components analysis.

    PubMed

    Ma, Jianzhong; Amos, Christopher I

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases.
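
    An illustrative sketch of the local-PCA idea: SNP genotypes inside a candidate inversion region are decomposed with PCA, and the first principal component separates the two inversion homozygote "populations" with heterozygotes in between. The simulated genotype model and all parameters are assumptions for illustration, not the authors' pipeline.

      import numpy as np

      rng = np.random.default_rng(0)
      n_ind, n_snp = 300, 100

      # Two haplotype "orientations" with different allele frequencies inside the
      # inverted region; each individual carries 0, 1, or 2 inverted copies.
      p_std = rng.uniform(0.1, 0.9, n_snp)
      p_inv = rng.uniform(0.1, 0.9, n_snp)
      inv_copies = rng.choice([0, 1, 2], size=n_ind, p=[0.49, 0.42, 0.09])
      genotypes = np.array([rng.binomial(c, p_inv, n_snp) + rng.binomial(2 - c, p_std, n_snp)
                            for c in inv_copies], dtype=float)

      # Local PCA: center columns and take the leading principal component.
      X = genotypes - genotypes.mean(axis=0)
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      pc1 = U[:, 0] * s[0]

      # Individuals with 0, 1, and 2 inverted copies form three clusters along PC1.
      for c in (0, 1, 2):
          print("inverted copies =", c, " mean PC1 =", round(float(pc1[inv_copies == c].mean()), 2))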

  12. Method of imaging the electrical conductivity distribution of a subsurface

    DOEpatents

    Johnson, Timothy C.

    2017-09-26

    A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.

  13. On the Application of Inverse-Mode SiGe HBTs in RF Receivers for the Mitigation of Single-Event Transients

    NASA Astrophysics Data System (ADS)

    Song, Ickhyun; Cho, Moon-Kyu; Oakley, Michael A.; Ildefonso, Adrian; Ju, Inchan; Buchner, Stephen P.; McMorrow, Dale; Paki, Pauline; Cressler, John. D.

    2017-05-01

    Best practice in mitigation strategies for single-event transients (SETs) in radio-frequency (RF) receiver modules is investigated using a variety of integrated receivers utilizing inverse-mode silicon-germanium (SiGe) heterojunction bipolar transistors (HBTs). The receivers were designed and implemented in a 130-nm SiGe BiCMOS technology platform. In general, RF switches, low-noise amplifiers (LNAs), and downconversion mixers utilizing inverse-mode SiGe HBTs exhibit less susceptibility to SETs than conventional RF designs, in terms of transient peaks and duration, at the cost of RF performance. Under normal RF operation, the SET-hardened switch is mainly effective in peak reduction, while the LNA and the mixer exhibit reductions in transient peaks as well as transient duration.

  14. Structural Analysis Methods for Structural Health Management of Future Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander

    2007-01-01

    Two finite element based computational methods, Smoothing Element Analysis (SEA) and the inverse Finite Element Method (iFEM), are reviewed, and examples of their use for structural health monitoring are discussed. Due to their versatility, robustness, and computational efficiency, the methods are well suited for real-time structural health monitoring of future space vehicles, large space structures, and habitats. The methods may be effectively employed to enable real-time processing of sensing information, specifically for identifying three-dimensional deformed structural shapes as well as the internal loads. In addition, they may be used in conjunction with evolutionary algorithms to design optimally distributed sensors. These computational tools have demonstrated substantial promise for utilization in future Structural Health Management (SHM) systems.

  15. A regional high-resolution carbon flux inversion of North America for 2004

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.

    2010-05-01

    Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere (SiB3) model, both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker as well as many previous global inversions, including the Transcom suite of inversions, have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be where corrections to NEE imposed on forested regions on the east coast of the United States are the same as those imposed on forests on the west coast, while, in reality, there likely exist subtle differences between the two areas, both natural and anthropogenic. Our current inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) models is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimate of a 0.57 Pg/yr sink in North America. We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with regional anomalously high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
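
    A stripped-down sketch of the Kalman-filter correction step described above, estimating a vector of flux corrections from mixing-ratio residuals; the observation operator H (in the paper, transport-weighted sensitivities from RAMS/PCTM), the covariances, and the dimensions are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_flux = 200, 40

      # H maps flux corrections to CO2 mixing-ratio space; here it is random.
      H = rng.normal(size=(n_obs, n_flux))
      x_prior = np.zeros(n_flux)                  # prior correction to biosphere fluxes
      P = np.eye(n_flux) * 0.5                    # prior error covariance
      R = np.eye(n_obs) * 1.0                     # observation error covariance
      y = rng.normal(size=n_obs)                  # observed-minus-modeled CO2 residuals

      # Kalman update: x_post = x_prior + K (y - H x_prior), K = P H^T (H P H^T + R)^-1.
      S = H @ P @ H.T + R
      K = np.linalg.solve(S, H @ P).T             # equals P H^T S^-1 (P, S symmetric)
      x_post = x_prior + K @ (y - H @ x_prior)
      P_post = (np.eye(n_flux) - K @ H) @ P
      print("largest flux correction:", float(np.max(np.abs(x_post))))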

  16. Temporal Protection in Real Time Systems

    DTIC Science & Technology

    2016-11-01

    Consolidation of mixed-criticality tasks (planning, obstacle avoidance), but symmetric protection leads to criticality inversion...Criticality inversion: a higher...Monotonic priority (shorter period yields higher priority) gives ideal utilization, but poor criticality protection due to criticality inversion. If criticality

  17. Simultaneous inversion of seismic velocity and moment tensor using elastic-waveform inversion of microseismic data: Application to the Aneth CO2-EOR field

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Huang, L.

    2017-12-01

    Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information about the source moment tensors and the wave propagation velocity along the wavepaths. We develop an elastic-waveform inversion method to jointly invert the seismic velocity model and the moment tensor. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. Our adaptive moment-tensor inversion method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded using a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion to minimize the cross-correlation-based data misfit between observed data and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both the moment tensors and the seismic velocity model. We apply our new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion.

  18. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain a high-resolution, higher-frequency seismic velocity to be used as the velocity input for seismic pressure prediction, and a density dataset to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and similarity of the seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea has proved that the method can bridge the gap between seismic and well log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
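
    A toy sketch of the trace-by-trace multivariate calibration step: coefficients relating velocity, porosity, and shale volume to effective stress are fitted by least squares in a normally compacted interval and then used to predict pore pressure from the overburden. The linear functional form and all numbers are assumptions for illustration; the actual multivariate effective stress model in the paper may differ.

      import numpy as np

      rng = np.random.default_rng(0)
      n_samples = 400

      # Synthetic normally compacted interval: velocity (m/s), porosity, shale volume.
      velocity = rng.uniform(2000.0, 4000.0, n_samples)
      porosity = rng.uniform(0.05, 0.35, n_samples)
      vshale = rng.uniform(0.0, 0.6, n_samples)
      sigma_eff = (5.0 + 0.008 * velocity - 30.0 * porosity - 8.0 * vshale
                   + rng.normal(scale=0.5, size=n_samples))        # MPa, toy truth

      # Least-squares calibration of the multivariate model coefficients.
      G = np.column_stack([np.ones(n_samples), velocity, porosity, vshale])
      coeffs, *_ = np.linalg.lstsq(G, sigma_eff, rcond=None)

      # Pore pressure follows from overburden minus predicted effective stress.
      overburden = 60.0                            # MPa at the depth of interest (assumed)
      pore_pressure = overburden - G @ coeffs
      print("calibrated coefficients:", np.round(coeffs, 3))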

  19. Optimal aperture synthesis radar imaging

    NASA Astrophysics Data System (ADS)

    Hysell, D. L.; Chau, J. L.

    2006-03-01

    Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.

  20. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    PubMed

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.

  1. Dependence of paracentric inversion rate on tract length.

    PubMed

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-04-03

    We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted.

  2. Dependence of paracentric inversion rate on tract length

    PubMed Central

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-01-01

    Background We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. Results We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. Conclusion The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted. PMID:17407601

  3. Evidence for large inversion polymorphisms in the human genome from HapMap data

    PubMed Central

    Bansal, Vikas; Bashir, Ali; Bafna, Vineet

    2007-01-01

    Knowledge about structural variation in the human genome has grown tremendously in the past few years. However, inversions represent a class of structural variation that remains difficult to detect. We present a statistical method to identify large inversion polymorphisms using unusual Linkage Disequilibrium (LD) patterns from high-density SNP data. The method is designed to detect chromosomal segments that are inverted (in a majority of the chromosomes) in a population with respect to the reference human genome sequence. We demonstrate the power of this method to detect such inversion polymorphisms through simulations done using the HapMap data. Application of this method to the data from the first phase of the International HapMap project resulted in 176 candidate inversions ranging from 200 kb to several megabases in length. Our predicted inversions include an 800-kb polymorphic inversion at 7p22, a 1.1-Mb inversion at 16p12, and a novel 1.2-Mb inversion on chromosome 10 that is supported by the presence of two discordant fosmids. Analysis of the genomic sequence around inversion breakpoints showed that 11 predicted inversions are flanked by pairs of highly homologous repeats in the inverted orientation. In addition, for three candidate inversions, the inverted orientation is represented in the Celera genome assembly. Although the power of our method to detect inversions is restricted because of inherently noisy LD patterns in population data, inversions predicted by our method represent strong candidates for experimental validation and analysis. PMID:17185644

  4. Computer programs for the solution of systems of linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Sequi, W. T.

    1973-01-01

    FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.

  5. Investigation of Inversion Polymorphisms in the Human Genome Using Principal Components Analysis

    PubMed Central

    Ma, Jianzhong; Amos, Christopher I.

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct “populations” of inversion homozygotes of different orientations and their 1∶1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases. PMID:22808122

  6. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.
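
    A minimal sketch of the inverse-model idea, assuming a generic feedforward regressor that maps measured responses and a desired force to a command voltage; sklearn's MLPRegressor stands in for the recurrent networks used in the paper, and the synthetic damper relation is purely illustrative.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 2000

      # Synthetic training data: (velocity, displacement, force) -> voltage.
      velocity = rng.uniform(-0.2, 0.2, n)
      displacement = rng.uniform(-0.02, 0.02, n)
      voltage = rng.uniform(0.0, 2.0, n)
      force = (500.0 + 800.0 * voltage) * velocity + 2000.0 * displacement   # toy damper law

      X = np.column_stack([velocity, displacement, force])
      inverse_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0).fit(X, voltage)

      # Query the inverse model for the command voltage that yields a desired force.
      v_cmd = inverse_model.predict([[0.1, 0.005, 120.0]])
      print("command voltage:", float(v_cmd[0]))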

  7. The inference of vector magnetic fields from polarization measurements with limited spectral resolution

    NASA Technical Reports Server (NTRS)

    Lites, B. W.; Skumanich, A.

    1985-01-01

    A method is presented for recovery of the vector magnetic field and thermodynamic parameters from polarization measurement of photospheric line profiles measured with filtergraphs. The method includes magneto-optic effects and may be utilized on data sampled at arbitrary wavelengths within the line profile. The accuracy of this method is explored through inversion of synthetic Stokes profiles subjected to varying levels of random noise, instrumental wave-length resolution, and line profile sampling. The level of error introduced by the systematic effect of profile sampling over a finite fraction of the 5 minute oscillation cycle is also investigated. The results presented here are intended to guide instrumental design and observational procedure.

  8. An equivalent method of mixed dielectric constant in passive microwave/millimeter radiometric measurement

    NASA Astrophysics Data System (ADS)

    Su, Jinlong; Tian, Yan; Hu, Fei; Gui, Liangqi; Cheng, Yayun; Peng, Xiaohui

    2017-10-01

    Dielectric constant is an important quantity for describing the properties of matter. This paper proposes the concept of the mixed dielectric constant (MDC) in passive microwave radiometric measurement. In addition, an MDC inversion method is proposed, in which the Ratio of Angle-Polarization Difference (RAPD) is utilized. The MDCs of several materials are investigated using RAPD. Brightness temperatures (TBs) calculated from the MDC and from the original dielectric constant are compared. Random errors are added to the simulation to test the robustness of the algorithm. Keywords: Passive detection, microwave/millimeter, radiometric measurement, ratio of angle-polarization difference (RAPD), mixed dielectric constant (MDC), brightness temperatures, remote sensing, target recognition.

  9. Analysis shear wave velocity structure obtained from surface wave methods in Bornova, Izmir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr; Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr

    2016-04-18

    Soil properties from the surface down to bedrock must be described accurately and reliably to reduce earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance contrast between soil and bedrock. First, shear wave velocity and thickness information for the layers above bedrock are needed to detect this change. Shear wave velocity can be obtained by inversion of Rayleigh wave dispersion curves obtained from surface wave methods (MASW - Multichannel Analysis of Surface Waves, ReMi - Refraction Microtremor, SPAC - Spatial Autocorrelation). While the investigation depth is limited in active source studies, passive source methods are utilized for greater depths that cannot be reached with active sources. The ReMi method is used to determine layer thickness and velocity up to 100 m using seismic refraction measurement systems. SPAC surveys can reach the desired depth, depending on the array radius, and are easily applied under conditions that restrict the use of active seismic studies in the city. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh wave dispersion curves obtained from active and passive source methods. In this study, surface wave data were collected using MASW, ReMi and SPAC measurements in the Bornova region of İzmir. Dispersion curves obtained from the surface wave methods were combined over a wide frequency band and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was established by comparing the theoretical transfer function obtained from the soil parameters with the observed soil transfer function from the Nakamura technique and by examining the fit between these functions. Vs values range between 200 and 830 m/s and the engineering bedrock (Vs > 760 m/s) depth is approximately 150 m.

  10. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain since they appear as non-resolved images on ground-based observation equipment for space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly nonlinear character of the photometric-data inversion for satellite attitude, and combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the UKF-based attitude inversion and mitigates the particle degradation and dilution of the PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by a simulation experiment and a scaling experiment in the end. The results show that the proposed method can effectively solve the problem of particle degradation and depletion in the PF-based attitude inversion, and the problem that the UKF is not suitable for strongly nonlinear attitude inversion. The inversion accuracy is clearly superior to UKF and PF; in addition, even in the case of a large attitude error, the method can invert the attitude with a small number of particles and high precision.

  11. Inverse scattering theory: Inverse scattering series method for one dimensional non-compact support potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle

    2014-12-15

    The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium into a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and the Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact-support velocity potential.

  12. Multi-scale signed envelope inversion

    NASA Astrophysics Data System (ADS)

    Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang

    2018-06-01

    Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract low-frequency information from the envelope data. However, using amplitude demodulation alone loses the polarity information of the wavefield, which increases the possibility that the inversion converges to multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In numerical tests, we applied the new inversion method to a salt-layer model and the SEG/EAGE 2-D salt model using a low-cut source (frequency components below 4 Hz were truncated). The results of the numerical tests demonstrate the effectiveness of this method.
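
    A minimal sketch of one way to form a polarity-preserving ("signed") envelope is given below. The Hilbert-transform envelope re-signed by the trace is an assumption about the demodulation operator, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def signed_envelope(trace):
        """Envelope that keeps wavefield polarity: the instantaneous amplitude
        of the analytic signal, multiplied by the sign of the original trace."""
        return np.sign(trace) * np.abs(hilbert(trace))
    ```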

  13. A method for the retrieval of atomic oxygen density and temperature profiles from ground-based measurements of the O(+)(2D-2P) 7320 A twilight airglow

    NASA Technical Reports Server (NTRS)

    Fennelly, J. A.; Torr, D. G.; Richards, P. G.; Torr, M. R.; Sharp, W. E.

    1991-01-01

    This paper describes a technique for extracting thermospheric profiles of the atomic-oxygen density and temperature, using ground-based measurements of the O(+)(2D-2P) doublet at 7320 and 7330 A in the twilight airglow. In this method, a local photochemical model is used to calculate the 7320-A intensity; the method also utilizes an iterative inversion procedure based on the Levenberg-Marquardt method described by Press et al. (1986). The results demonstrate that, if the measurements are only limited by errors due to Poisson noise, the altitude profiles of neutral temperature and atomic oxygen concentration can be determined accurately using currently available spectrometers.
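
    A Levenberg-Marquardt fit of this kind can be sketched with SciPy; the exponential forward model and parameter values below are placeholders for illustration, not the photochemical model used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def forward(p, z):
        """Placeholder forward model: predicted 7320-A intensity vs. altitude z
        for parameters p (stand-in for the local photochemical model)."""
        return p[0] * np.exp(-z / p[1])

    def residuals(p, z, observed):
        return forward(p, z) - observed

    z = np.linspace(150.0, 400.0, 26)                    # altitude grid, km
    observed = forward([1.0e3, 80.0], z)                 # synthetic "measurements"
    fit = least_squares(residuals, x0=[5.0e2, 60.0], method="lm", args=(z, observed))
    print(fit.x)                                         # recovered parameters
    ```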

  14. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, in its present form the inverse method cannot handle complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can account for complex field-tissue interactions in the inverse design of the source current density associated with an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge between the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target field, and the current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  15. Anisotropic microseismic focal mechanism inversion by waveform imaging matching

    NASA Astrophysics Data System (ADS)

    Wang, L.; Chang, X.; Wang, Y.; Xue, Z.

    2016-12-01

    The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a Double Couple (DC) source and isotropic media, which is usually not the case for induced-seismicity focal mechanism inversion. For induced seismic events, an inappropriate source or media model in the inversion introduces ambiguity or strong simulation errors and seriously reduces the inversion effectiveness. First, the focal mechanism contains a significant non-DC source component. In general, the source contains three components: DC, isotropic (ISO) and the compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the media affects travel times and waveforms, generating inversion bias. Focal mechanism inversion is commonly formulated as moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method is applied to achieve moment tensor imaging; it can construct imaging of the MT elements in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. Full waveform inversion is employed to retrieve the MT; in this method, the source position and MT can be reconstructed simultaneously. However, it requires extensive numerical computation, and the source position and MT also influence each other in the inversion process. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also use a microseismic data set recorded with a surface acquisition to test our method.

  16. Quantification of chaotic strength and mixing in a micro fluidic system

    NASA Astrophysics Data System (ADS)

    Kim, Ho Jun; Beskok, Ali

    2007-11-01

    Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in micro fluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. The 'chaotic electroosmotic stirrer' of Qian and Bau (2002 Anal. Chem. 74 3616-25) is utilized as the benchmark case due to its well-defined flow kinematics. Lagrangian particle tracking methods are utilized to study particle dispersion in the conceptual device using spectral element and fourth-order Runge-Kutta discretizations in space and time, respectively. Stirring efficiency is predicted using the stirring index based on the box counting method, and Poincaré sections are utilized to identify the chaotic and regular regions under various actuation conditions. Finite time Lyapunov exponents are calculated to quantify the chaotic strength, while the probability density function of the stretching field is utilized as an alternative method for the statistical analysis of chaotic and partially chaotic cases. The mixing index inverse, based on the standard deviation of the scalar species distribution, is utilized as a metric to quantify the mixing efficiency. A series of numerical simulations is performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing time (tm) is characterized as a function of the Pe number, and tm ~ ln(Pe) scaling is demonstrated for fully chaotic cases, while tm ~ Pe^α scalings with α ≈ 0.33 and α = 0.5 are observed for partially chaotic and regular cases, respectively. Employing the aforementioned techniques, the optimum kinematic conditions and actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified.
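
    A standard-deviation-based mixing metric of this kind can be sketched in a few lines; the normalization by the fully segregated field is an assumption, since the paper's exact definition of the mixing index is not reproduced here.

    ```python
    import numpy as np

    def mixing_index_inverse(c):
        """Inverse mixing metric for a scalar concentration field c in [0, 1]:
        the standard deviation of a fully segregated field divided by the
        current standard deviation, so the value grows as mixing proceeds."""
        c = np.asarray(c, float)
        std0 = np.sqrt(c.mean() * (1.0 - c.mean()))   # fully segregated reference
        return std0 / c.std() if c.std() > 0 else np.inf
    ```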

  17. Object-based inversion of crosswell radar tomography data to monitor vegetable oil injection experiments

    USGS Publications Warehouse

    Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.

    2004-01-01

    Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions. A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.

  18. Evaluating the Impact of Parent-Reported Medical Home Status on Children's Health Care Utilization, Expenditures, and Quality: A Difference-in-Differences Analysis with Causal Inference Methods.

    PubMed

    Han, Bing; Yu, Hao; Friedberg, Mark W

    2017-04-01

    The objective was to evaluate the effects of parent-reported medical home status on health care utilization, expenditures, and quality for children. Data came from the Medical Expenditure Panel Survey (MEPS) during 2004-2012, including a total of 9,153 children who were followed up for 2 years in the survey. We took a causal difference-in-differences approach using inverse probability weighting and doubly robust estimators to study how changes in medical home status over a 2-year period affected children's health care outcomes. Our analysis adjusted for children's sociodemographic, health, and insurance statuses. We conducted sensitivity analyses using alternative statistical methods, different approaches to outliers and missing data, and accounting for possible common-method biases. Compared with children whose parents reported having medical homes in both years 1 and 2, those who had medical homes in year 1 but lost them in year 2 had significantly lower parent-reported ratings of health care quality and higher utilization of emergency care. Compared with children whose parents reported having no medical homes in both years, those who did not have medical homes in year 1 but gained them in year 2 had significantly higher ratings of health care quality, but no significant differences in health care expenditures and utilization. Having a medical home may help improve health care quality for children; losing a medical home may lead to higher utilization of emergency care. © Health Research and Educational Trust.
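
    A minimal sketch of the inverse-probability-weighting step is shown below, assuming a pandas DataFrame with hypothetical column names; the doubly robust, difference-in-differences estimator used in the study involves additional modeling not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ipw_weights(df, covariates, exposure="lost_medical_home"):
        """Inverse probability weights from a logistic propensity model:
        exposed children get 1/p, unexposed children 1/(1 - p).
        df is a pandas DataFrame; column names here are hypothetical."""
        model = LogisticRegression(max_iter=1000).fit(df[covariates], df[exposure])
        p = model.predict_proba(df[covariates])[:, 1]
        return np.where(df[exposure] == 1, 1.0 / p, 1.0 / (1.0 - p))

    # w = ipw_weights(df, ["age", "income", "baseline_health"])
    # weighted group means of the year-2 outcome then give the adjusted contrast
    ```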

  19. An Inversion Recovery NMR Kinetics Experiment

    ERIC Educational Resources Information Center

    Williams, Travis J.; Kershaw, Allan D.; Li, Vincent; Wu, Xinping

    2011-01-01

    A convenient laboratory experiment is described in which NMR magnetization transfer by inversion recovery is used to measure the kinetics and thermochemistry of amide bond rotation. The experiment utilizes Varian spectrometers with the VNMRJ 2.3 software, but can be easily adapted to any NMR platform. The procedures and sample data sets in this…

  20. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they are valid only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for the inversion we use a Taylor series expansion to linearize the inverse problem. Through joint AVO inversion of the seismic data from the baseline and monitor surveys, we obtain the P-wave velocity, S-wave velocity and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require the corresponding assumptions and can estimate more parameters simultaneously, so it has broader applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise; the results demonstrate the validity and noise robustness of our method. We also apply the inversion to field data and demonstrate the feasibility of our method in a practical setting.

  1. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  2. Crustal seismic structure beneath the southwest Yunnan region from joint inversion of body-wave and surface wave data

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Thurber, C. H.; Zeng, X.; Zhang, L.

    2016-12-01

    Data from 71 broadband stations of a dense transportable array deployed in southwest Yunnan make it possible to improve the resolution of the seismic model in this region. Continuous waveforms from 12 permanent stations of the China National Seismic Network were also used in this study. We utilized one year of continuous vertical-component records to compute ambient noise cross-correlation functions (NCF). More than 3,000 NCFs were obtained and used to measure group velocities between 5 and 25 seconds with the frequency-time analysis method. This frequency band is most sensitive to crustal seismic structure, especially the upper and middle crust. The group velocity at short periods shows a clear azimuthal anisotropy with a north-south fast direction, consistent with previous seismic results from shear wave splitting. More than 2,000 group velocity measurements were employed to invert the surface wave dispersion data for group velocity maps. We applied a finite difference forward modeling algorithm with an iterative inversion. A new body-wave and surface-wave joint inversion algorithm (Fang et al., 2016) was utilized to improve the resolution of both the P and S models. About 60,000 P-wave and S-wave arrivals from 1,780 local earthquakes, which occurred from May 2011 to December 2013 with magnitudes larger than 2.0, were manually picked. The new high-resolution seismic structure shows good consistency with local geological features, e.g. Tengchong Volcano. The earthquake locations were also refined with our new velocity model.

  3. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the direct inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
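
    The metamodel-based loop can be sketched with scikit-learn and SciPy; the quadratic stand-in for the finite element misfit, the parameter bounds and the design size are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_misfit(theta):
        """Stand-in for the inverse-problem objective (difference between
        simulated and measured tensile response for damage parameters theta)."""
        return (theta[0] - 1.2) ** 2 + (theta[1] - 0.4) ** 2

    # 1. experimental design: a small number of expensive simulator runs
    X = np.random.default_rng(0).uniform(0.0, 2.0, size=(20, 2))
    y = np.array([expensive_misfit(x) for x in X])

    # 2. Kriging (Gaussian process) metamodel of the objective
    gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True).fit(X, y)

    # 3. run the optimization on the cheap surrogate instead of the simulator
    res = minimize(lambda t: gp.predict(t.reshape(1, -1))[0],
                   x0=[1.0, 1.0], bounds=[(0.0, 2.0), (0.0, 2.0)])
    print(res.x)   # identified parameters (surrogate optimum)
    ```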

  4. Numerical methods for the inverse problem of density functional theory

    DOE PAGES

    Jensen, Daniel S.; Wasserman, Adam

    2017-07-17

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  5. Numerical methods for the inverse problem of density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Daniel S.; Wasserman, Adam

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  6. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.

  7. Bayesian resolution of TEM, CSEM and MT soundings: a comparative study

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    We examine the resolution of three electromagnetic exploration methods commonly used to map the electrical conductivity of the shallow crust - the magnetotelluric (MT) method, the controlled-source electromagnetic (CSEM) method and the transient electromagnetic (TEM) method. TEM and CSEM utilize an artificial source of EM energy, while MT makes use of natural variations in the Earth's electromagnetic field. For a given geological setting and acquisition parameters, each of these methods will have a different resolution due to differences in the source field polarization and the frequency range of the measurements. For example, the MT and TEM methods primarily rely on induced horizontal currents and are most sensitive to conductive layers while the CSEM method generates vertical loops of current and is more sensitive to resistive features. Our study seeks to provide a robust resolution comparison that can help inform exploration geophysicists about which technique is best suited for a particular target. While it is possible to understand and describe a difference in resolution qualitatively, it remains challenging to fully describe it quantitatively using optimization based approaches. Part of the difficulty here stems from the standard electromagnetic inversion toolkit, which makes heavy use of regularization (often in the form of smoothing) to constrain the non-uniqueness inherent in the inverse problem. This regularization makes it difficult to accurately estimate the uncertainty in estimated model parameters - and therefore obscures their true resolution. To overcome this difficulty, we compare the resolution of CSEM, airborne TEM, and MT data quantitatively using a Bayesian trans-dimensional Markov chain Monte Carlo (McMC) inversion scheme. Noisy synthetic data for this study are computed from various representative 1D test models: a conductive anomaly under a conductive/resistive overburden; and a resistive anomaly under a conductive/resistive overburden. In addition to obtaining the full posterior probability density function of the model parameters, we develop a metric to more directly compare the resolution of each method as a function of depth.
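
    A bare-bones Metropolis sampler is sketched below as the building block of such a Bayesian inversion; the trans-dimensional (variable number of layers) birth/death moves used in the study are not included, and the step size and model parameterization are assumptions.

    ```python
    import numpy as np

    def metropolis(log_likelihood, m0, step=0.1, n_iter=5000, seed=0):
        """Fixed-dimension random-walk Metropolis over a model vector m
        (e.g. log-resistivities of a layered model)."""
        rng = np.random.default_rng(seed)
        m = np.asarray(m0, float)
        ll = log_likelihood(m)
        samples = np.empty((n_iter, m.size))
        for i in range(n_iter):
            proposal = m + step * rng.standard_normal(m.size)
            ll_prop = log_likelihood(proposal)
            if np.log(rng.random()) < ll_prop - ll:   # accept/reject
                m, ll = proposal, ll_prop
            samples[i] = m
        return samples   # posterior samples used to build marginal PDFs with depth
    ```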

  8. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat source methods into the parameter identification of fatigue dissipation energy models is a new approach to calculating fatigue dissipation energy. This paper reviews research advances in computational inverse heat source methods and in regularization techniques for solving the inverse problem, as well as existing heat source solution methods for the fatigue process. It discusses the prospects of applying inverse heat source methods in the fatigue damage field and lays the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  9. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition were analyzed, which enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. Truncating singular values in the inversion process improves the resulting model.
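
    A minimal truncated-SVD model update is sketched below; the relative threshold on the singular-value spectrum is a generic choice and not necessarily the one used by the authors.

    ```python
    import numpy as np

    def tsvd_update(jacobian, data_residual, cutoff=1e-3):
        """Model update from a truncated SVD of the Jacobian: singular values
        below cutoff * s_max are discarded before applying the pseudo-inverse."""
        U, s, Vt = np.linalg.svd(jacobian, full_matrices=False)
        s_inv = np.where(s > cutoff * s[0], 1.0 / s, 0.0)   # s[0] is the largest
        return Vt.T @ (s_inv * (U.T @ data_residual))
    ```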

  10. Comparing Balance and Inverse Methods on Learning Conceptual and Procedural Knowledge in Equation Solving: A Cognitive Load Perspective

    ERIC Educational Resources Information Center

    Ngu, Bing Hiong; Phan, Huy Phuong

    2016-01-01

    We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…

  11. Estimation of Dry Fracture Weakness, Porosity, and Fluid Modulus Using Observable Seismic Reflection Data in a Gas-Bearing Reservoir

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2017-05-01

    Fracture detection and fluid identification are important tasks in characterizing a fractured reservoir. Our goal is to demonstrate a direct approach that utilizes azimuthal seismic data to estimate fluid bulk modulus, porosity, and dry fracture weaknesses, which decreases the uncertainty of fluid identification. Combining Gassmann's (Vier. der Natur. Gesellschaft Zürich 96:1-23, 1951) equations and the linear-slip model, we first establish new simplified expressions of the stiffness parameters for a gas-bearing saturated fractured rock with low porosity and small fracture density, and then derive a novel PP-wave reflection coefficient in terms of dry background rock properties (P-wave and S-wave moduli, and density), fracture properties (dry fracture weaknesses), porosity, and fluid (fluid bulk modulus). A Bayesian Markov chain Monte Carlo nonlinear inversion method is proposed to estimate fluid bulk modulus, porosity, and fracture weaknesses directly from azimuthal seismic data. The inversion method yields reasonable estimates for synthetic data containing moderate noise and stable results on real data.

  12. A concept for holistic whole body MRI data analysis, Imiomics

    PubMed Central

    Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel

    2017-01-01

    Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015
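
    For reference, the Dice coefficient used in the registration evaluation can be computed as in the short sketch below (a generic formulation, not the authors' pipeline).

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice overlap between two binary segmentation masks
        (1.0 = perfect overlap, 0.0 = no overlap)."""
        a = np.asarray(mask_a, bool)
        b = np.asarray(mask_b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```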

  13. Genome-wide association tests of inversions with application to psoriasis

    PubMed Central

    Ma, Jianzhong; Xiong, Momiao; You, Ming; Lozano, Guillermina; Amos, Christopher I.

    2014-01-01

    Although inversions have occasionally been found to be associated with disease susceptibility through interrupting a gene or its regulatory region, or by increasing the risk for deleterious secondary rearrangements, no association study has been specifically conducted for risks associated with inversions, mainly because existing approaches to detecting and genotyping inversions do not readily scale to a large number of samples. Based on our recently proposed approach to identifying and genotyping inversions using principal components analysis (PCA), we herein develop a method of detecting association between inversions and disease in a genome-wide fashion. Our method uses genotype data for single nucleotide polymorphisms (SNPs), and is thus cost-efficient and computationally fast. For an inversion polymorphism, local PCA around the inversion region is performed to infer the inversion genotypes of all samples. For many inversions, we found that some of the SNPs inside an inversion region are fixed in the two lineages of different orientations and thus can serve as surrogate markers. Our method can be applied to case-control and quantitative trait association studies to identify inversions that may interrupt a gene or the connection between a gene and its regulatory agents. Our method also offers a new venue to identify inversions that are responsible for disease-causing secondary rearrangements. We illustrated our proposed approach to case-control data for psoriasis and identified novel associations with a few inversion polymorphisms. PMID:24623382

  14. Geophysical characterization of peatlands using crosshole GPR full-waveform inversion: Case study from a bog in northwestern Germany

    NASA Astrophysics Data System (ADS)

    Schmäck, J.; Klotzsche, A.; Van Der Kruk, J.; Vereecken, H.; Bechtold, M.

    2017-12-01

    The characterization of peatlands is of particular interest, since areas with peat soils represent global hotspots for the exchange of greenhouse gases. Their effect on global warming depends on several parameters, such as mean annual water level and land use. Models of greenhouse gas emissions and carbon accumulation in peatlands can be improved by including small-scale soil properties that, for example, act as gas traps and periodically release gases to the atmosphere during ebullition events. Ground penetrating radar (GPR) is well suited to non- or minimally invasively characterize and improve our understanding of dynamic processes that take place in the critical zone. It uses high frequency electromagnetic waves to image and characterize the dielectric permittivity and electrical conductivity of the critical zone, which can be related to hydrogeological properties such as porosity, soil water content, salinity and clay content. In the last decade, the full-waveform inversion of crosshole GPR data has proved to be a powerful tool to improve image resolution compared to standard ray-based methods. This approach has been successfully applied to several different aquifers and was able to provide decimeter-scale resolution images including small-scale high-contrast layers that can be related to zones of high porosity, zones of preferential flow or clay lenses. Comparison with independently measured data, e.g. logging data, proved the reliability of the method. Here, for the first time, crosshole GPR full-waveform inversion is used to image three peatland plots with different land use that are part of the "Ahlen-Falkenberger Moor" peat bog complex in northwestern Germany. The full-waveform inversion of the acquired data returned higher resolution images than standard ray-based GPR methods and is able to improve our understanding of subsurface structures. The comparison of the different plots is expected to provide new insights into gas content and gas trapping structures across different land uses. Additionally, season-related changes of peatland soil properties are investigated. The crosshole GPR full-waveform inversion was successfully applied to several datasets and the results show the utility and credibility of GPR FWI for analyzing peatland properties.

  15. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.

  16. Computation of dynamic seismic responses to viscous fluid of digitized three-dimensional Berea sandstones with a coupled finite-difference method.

    PubMed

    Zhang, Yang; Toksöz, M Nafi

    2012-08-01

    The seismic response of saturated porous rocks is studied numerically using microtomographic images of three-dimensional digitized Berea sandstones. A stress-strain calculation is employed to compute the velocities and attenuations of rock samples whose sizes are much smaller than the seismic wavelength of interest. To compensate for the contributions of small cracks lost in the imaging process to the total velocity and attenuation, a hybrid method is developed to recover the crack distribution, in which the differential effective medium theory, the Kuster-Toksöz model, and a modified squirt-flow model are utilized in a two-step Monte Carlo inversion. In the inversion, the velocities of P- and S-waves measured for the dry and water-saturated cases, and the measured attenuation of P-waves for different fluids are used. By using such a hybrid method, both the velocities of saturated porous rocks and the attenuations are predicted accurately when compared to laboratory data. The hybrid method is a practical way to model numerically the seismic properties of saturated porous rocks until very high resolution digital data are available. Cracks lost in the imaging process are critical for accurately predicting velocities and attenuations of saturated porous rocks.

  17. Inversion methods for interpretation of asteroid lightcurves

    NASA Technical Reports Server (NTRS)

    Kaasalainen, Mikko; Lamberg, L.; Lumme, K.

    1992-01-01

    We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.

  18. Improving water content estimation on landslide-prone hillslopes using structurally-constrained inversion of electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Heinze, Thomas; Möhring, Simon; Budler, Jasmin; Weigand, Maximilian; Kemna, Andreas

    2017-04-01

    Rainfall-triggered landslides are a latent danger in almost any place of the world. Due to climate change, heavy rainfall might occur more often, increasing the risk of landslides. With pore pressure as the mechanical trigger, knowledge of the water content distribution in the ground is essential for hazard analysis when monitoring potentially dangerous rainfall events. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil-physical relationships between bulk electrical resistivity and water content. However, more dominant electrical contrasts due to lithological structures often overwhelm these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change smoothly in space. This applies in many scenarios, for example during infiltration of water without a clear saturation front. Sharp lithological layers with strongly differing hydrological parameters, as often found on landslide-prone hillslopes, are on the other hand typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach to improve water content estimation on landslide-prone hillslopes by including a priori information about lithological layers. Here the standard smoothness constraint is reduced along layer boundaries identified using seismic data or other additional sources. This approach significantly improves water content estimates, because on landslide-prone hillslopes a layer of rather high hydraulic conductivity is often underlain by a hydraulic barrier such as clay-rich soil, causing higher pore pressures. One saturated layer and one almost drained layer typically also produce a sharp contrast in electrical resistivity, provided that the surface conductivity of the soil does not change to a similar degree. Using synthetic data, we study the influence of uncertainties in the a priori information on the inverted resistivity and estimated water content distribution. Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally constrained inversion approach.
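
    The structural constraint can be illustrated with a 1-D smoothness operator in which rows crossing an interpreted layer boundary are down-weighted; the relaxation factor and discretization below are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def structurally_constrained_smoother(n_cells, boundary_faces, relax=0.1):
        """First-difference regularization matrix W for the term ||W m||^2:
        rows that cross an a-priori layer boundary are down-weighted by
        'relax', allowing sharp resistivity (and water content) contrasts."""
        W = np.zeros((n_cells - 1, n_cells))
        for i in range(n_cells - 1):
            w = relax if i in boundary_faces else 1.0
            W[i, i], W[i, i + 1] = -w, w
        return W

    # e.g. 20 cells with a seismically mapped interface between cells 9 and 10:
    # W = structurally_constrained_smoother(20, boundary_faces={9})
    ```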

  19. Economic Data and Models in a Greenhouse Gas Monitoring System (Invited)

    NASA Astrophysics Data System (ADS)

    Reilly, J. M.

    2010-12-01

    Under the Framework Convention on Climate Change, a system of inventory reporting of greenhouse gas emissions has been developed (UNFCCC, 2010). The system is based on a bottom-up approach that identifies activities associated with emissions (e.g. coal burning, rice production), develops emission coefficients per unit of activity (ton of coal, hectare of rice), and then, from the level of the activity (total tons, hectares), arrives by multiplication at an estimate of annual emissions at the country level. Tier 1, 2, and 3 approaches vary in the level of detail taken into account in making estimates, in large part attempting to account for variation in emission coefficients, since experimental measurements may not be representative of a country's actual activities (differences among rice fields, livestock, etc.). Top-down approaches use direct measurements of concentrations of GHGs in the atmosphere together with inverse methods and assimilated weather data to estimate emission sources (e.g. Prinn, 2000). The statistical approaches used rely on known relationships, measurements, and estimates of error to reconcile different measurements and thereby improve the estimate of interest, i.e. emissions. While inverse methods have typically relied on measurements of concentrations, economic data and relationships such as those used in bottom-up reporting could also be brought into the inverse calculations to reconcile estimates of anthropogenic emissions in national reports with those of natural emissions, of non-reporting countries, and of concentrations. For example, per-unit emissions from coal, oil, or gas combustion are well constrained, and estimates of tons combusted are known reasonably well, though with some uncertainty. Inverse methods may improve estimates of tons of fuel combusted, but because these are already relatively well known, that information is likely to tightly constrain fossil emissions and thus improve the estimate of emissions from land use change, which are more poorly known. If inverse methods are to be used for treaty verification, it would seem particularly important to incorporate economic data, because treaties so far have covered legally defined "anthropogenic" emissions, and there may not be a distinct physical signature that separates concentration changes related to anthropogenic sources from natural ones (e.g. distinguishing emissions or uptake on managed forest land from a natural change in carbon uptake on land, or nitrous oxide emissions related to land management (inorganic or organic fertilizers) from N2O produced by natural cycles). There are many issues that would need to be resolved to effectively utilize economic activity data in inverse calculations. In particular, economic activity data often lack spatial and temporal resolution, as they are reported for political units (e.g. at the national level) and often only annually. However, given the potential gain these data could contribute to top-down estimates, it is worth further investigating their incorporation, and it may give impetus to efforts to create economic data sets with greater spatial and temporal resolution. Prinn, R.G., 2000: Measurement Equation for Trace Chemicals in Fluids and Solution of its Inverse, in Inverse Methods in Global Biogeochemical Cycles, Geophysical Monograph 114, American Geophysical Union. UNFCCC, 2010, National Reports at http://unfccc.int/national_reports/items/1408.php

  20. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  1. Spectral reflectance inversion with high accuracy on green target

    NASA Astrophysics Data System (ADS)

    Jiang, Le; Yuan, Jinping; Li, Yong; Bai, Tingzhu; Liu, Shuoqiong; Jin, Jianzhou; Shen, Jiyun

    2016-09-01

    Using Landsat-7 ETM remote sensing data, the inversion of the spectral reflectance of green wheat in the visible and near-infrared wavebands in Yingke, China is studied. In order to address the problem of low inversion accuracy, a custom atmospheric conditions method based on the moderate resolution transmission model (MODTRAN) is put forward, in which real atmospheric parameters are considered. The atmospheric radiative transfer theory used to calculate the atmospheric parameters is introduced first, and then the inversion process for spectral reflectance is illustrated in detail. Finally, the inversion result is compared with that of the simulated atmospheric conditions method, which has been widely used by previous researchers. The comparison shows that the inversion accuracy of this paper's method is higher in all inversion bands; the spectral reflectance curve inverted with this paper's method is more similar to the measured reflectance curve of wheat and better reflects the spectral reflectance characteristics of green plants, which are very different from those of green artificial targets. Thus, whether a green target is a plant or an artificial target can be judged by reflectance inversion based on remote sensing imagery. This research is helpful for identifying green artificial targets hidden in greenery, which is of great significance for the precise strike of green camouflaged weapons in the military field.

  2. SPIN: An Inversion Code for the Photospheric Spectral Line

    NASA Astrophysics Data System (ADS)

    Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan

    2017-08-01

    Inversion codes are the most useful tools to infer the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at Udaipur Solar Observatory. The SPIN code adopts Milne-Eddington approximations to solve the polarized radiative transfer equation (RTE), and a modified Levenberg-Marquardt algorithm is employed for the fitting. We describe the details and use of the SPIN code to invert spectro-polarimetric data. We also present the tests performed to validate the inversion code by comparing its results with those of other widely used inversion codes (VFISV and SIR). The inverted results of the SPIN code applied to Hinode/SP data have also been compared with the inverted results from other inversion codes.

  3. Children's strategies to solving additive inverse problems: a preliminary analysis

    NASA Astrophysics Data System (ADS)

    Ding, Meixia; Auxter, Abbey E.

    2017-03-01

    Prior studies show that elementary school children generally "lack" formal understanding of inverse relations. This study goes beyond lack to explore what children might "have" in their existing conception. A total of 281 students, kindergarten to third grade, were recruited to respond to a questionnaire that involved both contextual and non-contextual tasks on inverse relations, requiring both computational and explanatory skills. Results showed that children demonstrated better performance in computation than explanation. However, many students' explanations indicated that they did not necessarily utilize inverse relations for computation. Rather, they appeared to possess partial understanding, as evidenced by their use of part-whole structure, which is a key to understanding inverse relations. A close inspection of children's solution strategies further revealed that the sophistication of children's conception of part-whole structure varied in representation use and unknown quantity recognition, which suggests rich opportunities to develop students' understanding of inverse relations in lower elementary classrooms.

  4. Accounting for Selection Bias in Studies of Acute Cardiac Events.

    PubMed

    Banack, Hailey R; Harper, Sam; Kaufman, Jay S

    2018-06-01

    In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weights weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  5. Expected treatment dose construction and adaptive inverse planning optimization: Implementation for offline head and neck cancer adaptive radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan Di; Liang Jian

    Purpose: To construct an expected treatment dose for adaptive inverse planning optimization and to evaluate it on head and neck (H&N) cancer adaptive treatment modification. Methods: An adaptive inverse planning engine was developed and integrated into our in-house adaptive treatment control system. The adaptive inverse planning engine includes an expected treatment dose, constructed using the daily cone beam (CB) CT images, in its objective and constraints. Feasibility of the adaptive inverse planning optimization was evaluated retrospectively using daily CBCT images obtained from the image guided IMRT treatment of 19 H&N cancer patients. Adaptive treatment modification strategies with respect to the timing and the number of adaptive inverse planning optimizations during the treatment course were evaluated using the cumulative treatment dose in organs of interest constructed using all daily CBCT images. Results: The expected treatment dose was constructed to include both the dose delivered to date and the dose estimated for the remaining treatment during the adaptive treatment course. It was used in treatment evaluation, as well as in constructing the objective and constraints for adaptive inverse planning optimization. The optimization engine can perform planning optimization based on a preassigned treatment modification schedule. Compared to conventional IMRT, the adaptive treatment for H&N cancer showed clear dose-volume improvement for all critical normal organs. The dose-volume reductions of the right and left parotid glands, spinal cord, brain stem and mandible were (17 ± 6)%, (14 ± 6)%, (11 ± 6)%, (12 ± 8)%, and (5 ± 3)%, respectively, with a single adaptive modification performed after the second treatment week; (24 ± 6)%, (22 ± 8)%, (21 ± 5)%, (19 ± 8)%, and (10 ± 6)% with three weekly modifications; and (28 ± 5)%, (25 ± 9)%, (26 ± 5)%, (24 ± 8)%, and (15 ± 9)% with five weekly modifications. Conclusions: Adaptive treatment modification can be implemented with the expected treatment dose included in the adaptive inverse planning optimization. The retrospective evaluation results demonstrate that, using weekly adaptive inverse planning optimization, the dose distribution of H&N cancer treatment can be largely improved.

  6. Inversion of time-domain induced polarization data based on time-lapse concept

    NASA Astrophysics Data System (ADS)

    Kim, Bitnarae; Nam, Myung Jin; Kim, Hee Joon

    2018-05-01

    Induced polarization (IP) surveys, which measure overvoltage phenomena of the medium, are widely and increasingly performed not only for the exploration of mineral resources but also for engineering applications. Among the several IP survey methods, such as time-domain, frequency-domain and spectral IP surveys, this study introduces a novel inversion method for time-domain IP data to recover the chargeability structure of the target medium. The inversion method employs the concept of 4D inversion of time-lapse resistivity data sets, exploiting the fact that the voltage measured in a time-domain IP survey is distorted by IP effects and increases from the instantaneous voltage measured at the moment the source current injection starts. Even though the increase saturates very quickly, we can treat the saturated and instantaneous voltages as a time-lapse data set. The 4D inversion method is one of the most powerful methods for inverting time-lapse resistivity data sets. Using the developed IP inversion algorithm, we invert not only synthetic but also field IP data to show the effectiveness of the proposed method, comparing the recovered chargeability models with those from the linear inversion that was used for the field data in a previous study. Numerical results confirm that the proposed inversion method generates reliable chargeability models even when the anomalous bodies have large IP effects.

  7. Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.

    1991-01-01

    Nontrivial benchmark solutions are developed for the galactic heavy ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed-form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.

  8. Analysis of soil moisture extraction algorithm using data from aircraft experiments

    NASA Technical Reports Server (NTRS)

    Burke, H. H. K.; Ho, J. H.

    1981-01-01

    A soil moisture extraction algorithm is developed using a statistical parameter inversion method. Data sets from two aircraft experiments are utilized for the test. Multifrequency microwave radiometric data, surface temperature, and soil moisture information are contained in the data sets. The surface and near-surface (≤ 5 cm) soil moisture content can be extracted with an accuracy of approximately 5% to 6% for bare fields and fields with grass cover by using L, C, and X band radiometer data. This technique can be used for handling large amounts of remote sensing data from space.

  9. Scenario Evaluator for Electrical Resistivity survey pre-modeling tool

    USGS Publications Warehouse

    Terry, Neil; Day-Lewis, Frederick D.; Robinson, Judith L.; Slater, Lee D.; Halford, Keith J.; Binley, Andrew; Lane, John W.; Werkema, Dale D.

    2017-01-01

    Geophysical tools have much to offer users in environmental, water resource, and geotechnical fields; however, techniques such as electrical resistivity imaging (ERI) are often oversold and/or overinterpreted due to a lack of understanding of the limitations of the techniques, such as the appropriate depth intervals or resolution of the methods. The relationship between ERI data and resistivity is nonlinear; therefore, these limitations depend on site conditions and survey design and are best assessed through forward and inverse modeling exercises prior to field investigations. In this approach, proposed field surveys are first numerically simulated given the expected electrical properties of the site, and the resulting hypothetical data are then analyzed using inverse models. Performing ERI forward/inverse modeling, however, requires substantial expertise and can take many hours to implement. We present a new spreadsheet-based tool, the Scenario Evaluator for Electrical Resistivity (SEER), which features a graphical user interface that allows users to manipulate a resistivity model and instantly view how that model would likely be interpreted by an ERI survey. The SEER tool is intended for use by those who wish to determine the value of including ERI to achieve project goals, and is designed to have broad utility in industry, teaching, and research.

  10. Aerosol physical properties from satellite horizon inversion

    NASA Technical Reports Server (NTRS)

    Gray, C. R.; Malchow, H. L.; Merritt, D. C.; Var, R. E.; Whitney, C. K.

    1973-01-01

    The feasibility of determining the physical properties of aerosols globally, in the altitude region of 10 to 100 km, from a satellite horizon-scanning experiment is investigated. The investigation utilizes a previously developed and extended horizon inversion technique. Aerosol physical properties such as number density, size distribution, and the real and imaginary components of the index of refraction are demonstrated to be invertible in the aerosol size ranges 0.01-0.1 microns, 0.1-1.0 microns, and 1.0-10 microns. Extensions of previously developed radiative transfer models and recursive inversion algorithms are displayed.

  11. Waveform inversion with source encoding for breast sound speed reconstruction in ultrasound computed tomography.

    PubMed

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
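
    A schematic sketch of the source-encoding idea: superpose the per-source data with a fresh random +/-1 vector at every iteration so that one "super-shot" simulation replaces a forward solve per physical source, then take a stochastic gradient step on the encoded misfit. The forward and gradient operators below are user-supplied placeholders standing in for the acoustic wave solver and its adjoint; nothing here reproduces the authors' code:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def encode(data_per_source, weights):
        """Superpose per-source data (shape: n_sources x receivers x time) with
        a random encoding vector of +/-1 weights."""
        return np.tensordot(weights, data_per_source, axes=1)

    def wise_sgd(data_per_source, forward_encoded, grad_encoded, c0, n_iter=50, step=1e-3):
        """Stochastic gradient descent on the source-encoded data misfit.
        forward_encoded(c, w) and grad_encoded(c, w, residual) are placeholders."""
        c = c0.copy()
        n_src = data_per_source.shape[0]
        for _ in range(n_iter):
            w = rng.choice([-1.0, 1.0], size=n_src)   # new random encoding each iteration
            d_enc = encode(data_per_source, w)        # encoded measured data
            residual = forward_encoded(c, w) - d_enc  # encoded data residual
            c -= step * grad_encoded(c, w, residual)  # one stochastic gradient step
        return c
    ```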

  12. New Inversion and Interpretation of Public-Domain Electromagnetic Survey Data from Selected Areas in Alaska

    NASA Astrophysics Data System (ADS)

    Smith, B. D.; Kass, A.; Saltus, R. W.; Minsley, B. J.; Deszcz-Pan, M.; Bloss, B. R.; Burns, L. E.

    2013-12-01

    Public-domain airborne geophysical surveys (combined electromagnetics and magnetics), mostly collected for and released by the State of Alaska, Division of Geological and Geophysical Surveys (DGGS), are a unique and valuable resource for both geologic interpretation and geophysical methods development. A new joint effort by the US Geological Survey (USGS) and the DGGS aims to add value to these data through the application of novel advanced inversion methods and through innovative and intuitive display of data: maps, profiles, voxel-based models, and displays of estimated inversion quality and confidence. Our goal is to make these data even more valuable for interpretation of geologic frameworks, geotechnical studies, and cryosphere studies by producing robust estimates of subsurface resistivity that can be used by non-geophysicists. The datasets, all in the public domain, include 39 frequency-domain electromagnetic datasets collected since 1993, and continue to grow with 5 more data releases pending in 2013. The majority of these datasets were flown for mineral resource purposes, with one survey designed for infrastructure analysis. In addition, several USGS datasets are included in this study. The USGS has recently developed new inversion methodologies for airborne EM data and has begun to apply these and other new techniques to the available datasets. These include a trans-dimensional Markov chain Monte Carlo technique, laterally constrained regularized inversions, and deterministic inversions that include calibration factors as a free parameter. Incorporation of the magnetic data as an additional constraining dataset has also improved the inversion results. Processing has been completed in several areas, including the Fortymile and Alaska Highway surveys, and continues in others such as the Styx River and Nome surveys. Utilizing these new techniques, we provide models beyond the apparent resistivity maps supplied by the original contractors, allowing us to produce a variety of products, such as maps of resistivity as a function of depth or elevation, cross-section maps, and 3D voxel models, all treated consistently in terms of processing and error analysis throughout the state. These products facilitate a more fruitful exchange between geologists and geophysicists and a better understanding of uncertainty, and the process results in iterative development and improvement of geologic models on both small and large scales.

  13. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated on independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  14. Cybernetic group method of data handling (GMDH) statistical learning for hyperspectral remote sensing inverse problems in coastal ocean optics

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony Matthew

    For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect, or between the inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential of inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which served as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables included bottom depth zb, chlorophyll a concentration [chl-a], spectral bottom irradiance reflectance Rb(lambda), and the spectral total absorption a(lambda) and spectral total backscattering bb(lambda) coefficients. When applying the cybernetic and neural models to in situ HyperTSRB-derived Rrs, the difference in the means of the absolute error of the inversion estimates for zb was significant (alpha = 0.05): GMDH yielded significantly better zb estimates than the ANN. The ANN model posted a mean absolute error (MAE) of 0.62214 m, compared with 0.55161 m for GMDH.

  15. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.

  16. A Hybrid Seismic Inversion Method for VP/VS Ratio and Its Application to Gas Identification

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Zhang, Hongbing; Han, Feilong; Xiao, Wei; Shang, Zuoping

    2018-03-01

    The ratio of compressional wave velocity to shear wave velocity (VP/VS ratio) has established itself as one of the most important parameters for identifying gas reservoirs. However, considering that the seismic inversion process is highly non-linear and the geological conditions encountered may be complex, direct estimation of the VP/VS ratio from pre-stack seismic data remains a challenging task. In this paper, we propose a hybrid seismic inversion method to estimate the VP/VS ratio directly. In this method, post- and pre-stack inversions are combined, in which the pre-stack inversion for the VP/VS ratio is driven by the post-stack inversion results (i.e., VP and density). In particular, the VP/VS ratio is treated as a model parameter and is directly inverted in the pre-stack inversion based on the exact Zoeppritz equation. Moreover, an anisotropic Markov random field is employed to regularise the inversion process and to account for geological structure (boundary) information. Aided by the proposed hybrid inversion strategy, the directional weighting coefficients incorporated in the anisotropic Markov random field neighbourhoods are quantitatively calculated by the anisotropic diffusion method. A synthetic test demonstrates the effectiveness of the proposed inversion method. In particular, given the low quality of the pre-stack data and the high heterogeneity of the target layers in the field data, the proposed inversion method reveals a detailed model of the VP/VS ratio that can successfully identify the gas-bearing zones.

  17. Survival analysis using inverse probability of treatment weighted methods based on the generalized propensity score.

    PubMed

    Sugihara, Masahiro

    2010-01-01

    In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. As a causal treatment effect, the hazard ratio can also be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance with pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
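
    A minimal sketch of the weighting step for a multi-valued treatment: estimate the generalized propensity score with a multinomial logistic model and weight each subject by the inverse of the probability of the treatment actually received (optionally stabilized). The modelling choices here are illustrative assumptions, not the paper's exact formulation; the resulting weights would then feed a weighted Kaplan-Meier estimator or weighted log-rank test:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iptw_weights(X, treatment, stabilized=True):
        """Inverse probability of treatment weights for a multi-valued treatment.
        X: (n, p) confounders; treatment: (n,) treatment level per subject."""
        treatment = np.asarray(treatment)
        model = LogisticRegression(max_iter=1000).fit(X, treatment)
        gps = model.predict_proba(X)                            # generalized propensity scores
        cols = np.searchsorted(model.classes_, treatment)       # column of the received treatment
        p_received = gps[np.arange(len(treatment)), cols]
        if stabilized:
            marginal = np.array([(treatment == t).mean() for t in treatment])
            return marginal / p_received                        # stabilized weights
        return 1.0 / p_received
    ```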

  18. Analysis of Operating Principles with S-system Models

    PubMed Central

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others, and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
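
    Because S-system steady states reduce to linear equations in logarithmic coordinates, the first method's characterization of the admissible solution space can be illustrated with a pseudo-inverse plus a null-space basis. The matrix and right-hand side below are placeholders, not a real pathway model:

    ```python
    import numpy as np

    # Hypothetical steady-state condition A y = b in log-coordinates (y = ln x);
    # for an S-system, A collects kinetic-order differences and b collects
    # log rate-constant differences. The numbers are purely illustrative.
    A = np.array([[1.0, -0.5,  0.0],
                  [0.0,  1.0, -1.0]])            # underdetermined: 2 equations, 3 unknowns
    b = np.array([0.2, -0.1])

    y_particular = np.linalg.pinv(A) @ b          # minimum-norm particular solution
    _, _, Vt = np.linalg.svd(A)
    null_basis = Vt[np.linalg.matrix_rank(A):]    # directions spanning the admissible solution space

    # any admissible steady state: y = y_particular + null_basis.T @ alpha (alpha free)
    x_steady = np.exp(y_particular)               # back-transform to concentrations
    ```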

  19. The analysis of a rocket tomography measurement of the N2+3914A emission and N2 ionization rates in an auroral arc

    NASA Technical Reports Server (NTRS)

    Mcdade, Ian C.

    1991-01-01

    Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin-scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for handling the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium-intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914A rocket photometer measurements which were made in a tomographic spin-scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914A volume emission rates recovered from the inversion of the rocket data compare very well with the distributions inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements made during the flight. Three preprints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
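
    For orientation, a minimal Kaczmarz-style ART sweep of the kind of relaxed iterative reconstruction described above, assuming a precomputed matrix of line-of-sight weights; the relaxation factor and the non-negativity projection are illustrative choices for coping with noisy data, not the authors' exact relaxation schemes:

    ```python
    import numpy as np

    def art_reconstruct(A, b, n_sweeps=20, relaxation=0.25):
        """Algebraic reconstruction technique (Kaczmarz sweeps) for A x = b, where
        rows of A hold line-of-sight integration weights and b holds the photometer
        brightness samples. Under-relaxation damps the effect of noise."""
        x = np.zeros(A.shape[1])
        row_norms = np.einsum('ij,ij->i', A, A)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] == 0.0:
                    continue
                x += relaxation * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                np.clip(x, 0.0, None, out=x)       # emission rates cannot be negative
        return x
    ```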

  20. Public and private healthcare services utilization by non-institutional elderly in Hong Kong: is the inverse care law operating?

    PubMed

    Yam, Ho-Kwan; Mercer, Stewart W; Wong, Lai-Yi; Chan, Wan-Kin; Yeoh, Eng-Kiong

    2009-08-01

    To assess the factors associated with healthcare services utilization by the non-institutional elderly across five types of service utilization (Western medicine doctors in Government clinics, private Western medicine doctors, Chinese medicine practitioners, Emergency Units, and hospitalization). A secondary data analysis of a territory-wide cross-sectional survey collected by the Government among a representative sample of 4812 elderly (aged 60 and above) in Hong Kong. Our analysis, based on Anderson's behavioral framework, shows that need factors (relating to actual or perceived illness and diseases) are significantly related to the healthcare services utilization examined. However, enabling factors, such as monthly household income per capita, play a significant role in determining the utilization. Although the lower-income elderly consult more Government clinics and less private clinics than the more affluent, they have a lower total utilization of healthcare services despite having significantly greater healthcare needs. This suggests a mismatch of need and supply within the mixed economy of private and public healthcare services and suggests the existence of an 'inverse care law' in Hong Kong amongst elderly citizens. The findings raise concerns of inequities in Hong Kong's healthcare system, raising implications for future healthcare reforms.

  1. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Yajun

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  2. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly to observations through theoretical models, prior information on model parameters, and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, the method combines analytical least-squares solutions with a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets, and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
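
    A toy sketch of the mixed linear-non-linear idea: Metropolis sampling over the non-linear parameters, with the linearly related parameters solved analytically by least squares at each proposal. This caricature profiles out the linear parameters and omits the data-weight and regularization hyperparameters that the full method also treats; build_G is a hypothetical user-supplied function returning the design matrix for a given non-linear parameter vector:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_mixed(d, build_G, theta0, n_samples=5000, step=0.05, sigma=1.0):
        """Metropolis sampling of non-linear parameters theta; linear parameters m
        are obtained analytically from the design matrix G(theta) at each proposal."""
        theta = np.asarray(theta0, dtype=float)
        G = build_G(theta)
        m, *_ = np.linalg.lstsq(G, d, rcond=None)
        logp = -0.5 * np.sum((d - G @ m) ** 2) / sigma ** 2
        samples = []
        for _ in range(n_samples):
            theta_new = theta + step * rng.standard_normal(theta.shape)
            G_new = build_G(theta_new)
            m_new, *_ = np.linalg.lstsq(G_new, d, rcond=None)
            logp_new = -0.5 * np.sum((d - G_new @ m_new) ** 2) / sigma ** 2
            if np.log(rng.uniform()) < logp_new - logp:   # Metropolis acceptance rule
                theta, m, logp = theta_new, m_new, logp_new
            samples.append((theta.copy(), m.copy()))
        return samples
    ```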

  3. Geoelectric Characterization of Thermal Water Aquifers Using 2.5D Inversion of VES Measurements

    NASA Astrophysics Data System (ADS)

    Gyulai, Á.; Szűcs, P.; Turai, E.; Baracza, M. K.; Fejes, Z.

    2017-03-01

    This paper presents a short theoretical summary of the series expansion-based 2.5D combined geoelectric weighted inversion (CGWI) method and highlights how the simultaneous character of this inversion allows the number of unknowns to be decreased. 2.5D CGWI is an approximate inversion method for the determination of 3D structures, which uses the joint 2D forward modeling of dip- and strike-direction data. In the inversion procedure, Steiner's most frequent value method is applied for the automatic separation of dip- and strike-direction data and outliers. The workflow of the inversion and its practical application are presented in the study. For conventional vertical electrical sounding (VES) measurements, this method can determine the parameters of complex structures more accurately than single inversion. Field data show that the developed 2.5D CGWI can determine the optimal location for drilling an exploratory thermal water prospecting well. The novelty of this research is that the measured VES data in the dip and strike directions are jointly inverted by the 2.5D CGWI method.

  4. Inversion of Density Interfaces Using the Pseudo-Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaohong; Du, Yukun; Liu, Zhan; Zhao, Wenju; Chen, Xiaocheng

    2018-05-01

    This paper presents a new pseudo-backpropagation (BP) neural network method that can invert multiple density interfaces simultaneously. The new method is based on conventional forward and inverse modeling theory in addition to the conventional pseudo-BP neural network algorithm. A 3D inversion model for gravity anomalies of multiple density interfaces using the pseudo-BP neural network method is constructed after analyzing the structure and function of the artificial neural network. The corresponding iterative inverse formula for the space field is presented at the same time. Based on trials with gravity anomaly and density noise, the influence of the two kinds of noise on the inversion result is discussed and the noise level required for the stability of the algorithm is analyzed. The effects of the initial model on reducing the ambiguity of the result and improving the precision of the inversion are discussed. The correctness and validity of the method were verified with a 3D model of three interfaces. 3D inversion was performed on the observed gravity anomaly data of the Okinawa Trough using the program presented herein. The Tertiary basement and Moho depth were obtained from the inversion results, which also attest to the adaptability of the method. This study represents a useful step toward the inversion of gravity density interfaces.

  5. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.

  6. Robot geometry calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Tso, Kam; Roston, Gerald

    1988-01-01

    Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into a robot language, RCCL, and used in the NASA Telerobot Testbed.

  7. Group refractive index reconstruction with broadband interferometric confocal microscopy

    PubMed Central

    Marks, Daniel L.; Schlachter, Simon C.; Zysk, Adam M.; Boppart, Stephen A.

    2010-01-01

    We propose a novel method of measuring the group refractive index of biological tissues at the micrometer scale. The technique utilizes a broadband confocal microscope embedded into a Mach–Zehnder interferometer, with which spectral interferograms are measured as the sample is translated through the focus of the beam. The method does not require phase unwrapping and is insensitive to vibrations in the sample and reference arms. High measurement stability is achieved because a single spectral interferogram contains all the information necessary to compute the optical path delay of the beam transmitted through the sample. Included are a physical framework defining the forward problem, linear solutions to the inverse problem, and simulated images of biologically relevant phantoms. PMID:18451922

  8. Apparatus and method for measuring critical current properties of a coated conductor

    DOEpatents

    Mueller, Fred M [Los Alamos, NM]; Haenisch, Jens [Dresden, DE]

    2012-07-24

    The transverse critical-current uniformity in a superconducting tape was determined using a magnetic knife apparatus. The critical current (I_c) distribution and transverse critical current density (J_c) distribution in YBCO coated conductors were measured nondestructively with high resolution using a magnetic knife apparatus. The method utilizes the strong depression of J_c in applied magnetic fields. A narrow region of low, including zero, magnetic field in a surrounding higher field is moved transversely across a sample of coated conductor. This reveals the critical current density distribution. A Fourier series inversion process was used to determine the transverse J_c distribution in the sample.

  9. Comparative evolution of the inverse problems (Introduction to an interdisciplinary study of the inverse problems)

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.

  10. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, L.; Gu, H.

    2017-12-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though these approximations are concise and convenient to use, they have certain limitations: they are valid only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor expansion to linearize the inversion problem. Through joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion has better applicability: it requires fewer assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and computationally inexpensive. We use the Marmousi model to generate synthetic seismic records and to analyze the influence of random noise. Without noise, all estimated results are relatively accurate. As the noise increases, the P-wave velocity change and oil saturation change remain stable and are less affected by noise, whereas the S-wave velocity change is the most affected. Finally, we apply the method to actual time-lapse seismic field data, and the results demonstrate its availability and feasibility in practical situations.

  11. An Integrated Approach to Characterizing Bypassed Oil in Heterogeneous and Fractured Reservoirs Using Partitioning Tracers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhil Datta-Gupta

    2006-12-31

    We explore the use of efficient streamline-based simulation approaches for modeling partitioning interwell tracer tests in hydrocarbon reservoirs. Specifically, we utilize the unique features of streamline models to develop an efficient approach for interpretation and history matching of field tracer response. A critical aspect here is the underdetermined and highly ill-posed nature of the associated inverse problems. We have investigated the relative merits of traditional history matching ('amplitude inversion') and a novel travel time inversion in terms of robustness of the method and convergence behavior of the solution. We show that the traditional amplitude inversion is orders of magnitude more non-linear and that its solution is likely to get trapped in a local minimum, leading to an inadequate history match. The proposed travel time inversion is shown to be extremely efficient and robust for practical field applications. The streamline approach is generalized to model water injection in naturally fractured reservoirs through the use of a dual-media approach. The fractures and matrix are treated as separate continua that are connected through a transfer function, as in conventional finite difference simulators for modeling fractured systems. A detailed comparison with a commercial finite difference simulator shows very good agreement. Furthermore, an examination of the scaling behavior of the computation time indicates that the streamline approach is likely to result in significant savings for large-scale field applications. We also propose a novel approach to history matching finite-difference models that combines the advantages of the streamline models with the versatility of finite-difference simulation. In our approach, we utilize the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. The use of the finite-difference model allows us to account for detailed process physics and compressibility effects. The approach is very fast and avoids much of the subjective judgment and time-consuming trial and error associated with manual history matching. We demonstrate the power and utility of our approach using a synthetic example and two field examples. We have also explored the use of a finite difference reservoir simulator, UTCHEM, for field-scale design and optimization of partitioning interwell tracer tests. The finite-difference model allows us to include detailed physics associated with reactive tracer transport, particularly those related to transverse and cross-streamline mechanisms. We have investigated the potential use of downhole tracer samplers and also the use of natural tracers for the design of partitioning tracer tests. Finally, we discuss several alternative ways of using partitioning interwell tracer tests (PITTs) in oil fields for the calculation of oil saturation, swept pore volume and sweep efficiency, and assess the accuracy of such tests under a variety of reservoir conditions.

  12. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique with no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). For the algorithm, we added a parallel structure, improved the computational efficiency, reduced the computer memory requirement, and added topographic and marine factors. As a result, the 3D inversion can run on an ordinary PC with high efficiency and accuracy, and all MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.

  13. Clearance detector and method for motion and distance

    DOEpatents

    Xavier, Patrick G [Albuquerque, NM

    2011-08-09

    A method for correct and efficient detection of clearances between three-dimensional bodies in computer-based simulations, where one or both of the bodies is subject to translations and/or rotations. The method conservatively determines the size of such clearances and whether there is a collision between the bodies. Given two bodies, each of which is undergoing separate motions, the method utilizes bounding-volume hierarchy representations of the two bodies, and mappings and inverse mappings for the motions of the two bodies. The method uses these representations, mappings and direction vectors to determine the directionally furthest locations of points on the convex hulls of the volumes virtually swept by the bodies, and hence the clearance between the bodies, without having to calculate the convex hulls themselves. The method includes clearance detection for bodies comprising convex geometrical primitives and more specific techniques for bodies comprising convex polyhedra.

  14. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterizing the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator had previously been reported that yields an update free from inversions of linear operators when utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
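
    For orientation only, a toy hybrid steepest descent iteration (not the proposed operator): the first-stage solution set is the least-squares solution set of an underdetermined system, expressed as the fixed points of a Landweber-type nonexpansive map, and the second-stage objective is the squared norm, so the iteration approaches the minimum-norm least-squares solution:

    ```python
    import numpy as np

    def hsdm_min_norm(A, b, n_iter=5000):
        """HSDM update x <- T(x) - lambda_n * grad(f)(T(x)) with
        T(x) = x - gamma * A^T (A x - b)  (averaged nonexpansive for this gamma)
        and f(x) = 0.5 * ||x||^2, whose gradient is simply x."""
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # keeps T averaged nonexpansive
        x = np.zeros(A.shape[1])
        for n in range(1, n_iter + 1):
            Tx = x - gamma * A.T @ (A @ x - b)        # first-stage (fixed-point) operator
            lam = 1.0 / n                             # slowly diminishing step sizes
            x = Tx - lam * Tx                         # steepest descent on 0.5*||x||^2
        return x

    # small underdetermined example: approaches the minimum-norm least-squares solution
    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
    b = np.array([1.0, 1.0])
    x_star = hsdm_min_norm(A, b)                      # close to [1/3, 2/3, 1/3]
    ```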

  15. Conducting research on the Medicare market: the need for better data and methods.

    PubMed

    Wong, H S; Hellinger, F J

    2001-04-01

    To highlight data limitations, the need to improve data collection, the need to develop better analytic methods, and the need to use alternative data sources to conduct research related to the Medicare program. Objectives were achieved by reviewing existing studies on risk selection in Medicare HMOs, examining their data limitations, and introducing a new approach that circumvents many of these shortcomings. Data for years 1995-97 for five states (Arizona, Florida, Massachusetts, New York, and Pennsylvania) from the Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases (SIDs), maintained by the Agency for Healthcare Research and Quality; and the Health Care Financing Administration's Medicare Managed Care Market Penetration Data Files and Medicare Provider Analysis and Review Files. Analysis of hospital utilization rates for Medicare beneficiaries in the traditional fee-for-service (FFS) Medicare and Medicare HMO sectors and examination of the relationship between these rates and the Medicare HMO penetration rates. Medicare HMOs have lower hospital utilization rates than their FFS counterparts, differences in utilization rates vary across states, and HMO penetration rates are inversely related to our rough measure of favorable selection. Substantial growth in Medicare HMO enrollment and the implementation of a new risk-adjusted payment system have led to an increasing need for research on the Medicare program. Improved data collection, better methods, new creative approaches, and alternative data sources are needed to address these issues in a timely and suitable manner.

  16. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  17. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    NASA Astrophysics Data System (ADS)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
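
    A generic projected Landweber sketch for a 1-D cyclic-convolution model, with the forward and adjoint operators applied via FFT; the kernel is left unspecified (for Hotine's integral it would be the appropriate spherical kernel sampled along a parallel), and non-negativity merely stands in for whatever constraint set is actually used:

    ```python
    import numpy as np

    def projected_landweber_fft(kernel, data, n_iter=200):
        """Projected Landweber iteration for data = kernel (*) x (cyclic convolution),
        with forward and adjoint operators evaluated by 1-D FFT and a simple
        projection (non-negativity here) applied after each gradient step."""
        K = np.fft.fft(kernel)
        tau = 1.0 / np.max(np.abs(K)) ** 2            # step size below 2 / ||A||^2
        x = np.zeros_like(np.asarray(data, dtype=float))
        for _ in range(n_iter):
            Ax = np.real(np.fft.ifft(K * np.fft.fft(x)))                     # forward model
            grad = np.real(np.fft.ifft(np.conj(K) * np.fft.fft(Ax - data)))  # adjoint of residual
            x = np.clip(x - tau * grad, 0.0, None)    # gradient step plus projection
        return x
    ```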

  18. Breast ultrasound computed tomography using waveform inversion with source encoding

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.

  19. Following the footprints of polymorphic inversions on SNP data: from detection to association tests

    PubMed Central

    Cáceres, Alejandro; González, Juan R.

    2015-01-01

    Inversion polymorphisms have important phenotypic and evolutionary consequences in humans. Two different methodologies have been used to infer inversions from SNP dense data, enabling the use of large cohorts for their study. One approach relies on the differences in linkage disequilibrium across breakpoints; the other one captures the internal haplotype groups that tag the inversion status of chromosomes. In this article, we assessed the convergence of the two methods in the detection of 20 human inversions that have been reported in the literature. The methods converged in four inversions including inv-8p23, for which we studied its association with low-BMI in American children. Using a novel haplotype tagging method with control on inversion ancestry, we computed the frequency of inv-8p23 in two American cohorts and observed inversion haplotype admixture. Accounting for haplotype ancestry, we found that the European inverted allele in children carries a recessive risk of underweight, validated in an independent Spanish cohort (combined: OR= 2.00, P = 0.001). While the footprints of inversions on SNP data are complex, we show that systematic analyses, such as convergence of different methods and controlling for ancestry, can reveal the contribution of inversions to the ancestral composition of populations and to the heritability of human disease. PMID:25672393

  20. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine area shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global-search simulated annealing (SA) algorithm in the watershed shows that, although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.

  1. Inverse scattering pre-stack depth imaging and its comparison to some depth migration methods for imaging rich fault complex structure

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal

    2012-06-01

    Migration is an important issue for seismic imaging of complex structures. In this decade, depth migration has become an important tool for producing accurate images in the depth domain instead of the time domain. The challenge for depth migration methods, however, is revealing the complex structure of the subsurface. There are many depth migration methods, each with its advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation. This method is intended as a solution for imaging complex structures in Indonesia, especially in zones rich in thrust faults. In this research, we develop an advanced wave-equation migration based on time-domain inverse scattering, which uses the more natural propagation of scattered waves. This pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true-amplitude recovery, an inverse divergence procedure and recovery of transmission loss are included in the pre-stack migration. Benchmarking of the proposed inverse scattering pre-stack depth migration against other migration methods is also presented, i.e., wave-equation pre-stack depth migration, wave-equation depth migration, and the pre-stack time migration method. The inverse scattering pre-stack depth migration successfully imaged the fault-rich zone containing extremely steep dips, resulting in a seismic image of superior quality. The image quality of the inverse scattering migration is much better than that of the other migration methods.

  2. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges.

    PubMed

    Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James

    2017-09-01

    Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
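
    For illustration, a minimal sketch of one of the reviewed techniques (term frequency-inverse document frequency) using scikit-learn; the seed corpus, n-gram range, and cutoff are hypothetical and not taken from the case study.

        from sklearn.feature_extraction.text import TfidfVectorizer

        # Hypothetical seed corpus: titles/abstracts of records already known to be relevant.
        seed_records = [
            "school based interventions for adolescent smoking cessation",
            "peer led programmes to reduce tobacco use in secondary schools",
            "effectiveness of classroom education on youth smoking prevention",
        ]

        vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
        tfidf = vectorizer.fit_transform(seed_records)

        # Average TF-IDF weight of each term across the seed set; high-scoring terms
        # are candidates for inclusion in the database search string.
        scores = tfidf.mean(axis=0).A1
        terms = vectorizer.get_feature_names_out()
        for term, score in sorted(zip(terms, scores), key=lambda t: -t[1])[:10]:
            print(f"{term:30s} {score:.3f}")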

  3. Automated Detection and Modeling of Slow Slip: Case Study of the Cascadia Subduction Zone

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Liu, Z.

    2012-12-01

    The discovery of transient slow slip events over the past decade has changed our understanding of tectonic hazards and the earthquake cycle. Proper geodetic characterization of transient deformation is necessary for studies of regional interseismic, coseismic and postseismic tectonics, and miscalculations can affect our understanding of the regional stress field. We utilize two different methods to create a complete record of slow slip from continuous GPS stations in the Cascadia subduction zone between 1996 and 2012: spatiotemporal principal component analysis (PCA) and the relative strength index (RSI). The PCA is performed on 100 day windows of nearby stations to locate signals that exist across many stations in the network by looking at the ratio of the first two eigenvalues. The RSI is a financial momentum oscillator that looks for changes in individual time series with respect to previous epochs to locate rapid changes, indicative of transient deformation. Using both methods, we create a complete history of slow slip across the Cascadia subduction zone, fully characterizing the timing, progression, and magnitude of events. We inject the results from the automated transient detection into a time-dependent slip inversion and apply a Kalman filter based network inversion method to image the spatiotemporal variation of slip transients along the Cascadia margin.
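
    A hedged sketch of the two detection statistics described above, computed on generic displacement time series: the eigenvalue ratio of a windowed PCA and a simple RSI. Window length, smoothing period, and thresholds are illustrative assumptions, not the values used in the study.

        import numpy as np

        def pca_eig_ratio(window):
            """window: (n_days, n_stations) detrended displacements; return lambda1/lambda2.
            A large ratio indicates a common-mode signal shared by nearby stations."""
            X = window - window.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
            return eig[0] / max(eig[1], 1e-12)

        def rsi(series, period=14):
            """Relative strength index (0-100) of a single-station time series;
            rapid one-sided changes push the index toward the extremes."""
            d = np.diff(series)
            gains = np.where(d > 0, d, 0.0)
            losses = np.where(d < 0, -d, 0.0)
            avg_gain = np.convolve(gains, np.ones(period) / period, mode="valid")
            avg_loss = np.convolve(losses, np.ones(period) / period, mode="valid")
            rs = avg_gain / np.maximum(avg_loss, 1e-12)
            return 100.0 - 100.0 / (1.0 + rs)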

  4. Viscous Design of TCA Configuration

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Bauer, Steven X. S.; Campbell, Richard L.

    1999-01-01

    The goal in this effort is to redesign the baseline TCA configuration for improved performance at both supersonic and transonic cruise. Viscous analyses are conducted with OVERFLOW, a Navier-Stokes code for overset grids, using PEGSUS to compute the interpolations between overset grids. Viscous designs are conducted with OVERDISC, a script which couples OVERFLOW with the Constrained Direct Iterative Surface Curvature (CDISC) inverse design method. The successful execution of any computational fluid dynamics (CFD) based aerodynamic design method for complex configurations requires an efficient method for regenerating the computational grids to account for modifications to the configuration shape. The first section of this presentation deals with the automated regridding procedure used to generate overset grids for the fuselage/wing/diverter/nacelle configurations analysed in this effort. The second section outlines the procedures utilized to conduct OVERDISC inverse designs. The third section briefly covers the work conducted by Dick Campbell, in which a dual-point design at Mach 2.4 and 0.9 was attempted using OVERDISC; the initial configuration from which this design effort was started is an early version of the optimized shape for the TCA configuration developed by the Boeing Commercial Airplane Group (BCAG), which eventually evolved into the NCV design. The final section presents results from application of the Natural Flow Wing design philosophy to the TCA configuration.

  5. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.

  6. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity with the total number of the sought parameters n × 10^3 of the medium. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.

  7. Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids

    NASA Astrophysics Data System (ADS)

    Tan, Maojin; Wang, Peng; Mao, Keyu

    2014-04-01

    Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and mathematical model, a Fredholm equation, are introduced, and the inversion methods including Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) are studied in detail. In a simulation test, a multi-echo CPMG sequence activation is designed first, echo trains of the ideal fluid models are synthesized, an inversion algorithm is then applied to these synthetic echo trains, and finally the T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are used, the inversion results are in good agreement with the assumed fluid model, which indicates that the 3D NMR inversion method is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, and the sensitivity to the fluids at different magnetic field gradients is examined in detail. The effect of the magnetic field gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic field gradient is chosen.
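
    As a simplified, hedged example of the SVD route for this kind of Fredholm problem, the sketch below inverts a 1D CPMG echo train for a T2 distribution with truncated SVD; the kernel, grids, noise level, and truncation rule are assumptions, and the full 3D (T2-T1-D) case would use Kronecker-structured kernels instead.

        import numpy as np

        # Discretized Fredholm kernel for a CPMG echo train: E(t_i) = sum_j exp(-t_i / T2_j) f_j
        t = np.linspace(2e-3, 0.5, 250)                 # echo times (s)
        T2 = np.logspace(-3, 0, 60)                     # relaxation-time grid (s)
        K = np.exp(-t[:, None] / T2[None, :])

        # Synthetic data from an assumed two-peak T2 distribution plus noise
        f_true = (np.exp(-0.5 * ((np.log10(T2) + 2.0) / 0.15) ** 2)
                  + 0.6 * np.exp(-0.5 * ((np.log10(T2) + 0.7) / 0.2) ** 2))
        d = K @ f_true + 0.01 * np.random.randn(t.size)

        # Truncated-SVD regularization: discard small singular values
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        k = int(np.sum(s > 1e-2 * s[0]))                # truncation level (assumed noise-dependent)
        f_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])  # truncated-SVD estimate of the T2 distribution
        f_tsvd = np.clip(f_tsvd, 0.0, None)             # crude nonnegativity; BRD/NNLS handles this properly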

  8. 2.5D transient electromagnetic inversion with OCCAM method

    NASA Astrophysics Data System (ADS)

    Li, R.; Hu, X.

    2016-12-01

    In applications of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied for imaging over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with a finite-difference time-domain (FDTD) forward method, is mainly implemented with Nonlinear Conjugate Gradients (NLCG). However, the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the method may fail to converge. We use the OCCAM inversion method to avoid this weakness; OCCAM inversion has proven to be a more stable and reliable method for imaging the 2.5D electrical conductivity of the subsurface. First, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Second, we use the OCCAM inversion scheme, with an appropriate objective error functional that we established, to image the 2.5D structure; a data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. Imaging results for the example model shown in Fig. 1 demonstrate that the OCCAM scheme is an efficient inversion method for TEM with an FDTD forward solver, and the inversion iterations converge within a few steps. Summarizing the imaging process, we draw the following conclusions. First, 2.5D imaging in the FDTD system with OCCAM inversion yields the desired images of the resistivity structure in a homogeneous half-space. Second, the imaging results do not usually depend strongly on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model; it is therefore better to set the initial model from other geological information in applications, and when the background resistivity fits the true model well, imaging an anomalous body requires only a few iterations. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.

  9. Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.

    PubMed

    Liu, X; Zhai, Z

    2007-12-01

    Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.

  10. Identifying seawater intrusion in coastal areas by means of 1D and quasi-2D joint inversion of TDEM and VES data

    NASA Astrophysics Data System (ADS)

    Martínez-Moreno, F. J.; Monteiro-Santos, F. A.; Bernardo, I.; Farzamian, M.; Nascimento, C.; Fernandes, J.; Casal, B.; Ribeiro, J. A.

    2017-09-01

    Seawater intrusion is an increasingly widespread problem in coastal aquifers, caused by climate change (sea-level rise and extreme phenomena such as flooding and droughts) and by groundwater depletion near the coastline. To evaluate and mitigate the environmental risks of this phenomenon it is necessary to characterize the coastal aquifer and the salt intrusion. Geophysical methods are the most appropriate tool for such investigations. Among geophysical techniques, electrical methods are able to detect seawater intrusion owing to the high resistivity contrast between saltwater, freshwater and the geological layers. The combination of two or more geophysical methods is recommended, and they are most effective when the data are inverted jointly, because the final model then encompasses the physical properties measured by each method. In this investigation, joint inversion of vertical electric and time-domain soundings has been performed to examine seawater intrusion in an area within the Ferragudo-Albufeira aquifer system (Algarve, southern Portugal). For this purpose two profiles combining electrical resistivity tomography (ERT) and time-domain electromagnetic (TDEM) measurements were acquired, and the results were compared with information from exploration drilling. Three different inversions were carried out: single inversion of the ERT and TDEM data, 1D joint inversion, and quasi-2D joint inversion. The single-inversion results identify the seawater intrusion, although the sedimentary layers detected in the exploration drilling were not well differentiated. The models obtained with 1D joint inversion improve on the single inversions, with better detection of the sedimentary layers and a better-defined seawater intrusion. Finally, the quasi-2D joint inversion reveals a more realistic shape of the seawater intrusion and is able to distinguish more of the sedimentary layers recognised in the exploration drilling. This study demonstrates that the quasi-2D joint inversion improves on the previous inversion methods, making it a powerful tool applicable to different research areas.

  11. Four-parameter potential box with inverse square singular boundaries

    NASA Astrophysics Data System (ADS)

    Alhaidari, A. D.; Taiwo, T. J.

    2018-03-01

    Using the Tridiagonal Representation Approach (TRA), we obtain solutions (energy spectrum and corresponding wavefunctions) for a four-parameter potential box with inverse square singularity at the boundaries. It could be utilized in physical applications to replace the widely used one-parameter infinite square potential well (ISPW). The four parameters of the potential provide an added flexibility over the one-parameter ISPW to control the physical features of the system. The two potential parameters that give the singularity strength at the boundaries are naturally constrained to avoid the inherent quantum anomalies associated with the inverse square potential.

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
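
    A minimal sketch of the Krylov-subspace idea for a single Levenberg-Marquardt step, using SciPy's LSQR damping instead of a dense QR/SVD factorization; the subspace recycling across damping parameters described in the paper is omitted, and the forward model and Jacobian are assumed to be supplied elsewhere.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        def lm_step(J, r, damping):
            """One Levenberg-Marquardt step: minimize ||J*dm + r||^2 + damping^2*||dm||^2
            with LSQR, which builds a Krylov subspace instead of factorizing J^T J."""
            result = lsqr(J, -r, damp=damping, atol=1e-8, btol=1e-8)
            return result[0]                    # model update dm

        # Illustrative use inside an iteration loop (hypothetical forward/jacobian helpers):
        #   r  = forward(m) - d_obs
        #   dm = lm_step(jacobian(m), r, damping=1.0)
        #   m  = m + dm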

  13. The origin, global distribution, and functional impact of the human 8p23 inversion polymorphism.

    PubMed

    Salm, Maximilian P A; Horswell, Stuart D; Hutchison, Claire E; Speedy, Helen E; Yang, Xia; Liang, Liming; Schadt, Eric E; Cookson, William O; Wierzbicki, Anthony S; Naoumova, Rossi P; Shoulders, Carol C

    2012-06-01

    Genomic inversions are an increasingly recognized source of genetic variation. However, a lack of reliable high-throughput genotyping assays for these structures has precluded a full understanding of an inversion's phylogenetic, phenotypic, and population genetic properties. We characterize these properties for one of the largest polymorphic inversions in man (the ∼4.5-Mb 8p23.1 inversion), a structure that encompasses numerous signals of natural selection and disease association. We developed and validated a flexible bioinformatics tool that utilizes SNP data to enable accurate, high-throughput genotyping of the 8p23.1 inversion. This tool was applied retrospectively to diverse genome-wide data sets, revealing significant population stratification that largely follows a clinal "serial founder effect" distribution model. Phylogenetic analyses establish the inversion's ancestral origin within the Homo lineage, indicating that 8p23.1 inversion has occurred independently in the Pan lineage. The human inversion breakpoint was localized to an inverted pair of human endogenous retrovirus elements within the large, flanking low-copy repeats; experimental validation of this breakpoint confirmed these elements as the likely intermediary substrates that sponsored inversion formation. In five data sets, mRNA levels of disease-associated genes were robustly associated with inversion genotype. Moreover, a haplotype associated with systemic lupus erythematosus was restricted to the derived inversion state. We conclude that the 8p23.1 inversion is an evolutionarily dynamic structure that can now be accommodated into the understanding of human genetic and phenotypic diversity.

  14. The origin, global distribution, and functional impact of the human 8p23 inversion polymorphism

    PubMed Central

    Salm, Maximilian P.A.; Horswell, Stuart D.; Hutchison, Claire E.; Speedy, Helen E.; Yang, Xia; Liang, Liming; Schadt, Eric E.; Cookson, William O.; Wierzbicki, Anthony S.; Naoumova, Rossi P.; Shoulders, Carol C.

    2012-01-01

    Genomic inversions are an increasingly recognized source of genetic variation. However, a lack of reliable high-throughput genotyping assays for these structures has precluded a full understanding of an inversion's phylogenetic, phenotypic, and population genetic properties. We characterize these properties for one of the largest polymorphic inversions in man (the ∼4.5-Mb 8p23.1 inversion), a structure that encompasses numerous signals of natural selection and disease association. We developed and validated a flexible bioinformatics tool that utilizes SNP data to enable accurate, high-throughput genotyping of the 8p23.1 inversion. This tool was applied retrospectively to diverse genome-wide data sets, revealing significant population stratification that largely follows a clinal “serial founder effect” distribution model. Phylogenetic analyses establish the inversion's ancestral origin within the Homo lineage, indicating that 8p23.1 inversion has occurred independently in the Pan lineage. The human inversion breakpoint was localized to an inverted pair of human endogenous retrovirus elements within the large, flanking low-copy repeats; experimental validation of this breakpoint confirmed these elements as the likely intermediary substrates that sponsored inversion formation. In five data sets, mRNA levels of disease-associated genes were robustly associated with inversion genotype. Moreover, a haplotype associated with systemic lupus erythematosus was restricted to the derived inversion state. We conclude that the 8p23.1 inversion is an evolutionarily dynamic structure that can now be accommodated into the understanding of human genetic and phenotypic diversity. PMID:22399572

  15. Technical note: An inverse method to relate organic carbon reactivity to isotope composition from serial oxidation

    NASA Astrophysics Data System (ADS)

    Hemingway, Jordon D.; Rothman, Daniel H.; Rosengard, Sarah Z.; Galy, Valier V.

    2017-11-01

    Serial oxidation coupled with stable carbon and radiocarbon analysis of sequentially evolved CO2 is a promising method to characterize the relationship between organic carbon (OC) chemical composition, source, and residence time in the environment. However, observed decay profiles depend on experimental conditions and oxidation pathway. It is therefore necessary to properly assess serial oxidation kinetics before utilizing decay profiles as a measure of OC reactivity. We present a regularized inverse method to estimate the distribution of OC activation energy (E), a proxy for bond strength, using serial oxidation. Here, we apply this method to ramped temperature pyrolysis or oxidation (RPO) analysis but note that this approach is broadly applicable to any serial oxidation technique. RPO analysis directly compares thermal reactivity to isotope composition by determining the E range for OC decaying within each temperature interval over which CO2 is collected. By analyzing a decarbonated test sample at multiple masses and oven ramp rates, we show that OC decay during RPO analysis follows a superposition of parallel first-order kinetics and that resulting E distributions are independent of experimental conditions. We therefore propose the E distribution as a novel proxy to describe OC thermal reactivity and suggest that E vs. isotope relationships can provide new insight into the compositional controls on OC source and residence time.
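
    A hedged sketch of one way such a regularized inversion can be set up: parallel first-order reactions with an Arrhenius rate law, discretized over an activation-energy grid and inverted with nonnegative, Tikhonov-regularized least squares. The pre-exponential factor, ramp rate, grids, and regularization weight are illustrative assumptions, not the published parameters.

        import numpy as np
        from scipy.optimize import nnls

        R  = 8.314e-3          # kJ mol^-1 K^-1
        k0 = 1e10              # assumed Arrhenius pre-exponential factor (s^-1)

        # Temperature ramp and activation-energy grid (illustrative values)
        time = np.linspace(0.0, 3600.0, 200)               # s
        T    = 373.0 + (5.0 / 60.0) * time                 # 5 K/min ramp starting at 100 C
        E    = np.linspace(100.0, 300.0, 80)               # kJ mol^-1

        # Design matrix: fraction of carbon with activation energy E_j remaining at time t_i,
        # g(t, E) = exp(-k0 * int_0^t exp(-E / (R*T)) dt') for parallel first-order reactions.
        rates = k0 * np.exp(-E[None, :] / (R * T[:, None]))
        cum   = np.cumsum(0.5 * (rates[1:] + rates[:-1]) * np.diff(time)[:, None], axis=0)
        G     = np.exp(-np.vstack([np.zeros(E.size), cum]))

        def invert_pE(d, lam=1.0):
            """Estimate p(E) from an observed remaining-fraction curve d with
            Tikhonov (smallness) regularization and a nonnegativity constraint."""
            A = np.vstack([G, lam * np.eye(E.size)])
            b = np.concatenate([d, np.zeros(E.size)])
            p, _ = nnls(A, b)
            return p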

  16. Probabilistic Geoacoustic Inversion in Complex Environments

    DTIC Science & Technology

    2015-09-30

    Probabilistic Geoacoustic Inversion in Complex Environments Jan Dettmer School of Earth and Ocean Sciences, University of Victoria, Victoria BC...long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must...project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is

  17. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic, computer-based processing technique with no additional cost; it avoids extra field work and manual processing and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and incorporated topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy, and MT data from surface stations, seabed stations and underground stations can all be used in the inversion. A verification and application example of the 3D inversion algorithm is shown in Figure 1. From the comparison in Figure 1, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the type of data used (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for inversion with topography, so it is useful for studies of the continental shelf with continuous exploration across land, marine and underground settings. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rocks and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by the delineation of the diorite pluton uplift range. The test results show that high-quality data processing and an efficient inversion method for electromagnetic data are an important guarantee for porphyry ore exploration.

  18. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  19. Minimum relative entropy, Bayes and Kapur

    NASA Astrophysics Data System (ADS)

    Woodbury, Allan D.

    2011-04-01

    The focus of this paper is to illustrate important philosophies on inversion and the similarities and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general discrete linear inverse problem. MRE differs from both Bayes and classical statistical methods in that knowledge of moments is used as 'data' rather than sample values. MRE, like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments are not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and, in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments, including second-order central moments. Various solutions of Jacobs & van der Geest were repeated and clarified. Menke's weighted minimum length solution was shown to have a basis in information theory, and the classic least-squares estimate is shown to be a solution to MRE under the conditions of more data than unknowns and where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results match the actual density profile quite closely, at least in the upper portions of the profile. The similarity to the Bayes results is a reflection of the fact that the MRE posterior pdf, and its mean, are constrained not by d = Gm but by its first moment E(d = Gm), a weakened form of the constraints. If there is no error in the data then one should expect complete agreement between Bayes and MRE, and this is what is shown. Similar results are obtained when second-moment data are available (e.g. posterior covariance equal to zero). But dissimilar results are noted when we attempt to derive a Bayesian-like result from MRE. In the various examples given in this paper, the problems look similar but are, in the final analysis, not equal. The methods of attack are different and so are the results, even though we have used the linear inverse problem as a common template.

  20. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA), which are critical to the bandwidth-bound nature of the present method, are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grids. Enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
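
    For reference, a serial Python version of the tridiagonal (Thomas) solve that the ADI sweeps rely on; the GPU implementation in the paper batches many such systems through CUDA libraries, which this sketch does not attempt to reproduce.

        import numpy as np

        def thomas_solve(a, b, c, d):
            """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c,
            and right-hand side d (a[0] and c[-1] are unused). O(n): forward elimination
            followed by back substitution."""
            n = len(d)
            cp = np.empty(n); dp = np.empty(n)
            cp[0] = c[0] / b[0]
            dp[0] = d[0] / b[0]
            for i in range(1, n):
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x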

  1. Investigating the "inverse care law" in dental care: A comparative analysis of Canadian jurisdictions.

    PubMed

    Dehmoobadsharifabadi, Armita; Singhal, Sonica; Quiñonez, Carlos

    2017-03-01

    To compare physician and dentist visits nationally and at the provincial/territorial level and to assess the extent of the "inverse care law" in dental care among different age groups in the same way. Publicly available data from the 2007 to 2008 Canadian Community Health Survey were utilized to investigate physician and dentist visits in the past 12 months in relation to self-perceived general and oral health by performing descriptive statistics and binary logistic regression, controlling for age, sex, education, income, and physician/dentist population ratios. Analysis was conducted for all participants and stratified by age groups - children (12-17 years), adults (18-64 years) and seniors (65 years and over). Nationally and provincially/territorially, it appears that the "inverse care law" persists for dental care but is not present for physician care. Specifically, when comparing to those with excellent general/oral health, individuals with poor general health were 2.71 (95% confidence interval [CI]: 2.70-2.72) times more likely to visit physicians, and individuals with poor oral health were 2.16 (95% CI: 2.16-2.17) times less likely to visit dentists. Stratified analyses by age showed more variability in the extent of the "inverse care law" in children and seniors compared to adults. The "inverse care law" in dental care exists both nationally and provincially/territorially among different age groups. Given this, it is important to assess the government's role in improving access to, and utilization of, dental care in Canada.

  2. Polynomial dual energy inverse functions for bone Calcium/Phosphorus ratio determination and experimental evaluation.

    PubMed

    Sotiropoulou, P; Fountos, G; Martini, N; Koukou, V; Michail, C; Kandarakis, I; Nikiforidis, G

    2016-12-01

    An X-ray dual energy (XRDE) method was examined, using polynomial nonlinear approximation of inverse functions for the determination of the bone Calcium-to-Phosphorus (Ca/P) mass ratio. Inverse fitting functions with the least-squares estimation were used, to determine calcium and phosphate thicknesses. The method was verified by measuring test bone phantoms with a dedicated dual energy system and compared with previously published dual energy data. The accuracy in the determination of the calcium and phosphate thicknesses improved with the polynomial nonlinear inverse function method, introduced in this work, (ranged from 1.4% to 6.2%), compared to the corresponding linear inverse function method (ranged from 1.4% to 19.5%). Copyright © 2016 Elsevier Ltd. All rights reserved.
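
    A hedged sketch of the general idea: bivariate polynomial inverse functions fitted by least squares to calibration measurements and then applied to map low/high-energy measurements to material thicknesses. The polynomial order, variable names, and calibration scheme are assumptions, not the published formulation.

        import numpy as np

        def design(mL, mH, order=2):
            """Bivariate polynomial terms of the low/high-energy log measurements up to 'order'."""
            cols = [mL ** i * mH ** j for i in range(order + 1) for j in range(order + 1 - i)]
            return np.column_stack(cols)

        def fit_inverse(mL, mH, thickness, order=2):
            """Least-squares fit of one inverse function t(mL, mH) from calibration phantoms."""
            A = design(mL, mH, order)
            coeff, *_ = np.linalg.lstsq(A, thickness, rcond=None)
            return coeff

        def apply_inverse(coeff, mL, mH, order=2):
            return design(mL, mH, order) @ coeff

        # Calibration uses known calcium and phosphate thicknesses with measured (mL, mH);
        # one coefficient set per material, after which the Ca/P mass ratio follows.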

  3. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
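
    For context, a minimal sketch of the basic Iterative Boltzmann Inversion update on a tabulated pair potential; reduced units, the damping factor, and the handling of empty RDF bins are assumptions, and the pressure/KBI corrections discussed above are omitted.

        import numpy as np

        def ibi_update(V, g_current, g_target, kT=1.0, alpha=0.2, gmin=1e-6):
            """One Iterative Boltzmann Inversion step:
               V_{k+1}(r) = V_k(r) + alpha * kT * ln(g_k(r) / g_target(r)).
            alpha < 1 damps the update; bins with near-zero RDF are left unchanged."""
            mask = (g_current > gmin) & (g_target > gmin)
            V_new = V.copy()
            V_new[mask] += alpha * kT * np.log(g_current[mask] / g_target[mask])
            return V_new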

  4. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward modeling and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data from the entire region (near, transition, and far field) and to deal with the effects of the artificial source. First, a regularization factor is introduced into the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversions. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
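
    A hedged finite-difference sketch of the cross-gradient quantity on a 2D grid, whose weighted squared norm is the structural coupling term added to a joint objective; grid spacing, array orientation, and the weighting are illustrative assumptions.

        import numpy as np

        def cross_gradient(m1, m2, dx=1.0, dz=1.0):
            """Cross-gradient of two 2D model grids (rows = z, columns = x):
               t = dm1/dx * dm2/dz - dm1/dz * dm2/dx.
            t vanishes wherever the two models change in the same (or opposite) direction,
            which is the structural-similarity constraint used in joint inversion."""
            d1z, d1x = np.gradient(m1, dz, dx)
            d2z, d2x = np.gradient(m2, dz, dx)
            return d1x * d2z - d1z * d2x

        # The joint objective then adds w * np.sum(cross_gradient(m_res, m_sus) ** 2)
        # to the two data-misfit terms, with w chosen by the search described above.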

  5. Though This be Madness, . .

    ERIC Educational Resources Information Center

    Havenhill, Wallace P.

    1969-01-01

    Utilizes two interpretations for the positive and negative signs to demonstrate how multiplication and division involving negative numbers can be represented and their inverse nature illustrated. (RP)

  6. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows for these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics (concentration root-mean square error (RMSE), peak concentration, and total solute mass). In addition, results for POD inversion based on 3 different data densities (120, 300, and 560 data points) and varying number of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
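
    As a hedged illustration of the basis-extraction step, the sketch below computes a POD basis by SVD of mean-removed Monte Carlo training snapshots; the snapshot layout, truncation level, and names are assumptions, and the subsequent ERI inversion over the basis coefficients is only indicated in comments.

        import numpy as np

        def pod_basis(snapshots, n_modes=100):
            """snapshots: (n_cells, n_realizations) matrix of Monte Carlo plume realizations.
            Returns the mean snapshot and the first n_modes left singular vectors (POD basis)."""
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            return mean.ravel(), U[:, :n_modes]

        # The inversion then estimates only the n_modes coefficients c, reconstructing the
        # image as m = mean + Phi @ c, which builds the flow-and-transport physics of the
        # training set into the ERI regularization while reducing the problem dimension.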

  7. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve improved sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed ISAR imaging method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters so as to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  8. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion (the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation) is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  9. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest probability value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with those interpreted from borehole information.
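
    A hedged sketch of a fuzzy c-means step applied to cell log-resistivities, followed by pulling each cell toward its highest-membership cluster centre; the cluster count, fuzzifier, and blending weight are illustrative assumptions rather than the authors' settings.

        import numpy as np

        def fcm_constrain(log_res, n_clusters=3, m=2.0, n_iter=50, blend=0.3, seed=None):
            """Fuzzy c-means on the log-resistivity values of the model cells, then move each
            cell part-way toward the centre of its highest-membership cluster."""
            rng = np.random.default_rng(seed)
            x = log_res.ravel()
            centres = rng.choice(x, n_clusters, replace=False)
            for _ in range(n_iter):
                dist = np.abs(x[:, None] - centres[None, :]) + 1e-12
                u = 1.0 / (dist ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)                 # membership matrix
                centres = (u ** m).T @ x / (u ** m).sum(axis=0)   # centroid update
            target = centres[np.argmax(u, axis=1)]                # most probable cluster per cell
            return ((1.0 - blend) * x + blend * target).reshape(log_res.shape)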

  10. Inversion of solar extinction data from the Apollo-Soyuz Test Project Stratospheric Aerosol Measurement (ASTP/SAM) experiment

    NASA Technical Reports Server (NTRS)

    Pepin, T. J.

    1977-01-01

    The inversion methods that have been used to determine the vertical profile of the extinction coefficient due to stratospheric aerosols from data measured during the ASTP/SAM solar occultation experiment are reported. The inversion methods include the onion-skin peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints; the latter approach involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near-simultaneous measurements made by lidar and balloon-borne dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.

  11. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent exploration targets for oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, detailed subsurface structures must be delineated; accordingly, the migration method is an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images of complicated models even in the presence of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity model, which can be obtained in several ways; these days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of the wave equation can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been used in waveform inversion, which accelerated its development. In this study, we applied acoustic waveform inversion and reverse-time migration to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoirs. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, with the conjugate gradient method for optimization, and was performed using a frequency-selection strategy. The waveform inversion results show that the carbonate reservoir models are clearly recovered, and migration images based on the inversion results are quite reliable. Models with different reservoir thicknesses were also examined, and the results reveal that the lower boundary of the reservoir was not delineated because of energy loss. These results indicate that carbonate reservoirs can be properly imaged and interpreted with waveform inversion and reverse-time migration. This work was supported by the Energy Resources R&D program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2009201030001A, No. 2010T100200133) and the Brain Korea 21 project of Energy System Engineering.

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  14. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bledsoe, Keith C.

    2015-04-01

    The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
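
    For reference, the standard Gelman-Rubin potential scale reduction factor for one parameter across multiple chains, the quantity behind the automated stopping criterion; this is a generic textbook form, not the INVERSE/DREAM implementation.

        import numpy as np

        def gelman_rubin(chains):
            """chains: (n_chains, n_samples) array for one parameter. Returns R-hat;
            values close to 1 (e.g. below ~1.2) indicate the chains have converged."""
            m, n = chains.shape
            chain_means = chains.mean(axis=1)
            B = n * chain_means.var(ddof=1)              # between-chain variance
            W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
            var_hat = (n - 1) / n * W + B / n
            return np.sqrt(var_hat / W)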

  15. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. Proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479

  16. Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties

    PubMed Central

    Chi, Eric C.; Lange, Kenneth

    2014-01-01

    Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662

  17. Sampling from a Discrete Distribution While Preserving Monotonicity.

    DTIC Science & Technology

    1982-02-01

    in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and EX comparisons on average, which may prove...limitations that deserve attention: a. In general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method...uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new having been

  18. Asteroid orbital inversion using uniform phase-space sampling

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.

    2014-07-01

    We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was set out to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows the probabilistic assessments for, e.g., object classification and ephemeris computation as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows for the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. 
On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.

  19. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 107 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.

  20. An Improved 3D Joint Inversion Method of Potential Field Data Using Cross-Gradient Constraint and LSQR Method

    NASA Astrophysics Data System (ADS)

    Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz

    2018-06-01

    The joint interpretation of two sets of geophysical data related to the same source is an appropriate method for decreasing non-uniqueness of the resulting models during inversion process. Among the available methods, a method based on using cross-gradient constraint combines two datasets is an efficient approach. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of situation and extension of anomaly of interest. In this paper, the first attempt is to speed up the required calculation by substituting singular value decomposition by least-squares QR method to solve the large-scale kernel matrix of 3D inversion, more rapidly. Furthermore, to improve the accuracy of resulting models, a combination of depth-weighing matrix and compacted constraint, as automatic selection covariance of initial parameters, is used in the proposed inversion algorithm. This algorithm was developed in Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset of Jalalabad mine, in southeast of Iran was tested. The obtained results by the improved joint 3D inversion of cross-gradient along with compacted constraint showed a mineralised zone in depth interval of about 110-300 m which is in good agreement with the available drilling data. This is also a further confirmation on the accuracy and progress of the improved inversion algorithm.

  1. Inverse gravity modeling for depth varying density structures through genetic algorithm, triangulated facet representation, and switching routines

    NASA Astrophysics Data System (ADS)

    King, Thomas Steven

    A hybrid gravity modeling method is developed to investigate the structure of sedimentary mass bodies. The method incorporates as constraints surficial basement/sediment contacts and topography of a mass target with a quadratically varying density distribution. The inverse modeling utilizes a genetic algorithm (GA) to scan a wide range of the solution space to determine initial models and the Marquardt-Levenberg (ML) nonlinear inversion to determine final models that meet pre-assigned misfit criteria, thus providing an estimate of model variability and uncertainty. The surface modeling technique modifies Delaunay triangulation by allowing individual facets to be manually constructed and non-convex boundaries to be incorporated into the triangulation scheme. The sedimentary body is represented by a set of uneven prisms and edge elements, comprised of tetrahedrons, capped by polyhedrons. Each underlying prism and edge element's top surface is located by determining its point of tangency with the overlying terrain. The remaining overlying mass is gravitationally evaluated and subtracted from the observation points. Inversion then proceeds in the usual sense, but on an irregular tiered surface with each element's density defined relative to their top surface. Efficiency is particularly important due to the large number of facets evaluated for surface representations and the many repeated element evaluations of the stochastic GA. The gravitation of prisms, triangular faceted polygons, and tetrahedrons can be formulated in different ways, either mathematically or by physical approximations, each having distinct characteristics, such as evaluation time, accuracy over various spatial ranges, and computational singularities. A decision tree or switching routine is constructed for each element by combining these characteristics into a single cohesive package that optimizes the computation for accuracy and speed while avoiding singularities. The GA incorporates a subspace technique and parameter dependency to maintain model smoothness during development, thus minimizing creating nonphysical models. The stochastic GA explores the solution space, producing a broad range of unbiased initial models, while the ML inversion is deterministic and thus quickly converges to the final model. The combination allows many solution models to be determined from the same observed data.

  2. Utilization of Negotiated Tuition Aid Benefits. A Summary of the Study "Where Are the Women? A Study of the Underutilization of Tuition Aid Plans."

    ERIC Educational Resources Information Center

    Abramovitz, Mimi

    A chapter from the forthcoming book, "Practitioners' Guide to Education for Working Adults," describes a year-long study to explore the low utilization of tuition aid plans in three unionized companies. The research has shown that the use of tuition aid programs is in inverse ratio to need. Workers who tend to utilize tuition aid are those who…

  3. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors ( X) of some underlying causal model producing the observables or responses (Y = g ( X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research inmore » which inverse predictions, along with an assessment of predictive capability, are desired.« less

  4. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors ( X) of some underlying causal model producing the observables or responses (Y = g ( X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research inmore » which inverse predictions, along with an assessment of predictive capability, are desired.« less

  5. Fast Nonlinear Generalized Inversion of Gravity Data with Application to the Three-Dimensional Crustal Density Structure of Sichuan Basin, Southwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Li, Fang

    2017-11-01

    Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With appropriate algorithm and parameters, it gives a view of the subsurface which characterizes different geological bodies. However, generalized inversion of gravity data is time consuming due to the large amount of data points and model cells adopted. Incorporating of various prior information as constraints deteriorates the above situation. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function is established with weighted data misfit function along with model objective function. The total objective function is solved by a dataspace algorithm. Moreover, depth weighing factor is used to improve depth resolution of the result, and bound constraint is incorporated by a transfer function to limit the model parameters in a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to get the finally density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than conventional generalized inversion algorithm to produce an acceptable solution for gravity inversion problem. The new developed inversion method was also applied for inversion of the gravity data collected over Sichuan basin, southwest China. The established density structure in this study helps understanding the crustal structure of Sichuan basin and provides reference for further oil and gas exploration in this area.

  6. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale informations of the model, meaning the wavelet-domain inversion has inherent multiresolution. In order to accomplish a sparsity constraint, we minimize an L1-norm measure in the wavelet domain that mostly gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets to accomplish our inversion algorithm, and test them on synthetic frequency-domain AEM data set. The results show that higher order wavelets having larger vanishing moments and regularity can deliver a more stable inversion process and give better local resolution, while the lower order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. At last, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. Wavelet-based 3-D inversion of HEM data is compared to L2-norm-based 3-D inversion's result to further investigate the features of the new method.

  7. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion simulation of groundwater has been widely applied in groundwater. Compared to traditional forward modeling, inversion model has more space to study. Zones and inversing modeling cell by cell are conventional methods. Pilot points is a method between them. The traditional inverse modeling method often uses software dividing the model into several zones with a few parameters needed to be inversed. However, distribution is usually too simple for modeler and result of simulation deviation. Inverse cell by cell will get the most actual parameter distribution in theory, but it need computational complexity greatly and quantity of survey data for geological statistical simulation areas. Compared to those methods, pilot points distribute a set of points throughout the different model domains for parameter estimation. Property values are assigned to model cells by Kriging to ensure geological units within the parameters of heterogeneity. It will reduce requirements of simulation area geological statistics and offset the gap between above methods. Pilot points can not only save calculation time, increase fitting degree, but also reduce instability of numerical model caused by numbers of parameters and other advantages. In this paper, we use pilot point in a field which structure formation heterogeneity and hydraulics parameter was unknown. We compare inversion modeling results of zones and pilot point methods. With the method of comparative analysis, we explore the characteristic of pilot point in groundwater inversion model. First, modeler generates an initial spatially correlated field given a geostatistical model by the description of the case site with the software named Groundwater Vistas 6. Defining Kriging to obtain the value of the field functions over the model domain on the basis of their values at measurement and pilot point locations (hydraulic conductivity), then we assign pilot points to the interpolated field which have been divided into 4 zones. And add range of disturbance values to inversion targets to calculate the value of hydraulic conductivity. Third, after inversion calculation (PEST), the interpolated field will minimize an objective function measuring the misfit between calculated and measured data. It's an optimization problem to find the optimum value of parameters. After the inversion modeling, the following major conclusion can be found out: (1) In a field structure formation is heterogeneity, the results of pilot point method is more real: better fitting result of parameters, more stable calculation of numerical simulation (stable residual distribution). Compared to zones, it is better of reflecting the heterogeneity of study field. (2) Pilot point method ensures that each parameter is sensitive and not entirely dependent on other parameters. Thus it guarantees the relative independence and authenticity of parameters evaluation results. However, it costs more time to calculate than zones. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity

  8. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of a linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least square inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of a numerical solution and to obtain an acceptable result in spite of ill posedness of the problem. Implementation of this methodology to reconstructing of the initial waveform to 2013 Solomon Islands tsunami validates the theoretical conclusion for synthetic data and a model tsunami source: the inversion result strongly depends on data noisiness, the azimuthal and temporal coverage of recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.

  9. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  10. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.

  11. Approach to simultaneously denoise and invert backscatter and extinction from photon-limited atmospheric lidar observations.

    PubMed

    Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M

    2016-10-10

    Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.

  12. Classification of Weed Species Using Artificial Neural Networks Based on Color Leaf Texture Feature

    NASA Astrophysics Data System (ADS)

    Li, Zhichen; An, Qiu; Ji, Changying

    The potential impact of herbicide utilization compel people to use new method of weed control. Selective herbicide application is optimal method to reduce herbicide usage while maintain weed control. The key of selective herbicide is how to discriminate weed exactly. The HIS color co-occurrence method (CCM) texture analysis techniques was used to extract four texture parameters: Angular second moment (ASM), Entropy(E), Inertia quadrature (IQ), and Inverse difference moment or local homogeneity (IDM).The weed species selected for studying were Arthraxon hispidus, Digitaria sanguinalis, Petunia, Cyperus, Alternanthera Philoxeroides and Corchoropsis psilocarpa. The software of neuroshell2 was used for designing the structure of the neural network, training and test the data. It was found that the 8-40-1 artificial neural network provided the best classification performance and was capable of classification accuracies of 78%.

  13. To investigate the relation between pore size and twist angle in enhanced thermoelectric efficient porous armchair graphene nanoribbons

    NASA Astrophysics Data System (ADS)

    Kaur, Sukhdeep; Randhawa, Deep Kamal Kaur; Bindra Narang, Sukhleen

    2018-05-01

    Based on Non-Equilibrium Green’s function method, we demonstrate that the twisted deformation is an efficient method to improve the figure of merit ZT of porous armchair graphene nanoribbons AGNRs. The peak value of ZT can be obtained for a certain tunable twist angle. Further analysis shows that the tunable twist angle exhibits an inverse relationship with the pore size laying forth the designers a choice for the larger twists to be replaced by smaller ones simply by increasing the size of the pore. Ballistic transport regime and semi-empirical method using Huckel basis set is used to obtain the electrical properties while the Tersoff potential is employed for the phononic system. These interesting findings indicate that the twisted porous AGNRs can be utilized as designing materials for potential thermoelectric applications.

  14. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method was proposed with the dependent model algorithm. The measured particle system was inversed simultaneously by different particle distribution functions whose mathematic model was known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which decreased the number of multi wavelengths to the greatest extent and increased the selectivity of light source. The single peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was put in the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  15. Buried Man-made Structure Imaging using 2-D Resistivity Inversion

    NASA Astrophysics Data System (ADS)

    Anderson Bery, Andy; Nordiana, M. M.; El Hidayah Ismail, Noer; Jinmin, M.; Nur Amalina, M. K. A.

    2018-04-01

    This study is carried out with the objective to determine the suitable resistivity inversion method for buried man-made structure (bunker). This study was carried out with two stages. The first stage is suitable array determination using 2-D computerized modeling method. One suitable array is used for the infield resistivity survey to determine the dimension and location of the target. The 2-D resistivity inversion results showed that robust inversion method is suitable to resolve the top and bottom part of the buried bunker as target. In addition, the dimension of the buried bunker is successfully determined with height of 7 m and length of 20 m. The location of this target is located at -10 m until 10 m of the infield resistivity survey line. The 2-D resistivity inversion results obtained in this study showed that the parameters selection is important in order to give the optimum results. These parameters are array type, survey geometry and inversion method used in data processing.

  16. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods , shares the same a posterior PDF with them and keeps most of their merits, while overcoming its convergence difficulty when large numbers of low quality data are used and improving the convergence rate greatly using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method without positivity constraint initially, and then damped to physically reasonable range. This step MAP inversion brings the inversion close to 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second step inversion approaches the 'true' solution further with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from slip models in the third step MAP inversion with fault geometry parameters fixed. We first used a designed model with 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets to validate the efficiency and accuracy of method. We then applied the method on four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those results from other groups. Our results show the effectiveness of the method in earthquake studies and a number of advantages of it over other methods. The details will be reported on the meeting.

  17. From the Rendering Equation to Stratified Light Transport Inversion

    DTIC Science & Technology

    2010-12-09

    iteratively. These approaches relate closely to the radiosity method for diffuse global illumination in forward rendering (Hanrahan et al, 1991; Gortler et...currently simply use sparse matrices to represent T, we are also interested in exploring connections with hierar- chical and wavelet radiosity as in...Seidel iterative methods used in radiosity . 2.4 Inverse Light Transport Previous work on inverse rendering has considered inversion of the direct

  18. Perturbational and nonperturbational inversion of Rayleigh-wave velocities

    USGS Publications Warehouse

    Haney, Matt; Tsai, Victor C.

    2017-01-01

    The inversion of Rayleigh-wave dispersion curves is a classic geophysical inverse problem. We have developed a set of MATLAB codes that performs forward modeling and inversion of Rayleigh-wave phase or group velocity measurements. We describe two different methods of inversion: a perturbational method based on finite elements and a nonperturbational method based on the recently developed Dix-type relation for Rayleigh waves. In practice, the nonperturbational method can be used to provide a good starting model that can be iteratively improved with the perturbational method. Although the perturbational method is well-known, we solve the forward problem using an eigenvalue/eigenvector solver instead of the conventional approach of root finding. Features of the codes include the ability to handle any mix of phase or group velocity measurements, combinations of modes of any order, the presence of a surface water layer, computation of partial derivatives due to changes in material properties and layer boundaries, and the implementation of an automatic grid of layers that is optimally suited for the depth sensitivity of Rayleigh waves.

  19. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversionmore » algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.« less

  20. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  1. Prestack density inversion using the Fatti equation constrained by the P- and S-wave impedance and density

    NASA Astrophysics Data System (ADS)

    Liang, Li-Feng; Zhang, Hong-Bing; Dan, Zhi-Wei; Xu, Zi-Qiang; Liu, Xiu-Juan; Cao, Cheng-Hao

    2017-03-01

    Simultaneous prestack inversion is based on the modified Fatti equation and uses the ratio of the P- and S-wave velocity as constraints. We use the relation of P-wave impedance and density (PID) and S-wave impedance and density (SID) to replace the constant Vp/Vs constraint, and we propose the improved constrained Fatti equation to overcome the effect of P-wave impedance on density. We compare the sensitivity of both methods using numerical simulations and conclude that the density inversion sensitivity improves when using the proposed method. In addition, the random conjugate-gradient method is used in the inversion because it is fast and produces global solutions. The use of synthetic and field data suggests that the proposed inversion method is effective in conventional and nonconventional lithologies.

  2. Modeling arson - An exercise in qualitative model building

    NASA Technical Reports Server (NTRS)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.

  3. Optimizing signal output: effects of viscoelasticity and difference frequency on vibroacoustic radiation of tissue-mimicking phantoms

    NASA Astrophysics Data System (ADS)

    Namiri, Nikan K.; Maccabi, Ashkan; Bajwa, Neha; Badran, Karam W.; Taylor, Zachary D.; St. John, Maie A.; Grundfest, Warren S.; Saddik, George N.

    2018-02-01

    Vibroacoustography (VA) is an imaging technology that utilizes the acoustic response of tissues to a localized, low frequency radiation force to generate a spatially resolved, high contrast image. Previous studies have demonstrated the utility of VA for tissue identification and margin delineation in cancer tissues. However, the relationship between specimen viscoelasticity and vibroacoustic emission remains to be fully quantified. This work utilizes the effects of variable acoustic wave profiles on unique tissue-mimicking phantoms (TMPs) to maximize VA signal power according to tissue mechanical properties, particularly elasticity. A micro-indentation method was utilized to provide measurements of the elastic modulus for each biological replica. An inverse relationship was found between elastic modulus (E) and VA signal amplitude among homogeneous TMPs. Additionally, the difference frequency (Δf ) required to reach maximum VA signal correlated with specimen elastic modulus. Peak signal diminished with increasing Δf among the polyvinyl alcohol specimen, suggesting an inefficient vibroacoustic response by the specimen beyond a threshold of resonant Δf. Comparison of these measurements may provide additional information to improve tissue modeling, system characterization, as well as insights into the unique tissue composition of tumors in head and neck cancer patients.

  4. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. Typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration via dividing an illumination associated matrix whose elements are integrals of scattering angles. It is intuitional to some extent that performs the generalized linear inversion and the inversion of GRT together by this process for direct inversion. However, it is imprecise to carry out such operation when the illumination at the image point is limited, which easily leads to the inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally degrades the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters of different fixed scattering angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method keeps away from calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability.

  5. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in acoustic designs of building and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses 2 microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  6. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  7. Stable modeling based control methods using a new RBF network.

    PubMed

    Beyhan, Selami; Alci, Musa

    2010-10-01

    This paper presents a novel model with radial basis functions (RBFs), which is applied successively for online stable identification and control of nonlinear discrete-time systems. First, the proposed model is utilized for direct inverse modeling of the plant to generate the control input where it is assumed that inverse plant dynamics exist. Second, it is employed for system identification to generate a sliding-mode control input. Finally, the network is employed to tune PID (proportional + integrative + derivative) controller parameters automatically. The adaptive learning rate (ALR), which is employed in the gradient descent (GD) method, provides the global convergence of the modeling errors. Using the Lyapunov stability approach, the boundedness of the tracking errors and the system parameters are shown both theoretically and in real time. To show the superiority of the new model with RBFs, its tracking results are compared with the results of a conventional sigmoidal multi-layer perceptron (MLP) neural network and the new model with sigmoid activation functions. To see the real-time capability of the new model, the proposed network is employed for online identification and control of a cascaded parallel two-tank liquid-level system. Even though there exist large disturbances, the proposed model with RBFs generates a suitable control input to track the reference signal better than other methods in both simulations and real time. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Identification of Balanced Chromosomal Rearrangements Previously Unknown Among Participants in the 1000 Genomes Project: Implications for Interpretation of Structural Variation in Genomes and the Future of Clinical Cytogenetics

    PubMed Central

    Dong, Zirui; Wang, Huilin; Chen, Haixiao; Jiang, Hui; Yuan, Jianying; Yang, Zhenjun; Wang, Wen-Jing; Xu, Fengping; Guo, Xiaosen; Cao, Ye; Zhu, Zhenzhen; Geng, Chunyu; Cheung, Wan Chee; Kwok, Yvonne K; Yang, Huangming; Leung, Tak Yeung; Morton, Cynthia C.; Cheung, Sau Wai; Choy, Kwong Wai

    2017-01-01

    Purpose Recent studies demonstrate that whole-genome sequencing (WGS) enables detection of cryptic rearrangements in apparently balanced chromosomal rearrangements (also known as balanced chromosomal abnormalities, BCAs) previously identified by conventional cytogenetic methods. We aimed to assess our analytical tool for detecting BCAs in The 1000 Genomes Project without knowing affected bands. Methods The 1000 Genomes Project provides an unprecedented integrated map of structural variants in phenotypically normal subjects, but there is no information on potential inclusion of subjects with apparently BCAs akin to those traditionally detected in diagnostic cytogenetics laboratories. We applied our analytical tool to 1,166 genomes from the 1000 Genomes Project with sufficient physical coverage (8.25-fold). Results Our approach detected four reciprocal balanced translocations and four inversions ranging in size from 57.9 kb to 13.3 Mb, all of which were confirmed by cytogenetic methods and PCR studies. One of DNAs has a subtle translocation that is not readily identified by chromosome analysis due to similar banding patterns and size of exchanged segments, and another results in disruption of all transcripts of an OMIM gene. Conclusions Our study demonstrates the extension of utilizing low-coverage WGS for unbiased detection of BCAs including translocations and inversions previously unknown in the 1000 Genomes Project. PMID:29095815

  9. Model based inversion of ultrasound data in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R. A.

    2018-04-01

    Work is reported on model-based defect characterization in CFRP composites. The work utilizes computational models of ultrasound interaction with defects in composites, to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of defect properties from analysis of measured ultrasound signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing multi-ply impact-induced delamination, in laminates displaying irregular surface geometry (roughness), as well as internal elastic heterogeneity (varying fiber density, porosity). Inversion of ultrasound data is demonstrated showing the quantitative extraction of delamination geometry and surface transmissivity. Additionally, data inversion is demonstrated for determination of surface roughness and internal heterogeneity, and the influence of these features on delamination characterization is examined. Estimation of porosity volume fraction is demonstrated when internal heterogeneity is attributed to porosity.

  10. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    Earth system is inherently non-linear and it can be characterized well if we incorporate no-linearity in the formulation and solution of the problem. General tool often used for characterization of the earth system is inversion. Traditionally inverse problems are solved using least-square based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow power law (fractal distribution). Thus, the selection of initial model based on power law probability distribution will provide more realistic solution. We present a new method which can draw samples of posterior probability density function very efficiently using fractal based statistics. The application of the method has been demonstrated to invert band limited seismic data with well control. We used fractal based probability density function which uses mean, variance and Hurst coefficient of the model space to draw initial model. Further this initial model is used in global optimization inversion scheme. Inversion results using initial models generated by our method gives high resolution estimates of the model parameters than the hitherto used gradient based liner inversion method.

  11. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

    The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.

  12. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data in the Wavelet Domain Constrained by Sparsity Regularization

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.

    2014-12-01

    Recently, Zhang et al. (2014, Pure and Applied Geophysics) developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in the good data coverage zone. Fang and Zhang (2014, Geophysical Journal International) showed the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance with that of the previous code. We will also select another fault zone such as the San Jacinto Fault Zone to better image its structure.
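
    As a hedged illustration of the sparsity constraint, the sketch below soft-thresholds the 2-D wavelet coefficients of a velocity model using PyWavelets; the wavelet family, decomposition level and threshold are illustrative assumptions, not the authors' implementation.

      import numpy as np
      import pywt

      def wavelet_sparsify(model2d, wavelet="db4", level=3, threshold=0.05):
          """Project a 2-D velocity model onto a sparse set of wavelet
          coefficients by soft thresholding.  'threshold' is a hypothetical
          fraction of the largest coefficient magnitude."""
          coeffs = pywt.wavedec2(model2d, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thr = threshold * np.abs(arr).max()
          arr = pywt.threshold(arr, thr, mode="soft")   # shrink small coefficients to zero
          coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
          rec = pywt.waverec2(coeffs, wavelet)
          return rec[:model2d.shape[0], :model2d.shape[1]]

      # Usage: a smooth background plus small perturbations remains well represented
      model = 3.0 + 0.2 * np.random.default_rng(0).standard_normal((128, 128))
      sparse_model = wavelet_sparsify(model)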

  13. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiochromic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve with the dose as a function of OD (inverse regression) or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
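
    A minimal sketch of a weighted least-squares (WLS) calibration fit with heteroscedastic optical-density readings is given below, using statsmodels; the dose grid, OD values and noise model are made-up illustration values, and a straight-line model is assumed only for simplicity.

      import numpy as np
      import statsmodels.api as sm

      # Fit OD as a function of dose, weighting each point by the inverse of its
      # (assumed) measurement variance.  All numbers below are illustrative.
      dose = np.array([0., 50., 100., 150., 200., 250., 300., 350., 400.])     # cGy
      od = np.array([0.00, 0.055, 0.108, 0.158, 0.205, 0.249, 0.291, 0.330, 0.367])
      od_sd = 0.002 + 0.01 * od                 # heteroscedastic: noise grows with OD

      X = sm.add_constant(dose)                 # design matrix [1, dose]
      fit = sm.WLS(od, X, weights=1.0 / od_sd**2).fit()
      print(fit.params)                         # intercept and slope of the calibration

      # Inverse prediction: dose corresponding to a newly measured OD value
      od_new = 0.20
      dose_pred = (od_new - fit.params[0]) / fit.params[1]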

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D seismic line acquired at Eleven-Mile Canyon, located in the Southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.

  15. Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing

    NASA Astrophysics Data System (ADS)

    Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.

    2006-05-01

    Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also tabled to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.

  16. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain as well as closed-chain configurations. Numerical simulation shows that the proposed procedure provides an excellent tracking of the desired end effector trajectory.

  17. Detecting a subsurface cylinder by a Time Reversal MUSIC like method

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni

    2014-05-01

    In this contribution the problem of imaging a buried homogeneous circular cylinder is dealt with for a two-dimensional scalar geometry. Though the addressed geometry is extremely simple as compared to real world scenarios, it can be considered of interest for a classical GPR civil engineering applicative context: that is, the subsurface prospecting of urban areas in order to detect and locate buried utilities. A large body of methods for subsurface imaging has been presented in the literature [1], ranging from migration algorithms to non-linear inverse scattering approaches. More recently, spectral estimation methods, which benefit from a sub-array data arrangement, have also been proposed and compared in [2]. Here a Time Reversal MUSIC (TRM) like method is employed. TRM was initially conceived to detect point-like scatterers and then generalized to the case of extended scatterers [3]. In the latter case, no a priori information about the scatterers is exploited. However, utilities often can be schematized as circular cylinders. Here, we develop a TRM variant which uses this information to properly tailor the steering vector while implementing TRM. Accordingly, instead of a spatial map [3], the imaging procedure returns the scatterer's parameters such as its center position, radius and dielectric permittivity. The study is developed by numerical simulations. First the free-space case is considered in order to more easily introduce the idea and the mathematical structure of the problem. Then the analysis is extended to the half-space case. In both situations a FDTD forward solver is used to generate the synthetic data. As usual in TRM, a multi-view/multi-static single-frequency configuration is considered and emphasis is put on the role played by the number of available sensors. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] A. Randazzo and R. Solimene, 'Development Of New Methods For The Solution Of Inverse Electromagnetic Scattering Problems By Buried Structures: State of the Art and Open Issues,' in COST ACTION TU1208: CIVIL ENGINEERING APPLICATIONS OF GROUND PENETRATING RADAR, Proceedings of the First Action's General Meeting, 2013. ISBN: 978-88-548-6191-6. [2] S. Meschino, L. Pajewski, M. Pastorino, A. Randazzo, G. Schettini, "Detection of subsurface metallic utilities by means of a SAP technique: Comparing MUSIC- and SVM-based approaches," Journal of Applied Geophysics, vol. 97, pp. 60-68, 2013. [3] E. A. Marengo, F. K. Gruber, F. Simonetti, 'Time-reversal MUSIC imaging of extended targets,' IEEE Trans Image Process. vol. 16, pp. 1967-84, 2007

  18. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  19. Telomeres and NextGen CO-FISH: Directional Genomic Hybridization (Telo-dGH™).

    PubMed

    McKenna, Miles J; Robinson, Erin; Goodwin, Edwin H; Cornforth, Michael N; Bailey, Susan M

    2017-01-01

    The cytogenomics-based methodology of Directional Genomic Hybridization (dGH™) emerged from the concept of strand-specific hybridization, first made possible by Chromosome Orientation FISH (CO-FISH), the utility of which was demonstrated in a variety of early applications, often involving telomeres. Similar to standard whole chromosome painting (FISH), dGH™ is capable of identifying inter-chromosomal rearrangements (translocations between chromosomes), but its distinctive strength stems from its ability to detect intra-chromosomal rearrangements (inversions within chromosomes), and to do so at higher resolution than previously possible. dGH™ brings together the strand specificity and directionality of CO-FISH with sophisticated bioinformatics-based oligonucleotide probe design to unique sequences. dGH™ serves not only as a powerful discovery tool (capable of interrogating the entire genome at the megabase level); it can also be used for high-resolution targeted detection of known inversions, a valuable attribute in both research and clinical settings. Detection of chromosomal inversions, particularly small ones, poses a formidable challenge for more traditional cytogenetic approaches, especially when they occur near the ends or telomeric regions. Here, we describe Telo-dGH™, a strand-specific scheme that utilizes dGH™ in combination with telomere CO-FISH to differentiate between terminal exchange events, specifically terminal inversions, and an altogether different form of genetic recombination that often occurs near the telomere, namely sister chromatid exchange (SCE).

  20. Identification of SR3335 (ML176): a Synthetic RORα Selective Inverse Agonist

    PubMed Central

    Kumar, Naresh; Kojetin, Douglas J.; Solt, Laura A.; Kumar, K. Ganesh; Nuhant, Philippe; Duckett, Derek R.; Cameron, Michael D.; Butler, Andrew A.; Roush, William R.; Griffin, Patrick R.; Burris, Thomas P.

    2010-01-01

    Several nuclear receptors (NRs) are still characterized as orphan receptors since ligands have not yet been identified for these proteins. The retinoic acid receptor-related receptors (RORs) have no well-defined physiological ligands. Here, we describe the identification of a selective RORα synthetic ligand, SR3335 (ML-176). SR3335 directly binds to RORα, but not other RORs, and functions as a selective partial inverse agonist of RORα in cell-based assays. Furthermore, SR3335 suppresses the expression of endogenous RORα target genes involved in hepatic gluconeogenesis in HepG2 cells, including glucose-6-phosphatase and phosphoenolpyruvate carboxykinase. Pharmacokinetic studies indicate that SR3335 displays reasonable exposure following an i.p. injection into mice. We assess the ability of SR3335 to suppress gluconeogenesis in vivo using a diet-induced obesity (DIO) mouse model in which the mice were treated with 15 mg/kg b.i.d., i.p., for 6 days, followed by a pyruvate tolerance test. SR3335-treated mice displayed lower plasma glucose levels following the pyruvate challenge, consistent with suppression of gluconeogenesis. Thus, we have identified the first selective synthetic RORα inverse agonist, and this compound can be utilized as a chemical tool to probe the function of this receptor both in vitro and in vivo. Additionally, our data suggest that RORα inverse agonists may hold utility for suppression of elevated hepatic glucose production in type 2 diabetics. PMID:21090593

  1. Geospatial Relationships between Awareness and Utilization of Community Exercise Resources and Physical Activity Levels in Older Adults.

    PubMed

    Dondzila, Christopher J; Swartz, Ann M; Keenan, Kevin G; Harley, Amy E; Azen, Razia; Strath, Scott J

    2014-01-01

    Introduction. It is unclear if community-based fitness resources (CBFR) translate to heightened activity levels within neighboring areas. The purpose of this study was to determine whether awareness and utilization of fitness resources and physical activity differed depending on residential distance from CBFR. Methods. Four hundred and seventeen older adults (72.9 ± 7.7 years) were randomly recruited from three spatial tiers (≤1.6, >1.6 to ≤3.2, and >3.2 to 8.0 km) surrounding seven senior centers, which housed CBFR. Participants completed questionnaires on health history, CBFR, and physical activity, gathering data on CBFR awareness, utilization, and barriers, as well as overall levels of and predictors of engagement in moderate to vigorous physical activity (MVPA). Results. Across spatial tiers, there were no differences in positive awareness rates of CBFR or CBFR utilization. Engagement in MVPA differed across spatial tiers (P < 0.001), with the >3.2 to 8.0 km radius having the highest mean energy expenditure. Across all sites, age and income level (P < 0.05) were significant predictors of low and high amounts of MVPA, respectively, and current health status and lack of interest represented barriers to CBFR utilization (P < 0.05). Conclusion. Closer proximity to CBFR did not impact awareness or utilization rates and had an inverse relationship with physical activity.

  2. Long-term reduction of health care costs & utilization after epilepsy surgery

    PubMed Central

    Schiltz, Nicholas K.; Kaiboriboon, Kitti; Koroukian, Siran M.; Singer, Mendel E.; Love, Thomas E.

    2015-01-01

    SUMMARY Objective To assess long-term direct medical costs, health care utilization, and mortality following resective surgery in persons with uncontrolled epilepsy. Methods Retrospective longitudinal cohort study of Medicaid beneficiaries with epilepsy from 2000 to 2008. The study population included 7,835 persons with uncontrolled focal epilepsy aged 18 to 64 years, with an average follow-up time of 5 years. Of these, 135 received surgery during the study period. To account for selection bias, we used risk-set optimal pairwise matching on a time-varying propensity score, and inverse probability of treatment weighting. Repeated measures generalized linear models were used to model utilization and cost outcomes. A Cox proportional hazards model was used to model survival. Results The mean direct medical cost difference between the surgical group and control group was $6,806 after risk-set matching. The incidence rate ratio of inpatient, emergency room, and outpatient utilization was lower among the surgical group in both unadjusted and adjusted analyses. There was no significant difference in mortality after adjustment. Among surgical cases, mean annual costs per subject were on average $6,484 lower, and all utilization measures were lower after surgery compared to before. Significance Subjects that underwent epilepsy surgery had lower direct medical care costs and health care utilization. These findings support the conclusion that epilepsy surgery yields substantial health care cost savings. PMID:26693701

  3. Special Course on Inverse Methods for Airfoil Design for Aeronautical and Turbomachinery Applications (Methodes Inverses pour la Conception des Profils Porteurs pour des Applications dans les Domaines de l’Aeronautique et des Turbomachines)

    DTIC Science & Technology

    1990-11-01

    AGARD Report No. 780: Special Course on Inverse Methods for Airfoil Design for Aeronautical and Turbomachinery Applications. Recoverable text addresses the design of a jet aircraft wing taking into account the effects of the propulsive system, and notes that blade or airfoil designs are normally made in two steps, with the lectures accordingly grouped into two parts.

  4. Viscoelastic material inversion using Sierra-SD and ROL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis

    2014-11-01

    In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.

  5. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters in distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which stems from parallel computing, although it often suffers from a lack of lateral clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) Thanks to the exact Zoeppritz equations, our joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) The use of both P- and S-wave information can further enhance the stability and accuracy of parameter estimation, especially for the S-velocity and density; (3) The two-stage inversion procedure proposed in this paper can achieve a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel so that it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply the second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to that of the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data are noisy, with a signal-to-noise ratio of 5.
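
    As a hedged illustration of the second-stage idea, the sketch below applies an off-the-shelf total-variation denoiser across a synthetic trace-by-trace inversion result; denoise_tv_chambolle stands in for the split-Bregman TV solver described in the abstract, and the data are made up for illustration.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      # Synthetic "inverted section" (time samples x traces) with independent
      # per-trace noise, as left behind by a trace-by-trace first stage.
      rng = np.random.default_rng(1)
      true_section = np.ones((200, 300)) * 2.0
      true_section[80:120, 100:200] = 2.5                   # a blocky anomaly
      noisy_inversion = true_section + 0.1 * rng.standard_normal(true_section.shape)

      # TV regularization enforces piecewise-smooth spatial/temporal structure
      tv_section = denoise_tv_chambolle(noisy_inversion, weight=0.1)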

  6. Wedge-shaped slice-selective adiabatic inversion pulse for controlling temporal width of bolus in pulsed arterial spin labeling

    PubMed Central

    Guo, Jia; Buxton, Richard B.; Wong, Eric C.

    2015-01-01

    Purpose In pulsed arterial spin labeling (PASL) methods, arterial blood is labeled via inverting a slab with uniform thickness, resulting in different temporal widths of boluses in vessels with different flow velocities. This limits the temporal resolution and signal-to-noise ratio (SNR) efficiency gains in PASL-based methods intended for high temporal resolution and SNR efficiency, such as Turbo-ASL and Turbo-QUASAR. Theory and Methods A novel wedge-shaped (WS) adiabatic inversion pulse is developed by adding in-plane gradient pulses to a slice-selective (SS) adiabatic inversion pulse to linearly modulate the inversion thicknesses at different locations while maintaining the adiabatic properties of the original pulse. A hyperbolic secant (HS) based WS inversion pulse was implemented. Its performance was tested in simulations, phantom and human experiments, and compared to an SS HS inversion pulse. Results Compared to the SS inversion pulse, the WS inversion pulse is capable of inducing different inversion thicknesses at different locations. It can be adjusted to generate a uniform temporal width of boluses in arteries at locations with different flow velocities. Conclusion The WS inversion pulse can be used to control the temporal widths of labeled boluses in PASL experiments. This should benefit PASL experiments by maximizing labeling duty cycle, and improving temporal resolution and SNR efficiency. PMID:26451521

  7. Fully three-dimensional and viscous semi-inverse method for axial/radial turbomachine blade design

    NASA Astrophysics Data System (ADS)

    Ji, Min

    2008-10-01

    A fully three-dimensional viscous semi-inverse method for the design of turbomachine blades is presented in this work. Built on a time marching Reynolds-Averaged Navier-Stokes solver, the inverse scheme is capable of designing axial/radial turbomachinery blades in flow regimes ranging from very low Mach number to transonic/supersonic flows. In order to solve flow at all-speed conditions, the preconditioning technique is incorporated into the basic JST time-marching scheme. The accuracy of the resulting flow solver is verified with documented experimental data and commercial CFD codes. The level of accuracy of the flow solver exhibited in those verification cases is typical of CFD analysis employed in the design process in industry. The inverse method described in the present work takes pressure loading and blade thickness as prescribed quantities and computes the corresponding three-dimensional blade camber surface. In order to have the option of imposing geometrical constraints on the designed blade shapes, a new inverse algorithm is developed to solve the camber surface at specified spanwise pseudo stream-tubes (i.e. along grid lines), while the blade geometry is constructed through ruling (e.g. straight-line element) at the remaining spanwise stations. The new inverse algorithm involves re-formulating the boundary condition on the blade surfaces as a hybrid inverse/analysis boundary condition, preserving the full three-dimensional nature of the flow. The new design procedure can be interpreted as a fully three-dimensional viscous semi-inverse method. The ruled surface design ensures the blade surface smoothness and mechanical integrity as well as achieves cost reduction for the manufacturing process. A numerical target shooting experiment for a mixed flow impeller shows that the semi-inverse method is able to accurately recover a target blade composed of straight-line elements from a different initial blade. The semi-inverse method is shown to work well with various loading strategies for the mixed flow impeller. It is demonstrated that uniformity of impeller exit flow and performance gain can be achieved with appropriate loading combinations at hub and shroud. An application of this semi-inverse method is also demonstrated through a redesign of an industrial shrouded subsonic centrifugal impeller. The redesigned impeller shows improved performance and operating range compared with the original one. Preliminary studies of blade designs presented in this work show that through the choice of the prescribed pressure loading profiles, this semi-inverse method can be used to design blades with the following objectives: (1) Various operating envelopes. (2) Uniformity of impeller exit flow. (3) Overall performance improvement. By designing blade geometry with the proposed semi-inverse method, whereby the blade pressure loading is specified instead of the conventional design approach of manually adjusting the blade angle to achieve blade design objectives, designers can discover blade geometry design space that has not been explored before.

  8. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies used in the calculation, which improves the conditioning of the system matrix.
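
    A minimal sketch of the exponential-window inverse FFT applied to a simple convolution integral is given below, assuming an illustrative kernel g(t) = exp(-t) whose Laplace transform is known in closed form; it is not the paper's BEM implementation.

      import numpy as np

      # Evaluate the transfer function on the damped contour s = a + i*omega so
      # that the wrap-around images of the discrete Fourier transform are
      # suppressed, then undo the damping in the time domain.
      n, T = 1024, 20.0
      dt = T / n
      t = np.arange(n) * dt
      f = np.sin(2 * np.pi * 0.5 * t) * (t < 10.0)     # forcing, switched off at t = 10
      a = 5.0 / T                                      # exponential window parameter

      def G(s):
          return 1.0 / (s + 1.0)                       # Laplace transform of exp(-t)

      omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
      s = a + 1j * omega

      F_win = np.fft.fft(f * np.exp(-a * t))           # spectrum of the windowed forcing
      y = np.exp(a * t) * np.fft.ifft(G(s) * F_win).real

      y_ref = np.convolve(f, np.exp(-t))[:n] * dt      # direct convolution for comparison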

  9. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  10. Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds

    NASA Technical Reports Server (NTRS)

    Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.

    2001-01-01

    Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems to be the most promising. Since most of the cloud algorithms proposed so far are for daytime use and rely on solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11- and 12-micron channels. Their algorithm, however, has the drawback that it is not able to treat temperature inversion cases. We update their algorithm by incorporating a new parameterization by Arduini et al. (1999) that is valid for temperature inversion cases. The updated algorithm has been applied to GOES satellite data and reasonable retrieval results were obtained.

  11. Ground resistivity method and DCIP2D forward and inversion modelling to identify alteration at the Midwest uranium deposit, northern Saskatchewan, Canada

    NASA Astrophysics Data System (ADS)

    Long, Samuel R. M.; Smith, Richard S.; Hearst, Robert B.

    2017-06-01

    Resistivity methods are commonly used in mineral exploration to map lithology, structure, sulphides and alteration. In the Athabasca Basin, resistivity methods are used to detect alteration associated with uranium. At the Midwest deposit, there is an alteration zone in the Athabasca sandstones that is above a uraniferous conductive graphitic fault in the basement and below a conductive lake at surface. Previous geophysical work in this area has yielded resistivity sections that we feel are ambiguous in the area where the alteration is expected. Resolve® and TEMPEST sections yield an indistinct alteration zone, while two-dimensional (2D) inversions of the ground resistivity data show an equivocal smeared conductive feature in the expected location between the conductive graphite and the conductive lake. Forward modelling alone cannot identify features in the pseudosections that are clearly associated with alteration, as the section is dominated by the feature associated with the near-surface conductive lake; inverse modelling alone produces sections that are smeared and equivocal. We advocate an approach that uses a combination of forward and inverse modelling. We generate a forward model from a synthetic geoelectric section; this forward data is then inverse modelled and compared with the inverse model generated from the field data using the same inversion parameters. The synthetic geoelectric section is then adjusted until the synthetic inverse model closely matches the field inverse model. We found that this modelling process required a conductive alteration zone in the sandstone above the graphite, as removing the alteration zone from the sandstone created an inverse section very dissimilar to the inverse section derived from the field data. We therefore conclude that the resistivity method is able to identify conductive alteration at Midwest even though it is below a conductive lake and above a conductive graphitic fault. We also concluded that resistivity inversions suggest a conductive paleoweathering surface on the top of the basement rocks at the basin/basement unconformity.

  12. Parallel optical image addition and subtraction in a dynamic photorefractive memory by phase-code multiplexing

    NASA Astrophysics Data System (ADS)

    Denz, Cornelia; Dellwig, Thilo; Lembcke, Jan; Tschudi, Theo

    1996-02-01

    We propose and demonstrate experimentally a method for utilizing a dynamic phase-encoded photorefractive memory to realize parallel optical addition, subtraction, and inversion operations of stored images. The phase-encoded holographic memory is realized in photorefractive BaTiO3, storing eight images using Walsh-Hadamard binary phase codes and an incremental recording procedure. By subsampling the set of reference beams during the recall operation, the selectivity of the phase address is decreased, allowing one to combine images in such a way that different linear combinations of the images can be realized at the output of the memory.
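
    A hedged, purely linear toy model of the phase-code multiplexing idea is sketched below: stored images addressed by orthogonal Walsh-Hadamard codes are recalled with sums or differences of codes to form image additions and subtractions. All array sizes and values are illustrative assumptions, not a physical simulation of the photorefractive crystal.

      import numpy as np
      from scipy.linalg import hadamard

      n_refs = 8                                  # number of reference beams
      codes = hadamard(n_refs)                    # orthogonal +/-1 phase codes (0/pi phases)

      rng = np.random.default_rng(0)
      images = rng.random((n_refs, 32, 32))       # eight stored images

      def readout(address):
          """Output is the code-correlation-weighted sum of the stored images."""
          weights = codes @ address / n_refs      # inner products of address with each code
          return np.tensordot(weights, images, axes=1)

      img_sum = readout(codes[2] + codes[5])      # ~ images[2] + images[5]
      img_diff = readout(codes[2] - codes[5])     # ~ images[2] - images[5]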

  13. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, and normally distributed variates are obtained using a standard normal transformation together with the inverse transform method. An appendix lists the distributions supported by the model.
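
    A minimal sketch of inverse-transform sampling for the conditional (left-truncated) exponential and Weibull lifetimes named in the surviving fragments is given below; the parameter names and values are illustrative assumptions, not the report's.

      import numpy as np

      rng = np.random.default_rng(0)

      def conditional_exponential(rate, age, size):
          """Exponential lifetime conditioned on survival to 'age'; by the
          memoryless property the residual life is again exponential."""
          u = rng.uniform(size=size)
          return age - np.log(u) / rate

      def conditional_weibull(shape, scale, age, size):
          """Weibull lifetime conditioned on survival to 'age', obtained by
          inverting the conditional CDF in closed form."""
          u = rng.uniform(size=size)
          return scale * ((age / scale) ** shape - np.log(u)) ** (1.0 / shape)

      # Example draws (illustrative parameter values)
      t_exp = conditional_exponential(rate=0.01, age=100.0, size=5)
      t_wbl = conditional_weibull(shape=2.0, scale=500.0, age=100.0, size=5)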

  14. An Innovations-Based Noise Cancelling Technique on Inverse Kepstrum Whitening Filter and Adaptive FIR Filter in Beamforming Structure

    PubMed Central

    Jeong, Jinsoo

    2011-01-01

    This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in a beamforming structure. The inverse kepstrum method uses an innovations-whitened form of one acoustic path transfer function between a reference microphone sensor and a noise source, so that the rear-end reference signal becomes a whitened sequence for a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter together with a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part from the ratio of overall combined transfer functions. The test results have shown that the adaptive FIR filter is more effective in the beamforming structure than in an adaptive noise cancelling (ANC) structure in terms of signal distortion in the desired signal and noise reduction for noise with nonminimum-phase components. In addition, the inverse kepstrum method shows almost the same convergence level in estimating noise statistics while using fewer adaptive FIR filter weights than the kepstrum method, and hence it provides greater computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in the beamforming structure has shown less signal distortion in the desired signal than the front-end kepstrum method and the front-end inverse kepstrum method in the beamforming structure. PMID:22163987

  15. A multiwave range test for obstacle reconstructions with unknown physical properties

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Schulz, Jochen

    2007-08-01

    We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A `range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533-547] has originally been proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We further will clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhauser, Basel, 1986, pp. 93-102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Gottingen, 1999]. In particular, we propose a new version of the Kirsch-Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.

  16. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion easily fall into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in real seismic wavefields, making the inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck in FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover a low-wavenumber model with a demodulation operator (the envelope operator), even though this low-frequency content does not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques. There are two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and the CPU/GPU heterogeneous parallel computation substantially improves performance.
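
    A minimal sketch of the demodulation (envelope) operator on a single synthetic trace is given below; the wavelet and sampling values are illustrative assumptions, not the authors' 3D elastic implementation.

      import numpy as np
      from scipy.signal import hilbert

      # The envelope of a band-limited trace varies much more slowly than the
      # trace itself, so its spectrum carries ultra-low-frequency information.
      dt = 0.002
      t = np.arange(0, 2.0, dt)
      trace = np.sin(2 * np.pi * 20 * t) * np.exp(-((t - 1.0) / 0.15) ** 2)

      envelope = np.abs(hilbert(trace))          # |analytic signal|

      # Spectra: the envelope concentrates energy at much lower frequencies
      spec_trace = np.abs(np.fft.rfft(trace))
      spec_env = np.abs(np.fft.rfft(envelope))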

  17. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
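
    As a hedged illustration of the partial-fraction route that sidesteps both limitations, the sketch below inverts a made-up rational Z-transform with scipy.signal.residuez; the example coefficients are illustrative assumptions, not taken from the article.

      import numpy as np
      from scipy.signal import residuez

      # X(z) = 1 / (1 - 1.5 z^-1 + 0.56 z^-2) has poles at 0.7 and 0.8, so the
      # causal sequence x[k] is a sum of two geometric terms.
      b = [1.0]                     # numerator coefficients in powers of z^-1
      a = [1.0, -1.5, 0.56]         # denominator coefficients in powers of z^-1

      r, p, k = residuez(b, a)      # residues, poles, direct (FIR) terms

      n = np.arange(10)
      x = np.zeros_like(n, dtype=complex)
      for ri, pi in zip(r, p):
          x += ri * pi ** n         # causal inverse: sum of r_i * p_i**n
      x = x.real                    # imaginary parts cancel for real-coefficient X(z)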

  18. Inverse simulation system for evaluating handling qualities during rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Zhou, Wanmeng; Wang, Hua; Thomson, Douglas; Tang, Guojin; Zhang, Fan

    2017-08-01

    The traditional method used for handling qualities assessment of manned space vehicles is too time-consuming to meet the requirements of an increasingly fast design process. In this study, a rendezvous and docking inverse simulation system to assess the handling qualities of spacecraft is proposed using a previously developed model-predictive-control architecture. By considering the fixed discrete force of the thrusters of the system, the inverse model is constructed using the least squares estimation method with a hyper-ellipsoidal restriction, the continuous control outputs of which are subsequently dispersed by pulse width modulation with sensitivity factors introduced. The inputs in every step are deemed constant parameters, and the method could be considered as a general method for solving nominal, redundant, and insufficient inverse problems. The rendezvous and docking inverse simulation is applied to a nine-degrees-of-freedom platform, and a novel handling qualities evaluation scheme is established according to the operation precision and astronauts' workload. Finally, different nominal trajectories are scored by the inverse simulation and an established evaluation scheme. The scores can offer theoretical guidance for astronaut training and more complex operation missions.

  19. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
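
    For orientation, the sketch below shows the per-pixel direct matrix-inversion decomposition that such iterative methods are compared against; the 2x2 mixing coefficients and synthetic images are made-up illustration values, not data from the study, and no noise modeling is included.

      import numpy as np

      # Two-material decomposition of low/high kVp images by inverting a 2x2
      # mixing matrix pixel by pixel (the noisy baseline approach).
      A = np.array([[0.28, 0.18],     # [mu_bone(low kVp), mu_tissue(low kVp)]  (made up)
                    [0.20, 0.16]])    # [mu_bone(high kVp), mu_tissue(high kVp)] (made up)
      A_inv = np.linalg.inv(A)

      rng = np.random.default_rng(0)
      low_kvp = 0.2 + 0.01 * rng.standard_normal((64, 64))    # synthetic CT images
      high_kvp = 0.17 + 0.01 * rng.standard_normal((64, 64))

      stack = np.stack([low_kvp.ravel(), high_kvp.ravel()])   # shape (2, n_pixels)
      bone, tissue = (A_inv @ stack).reshape(2, 64, 64)       # decomposed images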

  20. A full potential inverse method based on a density linearization scheme for wing design

    NASA Technical Reports Server (NTRS)

    Shankar, V.

    1982-01-01

    A mixed analysis-inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution; the procedure uses a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.

  1. Acoustic Green's function extraction in the ocean

    NASA Astrophysics Data System (ADS)

    Zang, Xiaoqin

    The acoustic Green's function (GF) is the key to understanding the acoustic properties of ocean environments. With knowledge of the acoustic GF, the physics of sound propagation, such as dispersion, can be analyzed; underwater communication over thousands of miles can be understood; physical properties of the ocean, including ocean temperature, ocean current speed, as well as seafloor bathymetry, can be investigated. Experimental methods of acoustic GF extraction can be categorized as active methods and passive methods. Active methods are based on employment of man-made sound sources. These active methods require less computational complexity and time, but may cause harm to marine mammals. Passive methods cost much less and do not harm marine mammals, but require more theoretical and computational work. Both methods have advantages and disadvantages that should be carefully tailored to fit the need of each specific environment and application. In this dissertation, we study one passive method, the noise interferometry method, and one active method, the inverse filter processing method, to achieve acoustic GF extraction in the ocean. The passive method of noise interferometry makes use of ambient noise to extract an approximation to the acoustic GF. In an environment with a diffusive distribution of sound sources, sound waves that pass through two hydrophones at two locations carry the information of the acoustic GF between these two locations; by listening to the long-term ambient noise signals and cross-correlating the noise data recorded at two locations, the acoustic GF emerges from the noise cross-correlation function (NCF); a coherent stack of many realizations of NCFs yields a good approximation to the acoustic GF between these two locations, with all the deterministic structures clearly exhibited in the waveform. To test the performance of noise interferometry in different types of ocean environments, two field experiments were performed and ambient noise data were collected in a 100-meter deep coastal ocean environment and a 600-meter deep ocean environment. In the coastal ocean environment, the collected noise data were processed by coherently stacking five days of cross-correlation functions between pairs of hydrophones separated by 5 km, 10 km and 15 km, respectively. NCF waveforms were modeled using the KRAKEN normal mode model, with the difference between the NCFs and the acoustic GFs quantified by a weighting function. Through waveform inversion of NCFs, an optimal geoacoustic model was obtained by minimizing the two-norm misfit between the simulation and the measurement. Using a simulated time-reversal mirror, the extracted GF was back propagated from the receiver location to the virtual source, and a strong focus was found in the vicinity of the source, which provides additional support for the optimality of the aforementioned geoacoustic model. With the extracted GF, dispersion in experimental shallow water environment was visualized in the time-frequency representation. Normal modes of GFs were separated using the time-warping transformation. By separating the modes in the frequency domain of the time-warped signal, we isolated modal arrivals and reconstructed the NCF by summing up the isolated modes, thereby significantly improving the signal-to-noise ratio of NCFs. Finally, these reconstructed NCFs were employed to estimate the depth-averaged current speed in the Florida Straits, based on an effective sound speed approximation. 
    In the mid-deep ocean environment, the noise data were processed using the same noise interferometry method, but the obtained NCFs were not as good as those in the coastal ocean environment. Several likely reasons for the difference in noise interferometry performance were investigated and discussed. The first is the noise source composition, which differs between the spectrograms of the noise records in the two environments. The second is strong ocean current variability, which can result in coherence loss and undermine the utility of coherent stacking. The third is the downward-refracting sound speed profile, which impedes strong coupling between near-surface noise sources and the near-bottom instruments. The active method of inverse filter processing was tested in a long-range deep-ocean environment. The high-power sound source, which was located near the sound channel axis, transmitted a pre-designed signal that was composed of a precursor signal and a communication signal. After traveling a distance of 1428.5 km in the North Pacific Ocean, the transmitted signal was detected by the receiver and was processed using the inverse filter. The probe signal, which was composed of M sequences and was known at the receiver, was utilized for the GF extraction in the inverse filter; the communication signal was then interpreted with the extracted GF. Despite a glitch in the length of the communication signal, the inverse filter processing method was shown to be effective for long-range, low-frequency, deep-ocean acoustic communication. (Abstract shortened by ProQuest.).
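
    A heavily simplified sketch of the noise-interferometry processing chain (segmenting, cross-correlating and coherently stacking two noise records) is given below; real pipelines typically add filtering and spectral whitening, and the synthetic records and parameters are illustrative only.

      import numpy as np
      from scipy.signal import correlate

      def ncf_stack(noise_a, noise_b, fs, seg_seconds=60.0):
          """Split two synchronized noise records into segments, cross-correlate
          each pair of segments, and coherently stack the results to approximate
          the noise cross-correlation function (NCF)."""
          seg = int(seg_seconds * fs)
          n_seg = min(len(noise_a), len(noise_b)) // seg
          stack = None
          for i in range(n_seg):
              a = noise_a[i * seg:(i + 1) * seg]
              b = noise_b[i * seg:(i + 1) * seg]
              a = (a - a.mean()) / (a.std() + 1e-12)      # simple amplitude normalization
              b = (b - b.mean()) / (b.std() + 1e-12)
              ncf = correlate(a, b, mode="full") / seg
              stack = ncf if stack is None else stack + ncf
          return stack / max(n_seg, 1)

      # Usage with synthetic records (illustration only)
      fs = 100.0
      rng = np.random.default_rng(0)
      rec_a = rng.standard_normal(int(3600 * fs))
      rec_b = rng.standard_normal(int(3600 * fs))
      ncf = ncf_stack(rec_a, rec_b, fs)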

  2. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  3. Examination of Soil Moisture Retrieval Using SIR-C Radar Data and a Distributed Hydrological Model

    NASA Technical Reports Server (NTRS)

    Hsu, A. Y.; ONeill, P. E.; Wood, E. F.; Zion, M.

    1997-01-01

    A major objective of soil moisture-related hydrological research during NASA's SIR-C/X-SAR mission was to determine and compare soil moisture patterns within humid watersheds using SAR data, ground-based measurements, and hydrologic modeling. Currently available soil moisture inversion methods using active microwave data are only accurate when applied to bare and slightly vegetated surfaces. Moreover, as the surface dries down, the number of pixels that can provide estimated soil moisture by these radar inversion methods decreases, leading to less accuracy and confidence in the retrieved soil moisture fields at the watershed scale. The impact of these errors in microwave-derived soil moisture on hydrological modeling of vegetated watersheds has yet to be addressed. In this study, a coupled water and energy balance model operating within a topographic framework is used to predict surface soil moisture for both bare and vegetated areas. In the first model run, the hydrological model is initialized using a standard baseflow approach, while in the second model run, soil moisture values derived from SIR-C radar data are used for initialization. The results, which compare favorably with ground measurements, demonstrate the utility of combining radar-derived surface soil moisture information with basin-scale hydrological modeling.

  4. Mineral Information Extraction Based on GAOFEN-5'S Thermal Infrared Data

    NASA Astrophysics Data System (ADS)

    Liu, L.; Shang, K.

    2018-04-01

    Gaofen-5 carries six instruments aimed at various land and atmosphere applications, and it is an important unit of the China High-resolution Earth Observation System. As Gaofen-5's thermal infrared payload is similar to that of ASTER, which is widely used in mineral exploration, the application of Gaofen-5's thermal infrared data is discussed with regard to its capability in mineral classification and silica content estimation. First, spectra of silicate, carbonate, and sulfate minerals from a spectral library are used to conduct spectral feature analysis on Gaofen-5's thermal infrared emissivities. Spectral indices of band emissivities are proposed, and by setting thresholds on these spectral indices, the three types of minerals mentioned above can be classified. This classification method is tested on a simulated Gaofen-5 emissivity image. With samples acquired from the study area, the method is shown to be feasible. Second, with band emissivities of silicates and their silica content from the same spectral library, we attempted to build correlation models for silica content inversion. However, the highest correlation coefficient is merely 0.592, which is much lower than that of the correlation model built on ASTER thermal infrared emissivity. It can be concluded that GF-5's thermal infrared data can be utilized for mineral classification but not for silica content inversion.

  5. Efficient realization of 3D joint inversion of seismic and magnetotelluric data with cross gradient structure constraint

    NASA Astrophysics Data System (ADS)

    Luo, H.; Zhang, H.; Gao, J.

    2016-12-01

    Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary to each other, and their integration is helpful for more reliably determining the resistivity and velocity models of the target region. Because of the difficulty in finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems together along with the cross gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing seismic and MT inversions as well as cross gradient minimization separately. This scheme avoids the complexity of combining two different systems but suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, seismic and MT inversions are still performed separately, but the cross gradient minimization is also constrained by model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems and at the same time enforces the balance between data fitting and the structure consistency constraint. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures. We will also show results applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.

  6. Extremely high resolution 3D electrical resistivity tomography to depict archaeological subsurface structures

    NASA Astrophysics Data System (ADS)

    Al-Saadi, Osamah; Schmidt, Volkmar; Becken, Michael; Fritsch, Thomas

    2017-04-01

    Electrical resistivity tomography (ERT) methods have been increasingly used in various shallow-depth archaeological prospections in the last few decades. These non-invasive techniques are very useful in saving time, costs, and effort. Both 2D and 3D ERT techniques are used to obtain detailed images of subsurface anomalies. For two surveyed areas near Nonnweiler (Germany), we present the results of a full 3D setup with a roll-along technique and of a quasi-3D setup (parallel and orthogonal profiles in dipole-dipole configuration). In area A, a dipole-dipole array with 96 electrodes in a uniform rectangular survey grid was used in full 3D to investigate a presumed Roman building. A roll-along technique was utilized to cover a large part of the archaeological site with an electrode spacing of 1 meter, and of 0.5 meter for a more detailed image. Additional dense parallel 2D profiles were acquired in a dipole-dipole array with 0.25 meter electrode spacing and 0.25 meter between adjacent profiles in both directions for higher-resolution subsurface images. We designed a new field procedure that uses an electrode array fixed in a frame. This facilitates efficient field operation, which comprised 2376 electrode positions. With the quasi-3D imaging, we confirmed the full 3D inversion model but at a much better resolution. In area B, dense parallel 2D profiles were used directly to survey the second target, also with 0.25 meter electrode spacing and profile separation. The same field measurement design was utilized and comprised 9648 electrode positions in total. The quasi-3D inversion results clearly revealed the main structures of the Roman construction. These ERT inversion results coincided well with the archaeological excavation that has been carried out in some parts of this area. The ERT result successfully images parts of the walls and also smaller internal structures of the Roman building.

  7. A method of gravity and seismic sequential inversion and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula, the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. The gravity inversion is iterative: we first calculate the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply this value by a small density increment to form the initial density model. We then compute a forward result with this initial model, calculate the correlation imaging of the misfit between the observed and forward data, multiply it by the same small density increment, and add it to the model; repeating this procedure yields the inverted density model. For the seismic inversion, we use a method based on the linearized acoustic wave equation written in the frequency domain; starting from an initial velocity model, we can obtain a good velocity result. In the sequential inversion of gravity and seismic data, a link formula is needed to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenges of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing, we use the GPU to accelerate the gravity and seismic inversions. Taking the gravity inversion as an example, its kernels are gravity forward simulation and correlation imaging; after parallelization on the GPU in the 3D case, the original five CPU loops in the inversion module are reduced to three, and the original five CPU loops in the forward module are reduced to two. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095), and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
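
    The iterative correlation-imaging update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' GPU/CUDA implementation: the linear kernel matrix, the fixed small density increment, and the normalisation used for the correlation image are all assumptions made for the example.

    ```python
    import numpy as np

    def correlation_image(kernel, data):
        # Normalized cross-correlation (values in [-1, +1]) between the data
        # vector and the forward-kernel column of each model cell.
        num = kernel.T @ data
        den = np.linalg.norm(kernel, axis=0) * np.linalg.norm(data) + 1e-12
        return num / den

    def correlation_imaging_inversion(kernel, observed, d_rho=0.05, n_iter=50):
        # kernel:   (n_data, n_cells) linear gravity forward operator (assumed)
        # observed: (n_data,) observed gravity anomaly
        # d_rho:    small density increment added at each iteration
        model = d_rho * correlation_image(kernel, observed)      # initial density model
        for _ in range(n_iter):
            predicted = kernel @ model                           # forward modelling
            misfit = observed - predicted
            model += d_rho * correlation_image(kernel, misfit)   # update from misfit image
        return model

    # toy usage: recover a two-cell density model from five synthetic observations
    G = np.random.default_rng(0).normal(size=(5, 2))
    rho_true = np.array([0.3, -0.1])
    rho_est = correlation_imaging_inversion(G, G @ rho_true)
    ```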

  8. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA

    2007-05-01

    A method and system for solving the inverse acoustic scattering problem using an iterative approach that incorporates half-off-shell transition matrix element (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, whereas the Fredholm inverse series is correct only for the first moment; in addition, the Volterra approach provides a method for exactly obtaining interactions that can be written as a sum of delta functions.

  9. New Additions to the Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment.

    PubMed

    Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S

    2014-09-01

    Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.

  10. Spatially constrained Bayesian inversion of frequency- and time-domain electromagnetic data from the Tellus projects

    NASA Astrophysics Data System (ADS)

    Kiyan, Duygu; Rath, Volker; Delhaye, Robert

    2017-04-01

    The frequency- and time-domain airborne electromagnetic (AEM) data collected under the Tellus projects of the Geological Survey of Ireland (GSI) represent a wealth of information on the multi-dimensional electrical structure of Ireland's near-surface. Our project, which was funded by GSI under the framework of their Short Call Research Programme, aims to develop and implement inverse techniques based on various Bayesian methods for these densely sampled data. We have developed a highly flexible toolbox in the Python language for the one-dimensional inversion of AEM data along the flight lines. The computational core is based on an adapted frequency- and time-domain forward modelling core derived from the well-tested open-source code AirBeo, which was developed by the CSIRO (Australia) and the AMIRA consortium. Three different inversion methods have been implemented: (i) Tikhonov-type inversion including optimal regularisation methods (Aster et al., 2012; Zhdanov, 2015), (ii) Bayesian MAP inversion in parameter and data space (e.g. Tarantola, 2005), and (iii) full Bayesian inversion with Markov Chain Monte Carlo (Sambridge and Mosegaard, 2002; Mosegaard and Sambridge, 2002), all including different forms of spatial constraints. The methods have been tested on synthetic and field data. This contribution will introduce the toolbox and present case studies on the AEM data from the Tellus projects.
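
    Of the three implemented approaches, the Tikhonov-type inversion is the easiest to illustrate. The sketch below solves a generic linearized minimum-structure problem with a first-difference roughness operator; it is not taken from the project's AirBeo-based Python toolbox, and the sensitivity matrix, data, and regularisation parameter are invented for the example.

    ```python
    import numpy as np

    def tikhonov_solve(G, d, alpha):
        # Solve min_m ||d - G m||^2 + alpha ||D m||^2, with D a first-difference
        # (roughness) operator, via the normal equations.
        n = G.shape[1]
        D = np.diff(np.eye(n), axis=0)
        return np.linalg.solve(G.T @ G + alpha * D.T @ D, G.T @ d)

    # illustrative usage with a made-up linearized AEM sensitivity matrix
    rng = np.random.default_rng(0)
    G = rng.normal(size=(40, 25))                    # 40 data, 25 layer parameters
    m_true = np.sin(np.linspace(0.0, np.pi, 25))
    d = G @ m_true + 0.05 * rng.normal(size=40)
    m_est = tikhonov_solve(G, d, alpha=1.0)
    ```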

  11. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solid and fluid signals, and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, it can produce physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the material, medical and food sciences.
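
    The idea of fitting Gaussian and exponential decays simultaneously can be illustrated with a simple two-component least-squares fit. The full SGE inversion recovers entire relaxation-time distributions; the sketch below fits only one Gaussian plus one exponential component, and all amplitudes, T2 values, and noise levels are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sge_decay(t, a_g, t2_g, a_e, t2_e):
        # Gaussian decay (solid-like signal) plus exponential decay (fluid-like signal).
        return a_g * np.exp(-(t / t2_g) ** 2) + a_e * np.exp(-t / t2_e)

    # synthetic decay containing both components plus noise (illustrative values)
    t = np.linspace(1e-4, 0.2, 400)                           # time in seconds
    signal = sge_decay(t, 1.0, 0.02, 0.6, 0.05)
    signal += 0.01 * np.random.default_rng(1).normal(size=t.size)

    # simultaneous fit of the Gaussian and exponential components
    popt, _ = curve_fit(sge_decay, t, signal,
                        p0=(0.5, 0.01, 0.5, 0.03), bounds=(0.0, np.inf))
    ```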

  12. Effect of inversion layer at iron pyrite surface on photovoltaic device

    NASA Astrophysics Data System (ADS)

    Uchiyama, Shunsuke; Ishikawa, Yasuaki; Uraoka, Yukiharu

    2018-03-01

    Iron pyrite has great potential as a thin-film solar cell material because of its high optical absorption, low cost, and earth abundance. However, previously reported iron pyrite solar cells showed poor photovoltaic characteristics. Here, we have numerically simulated its photovoltaic characteristics and band structures utilizing a two-dimensional (2D) device simulator, ATLAS, to evaluate the effects of an inversion layer at the surface and a high density of deep donor defect states in the bulk. We found that previous device structures did not account for the inversion layer at the surface region of iron pyrite, which made it difficult to obtain a high conversion efficiency. Therefore, we remodeled the device structure and suggest that removing the inversion layer and reducing the density of deep donor defect states would lead to a high conversion efficiency for iron pyrite solar cells.

  13. An Inversion Method for Reconstructing Hall Thruster Plume Parameters from the Line Integrated Measurements (Preprint)

    DTIC Science & Technology

    2007-06-05

    Technical Paper, dated 05-06-2007: An Inversion Method for Reconstructing Hall Thruster Plume Parameters from Line Integrated Measurements (Preprint), by Taylor S. Matlock et al. Only report-documentation-page fragments of the abstract are available; they describe an estimate of the plume electron temperature obtained from line-integrated measurements using a published xenon collisional radiative model, and introduce the Hall thruster as a high specific impulse electric thruster.

  14. Comparisons between conventional optical imaging and parametric indirect microscopic imaging on human skin detection

    NASA Astrophysics Data System (ADS)

    Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang

    2016-10-01

    We report a new method, polarization parameters indirect microscopic imaging with a high-transmission infrared light source, to detect the morphology and composition of human skin. A conventional reflection microscopic system is used as the basic optical system, into which a polarization-modulation mechanism is inserted and a high-transmission infrared light source is utilized. The near-field structural characteristics of human skin can be delivered by infrared waves and material coupling. According to coupling and conduction physics, changes of the optical wave parameters can be calculated and curves of the image intensity can be obtained. By analyzing the near-field polarization parameters at the nanoscale, we can finally obtain the inversion images of human skin. Compared with the conventional direct optical microscope, this method can break the diffraction limit and achieve a super-resolution of sub-100 nm. In addition, the method is more sensitive to edges, wrinkles, boundaries and impurity particles.

  15. Research on fusion algorithm of polarization image in tetrolet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes, i.e., shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. Firstly, the polarization magnitude image and the polarization angle image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. For the low-frequency coefficients, the average fusion method is used. According to the edge distribution differences in the high-frequency sub-band images, the directional high-frequency coefficients are fused by selecting the better coefficients with a region spectrum entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and the fused image has a better subjective visual effect.
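
    A rough feel for the fusion pipeline can be given with an ordinary single-level 2D wavelet transform standing in for the tetrolet transform (the adaptive tetromino supports are not reproduced here). The selection rule below keeps the larger-magnitude high-frequency coefficient as a simple proxy for the region spectrum entropy criterion; the images, wavelet choice, and selection rule are all assumptions of this sketch.

    ```python
    import numpy as np
    import pywt

    def fuse_polarization_images(img_a, img_b, wavelet="haar"):
        # Decompose both images, average the low-pass bands, keep the
        # larger-magnitude coefficient in each high-pass band, then invert.
        cA_a, highs_a = pywt.dwt2(img_a, wavelet)
        cA_b, highs_b = pywt.dwt2(img_b, wavelet)
        cA = 0.5 * (cA_a + cA_b)
        highs = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                      for ha, hb in zip(highs_a, highs_b))
        return pywt.idwt2((cA, highs), wavelet)

    # illustrative usage with random stand-ins for the two polarization images
    rng = np.random.default_rng(4)
    magnitude_img = rng.random((64, 64))
    angle_img = rng.random((64, 64))
    fused = fuse_polarization_images(magnitude_img, angle_img)
    ```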

  16. A case study of the sensitivity of forecast skill to data and data analysis techniques

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Atlas, R.; Halem, M.; Susskind, J.

    1983-01-01

    A series of experiments have been conducted to examine the sensitivity of forecast skill to various data and data analysis techniques for the 0000 GMT case of January 21, 1979. These include the individual components of the FGGE observing system, the temperatures obtained with different satellite retrieval methods, and the method of vertical interpolation between the mandatory pressure analysis levels and the model sigma levels. It is found that NESS TIROS-N infrared retrievals seriously degrade a rawinsonde-only analysis over land, resulting in a poorer forecast over North America. Less degradation of the 72-hr forecast skill at sea level and some improvement at 500 mb are noted, relative to the control, with TIROS-N retrievals produced by a physical inversion method which utilizes a 6-hr forecast first guess. NESS VTPR oceanic retrievals lead to an improved forecast over North America when added to the control.

  17. Acoustic methods for cavitation mapping in biomedical applications

    NASA Astrophysics Data System (ADS)

    Wan, M.; Xu, S.; Ding, T.; Hu, H.; Liu, R.; Bai, C.; Lu, S.

    2015-12-01

    In recent years, cavitation has been increasingly utilized in a wide range of applications in the biomedical field. Monitoring the spatial-temporal evolution of cavitation bubbles is of great significance for efficiency and safety in biomedical applications. In this paper, several acoustic methods for cavitation mapping, proposed or modified on the basis of existing work, are presented. The proposed novel ultrasound line-by-line/plane-by-plane method can depict the cavitation bubble distribution with high spatial and temporal resolution and may be developed into a potential standard 2D/3D cavitation field mapping method. The modified ultrafast active cavitation mapping, based upon plane wave transmission and reception as well as bubble wavelet and pulse inversion techniques, can appreciably enhance the cavitation-to-tissue ratio in tissue and further assist in monitoring cavitation-mediated therapy with good spatial and temporal resolution. The methods presented in this paper will be a foundation for promoting the research and development of cavitation imaging in non-transparent media.
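
    The pulse inversion element of the ultrafast mapping scheme is easy to demonstrate in isolation: summing the echoes from a pulse and its polarity-inverted copy cancels linear (tissue-like) scattering while retaining even-harmonic (bubble-like) components. The scatterer models below are crude toys chosen only to show the cancellation, not part of the authors' processing chain.

    ```python
    import numpy as np

    def pulse_inversion_sum(echo_pos, echo_neg):
        # Linear echoes cancel; even-order nonlinear (cavitation) echoes remain.
        return echo_pos + echo_neg

    def linear_scatterer(tx):
        return 0.8 * tx                      # tissue-like linear response

    def bubble_scatterer(tx):
        return 0.8 * tx + 0.1 * tx ** 2      # crude even-harmonic nonlinearity

    t = np.linspace(0.0, 1e-5, 500)
    tx = np.sin(2 * np.pi * 2e6 * t)         # 2 MHz transmit pulse

    print(np.max(np.abs(pulse_inversion_sum(linear_scatterer(tx), linear_scatterer(-tx)))))  # ~0
    print(np.max(np.abs(pulse_inversion_sum(bubble_scatterer(tx), bubble_scatterer(-tx)))))  # bubble term kept
    ```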

  18. Using the in-line component for fixed-wing EM 1D inversion

    NASA Astrophysics Data System (ADS)

    Smiarowski, Adam

    2015-09-01

    Numerous authors have discussed the utility of multicomponent measurements. Generally speaking, for a vertical-oriented dipole source, the measured vertical component couples to horizontal planar bodies while the horizontal in-line component couples best to vertical planar targets. For layered-earth cases, helicopter EM systems have little or no in-line component response and as a result much of the in-line signal is due to receiver coil rotation and appears as noise. In contrast to this, the in-line component of a fixed-wing airborne electromagnetic (AEM) system with large transmitter-receiver offset can be substantial, exceeding the vertical component in conductive areas. This paper compares the in-line and vertical response of a fixed-wing airborne electromagnetic (AEM) system using a half-space model and calculates sensitivity functions. The a posteriori inversion model parameter uncertainty matrix is calculated for a bathymetry model (conductive layer over more resistive half-space) for two inversion cases; use of vertical component alone is compared to joint inversion of vertical and in-line components. The joint inversion is able to better resolve model parameters. An example is then provided using field data from a bathymetry survey to compare the joint inversion to vertical component only inversion. For each inversion set, the difference between the inverted water depth and ship-measured bathymetry is calculated. The result is in general agreement with that expected from the a posteriori inversion model parameter uncertainty calculation.

  19. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
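
    One linearized step of such a scheme can be sketched with an ordinary conjugate-gradient solver on the normal equations, where capping the iteration count plays the regularising role described above. The sensitivity matrix and data sizes are invented, and plain CG is used as a stand-in for the generalized conjugate gradients method of the paper.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def truncated_cg_step(J, dz, max_iter=8):
        # Solve (J^T J) dx = J^T dz with a limited number of CG iterations;
        # truncating the iterations limits the artificial errors introduced by
        # the ill-conditioned sensitivity matrix.
        normal_op = LinearOperator((J.shape[1], J.shape[1]),
                                   matvec=lambda x: J.T @ (J @ x),
                                   dtype=np.float64)
        dx, _ = cg(normal_op, J.T @ dz, maxiter=max_iter)
        return dx

    # illustrative usage: 104 mutual-impedance measurements, 316 pixels
    rng = np.random.default_rng(0)
    J = rng.normal(size=(104, 316))
    dz = J @ rng.normal(size=316) + 0.01 * rng.normal(size=104)
    d_conductivity = truncated_cg_step(J, dz)
    ```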

  20. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology in the 1970s. This included early work by Aki, Christoffersson, and Husebye, who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.

  1. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome the nonsystematic design and long training time of nonlinear adaptive inverse control, we propose an approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.

  2. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  3. GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION

    NASA Astrophysics Data System (ADS)

    Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič

    2015-10-01

    In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, the Koch curve, the dragon curve, and the Fibonacci fractal, among others, and thereby obtain new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
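
    A sketch of the transformation itself, under the assumption that the p-circle inversion maps a point along the ray from the centre so that the p-distances of the point and its image multiply to r^2, which reduces to classical circle inversion for p = 2 and to the taxicab inversion for p = 1 (the exact definition should be taken from the paper):

    ```python
    import numpy as np

    def p_circle_inversion(point, center, r, p=2.0):
        # Image lies on the ray from `center` through `point`, scaled so that
        # d_p(center, point) * d_p(center, image) = r**2 (assumed definition).
        c = np.asarray(center, dtype=float)
        v = np.asarray(point, dtype=float) - c
        d = np.linalg.norm(v, ord=p)
        return c + (r ** 2 / d ** 2) * v

    # classical check: inverting (2, 0) in the unit circle (p = 2) gives (0.5, 0)
    print(p_circle_inversion([2.0, 0.0], [0.0, 0.0], 1.0, p=2))
    ```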

  4. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties, resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data have been generated and compared with the literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature.

  5. Mathematical investigation of one-way transform matrix options.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, James Arlin

    2006-01-01

    One-way transforms have been used in weapon systems processors since the mid- to late-1970s in order to help recognize insertion of correct pre-arm information while maintaining abnormal-environment safety. Level-One, Level-Two, and Level-Three transforms have been designed. The Level-One and Level-Two transforms have been implemented in weapon systems, and both of these transforms are equivalent to matrix multiplication applied to the inserted information. The Level-Two transform, utilizing a 6 x 6 matrix, provided the basis for the "System 2" interface definition for Unique-Signal digital communication between aircraft and attached weapons. The investigation described in this report was carried out to find out if there were other size matrices that would be equivalent to the 6 x 6 Level-Two matrix. One reason for the investigation was to find out whether or not other dimensions were possible, and if so, to derive implementation options. Another important reason was to more fully explore the potential for inadvertent inversion. The results were that additional implementation methods were discovered, but no inversion weaknesses were revealed.

  6. Spectral Calculation of ICRF Wave Propagation and Heating in 2-D Using Massively Parallel Computers

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; D'Azevedo, E.; Berry, L. A.; Carter, M. D.; Batchelor, D. B.

    2000-10-01

    Spectral calculations of ICRF wave propagation in plasmas have the natural advantage that they require no assumption regarding the smallness of the ion Larmor radius ρ relative to wavelength λ. Results are therefore applicable to all orders in k⊥ρ, where k⊥ = 2π/λ. But because all modes in the spectral representation are coupled, the solution requires inversion of a large dense matrix. In contrast, finite difference algorithms involve only matrices that are sparse and banded. Thus, spectral calculations of wave propagation and heating in tokamak plasmas have so far been limited to 1-D. In this paper, we extend the spectral method to 2-D by taking advantage of new matrix inversion techniques that utilize massively parallel computers. By spreading the dense matrix over 576 processors on the ORNL IBM RS/6000 SP supercomputer, we are able to solve up to 120,000 coupled complex equations requiring 230 GBytes of memory and achieving over 500 Gflops/sec. Initial results for ASDEX and NSTX will be presented using up to 200 modes in both the radial and vertical dimensions.

  7. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
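
    The core of the approach, Bayesian sampling over a cheap surrogate instead of the full FE model, can be caricatured in a few lines. The one-parameter damage model, the exponential surrogate, the sensor layout, and the noise level below are all invented for illustration; the paper estimates location, size, and orientation with a trained surrogate and more capable MCMC sampling.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sensors = np.linspace(0.0, 1.0, 8)            # normalized sensor coordinates

    def surrogate_strain(damage_x):
        # Hypothetical surrogate: strain perturbation decays with distance from damage.
        return np.exp(-np.abs(sensors - damage_x))

    true_x = 0.35
    data = surrogate_strain(true_x) + 0.02 * rng.normal(size=sensors.size)

    def log_posterior(x, sigma=0.02):
        if not 0.0 <= x <= 1.0:                   # uniform prior over the panel
            return -np.inf
        r = data - surrogate_strain(x)
        return -0.5 * np.sum((r / sigma) ** 2)

    # random-walk Metropolis sampling of the damage location
    x, lp, samples = 0.5, log_posterior(0.5), []
    for _ in range(5000):
        x_prop = x + 0.05 * rng.normal()
        lp_prop = log_posterior(x_prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        samples.append(x)
    posterior = np.array(samples[1000:])          # discard burn-in
    ```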

  8. A 3D generic inverse dynamic method using wrench notation and quaternion algebra.

    PubMed

    Dumas, R; Aissaoui, R; de Guise, J A

    2004-06-01

    In the literature, conventional 3D inverse dynamic models are limited in three aspects related to inverse dynamic notation, body segment parameters and kinematic formalism. First, conventional notation yields separate computations of the forces and moments with successive coordinate system transformations. Secondly, the way conventional body segment parameters are defined is based on the assumption that the inertia tensor is principal and the centre of mass is located between the proximal and distal ends. Thirdly, the conventional kinematic formalism uses Euler or Cardanic angles that are sequence-dependent and suffer from singularities. In order to overcome these limitations, this paper presents a new generic method for inverse dynamics. This generic method is based on wrench notation for inverse dynamics, a general definition of body segment parameters and quaternion algebra for the kinematic formalism.

  9. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization method and an optimization algorithm based on swarm intelligence; it originates from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to the Newton method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. Most importantly, the parameters of the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
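
    The comparison can be reproduced in miniature with SciPy's Levenberg-Marquardt solver against a bare-bones particle swarm loop. The two-parameter anomaly model below is a stand-in for the fault gravity response, and all bounds, swarm settings, and noise levels are assumptions of this sketch.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    x_obs = np.linspace(-10.0, 10.0, 50)

    def forward(params):
        # Hypothetical two-parameter anomaly (amplitude, depth-like width)
        # standing in for the fault gravity response.
        amp, depth = params
        return amp * depth / (x_obs ** 2 + depth ** 2)

    g_obs = forward([5.0, 3.0]) + 0.02 * rng.normal(size=x_obs.size)

    # Levenberg-Marquardt: local, derivative-based
    lm_fit = least_squares(lambda p: forward(p) - g_obs, x0=[1.0, 1.0], method="lm")

    # Particle swarm optimization: global, derivative-free
    def pso(misfit, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
        lo, hi = np.array(bounds).T
        pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            val = np.array([misfit(p) for p in pos])
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest

    pso_fit = pso(lambda p: np.sum((forward(p) - g_obs) ** 2),
                  bounds=[(0.1, 20.0), (0.1, 20.0)])
    ```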

  10. Modeling of Sensor Placement Strategy for Shape Sensing and Structural Health Monitoring of a Wing-Shaped Sandwich Panel Using Inverse Finite Element Method.

    PubMed

    Kefal, Adnan; Yildiz, Mehmet

    2017-11-30

    This paper investigated the effect of sensor density and alignment for three-dimensional shape sensing of an airplane-wing-shaped thick panel subjected to three different loading conditions, i.e., bending, torsion, and membrane loads. For shape sensing analysis of the panel, the Inverse Finite Element Method (iFEM) was used together with the Refined Zigzag Theory (RZT), in order to enable accurate predictions for transverse deflection and through-the-thickness variation of interfacial displacements. In this study, the iFEM-RZT algorithm is implemented by utilizing a novel three-node C°-continuous inverse-shell element, known as i3-RZT. The discrete strain data is generated numerically by performing a high-fidelity finite element analysis on the wing-shaped panel. This numerical strain data represents experimental strain readings obtained from surface-patched strain gauges or embedded fiber Bragg grating (FBG) sensors. Three different sensor placement configurations with varying density and alignment of strain data were examined and their corresponding displacement contours were compared with those of reference solutions. The results indicate that a sparse distribution of FBG sensors (uniaxial strain measurements), aligned in only the longitudinal direction, is sufficient for predicting accurate full-field membrane and bending responses (deformed shapes) of the panel, including a true zigzag representation of interfacial displacements. On the other hand, a sparse deployment of strain rosettes (triaxial strain measurements) is essentially enough to produce torsion shapes that are as accurate as those predicted by a dense sensor placement configuration. Hence, the potential applicability and practical aspects of the i3-RZT/iFEM methodology are demonstrated for three-dimensional shape sensing of future aerospace structures.

  11. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
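
    The two-level ℓ1/ℓ2 prior is easy to illustrate: its proximal operator is a group soft-thresholding of each source's time course. The sketch below uses plain (un-accelerated) ISTA rather than the fast first-order schemes of the paper, and the gain matrix, measurements, and regularisation weight are invented for the example.

    ```python
    import numpy as np

    def prox_l21(X, thresh):
        # Group soft-thresholding: shrink the l2 norm of each row (source) of X.
        row_norms = np.linalg.norm(X, axis=1, keepdims=True)
        scale = np.maximum(1.0 - thresh / np.maximum(row_norms, 1e-12), 0.0)
        return X * scale

    def mxne_ista(G, M, alpha, n_iter=200):
        # Minimize 0.5 * ||M - G X||_F^2 + alpha * sum_i ||X[i, :]||_2 by ISTA.
        L = np.linalg.norm(G, 2) ** 2      # Lipschitz constant of the data-fit gradient
        X = np.zeros((G.shape[1], M.shape[1]))
        for _ in range(n_iter):
            grad = G.T @ (G @ X - M)
            X = prox_l21(X - grad / L, alpha / L)
        return X

    # illustrative usage: 60 sensors, 500 candidate sources, 20 time samples
    rng = np.random.default_rng(0)
    G = rng.normal(size=(60, 500))
    M = rng.normal(size=(60, 20))
    X_hat = mxne_ista(G, M, alpha=5.0)
    ```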

  12. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  13. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  14. The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.

    DTIC Science & Technology

    1985-06-10

    Only report-documentation fragments of this revised report, 'The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow Theory', are available. They state that the method of Levi Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem, and note that the problem is not thought to present much of a challenge at zero cavitation number, in which case the classical method of Levi Civita [7] can be applied.

  15. An Inversion Method for Reconstructing Hall Thruster Plume Parameters from the Line Integrated Measurements (Postprint)

    DTIC Science & Technology

    2007-07-01

    Technical Paper: An Inversion Method for Reconstructing Hall Thruster Plume Parameters from Line Integrated Measurements (Postprint). Only report-documentation-page fragments of the abstract are available; they introduce the Hall thruster as a high specific impulse electric thruster that produces a highly ionized plasma inside an annular chamber.

  16. A Higher Order Iterative Method for Computing the Drazin Inverse

    PubMed Central

    Soleymani, F.; Stanimirović, Predrag S.

    2013-01-01

    A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used to find the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper. PMID:24222747
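
    For context, the classical second-order member of this family of schemes is the Newton-Schulz iteration; the higher-order method of the paper adds further terms per step but follows the same pattern. The starting guess below is the standard convergent choice for nonsingular matrices, and this sketch does not reproduce the paper's Drazin-inverse extension.

    ```python
    import numpy as np

    def newton_schulz_inverse(A, n_iter=30):
        # Second-order iteration X <- X (2I - A X) converging to A^{-1}
        # from the standard starting guess X0 = A^T / (||A||_1 * ||A||_inf).
        n = A.shape[0]
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(n_iter):
            X = X @ (2.0 * I - A @ X)
        return X

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    print(np.allclose(newton_schulz_inverse(A) @ A, np.eye(2)))   # True
    ```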

  17. Enhanced photochemical catalysis of TiO2 inverse opals by modification with ZnO or Fe2O3 using ALD and the hydrothermal method

    NASA Astrophysics Data System (ADS)

    Liu, Jiatong; Sun, Cuifeng; Fu, Ming; Long, Jie; He, Dawei; Wang, Yongsheng

    2018-02-01

    The development of porous materials exhibiting photon regulation abilities for improved photoelectrochemical catalysis performance is always one of the important goals of solar energy harvesting. In this study, methods to improve the photocatalytic activity of TiO2 inverse opals were discussed. TiO2 inverse opals were prepared by atomic layer deposition (ALD) using colloidal crystal templates. In addition, TiO2 inverse opal heterostructures were fabricated using colloidal heterocrystals by repeated vertical deposition using different colloidal spheres. The hydrothermal method and ALD were used to prepare ZnO- or Fe2O3-modified TiO2 inverse opals on the internal surfaces of the TiO2 porous structures. Although the photonic reflection band was not significantly varied by oxide modification, the presence of Fe2O3 in the TiO2 inverse opals enhanced their visible absorption. The conformally modified oxides on the TiO2 inverse opals could also form energy barriers and avoid the recombination of electrons and holes. The fabrication of the TiO2 photonic crystal heterostructures and modification with ZnO or Fe2O3 can enhance the photocatalytic activity of TiO2 inverse opals.

  18. A Methodology to Separate and Analyze a Seismic Wide Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach to defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis, which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities, we are able to map different geological units along a profile. By comparing velocity distributions with the result of a tomographic inversion along the profile, we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis from a simple solution to an inverse problem, and the subsequent resolution of structural heterogeneities through a divergence analysis, is a new and simple way of defining the a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of the velocity-depth distributions. Certainly, the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity-depth distributions. Using the wide-angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract key features and properties from the inversion results by including information concerning the confidence level of the results.

  19. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, can obtain more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Numerical examples of an anti-noise test show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and has a strong ability to resist noise.

  20. Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.

    NASA Astrophysics Data System (ADS)

    Xia, Jake Jiqing

    The electromagnetic inverse scattering problems concerned in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from the scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of the Riccati equations are derived and distinguished. New renormalized formulae based inverting one specific type of the Riccati equation are derived. Relations between the inverse methods of Green's function, the Riccati equation and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application for borehole inversion. In Chapter 4 the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to inversion of the underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using the electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6. A stratified medium model is used to derive the guidance condition and the reflection coefficient. Results obtained in Chapter 6 provide the physical foundation for applying the inverse methods for the waveguide design problem. In Chapter 7, a global guidance condition for continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.).

  1. A matrix-inversion method for gamma-source mapping from gamma-count data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adsley, Ian; Burgess, Claire; Bull, Richard K

    In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co{sup 60} source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical problems, such as the segregation of highly active objects amongst fuel-element debris. (authors)
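
    The record above describes a concrete linear-inversion workflow: build a response matrix from a simple detector model, then invert the measured count map for the source distribution. The sketch below is a minimal, self-contained illustration of that idea under assumed conditions (an idealized inverse-square-law point-detector response, a 5 x 5 source grid, one measurement position above each cell, no noise); the grid geometry, detector height and activity value are illustrative choices, not values from the paper.

      # Minimal sketch of gamma-source mapping by response-matrix inversion.
      # Assumed setup (not from the paper): idealized 1/r^2 point-detector
      # response, a 5 x 5 source grid, one measurement above each cell, no noise.
      import numpy as np

      n = 5                                  # grid is n x n cells
      pitch, height = 0.10, 0.30             # cell pitch and detector height [m]
      xs = (np.arange(n) - (n - 1) / 2) * pitch
      X, Y = np.meshgrid(xs, xs)
      cells = np.column_stack([X.ravel(), Y.ravel()])

      def response_matrix(det_positions, cells, height):
          """Counts at detector position i per unit activity in source cell j."""
          R = np.empty((len(det_positions), len(cells)))
          for i, (dx, dy) in enumerate(det_positions):
              r2 = (cells[:, 0] - dx) ** 2 + (cells[:, 1] - dy) ** 2 + height ** 2
              R[i] = 1.0 / (4.0 * np.pi * r2)  # inverse-square geometric factor
          return R

      detectors = cells.copy()                # one count position per cell
      R = response_matrix(detectors, cells, height)

      true_src = np.zeros(n * n)
      true_src[12] = 1.0e5                    # 100 kBq source in the centre cell
      counts = R @ true_src                   # simulated count map

      # Recover the source distribution from the count map (least squares guards
      # against ill-conditioning of the response matrix).
      recovered, *_ = np.linalg.lstsq(R, counts, rcond=None)
      print("recovered centre-cell activity:", recovered[12])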

  2. Density-to-Potential Inversions to Guide Development of Exchange-Correlation Approximations at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew

    The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94.
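
    To make the forward/inverse distinction above concrete, the sketch below inverts a target density to an effective potential for a toy one-dimensional, noninteracting two-electron system at zero temperature, using a simple iterative potential update (raise the potential wherever the computed density overshoots the target). This is an assumed illustration only; it is not the constrained-variational or PDE-constrained scheme of the abstract, and the grid size, occupation, step size and iteration count are arbitrary choices.

      # Toy 1D density-to-potential inversion for two noninteracting electrons.
      # Forward problem: potential -> density. Inverse problem: iterate a simple
      # additive potential update until the computed density approaches the target.
      import numpy as np

      m, L = 201, 10.0
      x = np.linspace(-L / 2, L / 2, m)
      h = x[1] - x[0]
      lap = (np.diag(np.ones(m - 1), -1) - 2.0 * np.eye(m)
             + np.diag(np.ones(m - 1), 1)) / h**2

      def density(v):
          """Forward problem: ground-state density of 2 electrons in potential v."""
          H = -0.5 * lap + np.diag(v)
          _, psi = np.linalg.eigh(H)
          phi = psi[:, 0] / np.sqrt(h)          # grid-normalised lowest orbital
          return 2.0 * phi**2                   # doubly occupied

      n_target = density(0.5 * x**2)            # density of a harmonic "true" system

      v = np.zeros(m)                           # inverse problem: start from v = 0
      start_gap = np.max(np.abs(density(v) - n_target))
      for _ in range(1000):
          v += 0.2 * (density(v) - n_target)    # raise v where density is too high
      end_gap = np.max(np.abs(density(v) - n_target))
      print(f"max density mismatch: start {start_gap:.3f}, end {end_gap:.4f}")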

  3. Multisource inverse-geometry CT. Part II. X-ray source design and prototype

    PubMed Central

    Neculaes, V. Bogdan; Caiafa, Antonio; Cao, Yang; De Man, Bruno; Edic, Peter M.; Frutschy, Kristopher; Gunturi, Satish; Inzinna, Lou; Reynolds, Joseph; Vermilyea, Mark; Wagner, David; Zhang, Xi; Zou, Yun; Pelc, Norbert J.; Lounsberry, Brian

    2016-01-01

    Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation. Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks—one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters. Results: Four modules (32 x-ray sources in two rows of 16) have been successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. Dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the current beam to a small optical focal spot size (0.5 × 1.4 mm). Controlled emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV. Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast read out detectors to increase the x-ray flux magnitude further, while still staying within the stationary target inherent thermal limitations. PMID:27487878

  4. Multisource inverse-geometry CT. Part II. X-ray source design and prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neculaes, V. Bogdan, E-mail: neculaes@ge.com; Caia

    2016-08-15

    Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation. Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks—one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters. Results: Four modules (32 x-ray sources in two rows of 16) have been successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. Dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the current beam to a small optical focal spot size (0.5 × 1.4 mm). Controlled emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV. Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast read out detectors to increase the x-ray flux magnitude further, while still staying within the stationary target inherent thermal limitations.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chien-Chih; Hsu, Pei-Lun; Lin, Li

    A particular edge-dependent inversion current behavior of metal-oxide-semiconductor (MOS) tunneling diodes was investigated utilizing square and comb-shaped electrodes. The inversion tunneling current exhibits a strong dependence on the tooth size of the comb-shaped electrodes and on the oxide thickness. A detailed picture of the current conduction mechanism is developed from simulation and experimental measurement results. It is found that the electron diffusion current and the Schottky barrier height lowering for the hole tunneling current both contribute to the inversion current conduction. In MOS tunneling photodiode applications, the photoresponse can be improved by decreasing the SiO{sub 2} thickness and using comb-shaped electrodes with smaller tooth spacing. Meanwhile, high and steady photosensitivity can also be achieved by introducing HfO{sub 2} into the dielectric stacks.

  6. pyGIMLi: An open-source library for modelling and inversion in geophysics

    NASA Astrophysics Data System (ADS)

    Rücker, Carsten; Günther, Thomas; Wagner, Florian M.

    2017-12-01

    Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical but also hydrological methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different ways of coupling individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data of a simulated tracer experiment is presented that allows the underlying hydraulic conductivity distribution of the aquifer to be reconstructed directly. Another example demonstrates the improvement gained by jointly inverting ERT and ultrasonic data for saturation with a new approach that incorporates petrophysical relations in the inversion. Potential applications of the presented framework are manifold and include time-lapse, constrained, joint, and coupled inversions of various geophysical and hydrological data sets.
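
    The generalized inversion framework described above is, at its core, a Gauss-Newton minimization wrapped around an arbitrary forward operator with regularization. The sketch below shows that generic scheme for a toy nonlinear operator; it deliberately does not use the pyGIMLi API, and the forward function, roughness operator and regularization weight are assumed, illustrative choices.

      # Generic Gauss-Newton inversion sketch for an arbitrary forward operator,
      # illustrating the kind of minimization such frameworks wrap. This is NOT
      # pyGIMLi code; model, operator and regularization are toy choices.
      import numpy as np

      def forward(m):
          """Toy nonlinear forward operator d = G(m)."""
          return np.array([np.sum(m**2), np.sum(np.exp(-m)), m[0] * m[-1]])

      def jacobian(m, eps=1e-6):
          """Finite-difference Jacobian of the forward operator."""
          d0 = forward(m)
          J = np.zeros((d0.size, m.size))
          for j in range(m.size):
              mp = m.copy()
              mp[j] += eps
              J[:, j] = (forward(mp) - d0) / eps
          return J

      def gauss_newton(d_obs, m0, lam=0.1, n_iter=20):
          """Gauss-Newton with first-difference (smoothness) regularization."""
          m = m0.copy()
          C = np.diff(np.eye(m.size), axis=0)     # roughness operator
          for _ in range(n_iter):
              r = d_obs - forward(m)
              J = jacobian(m)
              A = J.T @ J + lam * C.T @ C
              dm = np.linalg.solve(A, J.T @ r - lam * C.T @ (C @ m))
              m += dm
          return m

      m_true = np.array([1.0, 1.2, 1.1, 0.9, 1.0])
      d_obs = forward(m_true)
      m_est = gauss_newton(d_obs, m0=np.ones(5))
      print("estimated model:", np.round(m_est, 3))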

  7. An adaptive coupling strategy for joint inversions that use petrophysical information as constraints

    NASA Astrophysics Data System (ADS)

    Heincke, Björn; Jegen, Marion; Moorkamp, Max; Hobbs, Richard W.; Chen, Jin

    2017-01-01

    Joint inversion strategies for geophysical data have become increasingly popular as they allow for the efficient combination of complementary information from different data sets. The algorithm used for the joint inversion needs to be flexible in its description of the subsurface so as to be able to handle the diverse nature of the data. Hence, joint inversion schemes are needed that 1) adequately balance data from the different methods, 2) have stable convergence behavior, 3) consider the different resolution power of the methods used and 4) link the parameter models in a way that suits a wide range of applications. Here, we combine active-source seismic P-wave tomography, gravity and magnetotelluric (MT) data in a petrophysical joint inversion that accounts for these issues. Data from the different methods are inverted separately but are linked through constraints accounting for parameter relationships. An advantage of performing the inversions separately is that no relative weighting between the data sets is required. To avoid perturbing the convergence behavior of the inversions by the coupling, the strengths of the constraints are readjusted at each iteration. The criterion we use to control the adaption of the coupling strengths is based on variations in the objective functions of the individual inversions from one iteration to the next. Adaption of the coupling strengths also makes the joint inversion scheme applicable to subsurface conditions where the assumed relationships are not valid everywhere, because the individual inversions decouple if it is not possible to reach adequately low data misfits for the assumptions made. In addition, the coupling constraints depend on the relative resolutions of the methods, which leads to an improved convergence behavior of the joint inversion. Another benefit of the proposed scheme is that structural information can easily be incorporated in the petrophysical joint inversion (no additional terms are added to the objective functions) by using mutually controlled structural weights for the smoothing constraints. We test our scheme using data generated from a synthetic 2-D sub-basalt model. We observe that the adaption of the coupling strengths makes the convergence of the inversions very robust (data misfits of all methods are close to the target misfits) and that the final results are always close to the true models, independent of the parameter choices. Finally, the scheme is applied to real data sets from the Faroe-Shetland Basin to image a basaltic sequence and underlying structures. The presence of a borehole and a 3-D reflection seismic survey in this region allows direct comparison and, hence, evaluation of the quality of the joint inversion results. The results from the joint inversion are more consistent with results from other studies than those from the corresponding individual inversions, and the shape of the basaltic sequence is better resolved. However, due to the limited resolution of the individual methods used, it was not possible to resolve structures underneath the basalt in detail, indicating that additional geophysical information (e.g. CSEM, reflection onsets) needs to be included.

  8. Strategies to Enhance the Model Update in Regions of Weak Sensitivities for Use in Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Nuber, André; Manukyan, Edgar; Maurer, Hansruedi

    2014-05-01

    Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say, 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need to separate events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method equilibrates the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. We decided to normalise the columns of the Jacobian based on their absolute column sums, while defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method adjusts the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are on the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are then quite balanced and span a much smaller range than in the case of a regular inversion grid.
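
    The first strategy described above (equilibrating the Jacobian columns by their absolute column sums, with an upper threshold so that insignificant sensitivities are not over-boosted) can be sketched in a few lines. The matrix, the factor separating "shallow" and "deep" sensitivities, and the cap value below are all illustrative assumptions.

      # Sketch of the column-scaling idea: equilibrate the Jacobian by its absolute
      # column sums, but cap the scaling factors so that columns with tiny
      # sensitivities are not over-boosted. Threshold and test matrix are arbitrary.
      import numpy as np

      rng = np.random.default_rng(0)
      J = rng.normal(size=(200, 50))
      J[:, 25:] *= 1e-4                       # "deep" parameters, weak sensitivities

      col_sums = np.sum(np.abs(J), axis=0)
      scale = 1.0 / col_sums                  # raw equilibration factors
      cap = 50.0 * np.min(scale)              # upper threshold (illustrative choice)
      scale = np.minimum(scale, cap)

      J_scaled = J * scale                    # scaled Jacobian used in the update
      print("column-sum spread before:", col_sums.max() / col_sums.min())
      print("column-sum spread after :",
            np.abs(J_scaled).sum(0).max() / np.abs(J_scaled).sum(0).min())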

  9. Utilizing High-Performance Computing to Investigate Parameter Sensitivity of an Inversion Model for Vadose Zone Flow and Transport

    NASA Astrophysics Data System (ADS)

    Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.

    2011-12-01

    High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of the highly nonlinear, high-dimensional problem, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with the Beowulf-cluster-inspired parallel nonlinear parameter estimation software BeoPEST, in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-processing data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both the parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant, and new nodes can be dynamically added and removed. A major advantage of this approach is the ability to use high-resolution geologic models to preserve the spatial structure in the inverse model, which leads to better parameter estimates and improved predictions when using the inverse-conditioned realizations of parameter fields.

  10. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a huge amount of information floods into the digital domain around the world. Large volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment and grow dramatically in size. Although the inverted index is a useful data structure for full-text search and document retrieval, a large collection of documents requires a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts •Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
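
    For readers unfamiliar with the underlying data structure, the sketch below builds a small inverted index serially in plain Python; it is the structure the paper constructs in parallel on the GPU, not the authors' CUDA/SPMD implementation, and the example documents are invented.

      # Serial, hash-based construction of an inverted index: the data structure
      # the paper builds in parallel on a GPU. This sketch is plain Python.
      from collections import defaultdict

      def build_inverted_index(docs):
          """Map each term to the sorted list of document ids containing it."""
          index = defaultdict(set)
          for doc_id, text in enumerate(docs):
              for term in text.lower().split():
                  index[term].add(doc_id)
          return {term: sorted(ids) for term, ids in index.items()}

      docs = [
          "full text search needs an inverted index",
          "the inverted index maps terms to documents",
          "gpu computing accelerates index construction",
      ]
      index = build_inverted_index(docs)
      print(index["index"])       # -> [0, 1, 2]
      print(index["inverted"])    # -> [0, 1]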

  11. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a huge amount of information floods into the digital domain around the world. Large volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment and grow dramatically in size. Although the inverted index is a useful data structure for full-text search and document retrieval, a large collection of documents requires a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. •Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations.

  12. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation, for which travelling wave solutions are found. In this paper, we demonstrate the effectiveness of the analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking more exact solutions of the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including soliton, kink, periodic and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions. The obtained semi-inverse variational principle also has profound implications for physical understanding. These solutions might play an important role in engineering and physics. Moreover, graphical simulations were performed in Matlab to examine the behavior of these solutions.

  13. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.

  14. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate and high-volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 attracted around sixty attendees. Each submitted paper was reviewed by two reviewers, and nine papers were accepted. In addition, three international speakers were invited to present longer talks. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet

  15. Multidimensional NMR inversion without Kronecker products: Multilinear inversion

    NASA Astrophysics Data System (ADS)

    Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos

    2016-08-01

    Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.

  16. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear system established from gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a stabiliser model norm with a depth weighting function to produce smooth models. The depth weighting function is combined with the model norm to counteract the skin effect of the gravity potential field. Because the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, the inverse problem is strongly under-determined. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm to the gravity inversion can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient (CG) method is shown to be an appropriate algorithm to solve this Tikhonov cost function (the gravity inversion equation). The new, faster method is applied to Gaussian-noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin, off the shore of Louisiana. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
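
    The core numerical idea above is to solve a Tikhonov-regularized normal-equation system with a conjugate-gradient iteration preconditioned by symmetric successive over-relaxation. The sketch below applies that combination to a small synthetic problem; the kernel, depth weighting, regularization parameter and relaxation factor omega are assumed, illustrative values rather than those of the paper.

      # Sketch: Tikhonov-regularized gravity-style inversion solved with conjugate
      # gradients preconditioned by symmetric SOR (SSOR). Kernel, depth weighting,
      # regularization parameter and omega are illustrative choices.
      import numpy as np
      from scipy.linalg import solve_triangular
      from scipy.sparse.linalg import LinearOperator, cg

      rng = np.random.default_rng(1)
      n_data, n_cells = 80, 120
      G = rng.normal(size=(n_data, n_cells)) * np.exp(-np.arange(n_cells) / 40.0)
      m_true = np.zeros(n_cells)
      m_true[30:40] = 1.0                                # simple density anomaly
      d_obs = G @ m_true + 0.01 * rng.normal(size=n_data)

      lam = 1e-2
      w = (1.0 + np.arange(n_cells) / 40.0) ** 1.5       # crude depth weighting
      A = G.T @ G + lam * np.diag(w**2)                  # SPD normal-equation matrix
      b = G.T @ d_obs

      # SSOR preconditioner: M = (D + wL) D^-1 (D + wL)^T / (omega * (2 - omega))
      omega = 1.2
      DL = np.diag(np.diag(A)) + omega * np.tril(A, -1)

      def ssor_solve(r):
          y = solve_triangular(DL, r, lower=True)        # forward sweep
          y = np.diag(A) * y
          z = solve_triangular(DL.T, y, lower=False)     # backward sweep
          return omega * (2.0 - omega) * z

      M = LinearOperator((n_cells, n_cells), matvec=ssor_solve)
      m_est, info = cg(A, b, M=M, maxiter=200)
      print("CG converged:", info == 0,
            "| model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))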

  17. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification replaces the division inherent in projection with multiplication by the inverse. Because computing the inverse exactly would require an iterative procedure, the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
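
    The division-free idea described above can be illustrated independently of the fixed-point details: replace 1/w with a low-order polynomial so that a projection x/w becomes a multiply. The sketch below fits linear and quadratic approximations to the reciprocal over an assumed divisor range and compares their errors; the interval and fitting procedure are illustrative, and the paper's integer formats are omitted.

      # Sketch of the division-free idea: replace 1/w with a polynomial
      # approximation so a projection x/w can be computed as x * approx(w).
      # The interval and fitting method are illustrative choices.
      import numpy as np

      w = np.linspace(0.5, 2.0, 1000)            # assumed range of divisors
      inv_true = 1.0 / w

      lin = np.polynomial.Polynomial.fit(w, inv_true, deg=1)
      quad = np.polynomial.Polynomial.fit(w, inv_true, deg=2)

      err_lin = np.max(np.abs(lin(w) - inv_true) / inv_true)
      err_quad = np.max(np.abs(quad(w) - inv_true) / inv_true)
      print(f"max relative error, linear approx   : {err_lin:.3%}")
      print(f"max relative error, quadratic approx: {err_quad:.3%}")

      # A pixel projection x/w then becomes a multiply: x * quad(w)
      x, w0 = 123.4, 1.37
      print("x/w =", x / w0, " approx =", x * quad(w0))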

  18. Seismic joint analysis for non-destructive testing of asphalt and concrete slabs

    USGS Publications Warehouse

    Ryden, N.; Park, C.B.

    2005-01-01

    A seismic approach is used to estimate the thickness and elastic stiffness constants of asphalt or concrete slabs. The overall concept of the approach utilizes the robustness of the multichannel seismic method. A multichannel-equivalent data set is compiled from multiple time series recorded from multiple hammer impacts at progressively different offsets from a fixed receiver. This multichannel simulation with one receiver (MSOR) replaces true multichannel recording in a cost-effective and convenient manner. A recorded data set is first processed to evaluate the shear wave velocity through a wave-field transformation, normally used in the multichannel analysis of surface waves (MASW) method, followed by a Lamb-wave inversion. Then, the same data set is used to evaluate the compression wave velocity from combined processing of first-arrival picking and linear regression. Finally, the amplitude spectra of the time series are used to evaluate the thickness by following the concepts utilized in the Impact Echo (IE) method. Due to the powerful signal extraction capabilities ensured by the multichannel processing schemes used, the entire procedure for all three evaluations can be fully automated and results can be obtained directly in the field. A field data set is used to demonstrate the proposed approach.

  19. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude of variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, jump-out and jump-back-in steps are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
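
    As a hedged illustration of this kind of multi-frequency fitting, the sketch below inverts synthetic complex-resistivity data for Pelton-model parameters with a bounded nonlinear least-squares solver. It uses SciPy's trust-region-reflective solver rather than the authors' bounded Levenberg algorithm with jump-out/jump-back-in steps, and the model parameters, noise level, initial guess and bounds are arbitrary assumptions.

      # Sketch: fit a Pelton complex-resistivity model to multi-frequency data with
      # a bounded nonlinear least-squares solver (SciPy's 'trf' method). This is not
      # the authors' bounded Levenberg implementation; all values are illustrative.
      import numpy as np
      from scipy.optimize import least_squares

      def pelton(freq, rho0, m, tau, c):
          """Pelton complex resistivity model."""
          iwt = (1j * 2 * np.pi * freq * tau) ** c
          return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

      rng = np.random.default_rng(2)
      freq = np.logspace(-2, 4, 25)
      true = dict(rho0=100.0, m=0.3, tau=0.01, c=0.6)
      data = pelton(freq, **true)
      data = data + 0.01 * np.abs(data) * (rng.normal(size=freq.size)
                                           + 1j * rng.normal(size=freq.size))

      def residuals(p):
          r = pelton(freq, *p) - data
          return np.concatenate([r.real, r.imag])   # stack real and imaginary parts

      x0 = [50.0, 0.5, 1.0, 0.5]                    # deliberately rough initial guess
      lb = [1.0, 0.0, 1e-6, 0.1]                    # physical bounds on the parameters
      ub = [1e4, 1.0, 1e2, 1.0]
      fit = least_squares(residuals, x0, bounds=(lb, ub))
      print("estimated [rho0, m, tau, c]:", np.round(fit.x, 4))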

  20. 3D inversion of full gravity gradient tensor data in spherical coordinate system using local north-oriented frame

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Wu, Yulong; Yan, Jianguo; Wang, Haoran; Rodriguez, J. Alexis P.; Qiu, Yue

    2018-04-01

    In this paper, we propose an inverse method for full gravity gradient tensor data in the spherical coordinate system. As opposed to traditional gravity inversion in the Cartesian coordinate system, our proposed method takes the curvature of the Earth, the Moon, or other planets into account, using tesseroid bodies to produce gravity gradient effects in forward modeling. We used both synthetic and observed datasets to test the stability and validity of the proposed method. Our results using synthetic gravity data show that the new method predicts the depth of the anomalous density body efficiently and accurately. Using observed gravity data for the Mare Smythii area on the Moon, we recover the density distribution of the crust in this area, which reveals its geological structure. These results validate the proposed method and its potential application to large-area data inversion of planetary geological structures.

  1. Peeling linear inversion of upper mantle velocity structure with receiver functions

    NASA Astrophysics Data System (ADS)

    Shen, Xuzhang; Zhou, Huilan

    2012-02-01

    A peeling linear inversion method is presented to study upper mantle velocity structures (from the Moho to 800 km depth) with receiver functions. The influence of errors in the crustal and upper mantle velocity ratio on the inversion results is analyzed, and three effective measures are taken to reduce it. The method is tested with the IASP91 and PREM models, and the upper mantle structures beneath the stations GTA, LZH, and AXX in northwestern China are then inverted. The results indicate that this inversion method can quantify upper mantle discontinuities, except for discontinuities at depths between 3hM and 5hM (where hM denotes the depth of the Moho), owing to the interference of multiples from the Moho. Smoothing is used to suppress possible false discontinuities from the multiples and ensure the stability of the inversion results, but detailed information in the depth range between 3hM and 5hM is sacrificed.

  2. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  3. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts acoustic spectra to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, the model is modified to compensate for dispersive error in the finite difference simulation as part of the filter design process. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively a compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.

  4. Insurer Market Structure and Variation in Commercial Health Care Spending

    PubMed Central

    McKellar, Michael R; Naimer, Sivia; Landrum, Mary B; Gibson, Teresa B; Chandra, Amitabh; Chernew, Michael

    2014-01-01

    Objective To examine the relationship between insurance market structure and health care prices, utilization, and spending. Data Sources Claims for 37.6 million privately insured employees and their dependents from the Truven Health MarketScan Database in 2009. Measures of insurer market structure derived from HealthLeaders-InterStudy data. Methods Regression models are used to estimate the association between insurance market concentration and health care spending, utilization, and price, adjusting for differences in patient characteristics and other market-level traits. Results Insurance market concentration is inversely related to prices and spending, but positively related to utilization. Our results imply that, after adjusting for input price differences, a market with two equal-size insurers is associated with 3.9 percent lower medical care spending per capita (p = .002) and 5.0 percent lower prices for health care services relative to one with three equal-size insurers (p < .001). Conclusion Greater fragmentation in the insurance market might lead to higher prices and higher spending for care, suggesting some of the gains from insurer competition may be absorbed by higher prices for health care. Greater attention to prices and utilization in the provider market may need to accompany procompetitive insurance market strategies. PMID:24303879

  5. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
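
    As background for the method above, the sketch below runs a plain stagewise orthogonal matching pursuit (StOMP) iteration on a generic synthetic sparse-recovery problem. The authors' adaptations (prior information, non-negativity, wavelet parameterization, irregular domains) are not included, and the problem sizes and threshold are illustrative assumptions.

      # Plain stagewise orthogonal matching pursuit (StOMP) on a synthetic
      # sparse-recovery problem. The paper's adaptations are not included.
      import numpy as np

      def stomp(A, y, n_stages=10, threshold=2.5):
          n, p = A.shape
          support = np.zeros(p, dtype=bool)
          x = np.zeros(p)
          r = y.copy()
          for _ in range(n_stages):
              sigma = np.linalg.norm(r) / np.sqrt(n)        # noise-level proxy
              corr = A.T @ r
              new = np.abs(corr) > threshold * sigma        # hard-threshold test
              if not np.any(new & ~support):
                  break                                     # no new atoms selected
              support |= new
              x[:] = 0.0
              x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              r = y - A @ x                                 # update the residual
          return x

      rng = np.random.default_rng(4)
      n, p, k = 120, 400, 8
      A = rng.normal(size=(n, p)) / np.sqrt(n)
      x_true = np.zeros(p)
      x_true[rng.choice(p, k, replace=False)] = rng.uniform(1.0, 3.0, size=k)
      y = A @ x_true + 0.01 * rng.normal(size=n)

      x_hat = stomp(A, y)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))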

  6. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, J.; Lee, J.; Yadav, V.

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  7. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for the verification bias that exists in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R, with which the method can be implemented. Sensitivity and specificity calculated by the traditional method and by the maximum likelihood estimation method were compared to the results from the inverse-probability weighting method in the randomly sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity obtained with the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) with the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially under complex sampling.
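
    A minimal simulation makes the adjustment explicit: when verification by the gold standard depends on the screening result, complete-case estimates of sensitivity and specificity are biased, while weighting each verified subject by the inverse of its verification probability recovers the true values. The prevalence, test accuracy and verification probabilities below are invented for illustration; this is not the R package mentioned in the abstract.

      # Sketch of inverse-probability weighting to correct verification bias in
      # sensitivity/specificity estimation. All probabilities are illustrative.
      import numpy as np

      rng = np.random.default_rng(5)
      N = 200_000
      disease = rng.random(N) < 0.10                       # true disease status
      test = np.where(disease, rng.random(N) < 0.84,       # sens 0.84
                               rng.random(N) >= 0.86)      # spec 0.86
      # Verification by the gold standard depends on the screening result:
      p_verify = np.where(test, 0.90, 0.25)
      verified = rng.random(N) < p_verify

      d, t = disease[verified], test[verified]

      # Naive (complete-case) estimates, biased by the selective verification
      sens_naive = np.mean(t[d])
      spec_naive = np.mean(~t[~d])

      # IPW estimates: weight each verified subject by 1 / P(verified | test result)
      w = 1.0 / p_verify[verified]
      sens_ipw = np.sum(w[d] * t[d]) / np.sum(w[d])
      spec_ipw = np.sum(w[~d] * (~t[~d])) / np.sum(w[~d])

      print(f"naive sens/spec: {sens_naive:.3f} / {spec_naive:.3f}")
      print(f"IPW   sens/spec: {sens_ipw:.3f} / {spec_ipw:.3f}  (true: 0.840 / 0.860)")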

  8. Invited review article: physics and Monte Carlo techniques as relevant to cryogenic, phonon, and ionization readout of Cryogenic Dark Matter Search radiation detectors.

    PubMed

    Leman, Steven W

    2012-09-01

    This review discusses detector physics and Monte Carlo techniques for cryogenic radiation detectors that utilize combined phonon and ionization readout. A general review of cryogenic phonon and charge transport is provided along with specific details of the Cryogenic Dark Matter Search detector instrumentation. In particular, this review covers quasidiffusive phonon transport, which includes phonon focusing, anharmonic decay, and isotope scattering. The interaction of phonons with the detector surface is discussed along with the downconversion of phonons in superconducting films. The charge transport physics includes a mass tensor, which results from the crystal band structure and is modeled with a Herring-Vogt transformation. Charge scattering processes involve the creation of Neganov-Luke phonons. Transition-edge-sensor (TES) simulations include a full electric circuit description and all thermal processes including Joule heating, cooling to the substrate, and thermal diffusion within the TES, the latter of which is necessary to model normal-superconducting phase separation. Relevant numerical constants are provided for these physical processes in germanium, silicon, aluminum, and tungsten. Random-number sampling methods, including inverse cumulative distribution function (CDF) and rejection techniques, are reviewed. To improve the efficiency of charge transport modeling, an additional second-order inverse CDF method is developed here along with an efficient barycentric-coordinate sampling method for electric fields. Results are provided in a manner that is convenient for use in Monte Carlo and references are provided for validation of these models.
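
    The two sampling techniques reviewed above can be illustrated on a simple target distribution. The sketch below draws from an exponential distribution once by the inverse-CDF method and once by rejection from a uniform envelope; the distribution and envelope are arbitrary choices made for the example.

      # Inverse-CDF sampling versus rejection sampling for Exp(1), as an
      # illustration of the two techniques; the target distribution is arbitrary.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 100_000

      # Inverse-CDF method: X = F^{-1}(U) with U ~ Uniform(0, 1).
      u = rng.random(n)
      x_inv = -np.log(1.0 - u)                          # inverse CDF of Exp(1)

      # Rejection method: propose from Uniform(0, 10), accept with prob f(x)/(c*g(x)).
      c = 10.0                                          # envelope constant, g = 1/10
      proposals = rng.uniform(0.0, 10.0, size=12 * n)
      accept = rng.random(12 * n) < np.exp(-proposals)  # f(x)/(c*g(x)) = e^{-x}
      x_rej = proposals[accept][:n]

      print("inverse-CDF mean (expect ~1):", x_inv.mean().round(3))
      print("rejection    mean (expect ~1):", x_rej.mean().round(3))
      print("rejection acceptance rate    :", accept.mean().round(3))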

  9. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.

  10. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
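
    For orientation, a single-step Tikhonov-regularized dipole inversion can be written in closed form voxel-by-voxel in k-space, as sketched below. This is only the generic least-norm step: LN-QSM additionally operates on the total (background-contaminated) field and selects the regularization weight with the L-curve criterion, neither of which is reproduced here; the fixed lambda is an assumption.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0, 0, 1)):
    """k-space dipole kernel D = 1/3 - (k . b0)^2 / |k|^2, with D set to 0 at k = 0."""
    ks = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[k2 == 0] = np.inf                      # avoid division by zero at DC
    kb = kx*b0_dir[0] + ky*b0_dir[1] + kz*b0_dir[2]
    D = 1.0/3.0 - kb**2 / k2
    D[0, 0, 0] = 0.0                          # DC susceptibility offset is arbitrary
    return D

def tikhonov_qsm(field, lam=1e-2, voxel_size=(1.0, 1.0, 1.0)):
    """Least-norm Tikhonov dipole inversion: argmin ||D*chi - f||^2 + lam*||chi||^2."""
    D = dipole_kernel(field.shape, voxel_size)
    F = np.fft.fftn(field)
    chi_k = D * F / (D**2 + lam)              # D is real, so no conjugation needed
    return np.real(np.fft.ifftn(chi_k))
```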

  11. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, which include the inverse problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA) methods. A key problem with the inverse problem, MUSIC, and ICA methods is that the number of sources must be specified a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is finally made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into the signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis and is therefore an entirely data-led operation. It possesses a clear physical interpretation and an efficient computational procedure. The theoretical derivation of this method and the results obtained by using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
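
    A standard information theoretic criterion for this kind of eigenvalue-based partition is the Wax-Kailath MDL rule; the sketch below uses it as a plausible stand-in for the criterion applied in the paper, which is not specified beyond "information theoretic criteria".

```python
import numpy as np

def mdl_source_count(data):
    """Estimate the number of sources from multichannel data via the MDL criterion.

    data : (p, N) array with p sensor channels and N time samples.
    Returns the k (0 <= k < p) minimizing the Wax-Kailath MDL cost.
    """
    p, N = data.shape
    R = (data @ data.conj().T) / N                 # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, descending
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                             # presumed noise eigenvalues
        g = np.exp(np.mean(np.log(tail)))          # geometric mean
        a = np.mean(tail)                          # arithmetic mean
        mdl[k] = -N * (p - k) * np.log(g / a) + 0.5 * k * (2 * p - k) * np.log(N)
    return int(np.argmin(mdl))
```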

  12. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

    At the heart of microseismic event measurements is the task of estimating the locations of microseismic source events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces strong nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
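
    The convolution trick that removes the unknown source time function can be written compactly. The sketch below forms a convolved-trace residual and misfit; the choice of reference traces, normalization, and the adjoint-source bookkeeping used in the actual method are not reproduced, and the function signature is hypothetical.

```python
import numpy as np

def convolved_misfit(d_obs, d_syn, ref_obs, ref_syn):
    """Source-independent misfit based on convolution with reference traces.

    d_obs, d_syn     : (n_rec, n_t) observed and modelled gathers
    ref_obs, ref_syn : (n_t,) reference traces picked from observed / modelled data
    The unknown source wavelet appears in both convolved terms and therefore cancels.
    """
    n_rec, n_t = d_obs.shape
    residuals = np.empty((n_rec, 2 * n_t - 1))
    misfit = 0.0
    for i in range(n_rec):
        a = np.convolve(d_syn[i], ref_obs, mode="full")   # modelled * observed reference
        b = np.convolve(d_obs[i], ref_syn, mode="full")   # observed * modelled reference
        residuals[i] = a - b
        misfit += 0.5 * np.sum(residuals[i] ** 2)
    return misfit, residuals
```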

  13. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.

  14. Simultaneous estimation of aquifer thickness, conductivity, and BC using borehole and hydrodynamic data with geostatistical inverse direct method

    NASA Astrophysics Data System (ADS)

    Gao, F.; Zhang, Y.

    2017-12-01

    A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of the measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitude reaching up to +/- 10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a +/- 5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness becomes less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations such as thickness, Darcy fluxes, or flow rates. The data requirement of the new inversion method is thus not much different from that of interpreting classic well tests. Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers, while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.

  15. Centralized PI control for high dimensional multivariable systems based on equivalent transfer function.

    PubMed

    Luan, Xiaoli; Chen, Qiang; Liu, Fei

    2014-09-01

    This article presents a new scheme to design a full matrix controller for high dimensional multivariable processes based on an equivalent transfer function (ETF). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full matrix controller is designed utilizing existing PI tuning rules. The newly proposed ETF model can more accurately represent the original processes. Furthermore, the full matrix centralized controller design method proposed in this paper is applicable to high dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and obtained performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Detection of small surface defects using DCT based enhancement approach in machine vision systems

    NASA Astrophysics Data System (ADS)

    He, Fuqiang; Wang, Wen; Chen, Zichen

    2005-12-01

    Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after a digital image has been transformed to the DCT domain. Then, the reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed from the transition points and the high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and some defect features were calculated. Experimental results show that the proposed small-defect detection method reaches a detection rate of 94%.
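
    The core pipeline (forward DCT, removal of a low-frequency sector, inverse DCT, segmentation) can be sketched as below. The cutoff radius is simply assumed here rather than derived from the reverse-cumulative-sum transition points, and a basic statistical threshold stands in for the entropy-based segmentation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_highpass_defects(image, cutoff_radius=20, thresh_k=3.0):
    """Suppress the regular background texture with a DCT high-pass filter, then
    flag pixels that deviate strongly from the restored image's statistics.

    cutoff_radius : radius (in DCT bins) of the low-frequency sector to remove;
                    assumed fixed here for illustration.
    """
    img = image.astype(float)
    C = dctn(img, norm="ortho")
    u, v = np.meshgrid(np.arange(C.shape[0]), np.arange(C.shape[1]), indexing="ij")
    C[np.hypot(u, v) < cutoff_radius] = 0.0       # cut the sector near the (0, 0) corner
    restored = idctn(C, norm="ortho")
    mu, sd = restored.mean(), restored.std()
    defect_mask = np.abs(restored - mu) > thresh_k * sd
    return restored, defect_mask
```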

  17. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and the maximum slip reaching 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.

  18. Hospice Enrollment, Local Hospice Utilization Patterns, and Rehospitalization in Medicare Patients

    PubMed Central

    Holden, Timothy R.; Smith, Maureen A.; Bartels, Christie M.; Campbell, Toby C.; Yu, Menggang

    2015-01-01

    Abstract Background: Rehospitalizations are prevalent and associated with decreased quality of life. Although hospice has been advocated to reduce rehospitalizations, it is not known how area-level hospice utilization patterns affect rehospitalization risk. Objectives: The study objective was to examine the association between hospice enrollment, local hospice utilization patterns, and 30-day rehospitalization in Medicare patients. Methods: With a retrospective cohort design, 1,997,506 hospitalizations were assessed between 2005 and 2009 from a 5% national sample of Medicare beneficiaries. Local hospice utilization was defined using tertiles representing the percentage of all deaths occurring in hospice within each Hospital Service Area (HSA). Cox proportional hazard models were used to assess the relationship between 30-day rehospitalization, hospice enrollment, and local hospice utilization, adjusting for patient sociodemographics, medical history, and hospital characteristics. Results: Rates of patients dying in hospice were 27% in the lowest hospice utilization tertile, 41% in the middle tertile, and 53% in the highest tertile. Patients enrolled in hospice had lower rates of 30-day rehospitalization than those not enrolled (2.2% versus 18.8%; adjusted hazard ratio [HR], 0.12; 95% confidence interval [CI], 0.118–0.131). Patients residing in areas of low hospice utilization were at greater rehospitalization risk than those residing in areas of high utilization (19.1% versus 17.5%; HR, 1.05; 95% CI, 1.04–1.06), which persisted beyond that accounted for by individual hospice enrollment. Conclusions: Area-level hospice utilization is inversely proportional to rehospitalization rates. This relationship is not fully explained by direct hospice enrollment, and may reflect a spillover effect of the benefits of hospice extending to nonenrollees. PMID:25879990

  19. Investigating source processes of isotropic events

    NASA Astrophysics Data System (ADS)

    Chiang, Andrea

    This dissertation demonstrates the utility of the complete waveform regional moment tensor inversion for nuclear event discrimination. I explore the source processes and associated uncertainties for explosions and earthquakes under the effects of limited station coverage, compound seismic sources, assumptions in velocity models and the corresponding Green's functions, and the effects of shallow source depth and free-surface conditions. The motivation to develop better techniques to obtain reliable source mechanisms and assess uncertainties is not limited to nuclear monitoring; such techniques also provide quantitative information about the characteristics of seismic hazards, local and regional tectonics, and in-situ stress fields of the region. This dissertation begins with the analysis of three sparsely recorded events: the 14 September 1988 US-Soviet Joint Verification Experiment (JVE) nuclear test at the Semipalatinsk test site in Eastern Kazakhstan, and two nuclear explosions at the Chinese Lop Nor test site. We utilize a regional distance seismic waveform method fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long period waveforms and first motion observations provides unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We examine the effects of the free surface on the moment tensor via synthetic testing, and apply the moment tensor based discrimination method to well-recorded chemical explosions. These shallow chemical explosions represent rather severe source-station geometry in terms of the vanishing traction issues. We show that the combined waveform and first motion method enables the unique discrimination of these events, even though the data include unmodeled single force components resulting from the collapse and blowout of the quarry face immediately following the initial explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling; however, in regions of low seismicity, velocity models derived from body or surface wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity model (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996).
The correct earthquake mechanisms and source depth are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion. Path-calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources and improve the fit to the data. When we apply the 3D model to real data, at long periods (20-50 seconds), we see good agreement in the solutions between the 1D and 3D models and slight improvement in waveform fits when using the 3D velocity model Green's functions. (Abstract shortened by ProQuest.).

  20. Comparative Study of Three Data Assimilation Methods for Ice Sheet Model Initialisation

    NASA Astrophysics Data System (ADS)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2015-04-01

    The current global warming has direct consequences on ice-sheet mass loss, which contributes to sea level rise. This loss is generally driven by an acceleration of some coastal outlet glaciers, and reproducing these mechanisms is one of the major issues in ice-sheet and ice flow modelling. The construction of an initial state, as close as possible to current observations, is required as a prerequisite before producing any reliable projection of the evolution of ice sheets. For this step, inverse methods are often used to infer badly known or unknown parameters. For instance, the adjoint inverse method has been implemented and applied with success by different authors in different ice flow models in order to infer the basal drag [Schafer et al., 2012; Gillet-Chaulet et al., 2012; Morlighem et al., 2010]. Other data fields, such as ice surface and bedrock topography, are measurable with more or less uncertainty, but only locally along tracks, and must be interpolated onto the finer model grid. All these approximations lead to errors in the elevation model and give rise to an ill-posed problem, inducing non-physical anomalies in the flux divergence [Seroussi et al., 2011]. A solution to dissipate these flux divergences is to conduct a surface relaxation step, at the expense of the accuracy of the modelled surface [Gillet-Chaulet et al., 2012]. Other solutions, based on the inversion of ice thickness and basal drag, were proposed [Perego et al., 2014; Pralong & Gudmundsson, 2011]. In this study, we create a twin experiment to compare three different assimilation algorithms based on inverse methods and nudging to constrain the bedrock friction and the bedrock elevation: (i) cyclic inversion of the friction parameter and bedrock topography using the adjoint method, (ii) cycles coupling inversion of the friction parameter using the adjoint method with nudging of the bedrock topography, (iii) one-step inversion of both parameters with the adjoint method. The three methods show a clear improvement in parameter knowledge, leading to a significant reduction of the flux divergence of the model before forecasting.

  1. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  2. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented, proceeding from low to high frequency. We also discuss the initial guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  3. Approximate non-linear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-03-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-waves scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in a form of generalized Radon transform (GRT). After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic non-linear system. Numerical tests with synthetic data computed by finite-differences scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P-wave and S-wave information.

  4. Approximate nonlinear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-07-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-wave scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in a form of generalized Radon transform. After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic nonlinear system. Numerical tests with synthetic data computed by finite-differences scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P- and S-wave information.

  5. Objective function analysis for electric soundings (VES), transient electromagnetic soundings (TEM) and joint inversion VES/TEM

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert

    2017-11-01

    Ambiguities in geophysical inversion results are always present. How these ambiguities appear is in most cases open to interpretation. It is interesting to investigate ambiguities with regard to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and TEM sounding inversion results. Through topographic analysis of the objective function, we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very useful tool for understanding the VES/TEM joint inversion method. The applicability of the RFDM analysis to real data is also explored in this paper, to demonstrate both how the objective function of real data behaves and how the approach performs in real cases. With the analysis of the results, it is possible to understand how the joint inversion can reduce the ambiguity of the individual methods.

  6. Allele quantification using molecular inversion probes (MIP)

    PubMed Central

    Wang, Yuker; Moorhead, Martin; Karlin-Neumann, George; Falkowski, Matthew; Chen, Chunnuan; Siddiqui, Farooq; Davis, Ronald W.; Willis, Thomas D.; Faham, Malek

    2005-01-01

    Detection of genomic copy number changes has been an important research area, especially in cancer. Several high-throughput technologies have been developed to detect these changes. Features that are important for the utility of technologies assessing copy number changes include the ability to interrogate regions of interest at the desired density as well as the ability to differentiate the two homologs. In addition, assessing formaldehyde fixed and paraffin embedded (FFPE) samples allows the utilization of the vast majority of cancer samples. To address these points we demonstrate the use of molecular inversion probe (MIP) technology to the study of copy number. MIP is a high-throughput genotyping technology capable of interrogating >20 000 single nucleotide polymorphisms in the same tube. We have shown the ability of MIP at this multiplex level to provide copy number measurements while obtaining the allele information. In addition we have demonstrated a proof of principle for copy number analysis in FFPE samples. PMID:16314297

  7. Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong

    2017-03-01

    Low-frequency information is crucial for recovering background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, which can be used for subsequent inversion as an ideal starting model. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages, while the difference is that its ultralow frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency domain inversion is also very convenient. However, because broad frequency bands are often used in the pure time domain waveform inversion, it is difficult to extract the wavefields dominated by low frequencies in this case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals, and the rest of the method is identical to the traditional time domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. Therefore, we can start at low frequencies and to move to higher frequencies. The experiment shows that the proposed method can achieve a good inversion result in the presence of a linear initial model and records without low-frequency information.
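
    The key step, building low-frequency-dominated data by damping the recorded residuals in time, amounts to a simple exponential weighting, as sketched below; the damping constant and the sweep schedule are assumptions for illustration.

```python
import numpy as np

def damp_residuals(residuals, dt, sigma):
    """Apply exponential time attenuation exp(-sigma * t) to data residuals.

    residuals : (n_rec, n_t) residual gather (observed minus modelled)
    dt        : time sampling interval
    sigma     : damping constant; larger values emphasize early times and act
                like extracting lower (Laplace-Fourier) frequency content.
    """
    t = np.arange(residuals.shape[1]) * dt
    return residuals * np.exp(-sigma * t)[None, :]

# A multiscale schedule might sweep sigma from large to small (strong to weak
# damping) before switching to the conventional, undamped time-domain inversion.
```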

  8. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
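
    For reference, one basic iteration of ensemble Kalman inversion (in its perturbed-observation form) is sketched below. Because the update is a linear combination of the ensemble members, the estimate stays in the span of the initial ensemble, which is why the parameterization of the unknown field matters so much. The function signature is illustrative, not taken from a specific library.

```python
import numpy as np

def eki_update(U, G_of_U, y, Gamma, rng):
    """One ensemble Kalman inversion step with perturbed observations.

    U      : (J, n_param) current ensemble of parameter vectors
    G_of_U : (J, n_data) forward-model outputs for each member
    y      : (n_data,) observed data
    Gamma  : (n_data, n_data) observation noise covariance
    """
    J = U.shape[0]
    du = U - U.mean(axis=0)
    dg = G_of_U - G_of_U.mean(axis=0)
    C_ug = du.T @ dg / J                          # parameter-data cross-covariance
    C_gg = dg.T @ dg / J                          # data covariance
    K = C_ug @ np.linalg.solve(C_gg + Gamma, np.eye(len(y)))   # Kalman-type gain
    noise = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return U + (y + noise - G_of_U) @ K.T
```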

  9. Utilization of high-frequency Rayleigh waves in near-surface geophysics

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.

    2004-01-01

    Shear-wave velocities can be derived from inverting the dispersive phase velocity of surface waves. The multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves from Rayleigh waves, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical-component data and consists of data acquisition, dispersion-curve picking, and inversion.

  10. Aerosol properties from spectral extinction and backscatter estimated by an inverse Monte Carlo method.

    PubMed

    Ligon, D A; Gillespie, J B; Pellegrino, P

    2000-08-20

    The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.

  11. Simultaneous stochastic inversion for geomagnetic main field and secular variation. I - A large-scale inverse problem

    NASA Technical Reports Server (NTRS)

    Bloxham, Jeremy

    1987-01-01

    The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.

  12. A novel compensation method of insertion losses for wavelet inverse-transform processors using surface acoustic wave devices.

    PubMed

    Lu, Wenke; Zhu, Changchun

    2011-11-01

    The objective of this research was to investigate the possibility of compensating for the insertion losses of wavelet inverse-transform processors using SAW devices. The motivation for this work was the large insertion losses of these processors. In this paper, the insertion losses are the key problem of the wavelet inverse-transform processors using SAW devices. A novel compensation method for the insertion losses is achieved in this study. When the output ends of the wavelet inverse-transform processors are respectively connected to amplifiers, their insertion losses can be compensated for. The bandwidths of the amplifiers and their adjustment method are also given in this paper. © 2011 American Institute of Physics

  13. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/(mT) for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
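
    The underlying trapezoidal-rule approximation of the Bromwich integral can be sketched directly (here with m = 1, i.e., Delta(omega) = pi/T, and a plain summation instead of FFT or FHT computations). The contour abscissa sigma and the truncation length are assumptions the user must adapt to the transform at hand.

```python
import numpy as np

def invert_laplace(F, t, T=None, sigma=None, n_terms=2000):
    """Fourier-series (trapezoidal Bromwich) approximation of the inverse Laplace transform.

    F     : callable F(s) returning the Laplace transform at complex s
    t     : array of times, ideally in (0, T); aliasing appears beyond ~2T
    sigma : Bromwich abscissa, must lie to the right of all singularities of F
    """
    t = np.asarray(t, dtype=float)
    if T is None:
        T = 2.0 * t.max()
    if sigma is None:
        sigma = 0.5 / T + 1e-2        # assumed default; adapt to the poles of F
    k = np.arange(1, n_terms + 1)
    s = sigma + 1j * k * np.pi / T
    Fk = np.array([F(sk) for sk in s])
    series = 0.5 * np.real(F(sigma)) + np.sum(
        np.real(Fk[None, :] * np.exp(1j * np.outer(t, k) * np.pi / T)), axis=1)
    return np.exp(sigma * t) / T * series

# Example: F(s) = 1/(s + 1) should approximately recover f(t) = exp(-t):
# f = invert_laplace(lambda s: 1.0 / (s + 1.0), np.linspace(0.1, 5.0, 50))
```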

  14. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model enhancement method. Then, a total-variation regularized blind deconvolution geophysical inversion model-enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, a 1D linear inversion problem is considered; the ratio of the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, the artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more information and detailed structure of the actual model are recovered through the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of the inversions and help make better informed decisions.
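
    If the PSF is treated as a known low-pass blur, the enhancement step can be illustrated with a simple FFT-based Wiener deconvolution, as below. The paper's method is a blind, total-variation-regularized deconvolution, which is not reproduced here; the known-PSF assumption and the noise-to-signal parameter are simplifications for illustration.

```python
import numpy as np

def wiener_enhance(model, psf, nsr=1e-2):
    """Sharpen an inversion model by FFT-based Wiener deconvolution with a known PSF.

    model : (nz, nx) recovered inversion model (treated as a PSF-blurred true model)
    psf   : (nz, nx) centered point spread function, e.g. a row of the model
            resolution matrix reshaped onto the model grid (assumed known here)
    nsr   : assumed noise-to-signal power ratio controlling the regularization
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=model.shape)   # transfer function of the blur
    M = np.fft.fft2(model)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)                 # Wiener filter
    return np.real(np.fft.ifft2(W * M))
```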

  15. Forward modeling and inversion of tensor CSAMT in 3D anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Kun-Peng; Tan, Han-Dong

    2017-12-01

    Tensor controlled-source audio-frequency magnetotellurics (CSAMT) can yield information about electric and magnetic fields owing to its multi-transmitter configuration compared with the common scalar CSAMT. The most current theories, numerical simulations, and inversion of tensor CSAMT are based on far-field measurements and the assumption that underground media have isotropic resistivity. We adopt a three-dimensional (3D) staggered-grid finite difference numerical simulation method to analyze the resistivity in axial anisotropic and isotropic media. We further adopt the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method to perform 3D tensor CSAMT axial anisotropic inversion. The inversion results suggest that when the underground structure is anisotropic, the isotropic inversion will introduce errors to the interpretation.

  16. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  17. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning.

    PubMed

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-03-21

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  18. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  19. Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2018-03-01

    Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reasonability. Synthetic tests confirm that the method can make a stable estimation of fracture compliances in the case of seismic data containing a moderate signal-to-noise ratio for Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.

  20. A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves

    NASA Astrophysics Data System (ADS)

    Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.

    2017-12-01

    This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that is implemented using Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is accurate enough and more efficient than another inversion method. The algorithm's performance is demonstrated by the numerical examples including the case of noisy measurement responses and the case of reduced number of sources and receivers.

  1. [The development of the methods for the determination of nickel and its urine levels by inversion voltamperometry].

    PubMed

    Antoshina, L I; Pavlovskaia, N A

    1999-01-01

    The authors developed a method for detecting nickel in urine by inversion voltamperometry using the Russian analyzer CVA-1BM. The method is applicable in hygienic, clinical, and toxicological studies for measuring the quantity of nickel that enters the human body during occupational activities.

  2. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA. In the Bee system, each chromosome initially tries to find a good solution individually as a global search. When some chromosome is regarded as a superior one, the other chromosomes try to find solutions around it. However, since the chromosomes for the global search are generated randomly, the Bee system lacks global search ability. On the other hand, in Inverse-elitism, an inverse-elite whose gene values are reversed from the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with a Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen the global search ability, while strong local search ability is also retained. The proposed method thus has the synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
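
    One natural reading of "gene values are reversed from the corresponding elite" is a reflection of each gene across its admissible range, as sketched below; the exact reversal rule used in the paper may differ, so this is an illustrative assumption.

```python
import numpy as np

def inverse_elite(elite, lower, upper):
    """Create an 'inverse-elite' by reflecting each gene of the elite across its range.

    For a binary coding this reduces to flipping every bit; for a real coding the
    value x becomes lower + upper - x. Bounds may be scalars or per-gene arrays.
    """
    return lower + upper - elite

# Example with a real-coded chromosome on [0, 1]^5:
# inverse_elite(np.array([0.9, 0.1, 0.8, 0.2, 0.5]), 0.0, 1.0)
# -> array([0.1, 0.9, 0.2, 0.8, 0.5])
```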

  3. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and the divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and are compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem of the regularized misfit function. In order to avoid the huge memory requirement and very long computation time for the Jacobian sensitivity matrix in the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, which can be transformed into two pseudo-forward modeling problems. This avoids the explicit calculation and storage of the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.

  4. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information; that is, according to the inversion of seismic information, the useful information of the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large amount of data, abundant information, and so on, and according to its inversion, the abundant information of the reservoir parameters can be obtained. Owing to the large amount of pre-stack seismic data, existing single-machine environments have not been able to meet the computational needs of the huge amount of data; thus, the development of a method with a high efficiency and the speed to solve the inversion problem of pre-stack seismic data is urgently needed. The optimisation of the elastic parameters by using a genetic algorithm easily falls into a local optimum, which results in a non-obvious inversion effect, especially for the optimisation effect of the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and the genetic operation of the algorithm, and the improved algorithm obtains better inversion results when carrying out a model test with logging data. All of the elastic parameters obtained by inversion and the logging curve of theoretical model are fitted well, which effectively improves the inversion precision of the density. This algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.

  5. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
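
    A small sketch of the mixture-density-network output stage described above: a trained network (only a placeholder here) maps the observed data to the parameters of a Gaussian mixture, which then serves as the non-linearized posterior pdf over a model parameter.

```python
import numpy as np

# Hypothetical usage: weights, means, sigmas = network(d_observed), where
# `network` is the trained neural-network bank (not shown, an assumption).
def posterior_pdf(m_grid, weights, means, sigmas):
    """Evaluate a 1-D Gaussian-mixture posterior p(m | d) on a grid of m values."""
    pdf = np.zeros_like(m_grid)
    for w, mu, s in zip(weights, means, sigmas):
        pdf += w * np.exp(-0.5 * ((m_grid - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return pdf

m = np.linspace(0.0, 1.0, 501)
p = posterior_pdf(m, weights=[0.7, 0.3], means=[0.3, 0.65], sigmas=[0.05, 0.1])
```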

  6. A thermally tunable inverse opal photonic crystal for monitoring glass transition.

    PubMed

    Sun, Liguo; Xie, Zhuoying; Xu, Hua; Xu, Ming; Han, Guozhi; Wang, Cheng; Bai, Xuduo; Gu, ZhongZe

    2012-03-01

    An optical method was developed to monitor the glass transition of a polymer by taking advantage of the reflection-spectrum change of a thermally tunable inverse opal photonic crystal. The thermally tunable photonic bands of the polymer inverse opal photonic crystal were traceable to the segmental motion of the macromolecules, and this segmental motion was temperature dependent. By observing the reflection-spectrum change of the polystyrene inverse opal photonic crystal during thermal treatment, the glass transition temperature of polystyrene was obtained. Changes in both the position and the intensity of the reflection peak were observed during the glass transition of the polystyrene inverse opal photonic crystal. The optical change of the inverse opal photonic crystal was so large that the glass transition temperature could even be estimated by the naked eye. The glass transition temperature derived from this method was consistent with values measured by differential scanning calorimetry.

  7. 2D data-space cross-gradient joint inversion of MT, gravity and magnetic data

    NASA Astrophysics Data System (ADS)

    Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop

    2017-08-01

    We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and sets of model parameters, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid calculating the inverse matrix when solving the large system of equations. Next, we inverted the synthetic and field data with both the model-space and data-space methods. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than separate inversion, but also yields results nearly identical to those of the model-space method while consuming much less memory.
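
    The structural coupling named above is the cross-gradient of the two models, which vanishes wherever the models change in parallel. A minimal 2-D sketch, assuming both models are defined on the same regular grid:

```python
import numpy as np

# Out-of-plane component of grad(m1) x grad(m2) for 2-D models m1 and m2
# (e.g. log-resistivity and density) on a grid with spacings dx and dz.
def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

# A joint inversion penalises sum(cross_gradient(m1, m2)**2) in its objective.
```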

  8. Visco-acoustic wave-equation traveltime inversion and its sensitivity to attenuation errors

    NASA Astrophysics Data System (ADS)

    Yu, Han; Chen, Yuqing; Hanafy, Sherif M.; Huang, Jiangping

    2018-04-01

    A visco-acoustic wave-equation traveltime inversion method is presented that inverts for the shallow subsurface velocity distribution. Similar to classical wave-equation traveltime inversion, this method finds the velocity model that minimizes the squared sum of the traveltime residuals. Even though wave-equation traveltime inversion can partly avoid the cycle-skipping problem, a good initial velocity model is required for the inversion to converge to a reasonable tomogram under different attenuation profiles. When the Q model is far from the true model, the final tomogram is very sensitive to the starting velocity model. Nevertheless, a minor or moderate perturbation of the Q model from the true one does not strongly affect the inversion if the low-wavenumber information of the initial velocity model is mostly correct. These claims are validated with numerical tests on both synthetic and field data sets.

  9. Time-domain full waveform inversion using instantaneous phase information with damping

    NASA Astrophysics Data System (ADS)

    Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun

    2018-06-01

    In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local-minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. To avoid phase wrapping, we choose to use the exponential phase combined with a damping method, which gives an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared with numerical examples, which indicate that, when the seismic data lack low-frequency information, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
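
    A short sketch of the exponential instantaneous-phase attribute described above, computed from the analytic (complex) trace via the Hilbert transform; the damping factor applied to the trace is an illustrative assumption.

```python
import numpy as np
from scipy.signal import hilbert

# exp(i * instantaneous phase) avoids phase unwrapping entirely, because it is
# just the analytic trace normalised by its envelope.
def exp_instantaneous_phase(trace, dt, damping=0.0):
    t = np.arange(len(trace)) * dt
    damped = trace * np.exp(-damping * t)            # optional time damping
    analytic = hilbert(damped)                       # complex seismic trace
    return analytic / (np.abs(analytic) + 1e-12)     # exp(i * phase)

# A phase-based FWI misfit could then compare observed and synthetic traces,
# e.g. 0.5 * np.sum(np.abs(obs_phase - syn_phase) ** 2).
```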

  10. 3-D transient hydraulic tomography in unconfined aquifers with fast drainage response

    NASA Astrophysics Data System (ADS)

    Cardiff, M.; Barrash, W.

    2011-12-01

    We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.

  11. Guided filter and principal component analysis hybrid method for hyperspectral pansharpening

    NASA Astrophysics Data System (ADS)

    Qu, Jiahui; Li, Yunsong; Dong, Wenqian

    2018-01-01

    Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution by integrating an HS image with a panchromatic (PAN) image. A hybrid HS pansharpening method based on the guided filter (GF) and principal component analysis (PCA) is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image; the first principal component (PC1) channel concentrates the spatial information of the HS image. Unlike the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial-information difference between the HS image and the enhanced PAN image. Then, to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial-information difference is injected into the PC1 channel by multiplying it by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms several other state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
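
    A schematic sketch of the injection scheme summarised above. The guided_filter argument is a placeholder for any guided-filter implementation, and the detail definition and the tradeoff parameter g are assumptions for illustration rather than the authors' exact formulation.

```python
import numpy as np

def pca_forward(hs):                       # hs: (rows, cols, bands), already interpolated
    X = hs.reshape(-1, hs.shape[2])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    pcs = (X - mean) @ Vt.T                # principal-component channels
    return pcs.reshape(hs.shape), Vt, mean

def gf_pca_pansharpen(hs, pan, guided_filter, g=0.8):
    pcs, Vt, mean = pca_forward(hs)
    pc1 = pcs[..., 0]
    detail = pan - guided_filter(guide=pc1, src=pan)   # spatial difference via GF
    pcs[..., 0] = pc1 + g * detail                     # inject detail into PC1
    X = pcs.reshape(-1, pcs.shape[2]) @ Vt + mean      # inverse PCA transformation
    return X.reshape(hs.shape)
```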

  12. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated steps in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when performing the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using the classical methods, from which a certain scattering point can be chosen. Envelope alignment and phase compensation are then conducted using the selected scattering point as the common reference point. The keystone transform is thus smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.

  13. Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo Lopez, J.; Gallardo, L. A.

    2016-12-01

    Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions we need to provide extra information. In a geological context, this could be a priori information such as geological knowledge, well-log data, smoothness, or measurements from different kinds of data. Joint inversion provides an approach to improve the solution and reduce the errors caused by the assumptions of each method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years; for example, Gallardo and Meju (2003, 2004, 2011) and Gallardo et al. (2012) used the directions of property gradients to measure the similarity between models, minimizing their cross-gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may characterize specific Earth systems better because they consider the relation between properties. We implemented a Fortran code for two-dimensional inversion of magnetotelluric and gravity data, two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversion. Finally, we applied this technique to magnetotelluric and gravity data from the geothermal zone located in Cerro Prieto, México.

  14. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    …inverse transform method to obtain unit-mean exponential random variables, where V_j is the jth random number in the sequence of a stream of uniform random numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. … X(b,c,d) = -P(b,c,d) … Defender, C * P(b,c,d) … We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in …
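
    For reference, the inverse transform method mentioned in these fragments is straightforward for unit-mean exponential variables: invert the exponential CDF and feed it uniform random numbers.

```python
import numpy as np

# If U ~ Uniform(0,1), then X = -ln(U) is a unit-mean exponential variable,
# since the exponential CDF F(x) = 1 - exp(-x) inverts to F^{-1}(u) = -ln(1-u)
# and 1-U has the same distribution as U.
rng = np.random.default_rng(42)

def unit_mean_exponential(n):
    u = rng.random(n)          # stream of uniform random numbers
    return -np.log(u)          # inverse transform of the unit-mean exponential CDF

samples = unit_mean_exponential(100_000)
print(samples.mean())          # should be close to 1.0
```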

  15. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
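
    A minimal sketch of the normal-density variant compared in the abstract, using stabilized weights (marginal density over conditional density); the variable names and the linear model for the conditional mean are illustrative assumptions.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

def stabilized_ipw_normal(a, X):
    """Stabilized inverse probability weights for a continuous exposure a given covariates X."""
    # Denominator: density of the exposure given covariates, from a linear model.
    cond = sm.OLS(a, sm.add_constant(X)).fit()
    dens_cond = stats.norm.pdf(a, loc=cond.fittedvalues, scale=np.sqrt(cond.scale))
    # Numerator: marginal density of the exposure (stabilizes the weights).
    dens_marg = stats.norm.pdf(a, loc=a.mean(), scale=a.std(ddof=1))
    return dens_marg / dens_cond

# The weights then enter a weighted outcome regression (marginal structural model).
```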

  16. Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Yin, Xingyao; Li, Kun; Zong, Zhaoyun

    2016-10-01

    AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches for estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited because of its poor stability and its sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on a mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in our inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion and to compensate for the low-frequency components of the seismic signals. In addition, the different frequency components of the seismic signals decouple automatically, which helps us solve the inverse problem by means of multi-component successive iterations and improves the convergence precision. Thus, superior resolution compared with conventional time-domain pre-stack inversion can be achieved easily. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method obtains stable inversion results for the elastic parameters from pre-stack seismic data, in conformity with the real logging data.

  17. An inversion-based self-calibration for SIMS measurements: Application to H, F, and Cl in apatite

    NASA Astrophysics Data System (ADS)

    Boyce, J. W.; Eiler, J. M.

    2011-12-01

    Measurements of volatile abundances in igneous apatites can provide information regarding the abundances and evolution of volatiles in magmas, with applications to terrestrial volcanism and planetary evolution. Secondary ion mass spectrometry (SIMS) measurements can produce accurate and precise measurements of H and other volatiles in many materials including apatite. SIMS standardization generally makes use of empirical linear transfer functions that relate measured ion ratios to independently known concentrations. However, this approach is often limited by the lack of compositionally diverse, well-characterized, homogeneous standards. In general, SIMS calibrations are developed for minor and trace elements, and any two are treated as independent of one another. However, in crystalline materials, additional stoichiometric constraints may apply. In the case of apatite, the sum of concentrations of abundant volatile elements (H, Cl, and F) should closely approach 100% occupancy of their collective structural site. Here we propose and document the efficacy of a method for standardizing SIMS analyses of abundant volatiles in apatites that takes advantage of this stoichiometric constraint. The principal advantage of this method is that it is effectively self-standardizing; i.e., it requires no independently known homogeneous reference standards. We define a system of independent linear equations relating measured ion ratios (H/P, Cl/P, F/P) and unknown calibration slopes. Given sufficient range in the concentrations of the different elements among apatites measured in a single analytical session, solving this system of equations allows the calibration slope for each element to be determined without standards, using only blank-corrected ion ratios. In the case that a data set of this kind lacks sufficient range in measured compositions of one or more of the relevant ion ratios, one can employ measurements of additional apatites of a variety of compositions to increase the statistical range and make the inversion more accurate and precise. These additional non-standard apatites need only be wide-ranging in composition: they need not be homogeneous nor have known H, F, or Cl concentrations. Tests utilizing synthetic data and data generated in the laboratory indicate that this method should yield satisfactory results provided the apatites meet the criteria of the model. The inversion method is able to reproduce conventional calibrations to within <2.5%, a level of accuracy comparable to or even better than the uncertainty of the conventional calibration, and one that includes both error in the inversion method and any true error in the independently determined values of the standards. Uncertainties in the inversion calibrations range from 0.1-1.7% (2σ), typically an order of magnitude smaller than the uncertainties in conventional calibrations (~4-5% for H2O, 1-19% for F and Cl). However, potential systematic errors stem from the model assumption of 100% occupancy of this site by the measured elements. Use of this method simplifies analysis of H, F, and Cl in apatites by SIMS, and may also be amenable to other stoichiometrically limited substitution groups, including P+As+S+Si+C in apatite, and Zr+Hf+U+Th in non-metamict zircon.
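
    A minimal sketch of the stoichiometric idea: if the calibrated H, F and Cl contributions are assumed to fill the volatile site, each analysis supplies one linear equation in the three unknown calibration slopes, and the slopes follow from least squares. The unit right-hand side (full occupancy in normalized units) is an illustrative simplification.

```python
import numpy as np

def self_calibrate(ion_ratios):
    """ion_ratios: (n_analyses, 3) array of blank-corrected H/P, F/P, Cl/P ratios.

    Each row contributes one equation  sH*(H/P) + sF*(F/P) + sCl*(Cl/P) = 1,
    expressing full occupancy of the volatile site; the calibration slopes
    are the least-squares solution of the resulting overdetermined system."""
    A = np.asarray(ion_ratios, dtype=float)
    b = np.ones(A.shape[0])
    slopes, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slopes
```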

  18. Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Huang, Q.

    2017-12-01

    Conventional magnetotelluric (MT) inversion methods cannot recover the distribution of underground resistivity with clear boundaries, even when distinctly different blocks are present. Aiming to solve this problem, we develop a Bayesian framework to invert 2D MT data for sharp boundaries, using the boundary locations and the block resistivities as the random variables. First, we use other MT inversion results, such as those from ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward modeling. Finally, we obtain the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov Chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, the resistivity and their uncertainties can be estimated, and the approach also provides sensitivity estimates. We applied the method to a synthetic case comprising two large anomalous blocks in a uniform background. When we apply boundary-smoothness and near-true-model weighting constraints that mimic joint or constrained inversion, the model yields a more precise and focused depth distribution. We also test the inversion without constraints and find that the boundaries can still be resolved, though not as well. Both inversions estimate the resistivity well, and the constrained result has a lower root-mean-square misfit than the ModEM inversion result. The data sensitivity obtained from the PPD shows that the resistivity is the best resolved, the center depth comes second, and the block sides are the worst.
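
    A bare-bones Metropolis sampler sketching the MCMC step of the workflow above; forward() stands in for the staggered-grid finite-difference MT forward solver, and the uniform prior bounds and Gaussian likelihood are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(forward, data, sigma, model0, bounds, n_steps=10000, step=0.05):
    def log_post(m):
        if np.any(m < bounds[0]) or np.any(m > bounds[1]):
            return -np.inf                      # outside uniform prior support
        r = (forward(m) - data) / sigma
        return -0.5 * np.sum(r * r)             # Gaussian data likelihood
    m, lp = model0.copy(), log_post(model0)
    samples = []
    for _ in range(n_steps):
        cand = m + step * rng.standard_normal(m.size)
        lp_cand = log_post(cand)
        if np.log(rng.random()) < lp_cand - lp:  # Metropolis accept/reject
            m, lp = cand, lp_cand
        samples.append(m.copy())
    return np.array(samples)                     # draws approximating the PPD
```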

  19. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.

  20. Finite‐fault Bayesian inversion of teleseismic body waves

    USGS Publications Warehouse

    Clayton, Brandon; Hartzell, Stephen; Moschetti, Morgan P.; Minson, Sarah E.

    2017-01-01

    Inverting geophysical data has provided fundamental information about the behavior of earthquake rupture. However, inferring kinematic source model parameters for finite‐fault ruptures is an intrinsically underdetermined problem (the problem of nonuniqueness), because we are restricted to finite noisy observations. Although many studies use least‐squares techniques to make the finite‐fault problem tractable, these methods generally lack the ability to apply non‐Gaussian error analysis and the imposition of nonlinear constraints. However, the Bayesian approach can be employed to find a Gaussian or non‐Gaussian distribution of all probable model parameters, while utilizing nonlinear constraints. We present case studies to quantify the resolving power and associated uncertainties using only teleseismic body waves in a Bayesian framework to infer the slip history for a synthetic case and two earthquakes: the 2011 Mw 7.1 Van, east Turkey, earthquake and the 2010 Mw 7.2 El Mayor–Cucapah, Baja California, earthquake. In implementing the Bayesian method, we further present two distinct solutions to investigate the uncertainties by performing the inversion with and without velocity structure perturbations. We find that the posterior ensemble becomes broader when including velocity structure variability and introduces a spatial smearing of slip. Using the Bayesian framework solely on teleseismic body waves, we find rake is poorly constrained by the observations and rise time is poorly resolved when slip amplitude is low.

  1. Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper presents a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the prerequisite for developing the EEG inverse solution, without assuming other characteristics of the inverse solution, in order to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA's low-resolution approach, which emphasizes 'localization', and FOCUSS's high-resolution approach, which emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, then construct a new estimate using information from the initial solution, and repeat this process until the last two estimates remain unchanged.
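
    A sketch of one weighted-minimum-norm step of the kind iterated above, assuming a diagonal weighting matrix updated FOCUSS-style from the previous estimate; the lead field L, data d and regularization value are illustrative placeholders.

```python
import numpy as np

def weighted_min_norm(L, d, w, lam=1e-3):
    """One weighted-minimum-norm estimate s = W^-2 L^T (L W^-2 L^T + lam*I)^-1 d.

    L : lead-field matrix (sensors x sources), d : measured EEG, w : per-source weights."""
    W2inv = np.diag(1.0 / (w * w))               # (W^T W)^{-1} for a diagonal W
    G = L @ W2inv @ L.T + lam * np.eye(L.shape[0])
    return W2inv @ L.T @ np.linalg.solve(G, d)

# Iteration sketch: set w = |s| from the previous estimate and repeat until the
# last two estimates stop changing, which focuses and sparsifies the solution.
```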

  2. Reader reaction to "a robust method for estimating optimal treatment regimes" by Zhang et al. (2012).

    PubMed

    Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C

    2015-03-01

    A recent article (Zhang et al., 2012, Biometrics 168, 1010-1018) compares regression based and inverse probability based methods of estimating an optimal treatment regime and shows for a small number of covariates that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012, Biometrics 168, 1010-1018), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression and inverse probability weighted based methods improves their properties. © 2014, The International Biometric Society.

  3. Navier-Stokes simulation of plume/Vertical Launching System interaction flowfields

    NASA Astrophysics Data System (ADS)

    York, B. J.; Sinha, N.; Dash, S. M.; Anderson, L.; Gominho, L.

    1992-01-01

    The application of Navier-Stokes methodology to the analysis of Vertical Launching System/missile exhaust plume interactions is discussed. The complex 3D flowfields related to the Vertical Launching System are computed utilizing the PARCH/RNP Navier-Stokes code. PARCH/RNP solves the fully-coupled system of fluid, two-equation turbulence (k-epsilon) and chemical species equations via the implicit, approximately factored, Beam-Warming algorithm utilizing a block-tridiagonal inversion procedure.

  4. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  5. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in the Earth sciences. Here, we give a description, from a user's and a programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented, and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  6. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.

  7. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.

  8. Stochastic Gabor reflectivity and acoustic impedance inversion

    NASA Astrophysics Data System (ADS)

    Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John

    2018-02-01

    To delineate subsurface lithology and estimate the petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are inevitable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods attempt to estimate the reflectivity series, their estimates will not be correct because of the incorrect assumptions, though they may still be useful. Converting those reflection series to AI, and merging with a low-frequency initial model, can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI by obtaining a bias from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimated the wavelet properties in different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove the time-variant wavelet effects from the seismic data. The result was a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible compared with general Gabor inversion in the frequency domain. Also, obtaining a bias from well logs helps the method estimate reliable AI. To assess the effect of random noise on the deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio equal to 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set even when using a large number of regularization parameters. Also, despite the low signal level, stochastic Gabor inversion not only correctly estimates the wavelet's properties but also, because of the bias from well logs, produces an inversion result very close to the real AI. Comparing the deterministic and the introduced inversion results on a real data set shows that the low-resolution results of deterministic inversion, especially in the deeper parts of the seismic sections, create significant reliability problems for seismic prospects, but this pitfall is solved completely using stochastic Gabor inversion. The AI estimated using Gabor inversion in the time domain is much better and faster than that from general Gabor inversion in the frequency domain; this is due to the extra number of windows required to analyze the time-frequency information and the size of the temporal increment between windows. In contrast, stochastic Gabor inversion can estimate reliable physical properties close to the real characteristics. Application to a real data set demonstrated the ability to detect the direction of a volcanic intrusion and to delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes, owing to the improved frequency content and zero phasing of the final inversion volume.

  9. Saturation-inversion-recovery: A method for T1 measurement

    NASA Astrophysics Data System (ADS)

    Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao

    2017-01-01

    Spin-lattice relaxation (T1) has always been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing T1 sensitivity is the exponential, e.g., exp(-τ/T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1-dependence than those of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful in improving the contrast between regions of differing T1 in T1-weighted MRI.
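
    For reference, the exponential T1 sensitivity shared by the classical sequences looks as follows; only the standard SR and IR recovery curves are sketched, not the sharper SIR dependence introduced in the paper.

```python
import numpy as np

# Classical recovery curves, both governed by exp(-tau/T1).
def saturation_recovery(tau, T1, M0=1.0):
    return M0 * (1.0 - np.exp(-tau / T1))

def inversion_recovery(tau, T1, M0=1.0):
    return M0 * (1.0 - 2.0 * np.exp(-tau / T1))

tau = np.linspace(0.0, 5.0, 200)          # recovery times, in units of T1 here
print(inversion_recovery(tau[:5], T1=1.0))
```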

  10. Integration of Electrical Resistivity and Seismic Refraction using Combine Inversion for Detecting Material Deposits of Impact Crater at Bukit Bunuh, Lenggong, Perak

    NASA Astrophysics Data System (ADS)

    Yusoh, R.; Saad, R.; Saidin, M.; Muhammad, S. B.; Anda, S. T.

    2018-04-01

    Both electrical resistivity and seismic refraction profiling have become common methods in pre-investigations for visualizing subsurface structure. The motivation for combining these methods is that using both together can reduce the ambiguity inherent in using either method alone. Each method has its own software package for data inversion, and the scope for combining certain geophysical methods is restricted; however, research algorithms with this functionality exist and were evaluated individually. The interpretation of the subsurface was improved by combining the data inversions from both methods, with each model influencing the other through closure coupling, so that the two methods support each other and improve the subsurface interpretation. The methods were applied to a field dataset from an archaeological pre-investigation aimed at finding the material deposits of an impact crater. Combining the data inversions produced no major changes in the inverted model for this case, probably because of the complex geology. The combined data analysis shows that the deposit material extends from the ground surface to a depth of 20 m, and the class separation clearly distinguishes the deposit material.

  11. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method for reconstructing physical parameters of the Earth's interior at different scales ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the solution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the presence of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of the residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by mesh partitioning performed with METIS, allows most of the inversion to be carried out in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.

  12. Application of Nonlinear Systems Inverses to Automatic Flight Control Design: System Concepts and Flight Evaluations

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Cicolani, L.

    1981-01-01

    A practical method for the design of automatic flight control systems for aircraft with complex characteristics and operational requirements, such as the powered-lift STOL and V/STOL configurations, is presented. The method is effective for a large class of dynamic systems requiring multi-axis control which have highly coupled nonlinearities, redundant controls, and complex multidimensional operational envelopes. It exploits the concept of inverse dynamic systems, and an algorithm for the construction of inverses is given. A hierarchic structure for the total control logic with inverses is presented. The method is illustrated with an application to the Augmentor Wing Jet STOL Research Aircraft equipped with a digital flight control system. Results of flight evaluation of the control concept on this aircraft are presented.

  13. Research on Inversion Models for Forest Height Estimation Using Polarimetric SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Duan, B.; Zou, B.

    2017-09-01

    Forest height is an important forest resource parameter and is usually used in biomass estimation. Forest height extraction with PolInSAR is an active research field in imaging SAR remote sensing. SAR interferometry is a well-established SAR technique that estimates the vertical location of the effective scattering center in each resolution cell through the phase difference between images acquired from spatially separated antennas. PolInSAR has applications ranging from climate monitoring to disaster detection and is of particular interest when used in forest areas, because it is quite sensitive to the location and vertical distribution of vegetation structure components. However, some of the existing methods cannot estimate forest height accurately. Here we introduce several available inversion models and compare the precision of some classical inversion approaches using simulated data. By comparing the advantages and disadvantages of these inversion methods, researchers can conveniently find better solutions based on them.

  14. Getting in shape: Reconstructing three-dimensional long-track speed skating kinematics by comparing several body pose reconstruction techniques.

    PubMed

    van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J

    2018-03-01

    In gait studies body pose reconstruction (BPR) techniques have been widely explored, but no previous protocols have been developed for speed skating, and the peculiarities of the skating posture and technique do not automatically allow the results of those explorations to be transferred to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight-body-segment model together with a global optimization method, with revolute joints in the knee and the lumbosacral joint while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-square-error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamics method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR, and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least-square-error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Inverse problems in quantum chemistry

    NASA Astrophysics Data System (ADS)

    Karwowski, Jacek

    Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.

  16. A Synthetic Study on the Resolution of 2D Elastic Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Cui, C.; Wang, Y.

    2017-12-01

    Gradient-based full waveform inversion is an effective method in seismic studies: it makes full use of the information given by seismic records and is capable of providing a more accurate model of the interior of the Earth at a relatively low computational cost. However, the strong non-linearity of the problem brings about many difficulties in assessing its resolution. Synthetic inversions are therefore helpful before an inversion based on real data is made. The checker-board test is a commonly used method, but it is not always reliable owing to the significant difference between a checker-board and the true model. Our study aims to provide a basic understanding of the resolution of 2D elastic inversion by examining three main factors that affect the inversion result: 1. the structural characteristics of the model; 2. the level of similarity between the initial model and the true model; 3. the spatial distribution of sources and receivers. We performed about 150 synthetic inversions to demonstrate how each factor contributes to the quality of the result, and compared the inversion results with those achieved by checker-board tests. The study can be a useful reference for assessing the resolution of an inversion in addition to regular checker-board tests, or for determining whether the seismic data of a specific region are sufficient for a successful inversion.

  17. Spectral inversion of frequency-domain IP data obtained in Haenam, South Korea

    NASA Astrophysics Data System (ADS)

    Kim, B.; Nam, M. J.; Son, J. S.

    2017-12-01

    The spectral induced polarization (SIP) method, which uses a range of source frequencies, has been applied not only to mineral exploration but also to engineering and environmental problems. SIP interpretation first inverts the data at individual frequencies to obtain complex resistivity structures, which are further analyzed using the Cole-Cole model to explain the frequency-dependent characteristics. However, owing to the difficulty of fitting the Cole-Cole model, there is a trend toward interpreting a complex resistivity structure inverted from single-frequency data only: the so-called "complex resistivity survey". Further, simultaneous inversion of multi-frequency SIP data, rather than of single-frequency data, has been studied to reduce the ambiguity and artefacts of independent single-frequency inversions in obtaining a complex resistivity structure, despite the dispersion of complex resistivity with respect to source frequency. Employing the simultaneous inversion method, this study inverts field SIP data acquired over an epithermal mineralized area at Haenam, in the southernmost tip of South Korea. The area has a polarizable structure because of extensive hydrothermal alteration and gold-silver deposits. After the inversion, we compare results obtained from the multi-frequency data with those from single-frequency data sets to evaluate the performance of simultaneous inversion of multi-frequency SIP data.
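
    For reference, a common (Pelton-style) form of the Cole-Cole complex resistivity model mentioned above; the parameter values in the example are arbitrary.

```python
import numpy as np

# Cole-Cole complex resistivity: rho0 is the DC resistivity, m the chargeability,
# tau the time constant and c the frequency exponent.
def cole_cole(freq, rho0, m, tau, c):
    iwt = (1j * 2.0 * np.pi * freq * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

freqs = np.logspace(-2, 3, 50)                     # Hz
rho = cole_cole(freqs, rho0=100.0, m=0.3, tau=0.1, c=0.5)
amplitude, phase_mrad = np.abs(rho), 1000.0 * np.angle(rho)
```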

  18. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multi-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.

  19. Synconset Waves and Chains: Spiking Onsets in Synchronous Populations Predict and Are Predicted by Network Structure

    PubMed Central

    Raghavan, Mohan; Amrutur, Bharadwaj; Narayanan, Rishikesh; Sikdar, Sujit Kumar

    2013-01-01

    Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be effective quantification for network activity with clear relations to network structure, their utilities are largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves, and define ‘synconset wave’ as a cascade of first spikes within a synchronisation event. Synconset waves would occur in ‘synconset chains’, which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulation of single compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we develop a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network. Further, we demonstrate its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks where only a small subset of neurons was accessible, thus showing its utility in experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utilities of synfire chains to arbitrary networks, we suggest utilities of our framework to several aspects of network physiology including cell assemblies, population codes, and oscillatory synchrony. PMID:24116018

  20. Fabrication of Au- and Ag–SiO{sub 2} inverse opals having both localized surface plasmon resonance and Bragg diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erola, Markus O.A.; Philip, Anish; Ahmed, Tanzir

    The inverse opal films of SiO{sub 2} containing metal nanoparticles can have both the localized surface plasmon resonance (LSPR) of metal nanoparticles and the Bragg diffraction of inverse opal crystals of SiO{sub 2}, which are very useful properties for applications such as tunable photonic structures, catalysts and sensors. However, effective processes for fabrication of these films from colloidal particles have rarely been reported. In our study, two methods for preparation of inverse opal films of SiO{sub 2} with three different crystal sizes and containing gold or silver nanoparticles (NPs) via self-assembly using electrostatic interactions and capillary forces are reported. The Bragg diffraction of inverse opal films of SiO{sub 2} in the presence and absence of the template was measured and predicted using UV–vis spectroscopy and scanning electron microscopy. The preparation methods used provided good-quality inverse opal SiO{sub 2} films containing highly dispersed, plasmonic AuNPs or AgNPs and having both Bragg diffractions and LSPRs. - Graphical abstract: For syntheses of SiO{sub 2} inverse opals containing Au/Ag nanoparticles two approaches and three template sizes were employed. Self-assembly of template molecules and metal nanoparticles occurred using electrostatic interactions and capillary forces. Both the Bragg diffraction of the photonic crystal and the localized surface plasmon resonance of Au/Ag nanoparticles were detected. - Highlights: • Fabrication methods of silica inverse opals containing metal nanoparticles studied. • Three template sizes used to produce SiO{sub 2} inverse opals with Au/Ag nanoparticles. • PS templates with Au nanoparticles adsorbed used in formation of inverse opals. • Ag particles infiltrated in inverse opals with capillary and electrostatic forces. • Bragg diffractions of IOs and surface plasmon resonances of nanoparticles observed.

  1. A semi-inverse variational method for generating the bound state energy eigenvalues in a quantum system: the Dirac Coulomb type-equation

    NASA Astrophysics Data System (ADS)

    Libarir, K.; Zerarka, A.

    2018-05-01

    Exact eigenspectra and eigenfunctions of the Dirac quantum equation are established using the semi-inverse variational method. This method considerably improves the efficiency and accuracy of the results compared with the other methods commonly discussed in the literature. Applications to different state configurations are presented to illustrate the method.

  2. Fabrication of titania inverse opals by multi-cycle dip-infiltration for optical sensing

    NASA Astrophysics Data System (ADS)

    Chiang, Chun-Chen; Tuyen, Le Dac; Ren, Ching-Rung; Chau, Lai-Kwan; Wu, Cheng Yi; Huang, Ping-Ji; Hsu, Chia Chen

    2016-04-01

    We have demonstrated a low-cost method to fabricate TiO2 inverse opal photonic crystals with a high-refractive-index skeleton. The TiO2 inverse opal films were fabricated from a polystyrene opal template by a multi-cycle dip-infiltration coating method. The properties of the TiO2 inverse opal films were characterized by scanning electron microscopy and Bragg reflection spectroscopy. The reflection spectroscopic measurements of the TiO2 inverse opal films were compared with photonic band calculations and Bragg's law. The agreement between experiment and theory indicates that the refractive index of the liquid sample infiltrated into the TiO2 inverse opal films can be predicted precisely from the measurement results. A red-shift of the peak wavelength in the Bragg reflection spectra was observed for both alcohol mixtures and aqueous sucrose solutions of increasing refractive index, and respective refractive index sensitivities of 296 and 286 nm/RIU (refractive index unit) were achieved. As the fabrication of the TiO2 inverse opal films and the reflection spectroscopic measurements are fairly easy, the TiO2 inverse opal films have potential applications in optical sensing.
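
    The comparison with Bragg's law mentioned above is normally made with the modified Bragg (Bragg-Snell) relation for the (111) planes of an fcc opal; the standard form is reproduced here for reference and is not claimed to be the authors' exact expression:

      \lambda_{peak} = 2 d_{111} \sqrt{\, n_{eff}^{2} - \sin^{2}\theta \,}, \qquad n_{eff}^{2} = f\, n_{skeleton}^{2} + (1 - f)\, n_{fill}^{2}

    where d_{111} is the (111) interplanar spacing, f the skeleton volume fraction and n_{fill} the refractive index of the infiltrated liquid; the sensitivity in nm/RIU follows from the derivative of \lambda_{peak} with respect to n_{fill}.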

  3. Utilization of AERONET polarimetric measurements for improving retrieval of aerosol microphysics: GSFC, Beijing and Dakar data analysis

    NASA Astrophysics Data System (ADS)

    Fedarenka, Anton; Dubovik, Oleg; Goloub, Philippe; Li, Zhengqiang; Lapyonok, Tatyana; Litvinov, Pavel; Barel, Luc; Gonzalez, Louis; Podvin, Thierry; Crozel, Didier

    2016-08-01

    The study presents efforts to include polarimetric data in the routine inversion of ground-based radiometric measurements for characterization of atmospheric aerosols, and analyzes the advantages obtained in the retrieval results. First, to process the large amount of polarimetric data operationally, a data preparation tool was developed. The AERONET inversion code, adapted for inversion of both intensity and polarization measurements, was used for processing. Second, in order to estimate the effect of using polarimetric information on the aerosol retrieval results, both synthetic data and real measurements were processed with the developed routine and analyzed. The sensitivity study was carried out using simulated data based on three main aerosol models: desert dust, urban industrial and urban clean aerosols. The test investigated the effect of using polarization data in the presence of random noise, bias in the measured optical thickness, and angular pointing shift. The results demonstrate the advantage of using polarization data for aerosols with a pronounced concentration of fine particles. Further, an extended set of AERONET observations was processed. Data from three sites were used: GSFC, USA (clean urban aerosol dominated by fine particles), Beijing, China (polluted industrial aerosol characterized by a pronounced mixture of both fine and coarse modes) and Dakar, Senegal (desert dust dominated by coarse particles). The results revealed a considerable advantage of applying polarimetric data for characterizing fine-mode-dominated aerosols, including industrial pollution (Beijing). The use of polarization corrects the particle size distribution by decreasing the overestimated fine mode and increasing the coarse mode. It also increases the underestimated real part of the refractive index and improves the retrieval of the fraction of spherical particles, owing to the high sensitivity of polarization to particle shape. Overall, the study demonstrates the substantial value of polarimetric data for improving aerosol characterization.

  4. On the joint inversion of geophysical data for models of the coupled core-mantle system

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1991-01-01

    Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.

  5. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method for finding the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
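
    A minimal sketch of zeroth-order Tikhonov inversion together with the L-curve scan described above is given below (illustrative only; the SXRIS scheme uses the generalized SVD and the camera geometry matrix, and the names here are assumptions):

      import numpy as np

      def tikhonov_l_curve(G, d, lambdas):
          """Solve min ||G m - d||^2 + lam^2 ||m||^2 for each lam in lambdas
          and return (residual norm, solution norm, lam) triples; plotting the
          first two on log-log axes gives the L-curve whose corner marks the
          optimum regularization parameter."""
          U, s, Vt = np.linalg.svd(G, full_matrices=False)   # assumes no exactly zero singular values
          utd = U.T @ d
          curve = []
          for lam in lambdas:
              filt = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
              m = Vt.T @ (filt * utd / s)
              curve.append((np.linalg.norm(G @ m - d), np.linalg.norm(m), lam))
          return curve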

  6. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate the spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo-random binary sequence (PRBS) were produced in sequential order by an ion gate controller and utilized to control the ion gate of the IMS, and then the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were obtained. A NIBOHT-IMS mobility spectrum was gained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results from measuring the reactant ions demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR. Furthermore, CHCl3 and CH2Br2 gases were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of large ion signals but also for small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides the ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. This study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, the point source parameters (location and intensity) are estimated using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.
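
    A schematic of the first step (least-squares retrieval of a point source), assuming the adjoint functions give the sensitivity of each measured concentration to a unit-strength source at every candidate grid node (the array names are hypothetical, and the renormalization variant used in the paper is more elaborate):

      import numpy as np

      def estimate_point_source(adjoints, measured):
          """adjoints : (n_receptors, n_nodes) sensitivities of each receptor
                        to a unit source placed at each candidate node
          measured    : (n_receptors,) measured concentrations
          Returns the node index and source intensity minimizing the residual."""
          best = None
          for k in range(adjoints.shape[1]):
              a = adjoints[:, k]
              denom = a @ a
              if denom == 0.0:
                  continue
              q = (a @ measured) / denom                     # least-squares intensity at node k
              resid = np.linalg.norm(measured - q * a)
              if best is None or resid < best[2]:
                  best = (k, q, resid)
          return best[0], best[1]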

  8. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with zeroth-order Tikhonov regularization has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave, and that of the refracted shear wave to the total wave transmitted into bone, in calculating the absorbed power field and then reconstructing the temperature distribution in the muscle and bone regions from a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects on the performance of the inverse algorithm of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations are investigated. Results show that noisy input data degrade the performance of the inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular-value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.

  9. A Comparison between Model Base Hardconstrain, Bandlimited, and Sparse-Spike Seismic Inversion: New Insights for CBM Reservoir Modelling on Muara Enim Formation, South Sumatra

    NASA Astrophysics Data System (ADS)

    Mohamad Noor, Faris; Adipta, Agra

    2018-03-01

    Coal bed methane (CBM), a newly developed resource in Indonesia, is one of the alternatives for reducing Indonesia's dependence on conventional energy. The coal of the Muara Enim Formation is known as one of the prolific reservoirs in the South Sumatra Basin. Seismic inversion and well analysis are performed to determine the coal seam characteristics of the Muara Enim Formation. This research uses three inversion methods: model-based hard-constraint, band-limited, and sparse-spike inversion, each of which has its own advantages for imaging the coal seam and its characteristics. Interpretation of the analyzed data shows that the Muara Enim coal seam has a gamma-ray value of about 20 API, a density of 1-1.4 g/cc from the density log, and a low acoustic impedance (AI) cutoff in the range of 5000-6400 (m/s)*(g/cc). The coal seam distribution thins laterally from northwest to southeast. The coal seam appears biased in the model-based hard-constraint inversion and discontinuous in the band-limited inversion, which does not match the geological model. The most appropriate AI inversion is the sparse-spike inversion, whose cross-plot correlation of 0.884757 is the best among the chosen inversion methods. Sparse-spike inversion also preserves high amplitudes, making it a proper tool for identifying the continuity of coal seams, which commonly appear as thin layers. The cross-sections from the sparse-spike inversion suggest possible new borehole locations at CDP 3662-3722, CDP 3586-3622, and CDP 4004-4148, where the seismic data show a thick coal seam.

  10. Group-theoretic models of the inversion process in bacterial genomes.

    PubMed

    Egri-Nagy, Attila; Gebhardt, Volker; Tanaka, Mark M; Francis, Andrew R

    2014-07-01

    The variation in genome arrangements among bacterial taxa is largely due to the process of inversion. Recent studies indicate that not all inversions are equally probable, suggesting, for instance, that shorter inversions are more frequent than longer ones, and that those that move the terminus of replication are less probable than those that do not. Current methods for establishing the inversion distance between two bacterial genomes are unable to incorporate such information. In this paper we suggest a group-theoretic framework that in principle can take these constraints into account. In particular, we show that by lifting the problem from circular permutations to the affine symmetric group, the inversion distance can be found in polynomial time for a model in which inversions are restricted to acting on two regions. This requires the proof of new results in group theory, and suggests a vein of new combinatorial problems concerning permutation groups, on which group theorists and biologists will need to collaborate. We apply the new method to inferring distances and phylogenies for published Yersinia pestis data.

  11. Local constitutive behavior of paper determined by an inverse method

    Treesearch

    John M. Considine; C. Tim Scott; Roland Gleisner; Junyong Zhu

    2006-01-01

    The macroscopic behavior of paper is governed by small-scale behavior. Intuitively, we know that a small-scale defect within a paper sheet effectively determines the global behavior of the sheet. In this work, we describe a method to evaluate the local constitutive behavior of paper by using an inverse method.

  12. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

    Stabilized gradient-based methods have proved to be efficient for inverse problems. In these methods, driving the gradient close to zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion result. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth-weighting function on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and its effect is not weakened as depth increases. In addition, the fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally continuous inverse model with sharp boundaries (Sun and Li, 2015). We have tested the new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (Figure 1(b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
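
    For reference, the conventional depth weighting that the new scheme is contrasted with enters the regularized objective and its gradient roughly as follows (a standard Li-and-Oldenburg-style form; the authors' weighted model vector is not reproduced here):

      \phi(\mathbf m) = \|\mathbf W_d\,(\mathbf d(\mathbf m) - \mathbf d^{obs})\|^{2} + \mu\,\|\mathbf W_m \mathbf W_z \mathbf m\|^{2}, \qquad w_z(z) = (z + z_0)^{-\beta/2}
      \nabla\phi = 2\,\mathbf J^{T}\mathbf W_d^{T}\mathbf W_d\,(\mathbf d - \mathbf d^{obs}) + 2\mu\,(\mathbf W_m\mathbf W_z)^{T}(\mathbf W_m\mathbf W_z)\,\mathbf m

    so the depth correction is scaled by the regularization parameter \mu and its influence fades for deep cells, which is exactly the behaviour the proposed weighted model vector is designed to avoid.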

  13. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to find the optimal smoothness criterion for applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  14. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  15. Stochastic seismic inversion based on an improved local gradual deformation method

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Zhu, Peimin

    2017-12-01

    A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and the local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improvements are proposed to make the method suitable for seismic inversion. The first is that we select and update local areas where the fit between synthetic and real seismic data is poor. The second is that we divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.

  16. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.

  17. Quantifying Grain Level Stress-Strain Behavior for AM40 via Instrumented Microindentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Guang; Barker, Erin I.; Stephens, Elizabeth V.

    2016-01-01

    Microindentation is performed on hot isostatically pressed (HIP) Mg-Al (AM40) alloy samples produced by the high-pressure die casting (HPDC) process for the purpose of quantifying the mechanical properties of the α-Mg grains. The process of obtaining elastic modulus and hardness from indentation load-depth curves is well established in the literature. A new inverse method is developed in this study to extract plastic properties. The method utilizes the empirical yield strength-hardness relationship reported in the literature together with finite element modeling of the individual indentation. Due to the shallow depth of the indentation, the indentation size effect (ISE) is taken into account when determining plastic properties. The stress versus strain behavior is determined for a series of indents. The resulting average values and standard deviations are obtained for future use as input distributions for microstructure-based property prediction of AM40.

  18. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-10-01

    Sparse representation (SR) and nonlocal techniques (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain a faithful restoration result when they are used independently. To improve the performance, in this paper a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
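
    A minimal sketch of the iterative shrinkage-thresholding step used for the sparse coding, written for the generic problem min_a 0.5*||y - D a||^2 + lam*||a||_1 (the paper's objective additionally carries the nonlocal supervised weights, which are omitted here):

      import numpy as np

      def ista(D, y, lam, n_iter=200):
          """Iterative shrinkage-thresholding for 0.5*||y - D a||^2 + lam*||a||_1."""
          L = np.linalg.norm(D, 2) ** 2                      # Lipschitz constant of the smooth part
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = D.T @ (D @ a - y)                       # gradient of the data-fit term
              z = a - grad / L
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return a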

  19. A Stochastic Climate Generator for Agriculture in Southeast Asian Domains

    NASA Astrophysics Data System (ADS)

    Greene, A. M.; Allis, E. C.

    2014-12-01

    We extend a previously-described method for generating future climate scenarios, suitable for driving agricultural models, to selected domains in Lao PDR, Bangladesh and Indonesia. There are notable differences in climatology among the study regions, most importantly the inverse seasonal relationship of southeast Asian and Australian monsoons. These differences necessitate a partially-differentiated modeling approach, utilizing common features for better estimation while allowing independent modeling of divergent attributes. The method attempts to constrain uncertainty due to both anthropogenic and natural influences, providing a measure of how these effects may combine during specified future decades. Seasonal climate fields are downscaled to the daily time step by resampling the AgMERRA dataset, providing a full suite of agriculturally relevant variables and enabling the propagation of climate uncertainty to agricultural outputs. The role of this research in a broader project, conducted under the auspices of the International Fund for Agricultural Development (IFAD), is discussed.

  20. Joint Inversion of Vp, Vs, and Resistivity at SAFOD

    NASA Astrophysics Data System (ADS)

    Bennington, N. L.; Zhang, H.; Thurber, C. H.; Bedrosian, P. A.

    2010-12-01

    Seismic and resistivity models at SAFOD have been derived from separate inversions that show significant spatial similarity between the main model features. Previous work [Zhang et al., 2009] used cluster analysis to make lithologic inferences from trends in the seismic and resistivity models. We have taken this one step further by developing a joint inversion scheme that uses the cross-gradient penalty function to achieve structurally similar Vp, Vs, and resistivity images that adequately fit the seismic and magnetotelluric (MT) data without forcing model similarity where none exists. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD [Zhang and Thurber, 2003] and the MT inversion code Occam2DMT [Constable et al., 1987; deGroot-Hedlin and Constable, 1990]. We are exploring the utility of the cross-gradient penalty function in improving models of fault-zone structure at SAFOD on the San Andreas Fault in the Parkfield, California area. Two different sets of end-member starting models are being tested. One set is the separately inverted Vp, Vs, and resistivity models. The other set consists of simple, geologically based block models developed from borehole information at the SAFOD drill site and a simplified version of features seen in geophysical models at Parkfield. For both starting models, our preliminary results indicate that the inversion produces a converging solution, with resistivity, seismic, and cross-gradient misfits decreasing over successive iterations. We also compare the jointly inverted Vp, Vs, and resistivity models to borehole information from SAFOD to provide a "ground truth" comparison.
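
    The cross-gradient penalty referred to above is, in its usual form (after Gallardo and Meju), the following; the specific weighting used inside tomoDDMT is not reproduced here:

      \mathbf t(x, z) = \nabla m_{vel}(x, z) \times \nabla m_{res}(x, z)

    The squared norm of t is added to the joint objective; it vanishes wherever the velocity and resistivity gradients are parallel or one of them is zero, so structural similarity is rewarded without imposing any fixed relationship between the parameter values.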

  1. M-Band Analysis of Chromosome Aberrations in Human Epithelial Cells Induced By Low- and High-Let Radiations

    NASA Technical Reports Server (NTRS)

    Hada, M.; Gersey, B.; Saganti, P. B.; Wilkins, R.; Gonda, S. R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Energetic primary and secondary particles pose a health risk to astronauts in extended ISS and future Lunar and Mars missions. High-LET radiation is much more effective than low-LET radiation in the induction of various biological effects, including cell inactivation, genetic mutations, cataracts and cancer. Most of these biological endpoints are closely correlated to chromosomal damage, which can be utilized as a biomarker for radiation insult. In this study, human epithelial cells were exposed in vitro to gamma rays, 1 GeV/nucleon Fe ions and secondary neutrons whose spectrum is similar to that measured inside the Space Station. Chromosomes were condensed using a premature chromosome condensation technique and chromosome aberrations were analyzed with the multi-color banding (mBAND) technique. With this technique, individually painted chromosomal bands on one chromosome allowed the identification of both interchromosomal (translocation to unpainted chromosomes) and intrachromosomal aberrations (inversions and deletions within a single painted chromosome). Results of the study confirmed the observation of higher incidence of inversions for high-LET irradiation. However, detailed analysis of the inversion type revealed that all of the three radiation types in the study induced a low incidence of simple inversions. Half of the inversions observed in the low-LET irradiated samples were accompanied by other types of intrachromosome aberrations, but few inversions were accompanied by interchromosome aberrations. In contrast, Fe ions induced a significant fraction of inversions that involved complex rearrangements of both the inter- and intrachromosome exchanges.

  2. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy to interpret and unique NMR distribution with the finite number of the principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
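
    A schematic of the forward stepwise selection with AIC described above, written for a simple one-dimensional T2 inversion (illustrative only; the exponential kernel, array names and stopping rule are assumptions, and the multi-dimensional case proceeds analogously):

      import numpy as np
      from scipy.optimize import nnls

      def stepwise_aic_inversion(t, y, T2_candidates):
          """Greedily add relaxation components while the AIC keeps decreasing.
          t : echo times, y : measured decay, T2_candidates : trial T2 values."""
          n = len(y)
          selected, best_amp, best_aic = [], None, np.inf
          while True:
              trial = None
              for T2 in T2_candidates:
                  if T2 in selected:
                      continue
                  A = np.column_stack([np.exp(-t / T) for T in selected + [T2]])
                  amp, rnorm = nnls(A, y)                    # nonnegative amplitudes
                  k = len(selected) + 1
                  aic = n * np.log(rnorm**2 / n + 1e-30) + 2 * k
                  if trial is None or aic < trial[0]:
                      trial = (aic, T2, amp)
              if trial is None or trial[0] >= best_aic:
                  break                                      # parsimony: stop when AIC stops improving
              best_aic, best_T2, best_amp = trial
              selected.append(best_T2)
          return selected, best_amp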

  3. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. The use of singular-value decomposition representations is not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.

  4. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis, and another is that such an assumption reduces the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets involving thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
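
    The computational saving mentioned for the low-rank-plus-diagonal assumption follows from the Woodbury identity, a standard fact rather than a result of the paper: if the inverse covariance is \Omega = D + U U^{T} with D diagonal (p x p) and U of rank r << p, then

      \Omega^{-1} = (D + U U^{T})^{-1} = D^{-1} - D^{-1} U \,(I_{r} + U^{T} D^{-1} U)^{-1} U^{T} D^{-1}

    which costs O(p r^2 + r^3) operations instead of the O(p^3) needed to invert a general p x p matrix.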

  5. Simultaneous acquisition of 2D and 3D solid-state NMR experiments for sequential assignment of oriented membrane protein samples.

    PubMed

    Gopinath, T; Mote, Kaustubh R; Veglia, Gianluigi

    2015-05-01

    We present a new method called DAISY (Dual Acquisition orIented ssNMR spectroScopY) for the simultaneous acquisition of 2D and 3D oriented solid-state NMR experiments for membrane proteins reconstituted in mechanically or magnetically aligned lipid bilayers. DAISY utilizes dual acquisition of sine and cosine dipolar or chemical shift coherences and long living (15)N longitudinal polarization to obtain two multi-dimensional spectra, simultaneously. In these new experiments, the first acquisition gives the polarization inversion spin exchange at the magic angle (PISEMA) or heteronuclear correlation (HETCOR) spectra, the second acquisition gives PISEMA-mixing or HETCOR-mixing spectra, where the mixing element enables inter-residue correlations through (15)N-(15)N homonuclear polarization transfer. The analysis of the two 2D spectra (first and second acquisitions) enables one to distinguish (15)N-(15)N inter-residue correlations for sequential assignment of membrane proteins. DAISY can be implemented in 3D experiments that include the polarization inversion spin exchange at magic angle via I spin coherence (PISEMAI) sequence, as we show for the simultaneous acquisition of 3D PISEMAI-HETCOR and 3D PISEMAI-HETCOR-mixing experiments.

  6. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
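
    For the second experiment, the Dix conversion being regularized is the classical relation between RMS and interval velocities, and the problem solved by BOS is of the generic constrained-TV type shown below it (standard forms, reproduced here only for orientation):

      v_{int,n} = \sqrt{ \frac{ t_{n}\, v_{rms,n}^{2} - t_{n-1}\, v_{rms,n-1}^{2} }{ t_{n} - t_{n-1} } }
      \min_{\mathbf m} \ \|\mathbf m\|_{TV} \quad \text{subject to} \quad \|\mathbf G \mathbf m - \mathbf d\|_{2} \le \sigma

    where the discrepancy bound \sigma reflects the noise level and provides the stopping criterion mentioned above.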

  7. Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data

    PubMed Central

    Hu, Jianhua; Wang, Peng; Qu, Annie

    2014-01-01

    Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433

  8. Sensor And Method For Detecting A Superstrate

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Cari, James R. (Inventor); Ngo, Phong H. (Inventor); Fink, Patrick W. (Inventor); Siekierski, James D. (Inventor)

    2006-01-01

    Method and apparatus are provided for determining a superstrate on or near a sensor, e.g., for detecting the presence of an ice superstrate on an airplane wing or a road. In one preferred embodiment, multiple measurement cells are disposed along a transmission line. While the present invention is operable with different types of transmission lines, construction details for a presently preferred coplanar waveguide and a microstrip waveguide are disclosed. A computer simulation is provided as part of the invention for predicting results of a simulated superstrate detector system. The measurement cells may be physically partitioned, nonphysically partitioned with software or firmware, or include a combination of different types of partitions. In one embodiment, a plurality of transmission lines are utilized wherein each transmission line includes a plurality of measurement cells. The plurality of transmission lines may be multiplexed with the signal from each transmission line being applied to the same phase detector. In one embodiment, an inverse problem method is applied to determine the superstrate dielectric for a transmission line with multiple measurement cells.

  9. A Comprehensive Estimation of the Economic Effects of Meteorological Services Based on the Input-Output Method

    PubMed Central

    Wu, Xianhua; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services including the associated (indirect, complete) economic effect. Subsequently, quantitative estimations are particularly obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that the economic effects are noticeably rescued by the preventive strategies developed from both the meteorological information and internal relevance (interdependency) in the industrial economic system. Another finding is that the ratio range of input in the complete economic effect on meteorological services is about 1 : 108.27–1 : 183.06, remarkably different from a previous estimation based on the Delphi method (1 : 30–1 : 51). Particularly, economic effects of meteorological services are higher for nontraditional users of manufacturing, wholesale and retail trades, services sector, tourism and culture, and art and lower for traditional users of agriculture, forestry, livestock, fishery, and construction industries. PMID:24578666
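
    For orientation, the Leontief inverse mentioned above links total output x to final demand f through the technical (consuming) coefficient matrix A; this is the standard input-output identity rather than a result of the study:

      \mathbf x = \mathbf A \mathbf x + \mathbf f \quad\Longrightarrow\quad \mathbf x = (\mathbf I - \mathbf A)^{-1} \mathbf f

    so the complete (direct plus indirect) effect of a change \Delta\mathbf f in service-related final demand is (\mathbf I - \mathbf A)^{-1} \Delta\mathbf f, which is how indirect effects propagate through the interdependent sectors.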

  10. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.

  11. A comprehensive estimation of the economic effects of meteorological services based on the input-output method.

    PubMed

    Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services including the associated (indirect, complete) economic effect. Subsequently, quantitative estimations are particularly obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that the economic effects are noticeably rescued by the preventive strategies developed from both the meteorological information and internal relevance (interdependency) in the industrial economic system. Another finding is that the ratio range of input in the complete economic effect on meteorological services is about 1 : 108.27-1 : 183.06, remarkably different from a previous estimation based on the Delphi method (1 : 30-1 : 51). Particularly, economic effects of meteorological services are higher for nontraditional users of manufacturing, wholesale and retail trades, services sector, tourism and culture, and art and lower for traditional users of agriculture, forestry, livestock, fishery, and construction industries.

  12. 2 + 1 Toda chain. I. Inverse scattering method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipovskii, V.D.; Shirokov, A.V.

    A formal scheme of the inverse scattering method is constructed for the 2 + 1 Toda chain in the class of rapidly decreasing Cauchy data. Application of the inverse scattering method to the two-dimensional infinite Toda chain was made difficult by the circumstance that this system is a (2 + 1)-dimensional object, i.e., it possesses time and two spatial variables, the role of one of these being played by the chain site number. Because of this, our information about the 2 + 1 Toda chain was limited to a rich set of particular solutions of soliton type obtained in the cycle of studies by the Darboux transformation method.

  13. Health Risk Assessment of Inhalable Particulate Matter in Beijing Based on the Thermal Environment

    PubMed Central

    Xu, Lin-Yu; Yin, Hao; Xie, Xiao-Dong

    2014-01-01

    Inhalable particulate matter (PM10) is a primary air pollutant closely related to public health, and an especially serious problem in urban areas. The urban heat island (UHI) effect has made the urban PM10 pollution situation more complex and severe. In this study, we established a health risk assessment system utilizing an epidemiological method taking the thermal environment effects into consideration. We utilized a remote sensing method to retrieve the PM10 concentration, UHI, Normalized Difference Vegetation Index (NDVI), and Normalized Difference Water Index (NDWI). With the correlation between difference vegetation index (DVI) and PM10 concentration, we utilized the established model between PM10 and thermal environmental indicators to evaluate the PM10 health risks based on the epidemiological study. Additionally, with the regulation of UHI, NDVI and NDWI, we aimed at regulating the PM10 health risks and thermal environment simultaneously. This study attempted to accomplish concurrent thermal environment regulation and elimination of PM10 health risks through control of UHI intensity. The results indicate that urban Beijing has a higher PM10 health risk than rural areas; PM10 health risk based on the thermal environment is 1.145, which is similar to the health risk calculated (1.144) from the PM10 concentration inversion; according to the regulation results, regulation of UHI and NDVI is effective and helpful for mitigation of PM10 health risk in functional zones. PMID:25464132

  14. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of the frequency-domain full waveform inversion with simultaneous sources for noisy data changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate and the diagonal entries of the approximate Hessian matrix. The crosstalk for the estimated source signature and the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with the conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained by simultaneous sources are comparable to those obtained by individual sources, and the source signature is successfully estimated in the simultaneous-source technique. Comparing the inverted results using the pseudo Hessian matrix with previous inversion results provided by the approximate Hessian matrix, it is noted that the latter are better than the former for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), by the Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
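
    A minimal sketch of the random phase-encoding step for frequency-domain simultaneous sources (illustrative; the variable names are assumptions, and in the full algorithm the observed data and back-propagated residuals are encoded with the same phases, which are redrawn at every iteration so that cross-talk terms average out):

      import numpy as np

      def encode_sources(source_vectors, rng):
          """Combine individual frequency-domain source vectors into a single
          'supershot' by multiplying each with a random unit-magnitude phase."""
          phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=len(source_vectors)))
          supershot = sum(p * q for p, q in zip(phases, source_vectors))
          return supershot, phases

      # Usage: encode the observed data with the same phases so the encoded
      # residual matches the encoded modelled data, e.g.
      #   rng = np.random.default_rng(0)
      #   q_enc, phases = encode_sources(q_list, rng)
      #   d_enc = sum(p * d for p, d in zip(phases, d_obs_list))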

  15. Quantitative T1 and T2* carotid atherosclerotic plaque imaging using a three-dimensional multi-echo phase-sensitive inversion recovery sequence: a feasibility study.

    PubMed

    Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro

    2018-06-01

    Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with values reported previously. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values of the neck region. Although further study is required to evaluate the clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.
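
    The abstract does not describe the fitting procedure, but a quantitative T2* value is typically obtained from the multi-echo magnitude decay S(TE) = S0 * exp(-TE / T2*). The sketch below is a generic log-linear fit under that mono-exponential assumption; the function name, echo times, and signal values are hypothetical illustrations, not the authors' method.

        import numpy as np

        def fit_t2_star(echo_times_ms, magnitudes):
            """Fit S(TE) = S0 * exp(-TE / T2*) by linear regression on
            log-magnitudes; returns (T2* in ms, S0)."""
            te = np.asarray(echo_times_ms, dtype=float)
            log_s = np.log(np.asarray(magnitudes, dtype=float))
            slope, intercept = np.polyfit(te, log_s, 1)
            return -1.0 / slope, np.exp(intercept)

        # Hypothetical multi-echo magnitudes for one voxel:
        t2_star, s0 = fit_t2_star([4.0, 9.0, 14.0, 19.0],
                                  [100.0, 81.0, 66.0, 53.0])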

  16. High-frequency Rayleigh-wave method

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.; Luo, Y.; Chen, C.; Liu, J.; Ivanov, J.; Zeng, C.

    2009-01-01

    High-frequency (≥2 Hz) Rayleigh-wave data acquired with a multichannel recording system have been utilized to determine shear (S)-wave velocities in near-surface geophysics since the early 1980s. This overview article discusses the main research results of high-frequency surface-wave techniques achieved by research groups at the Kansas Geological Survey and China University of Geosciences in the last 15 years. The multichannel analysis of surface waves (MASW) method is a non-invasive acoustic approach for estimating near-surface S-wave velocity. The differences between MASW results and direct borehole measurements are approximately 15% or less and random. Studies show that simultaneous inversion of higher modes and the fundamental mode can increase model resolution and investigation depth. The other important seismic property, the quality factor (Q), can also be estimated with the MASW method by inverting attenuation coefficients of Rayleigh waves. An inverted model (S-wave velocity or Q) obtained using a damped least-squares method can be assessed with an optimal damping vector in the vicinity of the inverted model, determined by an objective function that is the trace of a weighted sum of the model-resolution and model-covariance matrices. Current developments include modeling high-frequency Rayleigh waves in near-surface media, which builds a foundation for shallow seismic or Rayleigh-wave inversion in the time-offset domain; imaging dispersive energy with high resolution in the frequency-velocity domain, possibly with data in an arbitrary acquisition geometry, which opens a door for 3D surface-wave techniques; and successfully separating surface-wave modes, which provides a valuable tool for S-wave velocity profiling with high horizontal resolution. © China University of Geosciences (Wuhan) and Springer-Verlag GmbH 2009.
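
    The damped least-squares assessment mentioned above can be made concrete with a short sketch. Assuming a linearized dispersion problem with Jacobian G, data residual d, and damping factor lambda (names here are illustrative, not from the paper), the model update and model-resolution matrix take the standard forms dm = (G^T G + lambda^2 I)^(-1) G^T d and R = (G^T G + lambda^2 I)^(-1) G^T G.

        import numpy as np

        def damped_least_squares_step(G, residual, damping):
            """One damped least-squares update of an S-wave velocity (or Q) model
            and the corresponding model-resolution matrix."""
            gtg = G.T @ G
            reg = gtg + (damping ** 2) * np.eye(gtg.shape[0])
            dm = np.linalg.solve(reg, G.T @ residual)    # model update
            resolution = np.linalg.solve(reg, gtg)       # close to identity => well resolved
            return dm, resolution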

  17. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification, and water protection functions. In recent years, factors that threaten forest health, such as global warming due to climate change and environmental pollution, have increased, interest in forests has grown, and efforts toward forest management are being made in various countries. However, the existing forest ecosystem survey method monitors sample points only, and it is difficult to use the results for forest management because Korea surveys only a small part of its forest area, which occupies 63.7% of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, Shannon's biodiversity index data from the 1st Korea Forest Health Management survey (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and inverse distance weighting (IDW), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI, and SR were used. As a result, kriging was the most accurate method.
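
    As a minimal sketch of the IDW interpolation named above (the power parameter and array layout are assumptions, not taken from the study), scattered Shannon-index samples can be interpolated onto query points as follows.

        import numpy as np

        def idw_interpolate(sample_xy, sample_values, query_xy, power=2.0):
            """Inverse-distance-weighted interpolation of scattered point samples
            (shape (n, 2)) onto query locations (shape (m, 2)); exact at samples."""
            d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
            d = np.where(d == 0.0, 1e-12, d)      # avoid division by zero at samples
            w = 1.0 / d ** power
            return (w * sample_values[None, :]).sum(axis=1) / w.sum(axis=1)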

  18. Linear distributed source modeling of local field potentials recorded with intra-cortical electrode arrays.

    PubMed

    Hindriks, Rikkert; Schmiedt, Joscha; Arsiwalla, Xerxes D; Peter, Alina; Verschure, Paul F M J; Fries, Pascal; Schmid, Michael C; Deco, Gustavo

    2017-01-01

    Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires "inverting" Poisson's equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists of taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to "invert" a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task.
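
    To make the second-derivative CSD method described above concrete, the sketch below applies the standard estimate CSD ~ -sigma * Laplacian(phi) with a five-point finite-difference Laplacian across a planar electrode grid. The conductivity value and unit conventions are illustrative assumptions, not parameters reported in the study.

        import numpy as np

        def csd_five_point(lfp_grid, spacing, conductivity=0.3):
            """Standard CSD estimate on a planar electrode array:
                CSD ~ -sigma * (d2phi/dx2 + d2phi/dy2),
            using a five-point Laplacian; edge electrodes are dropped."""
            lap = (lfp_grid[:-2, 1:-1] + lfp_grid[2:, 1:-1]
                   + lfp_grid[1:-1, :-2] + lfp_grid[1:-1, 2:]
                   - 4.0 * lfp_grid[1:-1, 1:-1]) / spacing ** 2
            return -conductivity * lap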

  19. Linear distributed source modeling of local field potentials recorded with intra-cortical electrode arrays

    PubMed Central

    Schmiedt, Joscha; Arsiwalla, Xerxes D.; Peter, Alina; Verschure, Paul F. M. J.; Fries, Pascal; Schmid, Michael C.; Deco, Gustavo

    2017-01-01

    Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires “inverting” Poisson’s equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists of taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to “invert” a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task. PMID:29253006

  20. Identification of balanced chromosomal rearrangements previously unknown among participants in the 1000 Genomes Project: implications for interpretation of structural variation in genomes and the future of clinical cytogenetics.

    PubMed

    Dong, Zirui; Wang, Huilin; Chen, Haixiao; Jiang, Hui; Yuan, Jianying; Yang, Zhenjun; Wang, Wen-Jing; Xu, Fengping; Guo, Xiaosen; Cao, Ye; Zhu, Zhenzhen; Geng, Chunyu; Cheung, Wan Chee; Kwok, Yvonne K; Yang, Huanming; Leung, Tak Yeung; Morton, Cynthia C; Cheung, Sau Wai; Choy, Kwong Wai

    2017-11-02

    Purpose: Recent studies demonstrate that whole-genome sequencing enables detection of cryptic rearrangements in apparently balanced chromosomal rearrangements (also known as balanced chromosomal abnormalities, BCAs) previously identified by conventional cytogenetic methods. We aimed to assess our analytical tool for detecting BCAs in the 1000 Genomes Project without knowing which bands were affected. Methods: The 1000 Genomes Project provides an unprecedented integrated map of structural variants in phenotypically normal subjects, but there is no information on potential inclusion of subjects with apparent BCAs akin to those traditionally detected in diagnostic cytogenetics laboratories. We applied our analytical tool to 1,166 genomes from the 1000 Genomes Project with sufficient physical coverage (8.25-fold). Results: With this approach, we detected four reciprocal balanced translocations and four inversions, ranging in size from 57.9 kb to 13.3 Mb, all of which were confirmed by cytogenetic methods and polymerase chain reaction studies. One of these DNAs has a subtle translocation that is not readily identified by chromosome analysis because of the similarity of the banding patterns and size of exchanged segments, and another results in disruption of all transcripts of an OMIM gene. Conclusion: Our study demonstrates the extension of utilizing low-pass whole-genome sequencing for unbiased detection of BCAs, including translocations and inversions previously unknown in the 1000 Genomes Project. GENETICS in MEDICINE advance online publication, 2 November 2017; doi:10.1038/gim.2017.170.
