Sample records for inversion technique based

  1. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
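    The GA machinery in record 1 can be sketched generically. The following is a minimal illustration, not the authors' algorithm: the angular-scattering forward model is replaced by an assumed random linear kernel g = K w, and a small real-coded GA (tournament selection, blend crossover, Gaussian mutation, elitism) searches for the size distribution w that minimizes the data misfit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model (not the paper's scattering kernel): the
# intensity at each angle is a linear mix of per-size-bin contributions,
# g = K @ w, where w is the discretized particle-size distribution.
n_angles, n_bins = 30, 8
K = np.abs(rng.normal(size=(n_angles, n_bins)))
w_true = np.array([0.0, 0.1, 0.4, 0.3, 0.15, 0.05, 0.0, 0.0])
g_meas = K @ w_true + 0.01 * rng.normal(size=n_angles)   # noisy "measurement"

def misfit(w):
    return np.linalg.norm(K @ w - g_meas)

# Minimal real-coded GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism.
pop = np.abs(rng.normal(size=(60, n_bins)))
pop /= pop.sum(axis=1, keepdims=True)          # candidate distributions sum to 1
init_best = min(misfit(w) for w in pop)
for gen in range(200):
    fit = np.array([misfit(w) for w in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])  # tournaments
    a = rng.random((len(pop), 1))
    children = a * parents + (1 - a) * parents[::-1]                # blend crossover
    children += 0.02 * rng.normal(size=children.shape)              # mutation
    children = np.clip(children, 0.0, None)
    children /= children.sum(axis=1, keepdims=True) + 1e-12
    children[0] = pop[np.argmin(fit)]                               # elitism
    pop = children

best = pop[np.argmin([misfit(w) for w in pop])]
```

    With elitism the best misfit is non-increasing over generations, which is the property the abstract's stability claim relies on; a real application would substitute the angular light-scattering kernel and the noise handling discussed above.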

  2. An ionospheric occultation inversion technique based on epoch difference

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun

    2013-09-01

    Of the ionospheric radio occultation (IRO) electron density profile (EDP) retrievals, the Abel-based calibrated TEC inversion (CTI) is the most widely used technique. In order to eliminate the contribution from altitudes above the RO satellite, it is necessary to utilize the calibrated TEC to retrieve the EDP, which introduces an error due to the coplanar assumption. In this paper, a new technique based on epoch difference inversion (EDI) is proposed to eliminate this error. CTI and EDI are compared using both simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation-side measurements and shows better performance than the CTI method, especially for lower-orbit missions; regardless of which technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.


  3. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allow for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  4. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allow for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  5. Abel inversion using fast Fourier transforms.

    PubMed

    Kalal, M; Nugent, K A

    1988-05-15

    A fast Fourier transform based Abel inversion technique is proposed. The method is faster than previously used techniques, potentially very accurate (even for a relatively small number of points), and capable of handling large data sets. The technique is discussed in the context of its use with 2-D digital interferogram analysis algorithms. Several examples are given.
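    The Abel inversion of record 5 can be illustrated via the Fourier-Hankel formulation that underlies FFT-based methods: Fourier-transform the measured projection, then apply a zeroth-order inverse Hankel transform to recover the radial profile. The sketch below uses plain trapezoid quadrature instead of FFTs for clarity, and a Gaussian test object whose Abel transform is known in closed form; it illustrates the transform pair, not the paper's algorithm.

```python
import numpy as np
from scipy.special import j0

def trap(fv, x):
    """Trapezoid rule along the last axis."""
    return np.sum(0.5 * (fv[..., 1:] + fv[..., :-1]) * np.diff(x), axis=-1)

# Radially symmetric test object f(r) = exp(-r^2); its Abel transform
# (the measured line-of-sight projection) is F(y) = sqrt(pi) * exp(-y^2).
y = np.linspace(0.0, 6.0, 601)
F = np.sqrt(np.pi) * np.exp(-y**2)

# Step 1: Fourier (cosine) transform of the projection; F is even in y.
q = np.linspace(0.0, 3.0, 601)
Fhat = 2.0 * trap(F[None, :] * np.cos(2.0 * np.pi * q[:, None] * y[None, :]), y)

# Step 2: zeroth-order inverse Hankel transform recovers the radial profile.
r = np.array([0.0, 0.5, 1.0])
f = trap(Fhat[None, :] * j0(2.0 * np.pi * q[None, :] * r[:, None])
         * 2.0 * np.pi * q[None, :], q)
```

    Here f should closely match exp(-r^2) at the sampled radii; the paper's contribution is evaluating these transforms with FFTs for speed on large 2-D interferometric data sets.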

  6. A Sparsity-based Framework for Resolution Enhancement in Optical Fault Analysis of Integrated Circuits

    DTIC Science & Technology

    2015-01-01

    for IC fault detection. This section provides background information on inversion methods. Conventional inversion techniques and their shortcomings are... physical techniques, electron beam imaging/analysis, ion beam techniques, scanning probe techniques. Electrical tests are used to detect faults in an... hand, there is also the second harmonic technique, through which duty-cycle degradation faults are detected by collecting the magnitude and the phase of

  7. The analysis of a rocket tomography measurement of the N2+3914A emission and N2 ionization rates in an auroral arc

    NASA Technical Reports Server (NTRS)

    Mcdade, Ian C.

    1991-01-01

    Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+3914A rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914A volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
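    The ART reconstruction underlying record 7 reduces, in its simplest form, to Kaczmarz's row-action iteration: project the current estimate onto the hyperplane defined by each line-integral measurement in turn. A toy sketch with an assumed 5-ray, 4-pixel geometry:

```python
import numpy as np

# Toy "tomography": each row of A is one line-of-sight sum through a
# 2x2 emission field flattened to 4 pixels; b holds measured brightnesses.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 0., 1.]])
x_true = np.array([2., 1., 3., 4.])
b = A @ x_true

x = np.zeros(4)
lam = 1.0                     # relaxation factor; values < 1 damp noise
for sweep in range(200):
    for i in range(len(b)):
        a = A[i]
        x += lam * (b[i] - a @ x) / (a @ a) * a   # project onto row i
```

    The relaxation factor lam below 1 damps the noise amplification that the iterative relaxation schemes discussed in the abstract are designed to control.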

  8. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling

    DOE PAGES

    Fee, David; Izbekov, Pavel; Kim, Keehoon; ...

    2017-10-09

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  9. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fee, David; Izbekov, Pavel; Kim, Keehoon

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  10. Inversion technique for IR heterodyne sounding of stratospheric constituents from space platforms

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Shapiro, G. L.; Alvarez, J. M.

    1981-01-01

    The techniques which have been employed for inversion of IR heterodyne measurements for remote sounding of stratospheric trace constituents usually rely on either geometric effects based on limb-scan observations (i.e., onion peel techniques) or spectral effects by using weighting functions corresponding to different frequencies of an IR spectral line. An experimental approach and inversion technique are discussed which optimize the retrieval of concentration profiles by combining the geometric and the spectral effects in an IR heterodyne receiver. The results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are presented, indicating considerable improvement in the accuracy of the retrieved profiles. The effects of noise on the accuracy of retrievals are discussed for realistic situations.

  11. Inversion technique for IR heterodyne sounding of stratospheric constituents from space platforms.

    PubMed

    Abbas, M M; Shapiro, G L; Alvarez, J M

    1981-11-01

    The techniques which have been employed for inversion of IR heterodyne measurements for remote sounding of stratospheric trace constituents usually rely on either geometric effects based on limb-scan observations (i.e., onion peel techniques) or spectral effects by using weighting functions corresponding to different frequencies of an IR spectral line. An experimental approach and inversion technique are discussed which optimize the retrieval of concentration profiles by combining the geometric and the spectral effects in an IR heterodyne receiver. The results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are presented, indicating considerable improvement in the accuracy of the retrieved profiles. The effects of noise on the accuracy of retrievals are discussed for realistic situations.

  12. Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).

    PubMed

    Goldfarb, James W

    2010-04-01

    The divided inversion recovery technique is an MRI separation method based on tissue T(1) relaxation differences. When tissue T(1) relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
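    The polarity argument in record 12 follows from the single-inversion recovery law Mz(t) = M0(1 - 2 e^(-t/T1)): at a given time between inversion pulses, a species with short T1 has recrossed the null (positive Mz) while a long-T1 species remains inverted (negative Mz). A quick check with illustrative values; the T180 and T1 numbers below are assumptions for the sketch, not the paper's parameters, and the steady-state magnetization between repeated pulses would differ somewhat from this single-pulse expression.

```python
import numpy as np

def mz_after_inversion(t_ms, t1_ms, m0=1.0):
    """Longitudinal magnetization t_ms after a 180-degree inversion pulse,
    starting from full recovery: Mz(t) = M0 * (1 - 2 * exp(-t / T1))."""
    return m0 * (1.0 - 2.0 * np.exp(-t_ms / t1_ms))

T180 = 600.0                       # time between inversion pulses, ms (assumed)
t1_fat, t1_fluid = 260.0, 2000.0   # rough illustrative T1 values, ms

mz_fat = mz_after_inversion(T180, t1_fat)      # short T1: recrossed the null
mz_fluid = mz_after_inversion(T180, t1_fluid)  # long T1: still inverted
```

    The opposite signs of mz_fat and mz_fluid are the phase difference the divided inversion recovery technique exploits for separation.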

  13. An approximate inverse scattering technique for reconstructing blockage profiles in water pipelines using acoustic transients.

    PubMed

    Jing, Liwen; Li, Zhao; Wang, Wenjie; Dubey, Amartansh; Lee, Pedro; Meniconi, Silvia; Brunone, Bruno; Murch, Ross D

    2018-05-01

    An approximate inverse scattering technique is proposed for reconstructing cross-sectional area variation along water pipelines to deduce the size and position of blockages. The technique allows the reconstructed blockage profile to be written explicitly in terms of the measured acoustic reflectivity. It is based upon the Born approximation and provides good accuracy, low computational complexity, and insight into the reconstruction process. Numerical simulations and experimental results are provided for long pipelines with mild and severe blockages of different lengths. Good agreement is found between the inverse result and the actual pipe condition for mild blockages.

  14. Query-based learning for aerospace applications.

    PubMed

    Saad, E W; Choi, J J; Vian, J L; Wunsch, D C II

    2003-01-01

    Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. In order to overcome the challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving the overall learning/generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by oracle query. This paper investigates the use of both inversion techniques for QBL learning, and introduces an original heuristic to select the inversion target values for continuous network inversion method. Efficiency and generalization was further enhanced by employing node decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with large input space and a control distribution problem.
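    Continuous network inversion, as used for QBL above, fixes the trained weights and runs gradient descent on the input until the network output matches a query target. A minimal sketch with an assumed two-neuron tanh network, not the paper's architecture:

```python
import numpy as np

# Fixed, already-trained toy network y = tanh(W @ x). Continuous network
# inversion: hold W fixed and descend the output error with respect to x.
W = np.array([[1.0, -0.5],
              [0.3,  0.8]])
x_secret = np.array([0.4, -0.2])      # input we hope to recover
y_target = np.tanh(W @ x_secret)      # target output to invert for

x = np.zeros(2)
for step in range(2000):
    h = np.tanh(W @ x)
    err = h - y_target
    x -= 0.5 * W.T @ (err * (1.0 - h**2))   # chain rule through tanh
```

    In QBL the recovered input would then be passed to the oracle for labeling and added to the training set as performance-critical data.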

  15. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  16. Preview-Based Stable-Inversion for Output Tracking

    NASA Technical Reports Server (NTRS)

    Zou, Qing-Ze; Devasia, Santosh

    1999-01-01

    Stable inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems the inverse is noncausal; hence it has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure, and experimental results are presented.

  17. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. 
If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
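    The monopole-source inversion in record 17 can be caricatured in its simplest form: for a compact acoustic monopole, far-field pressure is proportional to the time derivative of the vent volume flow, so integrating the recorded pressure and scaling by 4*pi*r/rho_air recovers the flow, and multiplying the total ejected volume by an assumed flow density gives eruption mass. The sketch below ignores topography, propagation delay, and the numerical Green's functions the study actually uses; all parameter values are illustrative.

```python
import numpy as np

# Simplified monopole model: excess pressure at range r is
# p(t) = rho_air / (4*pi*r) * dq/dt, with q(t) the volume flow at the vent.
rho_air, rho_flow, r = 1.2, 3.0, 3000.0    # kg/m^3, kg/m^3, m (assumed)

t = np.linspace(0.0, 20.0, 4001)
q_true = 1e5 * np.exp(-((t - 10.0) / 2.0)**2)        # synthetic flow pulse, m^3/s
p = rho_air / (4 * np.pi * r) * np.gradient(q_true, t)  # "recorded" pressure

# Invert: cumulative trapezoid of pressure, scaled back to volume flow.
dt = t[1] - t[0]
q_rec = 4 * np.pi * r / rho_air * np.cumsum((p[1:] + p[:-1]) / 2) * dt
volume = np.sum((q_rec[1:] + q_rec[:-1]) / 2) * dt   # total ejected volume, m^3
mass = rho_flow * volume                             # eruption mass, kg
```

    The final scaling by rho_flow is exactly where the abstract's caveat about uncertain time-dependent flow density enters.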

  18. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in the survey. To avoid this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained by simultaneous sources are comparable to those obtained by individual sources, and the source signature is successfully estimated in the simultaneous-source technique.
Comparing the inverted results using the pseudo Hessian matrix with previous inversion results provided by the approximate Hessian matrix, it is noted that the latter are better than the former for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), by the Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).

  19. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption problem, the two-cube problem, and the two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
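    Orthogonal matching pursuit, the sparse-reconstruction routine named in record 19, greedily selects the dictionary column most correlated with the current residual and refits the selected set by least squares. A self-contained sketch on a random underdetermined system, illustrative rather than the radiographic setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdetermined system: 20 measurements of a 50-dim signal with 3 nonzeros.
A = rng.normal(size=(20, 50))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(50)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true

def omp(A, y, k):
    """Orthogonal matching pursuit: grow the support greedily, refit by LS."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, 3)
```

    With far fewer measurements than unknowns, the greedy support search plus least-squares refit typically recovers the sparse coefficients exactly when the dictionary is incoherent, which is the "limited measurements" point the abstract makes.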

  20. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, K. D.

    1985-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  1. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  2. Inverse analysis of aerodynamic loads from strain information using structural models and neural networks

    NASA Astrophysics Data System (ADS)

    Wada, Daichi; Sugimoto, Yohei

    2017-04-01

    Aerodynamic loads on aircraft wings are one of the key parameters to be monitored for reliable and effective aircraft operations and management. Flight data of the aerodynamic loads would be used onboard to control the aircraft and accumulated data would be used for the condition-based maintenance and the feedback for the fatigue and critical load modeling. The effective sensing techniques such as fiber optic distributed sensing have been developed and demonstrated promising capability of monitoring structural responses, i.e., strains on the surface of the aircraft wings. By using the developed techniques, load identification methods for structural health monitoring are expected to be established. The typical inverse analysis for load identification using strains calculates the loads in a discrete form of concentrated forces, however, the distributed form of the loads is essential for the accurate and reliable estimation of the critical stress at structural parts. In this study, we demonstrate an inverse analysis to identify the distributed loads from measured strain information. The introduced inverse analysis technique calculates aerodynamic loads not in a discrete but in a distributed manner based on a finite element model. In order to verify the technique through numerical simulations, we apply static aerodynamic loads on a flat panel model, and conduct the inverse identification of the load distributions. We take two approaches to build the inverse system between loads and strains. The first one uses structural models and the second one uses neural networks. We compare the performance of the two approaches, and discuss the effect of the amount of the strain sensing information.
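    The structural-model approach in record 2 amounts to inverting a linear strain-to-load sensitivity map. With an assumed random sensitivity matrix S standing in for the finite element model, a Tikhonov-regularized least-squares fit recovers a distributed load from noisy strain readings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed FE-derived sensitivity matrix: strain_i = sum_j S[i, j] * load_j.
n_strain, n_load = 40, 10
S = rng.normal(size=(n_strain, n_load))
load_true = np.sin(np.linspace(0.0, np.pi, n_load))   # smooth distributed load
strain = S @ load_true + 0.001 * rng.normal(size=n_strain)

# Tikhonov-regularized least squares: (S^T S + a I) load = S^T strain.
a = 1e-4
load_est = np.linalg.solve(S.T @ S + a * np.eye(n_load), S.T @ strain)
```

    The regularization term is what keeps the identified load distributed and stable when the strain data are noisy; the neural-network approach in the paper replaces this explicit linear map with a learned one.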

  3. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
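    The sketching idea in record 3 can be shown on a toy consistent least-squares problem: a random Gaussian sketching matrix compresses 5000 observations to 40 rows before the solve, and because the compressed system retains full column rank the solution is unchanged. This is a schematic of the dimension-reduction step only, not the RGA/PCGA machinery:

```python
import numpy as np

rng = np.random.default_rng(3)

# Tall, consistent "calibration" system: many observations, few parameters.
m, n, k = 5000, 8, 40           # observations, parameters, sketch size
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                  # noise-free, so the system is consistent

# Gaussian sketching matrix compresses the observations before the solve.
Sk = rng.normal(size=(k, m)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(Sk @ A, Sk @ b, rcond=None)
```

    The solve now touches a 40 x 8 system instead of 5000 x 8, which is the sense in which cost scales with information content rather than data size; with noisy data the sketched solution is an approximation whose accuracy grows with k.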

  4. Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns

    DTIC Science & Technology

    2015-03-01

    method for base-station antenna radiation patterns. IEEE Antennas and Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD...algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine...patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another

  5. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods, namely single-attribute analysis, multi-attribute analysis, and a probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada, an onshore oil field. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 (m/s)(g/cc)) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as the reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.
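    A multi-attribute transform of the kind described is, at its core, a linear regression of well-log porosity on co-located seismic attributes, which is then applied between wells. The sketch below is a hypothetical illustration with invented attributes and weights, not the study's workflow.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training data at well locations: seismic attributes
# (e.g. inverted impedance, envelope, frequency) versus log porosity.
n_wells, n_attr = 120, 3
A = rng.normal(size=(n_wells, n_attr))
w_true = np.array([-0.8, 0.3, 0.1])                 # invented attribute weights
phi = 0.15 + (A @ w_true) * 0.01 + 0.005 * rng.normal(size=n_wells)

# Multi-attribute transform: linear regression of porosity on attributes.
A1 = np.column_stack([np.ones(n_wells), A])         # add intercept column
coef, *_ = np.linalg.lstsq(A1, phi, rcond=None)

# Apply the trained transform to attributes in the inter-well region.
A_new = rng.normal(size=(5, n_attr))
phi_pred = np.column_stack([np.ones(5), A_new]) @ coef
print(phi_pred)
```

    The probabilistic-neural-network variant replaces this linear map with a kernel-based nonlinear regressor trained on the same attribute/log pairs.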

  6. Bayesian inversion of data from effusive volcanic eruptions using physics-based models: Application to Mount St. Helens 2004-2008

    USGS Publications Warehouse

    Anderson, Kyle; Segall, Paul

    2013-01-01

    Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km^3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
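    The Markov chain Monte Carlo sampling step can be sketched with a simple Metropolis random walk. The forward model, data, and prior below are invented toys standing in for the physics-based eruption model; only the sampler structure is representative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy physics-based forward model: predicted surface displacement as a
# function of a chamber pressure parameter p (model and data are invented
# purely to illustrate the Metropolis sampler).
def forward(p):
    return 0.5 * p / (1.0 + p)

p_true = 2.0
data = forward(p_true) + 1e-3 * rng.normal(size=50)
sigma = 1e-3

def log_posterior(p):
    if p <= 0:                      # flat prior on p > 0
        return -np.inf
    r = data - forward(p)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis random walk.
samples, p = [], 1.0
lp = log_posterior(p)
for _ in range(20000):
    q = p + 0.02 * rng.normal()
    lq = log_posterior(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    samples.append(p)

post = np.array(samples[5000:])     # discard burn-in
print(post.mean(), post.std())
```

    The posterior samples concentrate near the true parameter; in the real application each `forward` call is an expensive physics simulation, which is what makes sampler efficiency matter.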

  7. Noise suppression in surface microseismic data

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael

    2012-01-01

    We introduce a passive noise suppression technique, based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.

  8. A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.

    2013-04-15

    Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 or/and final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.

  9. Hybrid dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  10. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, are known to have experimental geometries which are poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques based on regularization potentially suffer from the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without the need of prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows one to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.

  11. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows us to localize the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.

  12. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. In the fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and of preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique and the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied in the inversion of noise-contaminated synthetic data to prove its adaptability in the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times that of a serial program using a 2.0 GHz Central Processing Unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
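    An SSOR-preconditioned conjugate gradient iteration, the core of the algorithm described, can be sketched as follows. This is a generic dense-matrix illustration (the test matrix is random and SPD, not FTG data), applying the standard SSOR preconditioner M = (D + wL) D^-1 (D + wU) / (w(2 - w)) via two triangular solves.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(3)

# Symmetric positive definite test matrix (a stand-in for the normal equations).
n = 200
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)
b = rng.normal(size=n)

omega = 1.2
D = np.diag(A)
Lw = np.tril(A, -1) * omega + np.diag(D)    # D + omega*L
Uw = np.triu(A, 1) * omega + np.diag(D)     # D + omega*U

def ssor_apply(r):
    """z = M^{-1} r for M = (D + wL) D^{-1} (D + wU) / (w(2 - w))."""
    a = solve_triangular(Lw, omega * (2 - omega) * r, lower=True)
    return solve_triangular(Uw, D * a, lower=False)

# Preconditioned conjugate gradient iteration.
x = np.zeros(n)
r = b - A @ x
z = ssor_apply(r)
p = z.copy()
for _ in range(200):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(b):
        break
    z_new = ssor_apply(r_new)
    beta = (r_new @ z_new) / (r @ z)
    r, z = r_new, z_new
    p = z + beta * p

print(np.linalg.norm(A @ x - b))
```

    On the GPU the triangular solves are the serial bottleneck, which is why the paper's parallel implementation of the preconditioner matters.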

  13. Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3

    NASA Astrophysics Data System (ADS)

    Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.

    2007-05-01

    In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this use, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value for the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic areas of the ionosphere, such as the Equatorial region) can significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of this technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function, where the shape function carries all the height dependency while the VTEC data keeps the horizontal dependency. It is more realistic to assume that this shape function depends only on height, and to use VTEC information to take into account the horizontal variation, than to consider spherical symmetry of the electron density function as in the classical approach of the Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful tool to obtain a vertical description of the ionospheric electron density (see García-Fernández et al. 2003), a natural next step is to extend the use of this technique to the recently available COSMIC data. The COSMIC satellite constellation, formed by 6 micro-satellites, has been deployed since April 2006 in circular orbit around the Earth, with a final altitude of about 700-800 kilometers. Its global and almost uniform coverage will overcome one of the main limitations of this technique, namely the sparsity of data related to the lack of GPS receivers in some regions. This can significantly stimulate the development of radio occultation techniques, with the huge volume of data provided by the COSMIC constellation to be processed and analysed, updating the current knowledge of the ionosphere's nature and behaviour. In this context a summary of the improved Abel transform inversion technique and the first results based on COSMIC constellation data will be presented. Moreover, future improvements, taking into account the higher temporal and global spatial coverage, will be discussed. References: M. Hernández-Pajares, J. M. Juan and J. Sanz, Improving the Abel inversion by adding ground GPS data to LEO radio occultations in ionospheric sounding, Geophysical Research Letters, Vol. 27, No. 16, pages 2473-2476, August 15, 2000. M. García-Fernández, M. Hernández-Pajares, M. Juan, and J. Sanz, Improvement of ionospheric electron density estimation with GPSMET occultations using Abel inversion and VTEC information, Journal of Geophysical Research, Vol. 108, No. A9, 1338, doi:10.1029/2003JA009952, 2003.
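    The classical spherically symmetric Abel retrieval (the baseline that the improved technique refines) is often implemented by "onion peeling": discretizing the ionosphere into concentric shells so that the occultation TECs form a triangular linear system. The sketch below uses an invented Chapman-like profile and illustrative shell geometry, not COSMIC data.

```python
import numpy as np

# Spherical shell grid (radii in km, from the lowest tangent point up to
# the LEO orbit height); values are illustrative only.
Re = 6371.0
r = Re + np.linspace(100.0, 800.0, 71)        # shell boundaries
rt = r[:-1]                                   # tangent radii of the rays

# Synthetic "true" electron density profile (Chapman-like shape).
z = (rt - Re - 300.0) / 75.0
ne_true = 1e6 * np.exp(0.5 * (1 - z - np.exp(-z)))

# Path length of the ray with tangent radius rt[i] through shell j >= i:
# L[i, j] = 2*(sqrt(r[j+1]^2 - rt[i]^2) - sqrt(r[j]^2 - rt[i]^2)).
n = len(rt)
L = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        L[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - rt[i] ** 2)
                         - np.sqrt(max(r[j] ** 2 - rt[i] ** 2, 0.0)))

tec = L @ ne_true                             # simulated occultation TECs

# Onion peeling: solve the (upper-triangular) system from the top down.
ne_est = np.linalg.solve(L, tec)

print(np.max(np.abs(ne_est - ne_true)))
```

    The improved technique replaces the constant-density shells with the product of a height-only shape function and horizontally varying VTEC, which changes the entries of `L` but not the triangular structure of the retrieval.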

  14. A comparative study of surface waves inversion techniques at strong motion recording sites in Greece

    USGS Publications Warehouse

    Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.

    2015-01-01

    The surface wave method was used for the estimation of Vs versus depth profiles at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν=0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of SIM profiles is 9% and 4.9% compared with NA and Occam's profiles respectively, whilst the average difference of Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.

  15. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  16. Top-down constraints on global N2O emissions at optimal resolution: application of a new dimension reduction technique

    NASA Astrophysics Data System (ADS)

    Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul

    2018-01-01

    We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr^-1 (SVD-based inversion) to 17.5-17.7 Tg N yr^-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes.
We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
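    The randomized-SVD dimension reduction can be sketched on a toy linear flux inversion: a random test matrix samples the range of the Jacobian, and the solution is computed only in the resulting low-dimensional informed subspace. Everything here (the operator, its spectrum, the noise level) is invented for illustration; it is not the GEOS-Chem adjoint system.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear flux inversion: obs = H @ flux + noise, with H having
# rapidly decaying singular values (limited information content).
n_obs, n_flux = 300, 400
U, _ = np.linalg.qr(rng.normal(size=(n_obs, n_obs)))
V, _ = np.linalg.qr(rng.normal(size=(n_flux, n_flux)))
s = 10.0 ** -np.arange(0, 30, 0.1)            # 300 decaying singular values
H = (U * s) @ V[:, :n_obs].T

flux_true = rng.normal(size=n_flux)
y = H @ flux_true + 1e-6 * rng.normal(size=n_obs)

# Randomized SVD: sample the range of H with a Gaussian test matrix
# (with oversampling and one power iteration), then take an exact SVD
# of the small projected matrix.
k, p = 40, 10
Y = H @ rng.normal(size=(n_flux, k + p))
Y = H @ (H.T @ Y)                             # power iteration sharpens the basis
Q, _ = np.linalg.qr(Y)
Ub, sb, Vtb = np.linalg.svd(Q.T @ H, full_matrices=False)
Uk, sk, Vk = Q @ Ub[:, :k], sb[:k], Vtb[:k].T

# Solve only in the k-dimensional informed subspace (truncated SVD solution).
flux_est = Vk @ ((Uk.T @ y) / sk)

# The estimate matches the truth in the resolved directions.
print(np.linalg.norm(Vk.T @ (flux_est - flux_true)))
```

    Truncating at the information content `k` is what lets the SVD-based inversion retain spatial detail without inverting in the full flux space.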

  17. Analyzing the performance of PROSPECT model inversion based on different spectral information for leaf biochemical properties retrieval

    NASA Astrophysics Data System (ADS)

    Sun, Jia; Shi, Shuo; Yang, Jian; Du, Lin; Gong, Wei; Chen, Biwu; Song, Shalei

    2018-01-01

    Leaf biochemical constituents provide useful information about major ecological processes. As fast and nondestructive methods, remote sensing techniques are critical for inferring leaf biochemistry via models. The PROSPECT model has been widely applied in retrieving leaf traits from hemispherical reflectance and transmittance. However, the process of measuring both reflectance and transmittance can be time-consuming and laborious. In contrast to using the reflectance spectrum alone in PROSPECT model inversion, as adopted by many researchers, this study proposes using the transmission spectrum alone, given the increasing availability of the latter through various remote sensing techniques. We then analyzed the performance of PROSPECT model inversion with (1) only the transmission spectrum, (2) only the reflectance spectrum, and (3) both reflectance and transmittance, using synthetic datasets (with varying levels of random noise and systematic noise) and two experimental datasets (LOPEX and ANGERS). The results show that (1) PROSPECT-5 model inversion based solely on the transmission spectrum is viable, with results generally better than those based solely on the reflectance spectrum; and (2) leaf dry matter can be better estimated using only transmittance or reflectance than with both reflectance and transmittance spectra.
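    Inverting a leaf optical model from a transmission spectrum alone amounts to nonlinear least squares against the measured spectrum. The sketch below uses an invented two-absorber Beer-Lambert toy model, not the real PROSPECT equations, to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Toy Beer-Lambert style transmittance model (a stand-in for PROSPECT):
# two absorbers with known, invented specific-absorption spectra.
wl = np.linspace(400, 2400, 200)
k_cab = np.exp(-0.5 * ((wl - 670) / 60.0) ** 2)     # "chlorophyll" band
k_cw = np.exp(-0.5 * ((wl - 1450) / 120.0) ** 2)    # "water" band

def transmittance(theta):
    cab, cw = theta
    return np.exp(-(cab * k_cab + cw * k_cw))

theta_true = np.array([2.0, 1.5])
t_meas = transmittance(theta_true) + 0.002 * rng.normal(size=wl.size)

# Invert the model from the transmission spectrum alone.
fit = least_squares(lambda th: transmittance(th) - t_meas,
                    x0=[1.0, 1.0], bounds=([0, 0], [10, 10]))
print(fit.x)
```

    A real PROSPECT inversion fits the leaf structure parameter and constituent contents the same way, with the model's radiative transfer equations in place of this toy.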

  18. An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.

    2011-12-01

    The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.

  19. Comparative evaluation between anatomic and non-anatomic lateral ligament reconstruction techniques in the ankle joint: A computational study.

    PubMed

    Purevsuren, Tserenchimed; Batbaatar, Myagmarbayar; Khuyagbaatar, Batbayar; Kim, Kyungsoo; Kim, Yoon Hyuk

    2018-03-12

    Biomechanical studies have indicated that the conventional non-anatomic reconstruction techniques for lateral ankle sprain (LAS) tend to restrict subtalar joint motion compared to intact ankle joints. Excessive restriction in subtalar motion may lead to chronic pain, functional difficulties, and development of osteoarthritis. Therefore, various anatomic surgical techniques to reconstruct both the anterior talofibular and calcaneofibular ligaments have been introduced. In this study, ankle joint stability was evaluated using a multibody computational ankle joint model to assess two new anatomic reconstruction and three popular non-anatomic reconstruction techniques. An LAS injury, three popular non-anatomic reconstruction models (Watson-Jones, Evans, and Chrisman-Snook), and two common types of anatomic reconstruction models were developed based on the intact ankle model. The stability of the ankle in both the talocrural and subtalar joints was evaluated under an anterior drawer test (150 N anterior force), an inversion test (3 Nm inversion moment), an internal rotation test (3 Nm internal rotation moment), and a combined loading test (9 Nm inversion and internal rotation moment as well as 1800 N compressive force). Our overall results show that the two anatomic reconstruction techniques were superior to the non-anatomic reconstruction techniques in stabilizing both the talocrural and subtalar joints. Restricted subtalar joint motion, which was mainly observed in the Watson-Jones and Chrisman-Snook techniques, was not seen in the anatomic reconstructions. The Evans technique was beneficial for the subtalar joint as it does not restrict subtalar motion, though it was insufficient for restoring talocrural joint inversion. The anatomic reconstruction techniques best recovered ankle stability.

  20. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  1. Using machine learning to accelerate sampling-based inversion

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
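    A Gaussian Process surrogate of the forward operator can be built from a handful of expensive evaluations and then queried cheaply during sampling. The sketch below is a minimal 1-D illustration with an invented forward model; a practical implementation would also track the GP's predictive variance to decide when to fall back on the accurate solver.

```python
import numpy as np

# Expensive forward model stand-in (e.g. a synthetic-seismogram calculation).
def forward(m):
    return np.sin(3.0 * m) + 0.5 * m

# Evaluate the true model at a modest number of training points...
X = np.linspace(-2, 2, 25)
y = forward(X)

# ...and fit a Gaussian Process surrogate with an RBF kernel.
def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(X, X) + 1e-8 * np.eye(len(X))   # small jitter for numerical stability
alpha = np.linalg.solve(K, y)

def surrogate(m):
    """Cheap GP mean prediction, replacing the forward solver during sampling."""
    return rbf(np.atleast_1d(m), X) @ alpha

m_test = np.linspace(-1.9, 1.9, 200)
err = np.max(np.abs(surrogate(m_test) - forward(m_test)))
print(err)
```

    As sampling proceeds, newly accepted models can be added to `X` and the surrogate refit, which is the refinement step the abstract describes.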

  2. Deformation measurement for a rotating deformable lap based on inverse fringe projection

    NASA Astrophysics Data System (ADS)

    Liao, Min; Zhang, Qican

    2015-03-01

    The active deformable lap (also known as a stressed lap) is an efficient polishing tool in optical manufacturing. Measuring the dynamic deformation caused by external forces on a deformable lap is important, helping opticians ensure that a deformable lap performs as expected. In this paper, a manual deformable lap was designed to simulate the dynamic deformation of an active stressed lap, and a measurement system was developed based on the inverse projected fringe technique to restore the 3D shape. A redesigned inverse fringe was projected onto the surface of the measured lap, so that the deformations of the tested lap become much more apparent and can be easily and quickly evaluated by Fourier fringe analysis. Compared with conventional projection, this technique makes deformations more apparent, and it should be a promising one for deformation measurement of the active stressed lap in optical manufacturing.

  3. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. To overcome the nonsystematic design and long training times of nonlinear adaptive inverse control, we propose an approximate transformation technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
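A minimal sketch of the linear-in-weights idea behind a Chebyshev-polynomial-based inverse model: fixed polynomial features of the measured output feed a model that is linear in its weights, so training reduces to a single least-squares solve. The tanh "plant", basis degree and training range are assumptions standing in for the magnetic bearing dynamics.

```python
import numpy as np

# Toy plant: a simple invertible nonlinearity (an illustrative assumption,
# not the paper's magnetic bearing model).
def plant(u):
    return np.tanh(u)

# Training data for the *inverse* mapping y -> u.
u = np.linspace(-0.9, 0.9, 200)
y = plant(u)

# Chebyshev basis of the output: fixed features + linear weights,
# so "training" is one least-squares solve rather than slow backprop.
deg = 7
Phi = np.polynomial.chebyshev.chebvander(y, deg)
w, *_ = np.linalg.lstsq(Phi, u, rcond=None)

def inverse_model(y_desired):
    """Command u that should drive the plant output to y_desired."""
    return np.polynomial.chebyshev.chebvander(np.atleast_1d(y_desired), deg) @ w

# Feeding the inverse model's command through the plant recovers the target.
target = 0.5
u_cmd = inverse_model(target)[0]
recovered = plant(u_cmd)
```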

  4. Simulation studies of phase inversion in agitated vessels using a Monte Carlo technique.

    PubMed

    Yeo, Leslie Y; Matar, Omar K; Perez de Ortiz, E Susana; Hewitt, Geoffrey F

    2002-04-15

    A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed.

  5. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surf zone to the shoreline changes frequently and actively as wave energy interacts with the seafloor. Methodologies for measuring bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys and airborne bathymetric lidar to inversion techniques applied to standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and its own spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, for increasing the accuracy of bathymetry estimation techniques. In this work, we take an initial step toward a complete framework for estimating nearshore bathymetry by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed from the direct measurements using linear wave theory. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and could therefore reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements from sonic altimeters.
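As a sketch of the model-based inversion step, the following toy ensemble Kalman filter update estimates a single depth from a wave-celerity observation via the shallow-water relation c = sqrt(g*h). The prior spread, observation error and ensemble size are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate one depth h from a wave-celerity observation,
# with c = sqrt(g * h) acting as the nonlinear observation operator.
g = 9.81
h_true = 4.0
obs = np.sqrt(g * h_true) + rng.normal(0.0, 0.05)
obs_var = 0.05 ** 2

# Prior ensemble of depths, spread around a deliberately poor first guess.
ens = rng.normal(6.0, 1.5, size=200)
pred = np.sqrt(g * np.clip(ens, 0.1, None))   # predicted observations

# Ensemble Kalman update: K = cov(h, c) / (var(c) + R), applied member-wise
# with perturbed observations.
cov_hc = np.cov(ens, pred)[0, 1]
K = cov_hc / (np.var(pred, ddof=1) + obs_var)
perturbed = obs + rng.normal(0.0, 0.05, size=ens.size)
ens_post = ens + K * (perturbed - pred)

h_est = ens_post.mean()   # posterior mean pulls toward the true depth
```

In the actual framework the state is a gridded bathymetry and the observation operator involves the wave model, but the update has this same structure.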

  6. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence for collecting transversal relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies across different windows, depending on prior knowledge or customer requirements. We use an entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values that cause inversion instability. A hybrid algorithm combining iterative TSVD with a simultaneous iterative reconstruction technique is implemented to achieve global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in the future.
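The TSVD step can be illustrated on a toy multi-exponential echo train. The kernel discretization, noise level and simple threshold-based truncation rule below are assumptions for the sketch, not the paper's entropy-based criterion.

```python
import numpy as np

# Forward problem: echo train y = K f, where K[i, j] = exp(-t_i / T2_j)
# maps a T2 amplitude spectrum f to measured echoes.
t = np.linspace(0.001, 1.0, 300)        # echo times (s)
T2 = np.logspace(-3, 0, 40)             # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])

# Synthetic spectrum with two relaxation components, plus noise.
f_true = np.zeros(T2.size)
f_true[10] = 1.0
f_true[30] = 0.5
rng = np.random.default_rng(1)
y = K @ f_true + rng.normal(0.0, 1e-3, t.size)

# Truncated SVD: discard the small singular values that amplify noise.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-4 * s[0]))        # truncation level (assumed rule)
f_est = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

resid = np.linalg.norm(K @ f_est - y)   # data fit of the truncated solution
```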

  7. An Innovations-Based Noise Cancelling Technique on Inverse Kepstrum Whitening Filter and Adaptive FIR Filter in Beamforming Structure

    PubMed Central

    Jeong, Jinsoo

    2011-01-01

    This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in a beamforming structure. The inverse kepstrum method uses an innovations-whitened form of one acoustic path transfer function between a reference microphone sensor and a noise source, so that the rear-end reference signal becomes a whitened sequence feeding a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter together with a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part of the ratio of the overall combined transfer functions. The test results show that the adaptive FIR filter is more effective in the beamforming structure than in an adaptive noise cancelling (ANC) structure, in terms of signal distortion of the desired signal and reduction of noise with nonminimum-phase components. In addition, the inverse kepstrum method reaches almost the same convergence level in estimating noise statistics while using fewer adaptive FIR filter weights than the kepstrum method, and hence offers better computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in the beamforming structure shows less signal distortion in the desired signal than either the front-end kepstrum method or the front-end inverse kepstrum method in the beamforming structure. PMID:22163987
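A stripped-down LMS adaptive FIR filter (without the kepstrum whitening front end) illustrates the noise cancelling core: the filter learns the acoustic path from the reference noise to the primary sensor, and the residual error is the cleaned signal. The path coefficients, filter length and step size are assumed for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference noise, and its version filtered by an assumed acoustic path
# before reaching the primary sensor.
n = rng.normal(size=5000)                       # reference noise input
path = np.array([0.6, 0.3, 0.1])                # assumed acoustic path FIR
d_noise = np.convolve(n, path, mode="full")[:n.size]
s = np.sin(2 * np.pi * 0.01 * np.arange(n.size))   # desired signal
d = s + d_noise                                 # primary sensor signal

# LMS adaptive FIR filter estimating the noise path from the reference.
L, mu = 8, 0.01
w = np.zeros(L)
e = np.zeros(n.size)
for i in range(L, n.size):
    x = n[i - L + 1:i + 1][::-1]                # latest L reference samples
    y_hat = w @ x                               # estimated noise at sensor
    e[i] = d[i] - y_hat                         # error = cleaned output
    w += mu * e[i] * x                          # LMS weight update

# After convergence the residual error tracks the desired sinusoid.
tail_err = np.mean((e[-1000:] - s[-1000:]) ** 2)
```

The paper's whitening front end would precede this loop, speeding convergence by decorrelating the reference input.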

  8. Spectral line inversion for sounding of stratospheric minor constituents by infrared heterodyne technique from balloon altitudes

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Shapiro, G. L.; Allario, F.; Alvarez, J. M.

    1981-01-01

    A combination of two different techniques for the inversion of infrared laser heterodyne measurements of tenuous gases in the stratosphere by solar occultation is presented, which incorporates the advantages of each technique. An experimental approach and inversion technique are developed that optimize the retrieval of concentration profiles by incorporating the onion-peel scheme into the spectral inversion technique. A description of an infrared heterodyne spectrometer and the mode of observations for solar occultation measurements is presented, and the results of inversions of synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are examined. A comparison between the new technique and one of the current techniques indicates that considerable improvement in the accuracy of the retrieved profiles can be achieved. It is found that noise affects the accuracy of both techniques, but not in a straightforward manner, since the noise level, noise propagation through the inversion, and the number of scans leading to an optimum retrieval all interact.
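The onion-peel idea, that each successively deeper tangent ray adds exactly one new unknown layer, so the profile can be solved top-down by forward substitution, can be sketched on a toy spherical-shell geometry (the radii and layer densities below are arbitrary assumptions):

```python
import numpy as np

# Spherical-shell atmosphere: retrieve layer densities from slant-path
# column measurements, peeling from the top layer downward.
n_layers = 6
R = 6370.0                                     # planet radius, km (assumed)
edges = R + 20.0 + 5.0 * np.arange(n_layers + 1)[::-1]   # top edge first

# Geometry matrix: A[i, j] = slant path of tangent ray i through layer j.
# Ray i is tangent at edges[i + 1], so it crosses only layers 0..i.
A = np.zeros((n_layers, n_layers))
for i in range(n_layers):
    rt = edges[i + 1]                          # tangent radius of ray i
    for j in range(i + 1):
        lo, hi = edges[j + 1], edges[j]
        A[i, j] = 2.0 * (np.sqrt(hi**2 - rt**2)
                         - np.sqrt(max(lo**2 - rt**2, 0.0)))

x_true = np.array([1.0, 2.0, 4.0, 6.0, 5.0, 3.0])   # layer densities
y = A @ x_true                                 # slant-column "measurements"

# Onion peel: A is lower triangular, so solve top-down; each new ray
# introduces exactly one new unknown layer.
x = np.zeros(n_layers)
for i in range(n_layers):
    x[i] = (y[i] - A[i, :i] @ x[:i]) / A[i, i]
```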

  9. Wave tilt sounding of multilayered structures. [for probing of stratified planetary surface electrical properties and thickness

    NASA Technical Reports Server (NTRS)

    Warne, L.; Jaggard, D. L.; Elachi, C.

    1979-01-01

    The relationship between the wave tilt and the electrical parameters of a multilayered structure is investigated. Particular emphasis is placed on the inverse problem associated with the sounding of planetary surfaces. An inversion technique based on multifrequency wave tilt is proposed and demonstrated with several computer models. Close agreement is found between the electrical parameters used in the models and those recovered by the inversion.

  10. Unscented Kalman filter assimilation of time-lapse self-potential data for monitoring solute transport

    NASA Astrophysics Data System (ADS)

    Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong

    2017-08-01

    Monitoring the extent and evolution of contaminant plumes in local and regional groundwater systems around existing landfills is critical for contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we built a Kalman filtering cycle to perform time-lapse data assimilation for monitoring solute transport in an experiment using a bench-scale physical model. The data assimilation combines evolution modeling based on a random-walk model with observation correction based on self-potential forward modeling. Monitoring self-potential data can thus be inverted by the data assimilation technique, allowing us to reconstruct the dynamic process of the contaminant plume instead of relying on traditional frame-by-frame static inversion, which may introduce inversion artifacts. The data assimilation inversion algorithm was evaluated on noise-added synthetic time-lapse self-potential data; the numerical experiment demonstrates the validity, accuracy and noise tolerance of the dynamic inversion. To validate the proposed algorithm, we also conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimics a real-world contaminant monitoring setup. The results of the physical experiments support the idea that the data assimilation method is a potentially useful approach for characterizing the transport of contaminant plumes when the unscented Kalman filter (UKF) is applied to field time-lapse self-potential data.
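A minimal scalar unscented transform, the building block of the UKF cycle, shows how deterministically chosen sigma points propagate a Gaussian through a nonlinearity. The quadratic map and kappa = 2 weighting are illustrative choices, not the paper's forward model.

```python
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """Scalar unscented transform: propagate (mean, var) through f."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mean, mean + spread, mean - spread])   # sigma points
    w = np.array([kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)])
    fx = f(sigma)
    m = w @ fx                          # propagated mean
    v = w @ (fx - m) ** 2               # propagated variance
    return m, v

# For y = x**2 with x ~ N(mu, s2) the exact mean is mu**2 + s2 and the
# exact variance is 4*mu**2*s2 + 2*s2**2; plain linearization would give
# mu**2 and miss the bias term entirely, while the UT captures both.
mu, s2 = 2.0, 0.25
m, v = unscented_transform(mu, s2, lambda x: x ** 2)
```

Inside the UKF cycle, this transform replaces the Jacobian-based propagation of the extended Kalman filter in both the prediction and the observation-correction steps.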

  11. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    The satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
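The training/operational split can be sketched as follows, with a toy square-root "forward model" and a small polynomial ridge regression standing in for the radiative-transfer code and the learning machine; all names, ranges and parameters are assumptions.

```python
import numpy as np

# Toy "full-physics" forward model standing in for a radiative-transfer
# code: it maps a state (e.g. a plume height) to one radiance value.
def forward(h):
    return np.sqrt(h)        # illustrative assumption, not the real physics

# Training phase: sample states from the prior range, run the forward
# model once per sample, and fit an inversion operator radiance -> state.
rng = np.random.default_rng(3)
h_train = rng.uniform(1.0, 15.0, 400)
r_train = forward(h_train)

deg = 5
Phi = np.vander(r_train, deg + 1)
w = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(deg + 1), Phi.T @ h_train)

# Operational phase: inversion is a single dot product per measurement,
# with no iterative forward-model calls at all.
h_true = 7.0
h_ret = (np.vander(np.atleast_1d(forward(h_true)), deg + 1) @ w)[0]
err = abs(h_ret - h_true)
```

All the expensive physics is paid for once, during training; this is what makes the operational phase fast enough for near-real-time processing.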

  12. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by exploiting the symmetry of the numerical Green's function derived from the mixed finite element method proposed by Nedelec for Maxwell's equations, without explicitly calculating the Jacobian matrix. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions naturally separate into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm with three inversion results for synthetic data.

  13. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed technique is discussed: a method approximating the PSF (point spread function) by convolution with the inversion MRM (model resolution matrix) is designed to demonstrate the correctness of the deconvolution model enhancement. Then, a total-variation-regularized blind deconvolution algorithm for inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter and enhance the inversion model on the basis of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to assess the validity of the theory and the algorithm. For the 1D linear inversion problem, the relative convolution approximation error is only 0.15%, supporting the proposed PSF convolution approximation. A 2D synthetic model enhancement experiment is also presented: after deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, artifacts in the inversion model are suppressed, and, according to numerical statistical analysis, the enhanced result is closer to the actual model than the original inversion model, with the overall model precision increasing by 75%. All experiments show that the structural details and numerical precision of the inversion model are significantly improved, especially in the anomalous regions. The correlation coefficients between the enhanced inversion model and the actual model are shown in Fig. 1; the figure illustrates that more of the detailed structure of the actual model is recovered by the proposed enhancement algorithm. The proposed enhancement method can provide clearer insight into inversion results and support better-informed decisions.
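As a simplified stand-in for the total-variation blind deconvolution described above, a one-dimensional Wiener deconvolution shows how treating the PSF as a low-pass filter lets its (regularized) inverse sharpen a blurred model. The PSF width and regularization level are assumed values for the sketch.

```python
import numpy as np

# "True" model with sharp edges, blurred by a Gaussian PSF (the PSF plays
# the role of the inversion's model resolution matrix acting as a blur).
n = 128
x = np.zeros(n)
x[40:60] = 1.0
x[80:85] = -0.5

psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 3.0 ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))          # zero-phase transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))

# Wiener deconvolution: regularized inverse filter in the frequency domain.
snr = 1e3                                      # assumed signal-to-noise level
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
sharpened = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

# The deconvolved model is closer to the true model than the blurred one.
err_blur = np.linalg.norm(blurred - x)
err_sharp = np.linalg.norm(sharpened - x)
```

The paper's approach differs in using total-variation regularization and a blind (PSF-estimating) formulation, which preserve edges better than this quadratic filter.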

  14. Model based Inverse Methods for Sizing Cracks of Varying Shape and Location in Bolt hole Eddy Current (BHEC) Inspections (Postprint)

    DTIC Science & Technology

    2016-02-10

    using bolt hole eddy current (BHEC) techniques. Data was acquired for a wide range of crack sizes and shapes, including mid-bore, corner and through...to select the most appropriate VIC-3D surrogate model for the subsequent crack sizing inversion step. Inversion results for select mid-bore, through and...the flaw. 15. SUBJECT TERMS Bolt hole eddy current (BHEC); mid-bore, corner and through-thickness crack types; VIC-3D generated surrogate models

  15. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-11-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Adding lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory, as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8 h ozone concentrations, while the ground-based inversion indicates that the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model's performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
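The DKF inversion loop can be reduced to a scalar sketch: a single emission scale factor is updated sequentially from column observations, with an assumed DDM-style sensitivity playing the role of the Jacobian. All numerical values are illustrative, not from the study.

```python
import numpy as np

# Discrete Kalman filter inversion of one NOx emission scale factor.
# The model NO2 column responds (near-)linearly to the scale factor; S is
# the DDM-computed sensitivity d(column)/d(scale) (assumed value).
S = 3.0e15          # molecules/cm^2 per unit scale factor
truth = 1.5         # true emission scale relative to the a priori

scale, P = 1.0, 0.25          # a priori scale factor and its variance
R = (0.3e15) ** 2             # observation-error variance

rng = np.random.default_rng(4)
for _ in range(10):           # sequential "observations"
    obs = S * truth + rng.normal(0.0, 0.3e15)
    pred = S * scale
    K = P * S / (S * P * S + R)          # Kalman gain
    scale = scale + K * (obs - pred)     # update the scale factor
    P = (1.0 - K * S) * P                # shrink its uncertainty

# The posterior scale factor converges toward the true multiplier.
```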

  17. Developing the remote sensing-based water environmental model for monitoring alpine river water environment over Plateau cold zone

    NASA Astrophysics Data System (ADS)

    You, Y.; Wang, S.; Yang, Q.; Shen, M.; Chen, G.

    2017-12-01

    The alpine river water environment on plateaus (such as the Tibetan Plateau, China) is a key indicator of water security and environmental security in China. Owing to the complex terrain and varied surface eco-environments, it is very difficult to monitor the water environment over the complex land surface of the plateau. The increasing availability of remote sensing techniques with appropriate spatiotemporal resolutions, broad coverage and low cost allows effective monitoring of the river water environment on the plateau, particularly in remote and inaccessible areas that lack in situ observations. In this study, we propose a remote sensing-based monitoring model that uses multi-platform remote sensing data to monitor the alpine river environment. Parameterization methodologies based on satellite remote sensing data and field observations are proposed for monitoring the water environmental parameters (including chlorophyll-a concentration (Chl-a), water turbidity (WT) or water clarity (SD), total nitrogen (TN), total phosphorus (TP), and total organic carbon (TOC)) over China's southwest highland rivers, such as the Brahmaputra. First, because most sensors do not collect multiple observations of a target in a single pass, data from multiple orbits or acquisition times may be used, and varying atmospheric and irradiance effects must be reconciled; we therefore developed multi-sensor data correction and atmospheric correction techniques for the various types of satellite data. Second, we built an inversion spectral database derived from long-term remote sensing data and field sampling data, and developed a high-precision inversion model for the southwest highland rivers backed by this database, using multi-sensor remote sensing information optimization and collaboration techniques. 
Third, taking the middle reaches of the Brahmaputra River as the study area, we validated the key water environmental parameters and further improved the inversion model. The results indicate that the proposed water environment inversion model retrieves alpine water environmental parameters well and can improve monitoring and early-warning capability for the alpine river water environment in the future.

  18. Mathematical model of cycad cones' thermogenic temperature responses: inverse calorimetry to estimate metabolic heating rates.

    PubMed

    Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I

    2012-12-21

    A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the biophysics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
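The inverse-calorimetry idea, recovering the heating rate Q(t) by evaluating the energy-balance terms with finite differences of the measured temperature, can be sketched on a lumped model C*dT/dt = Q(t) - k*(T - T_amb). The heat capacity, loss coefficient and pulse shape are assumed values, not the paper's fitted parameters.

```python
import numpy as np

# Lumped energy balance with assumed (illustrative) parameters.
C, k_loss, T_amb = 5000.0, 2.0, 20.0     # J/K, W/K, degC
dt = 60.0                                 # sampling interval, s

t = np.arange(0, 6 * 3600, dt)
Q_true = 5.0 * np.exp(-0.5 * ((t - 3 * 3600) / 3600.0) ** 2)  # W, midday pulse

# Forward-simulate the "measured" cone temperatures (explicit Euler).
T = np.empty(t.size)
T[0] = T_amb
for i in range(t.size - 1):
    T[i + 1] = T[i] + dt * (Q_true[i] - k_loss * (T[i] - T_amb)) / C

# Inverse calorimetry: Q_est = C * dT/dt + k_loss * (T - T_amb),
# i.e. rearrange the energy balance and difference the temperature record.
dTdt = np.gradient(T, dt)
Q_est = C * dTdt + k_loss * (T - T_amb)

peak_err = abs(Q_est.max() - Q_true.max())   # recovered vs true peak heating
```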

  19. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is better suited to probabilistic (or deterministic) inversion than to unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
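For comparison with the deep-network parameterization, the PCA baseline it is benchmarked against can be sketched on toy "realizations" that genuinely live in a low-dimensional subspace (the sinusoidal training set and compression level are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Training set of "prior model realizations": smooth 1-D random fields
# built from a few sinusoidal modes (a toy stand-in for geological media).
n_real, n_cell = 500, 100
xg = np.linspace(0, 1, n_cell)
modes = np.stack([np.sin(np.pi * m * xg) for m in range(1, 6)])
fields = rng.normal(size=(n_real, 5)) @ modes

# PCA via SVD of the centered training matrix.
mean = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)

k = 5   # retained components: compression ratio n_cell / k = 20 here

def encode(f):
    """Field -> low-dimensional code used as the inversion parameter."""
    return (f - mean) @ Vt[:k].T

def decode(z):
    """Low-dimensional code -> field realization."""
    return mean + z @ Vt[:k]

# A new realization from the same prior is reproduced from its code.
f_new = rng.normal(size=5) @ modes
recon_err = np.linalg.norm(decode(encode(f_new)) - f_new) / np.linalg.norm(f_new)
```

For channelized binary media the prior is far from such a linear subspace, which is exactly where the variational-autoencoder parameterization outperforms this linear baseline.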

  20. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    Probabilistic inversion techniques are superior to the classical optimization-based approach in all but one aspect: they require exhaustive computations, which prohibits their use in very large inverse problems such as global seismic tomography or waveform inversion, to name just two. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make such large inverse tasks manageable within the probabilistic framework. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into a probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  1. Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT

    NASA Astrophysics Data System (ADS)

    Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David

    2010-11-01

    Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of the plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study including six head-and-neck (HN) cases, and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry. In the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk. The segment shapes were then exported to Pinnacle3 TPS followed by segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module which is an aperture-based optimization method. All VMAT plans were delivered using an Elekta Synergy linear accelerator and the plan comparisons were made in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing, but less conformal dose distributions as compared to the other two approaches. 
For the more complex HN cases, the anatomy-based approach was not able to provide clinically acceptable VMAT plans, while highly conformal dose distributions were obtained using both the aperture-based and the fluence-based inverse planning techniques. The aperture-based approach provides better dose conformity than the fluence-based technique in complex cases.

  2. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    PubMed

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations use the Gaussian Austrian regulatory dispersion model. With this method, emission data can be obtained, though only when the measurement station lies downwind (leeward) of the plant. Using inverse transform sampling, a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For model validation, the measured ambient concentrations are compared with the ambient concentrations calculated from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. The inverse transform sampling method is thus a useful supplement for calculating emission rates with the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
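Inverse transform sampling itself is compact: sort the available emission-rate sample, form the empirical CDF, and map uniform draws through its inverse. The lognormal toy data below is an assumption standing in for the leeward-sector measurements.

```python
import numpy as np

rng = np.random.default_rng(6)

# Emission rates available only for "leeward" wind directions (toy data).
observed = rng.lognormal(mean=1.0, sigma=0.5, size=300)

# Empirical CDF of the observed rates.
sorted_obs = np.sort(observed)
cdf = np.arange(1, sorted_obs.size + 1) / sorted_obs.size

def sample(n):
    """Draw synthetic emission rates by inverting the empirical CDF."""
    u = rng.uniform(0.0, 1.0, n)
    idx = np.searchsorted(cdf, u)
    return sorted_obs[np.clip(idx, 0, sorted_obs.size - 1)]

# Synthetic rates for the wind directions with no measurements: they
# reproduce the empirical distribution of the observed sector.
synthetic = sample(5000)
med_gap = abs(np.median(synthetic) - np.median(observed))
```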

  3. Polarimetric SAR Interferometry based modeling for tree height and aboveground biomass retrieval in a tropical deciduous forest

    NASA Astrophysics Data System (ADS)

    Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.

    2017-08-01

The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. A PolSAR-based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. Forest tree height retrieval used a PolInSAR coherence-based modeling approach. Two techniques - Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI) - for forest height estimation are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR-based forest height models performed poorly at discriminating forest vegetation, and as a result height values were obtained over river channels and plains. Overestimation of forest height was also noticed in several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced for forest area identification and more accurate height estimation in non-forested regions.
IWCM-based modeling for forest AGB retrieval showed an R2 value of 0.5, an RMSE of 62.73 t ha-1 and a percent accuracy of 51%. TSI-based PolInSAR inversion modeling showed the most accurate results for forest height estimation. The correlation between the field-measured forest height and the tree height estimated using the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggests that the PolInSAR coherence-based modeling approach has significant potential for retrieval of forest biophysical parameters.

  4. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several techniques are employed to simulate such a neural system. 1) The Kronecker product of matrices is introduced to transform the matrix differential equation (MDE) into a vector differential equation (VDE), so that a standard ordinary differential equation (ODE) is finally obtained. 2) The MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and the efficacy of the gradient-based neural network for online constant matrix inversion.
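The same pipeline — vectorize the matrix ODE, hand it to a standard initial-value solver — can be sketched in Python rather than MATLAB (this is not the authors' code; the gain `gamma`, the test matrix, and the linear activation are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
n = A.shape[0]
gamma = 50.0  # assumed convergence gain (design parameter)

def rhs(t, x):
    # Map the vectorized state back to matrix form -- the VDE <-> MDE
    # correspondence that the Kronecker-product transformation formalizes
    X = x.reshape(n, n)
    dX = -gamma * A.T @ (A @ X - np.eye(n))  # linear-activation GNN dynamics
    return dX.ravel()

# Integrate from a zero initial state; X(t) converges to inv(A)
sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(n * n), rtol=1e-9, atol=1e-12)
X_final = sol.y[:, -1].reshape(n, n)
```

`solve_ivp` here plays the role of "ode45" (both default to an explicit Runge–Kutta 4(5) pair), and the steady state of the dynamics is the matrix inverse.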

  5. Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations

    NASA Technical Reports Server (NTRS)

    Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-01-01

Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8-h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
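A minimal sketch of the discrete Kalman filter update at the heart of such an inversion, with made-up DDM-style sensitivities standing in for the CAMx output (the regions, sensitivity values, and noise level below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical DDM-style sensitivities: d(modeled NO2 column)/d(emission scale)
# for 5 observation cells and 2 emission regions (values are illustrative).
H = np.array([[2.0, 0.1],
              [1.5, 0.3],
              [0.2, 1.8],
              [0.1, 2.2],
              [0.9, 0.9]])
true_scale = np.array([1.4, 0.6])                  # "unknown" emission scaling
y = H @ true_scale + rng.normal(0.0, 0.01, 5)      # observed NO2 signal

x = np.ones(2)              # a priori scaling factors (no adjustment)
P = np.eye(2)               # a priori error covariance
R = np.eye(5) * 0.01 ** 2   # observation error covariance

for _ in range(10):         # iterate the DKF update toward convergence
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (y - H @ x)                        # update scaling factors
    P = (np.eye(2) - K @ H) @ P                    # shrink the error covariance
```

The recovered vector `x` plays the role of the regional scaling factors (e.g., the 1.02-1.84 range reported for the satellite-based inversion), and `P` quantifies their remaining uncertainty.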

6. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

A framework is presented within which we provide rigorous estimates of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be implemented naturally within Bayesian inversion. Reliable estimation of model parameters and their uncertainties is thus possible without arbitrary regularizations or parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data from the North Korean nuclear explosion tests. The combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties for each of the processes, enables more quantitative monitoring and discrimination of seismic events.

  7. The investigation of advanced remote sensing, radiative transfer and inversion techniques for the measurement of atmospheric constituents

    NASA Technical Reports Server (NTRS)

    Deepak, Adarsh; Wang, Pi-Huan

    1985-01-01

    The research program is documented for developing space and ground-based remote sensing techniques performed during the period from December 15, 1977 to March 15, 1985. The program involved the application of sophisticated radiative transfer codes and inversion methods to various advanced remote sensing concepts for determining atmospheric constituents, particularly aerosols. It covers detailed discussions of the solar aureole technique for monitoring columnar aerosol size distribution, and the multispectral limb scattered radiance and limb attenuated radiance (solar occultation) techniques, as well as the upwelling scattered solar radiance method for determining the aerosol and gaseous characteristics. In addition, analytical models of aerosol size distribution and simulation studies of the limb solar aureole radiance technique and the variability of ozone at high altitudes during satellite sunrise/sunset events are also described in detail.

  8. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
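The core inversion step — recovering a source time function from a recorded waveform given a modeled Green's function — can be sketched as a regularized linear deconvolution. The Green's function and source pulse below are synthetic stand-ins, not outputs of the finite-difference simulations:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
n, dt = 200, 0.01
t = np.arange(n) * dt

# Hypothetical Green's function and "true" source time function
g = np.exp(-t / 0.05)                            # simple decaying impulse response
s_true = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)  # smooth source pulse

# Lower-triangular Toeplitz matrix so that p = G @ s is a causal convolution
G = dt * np.tril(toeplitz(g))
p = G @ s_true + rng.normal(0.0, 1e-6, n)        # synthetic "observed" waveform

# Tikhonov-regularized least squares for the source time function
lam = 1e-8
s_est = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ p)
```

In the paper's setting the columns of `G` would be the finite-difference Green's functions computed through the realistic atmosphere; the recovered source strength is then mapped to yield through the air blast model.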

  10. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques; however, the FE technique suffers from meshing difficulties, while the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to combine the advantages of both, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is used to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.

  11. Improving the geological interpretation of magnetic and gravity satellite anomalies

    NASA Technical Reports Server (NTRS)

    Hinze, William J.; Braile, Lawrence W.; Vonfrese, Ralph R. B.

    1987-01-01

    Quantitative analysis of the geologic component of observed satellite magnetic and gravity fields requires accurate isolation of the geologic component of the observations, theoretically sound and viable inversion techniques, and integration of collateral, constraining geologic and geophysical data. A number of significant contributions were made which make quantitative analysis more accurate. These include procedures for: screening and processing orbital data for lithospheric signals based on signal repeatability and wavelength analysis; producing accurate gridded anomaly values at constant elevations from the orbital data by three-dimensional least squares collocation; increasing the stability of equivalent point source inversion and criteria for the selection of the optimum damping parameter; enhancing inversion techniques through an iterative procedure based on the superposition theorem of potential fields; and modeling efficiently regional-scale lithospheric sources of satellite magnetic anomalies. In addition, these techniques were utilized to investigate regional anomaly sources of North and South America and India and to provide constraints to continental reconstruction. Since the inception of this research study, eleven papers were presented with associated published abstracts, three theses were completed, four papers were published or accepted for publication, and an additional manuscript was submitted for publication.

  12. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
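Every MCMC step in such an inversion evaluates the 1D MT forward model for a proposed layered model. A compact sketch of that forward calculation (the standard impedance recursion for a layered half-space; the layer values in the test are arbitrary) is:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space

def mt1d_impedance(resistivities, thicknesses, freq):
    """Surface impedance of a 1D layered Earth via the impedance recursion.

    resistivities: ohm-m per layer; the last entry is the basal half-space.
    thicknesses: metres for each layer except the half-space.
    """
    omega = 2 * np.pi * freq
    k = np.sqrt(1j * omega * MU0 / np.asarray(resistivities, dtype=complex))
    Z = 1j * omega * MU0 / k[-1]        # intrinsic impedance of the half-space
    for j in range(len(thicknesses) - 1, -1, -1):  # recurse bottom-up
        zj = 1j * omega * MU0 / k[j]
        t = np.tanh(k[j] * thicknesses[j])
        Z = zj * (Z + zj * t) / (zj + Z * t)
    return Z

def apparent_resistivity(resistivities, thicknesses, freq):
    Z = mt1d_impedance(resistivities, thicknesses, freq)
    return abs(Z) ** 2 / (2 * np.pi * freq * MU0)
```

A useful sanity check is that a homogeneous half-space returns its own resistivity as the apparent resistivity at any frequency, and that at long periods a layered model "sees through" to the basement.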

  13. Surface Wave Mode Conversion due to Lateral Heterogeneity and its Impact on Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Datta, A.; Priestley, K. F.; Chapman, C. H.; Roecker, S. W.

    2016-12-01

Surface wave tomography based on great circle ray theory has certain limitations which become increasingly significant with increasing frequency. One such limitation is the assumption of different surface wave modes propagating independently from source to receiver, valid only in smoothly varying media. In the real Earth, strong lateral gradients can cause significant interconversion among modes, thus potentially wreaking havoc with ray theory based tomographic inversions that make use of multimode information. The issue of mode coupling (with either normal modes or surface wave modes) for accurate modelling and inversion of body wave data has received significant attention in the seismological literature, but its impact on inversion of surface waveforms themselves remains much less understood. We present an empirical study with synthetic data to investigate this problem with a two-fold approach. In the first part, 2D forward modelling, using a new finite difference method that allows one mode to be modelled at a time, is used to build a general picture of energy transfer among modes as a function of the size, strength and sharpness of lateral heterogeneities. In the second part, we use the example of a multimode waveform inversion technique based on the Cara and Leveque (1987) approach of secondary observables, to invert our synthetic data and assess how mode conversion can affect the process of imaging the Earth. We pay special attention to ensuring that any biases or artefacts in the resulting inversions can be unambiguously attributed to mode conversion effects. This study helps pave the way towards the next generation of (non-numerical) surface wave tomography techniques geared to exploit higher frequencies and mode numbers than are typically used today.

  14. Invariant-Based Inverse Engineering of Crane Control Parameters

    NASA Astrophysics Data System (ADS)

    González-Resines, S.; Guéry-Odelin, D.; Tobalina, A.; Lizuain, I.; Torrontegui, E.; Muga, J. G.

    2017-11-01

    By applying invariant-based inverse engineering in the small-oscillation regime, we design the time dependence of the control parameters of an overhead crane (trolley displacement and rope length) to transport a load between two positions at different heights with minimal final-energy excitation for a microcanonical ensemble of initial conditions. The analogy between ion transport in multisegmented traps or neutral-atom transport in moving optical lattices and load manipulation by cranes opens a route for a useful transfer of techniques among very different fields.

  15. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

Many waveform inversion algorithms have been proposed to construct subsurface velocity structures from seismic data sets. These algorithms suffer from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms that recover long-wavelength velocity models can avoid both the local minima problem and the effect of missing low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from the spectrograms of traces, in both the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features of the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features; however, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
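The decomposition of a trace into single-frequency spectrogram components can be sketched with a short-time Fourier transform; the two-wavelet trace below is synthetic and purely illustrative of how one frequency row of the spectrogram isolates one arrival:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 500.0  # sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)

# Synthetic trace: a 10 Hz wavelet near 0.5 s and a 30 Hz wavelet near 1.2 s
trace = (np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)
         + 0.5 * np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 1.2) / 0.1) ** 2))

f, tau, S = stft(trace, fs=fs, nperseg=128)  # spectrogram of the trace

# Keep only the spectrogram row nearest 10 Hz: one decomposed component
S_single = np.zeros_like(S)
i10 = np.argmin(np.abs(f - 10.0))
S_single[i10, :] = S[i10, :]
_, component_10hz = istft(S_single, fs=fs, nperseg=128)  # back to a trace
```

Because the 10 Hz row captures mainly the first wavelet, the reconstructed component is concentrated near 0.5 s; combining several such rows is the multi-frequency variant described above.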

  16. A regional high-resolution carbon flux inversion of North America for 2004

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.

    2010-05-01

    Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results, demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere (SiB3) model, both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker as well as many previous global inversions including the Transcom suite of inversions have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be where corrections to NEE imposed on forested regions on the east coast of the United States might be the same as that imposed on forests on the west coast of the United States while, in reality, there likely exist subtle differences in the two areas, both natural and anthropogenic. Our current inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) models is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. 
RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimated sink of 0.57 Pg/yr in North America. We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with regionally anomalously high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.

  17. Low-cost capacitor voltage inverter for outstanding performance in piezoelectric energy harvesting.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Richard, Claude; Guyomar, Daniel

    2010-01-01

The purpose of this paper is to propose a new scheme for piezoelectric energy harvesting optimization. The proposed enhancement relies on a new topology for inverting the voltage across a single capacitor with reduced losses. The increased inversion quality allows a much more effective energy harvesting process using the so-called synchronized switch harvesting on inductor (SSHI) nonlinear technique. It is shown that the proposed architecture, based on a 2-step inversion, increases the harvested power by a theoretical factor of up to the square root of 2 (i.e., a 40% gain) compared with classical SSHI, and by a factor greater than 1000% compared with the standard energy harvesting technique for realistic values of the inversion components. The proposed circuit, using only 4 digital switches and an intermediate capacitor, is also ultra-low power, because the inversion circuit does not require any external energy and the command signals are very simple.

  18. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.

  19. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such datasets requires the ability to efficiently solve and optimize the nonlinear time domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature for the Richards equation inversion explicitly calculates the sensitivity matrix using finite difference or automatic differentiation, however, for large scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
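The matrix-free idea can be sketched with SciPy's LinearOperator: only Jacobian-vector and adjoint products are supplied, and a Gauss-Newton step is obtained with conjugate gradients. The dense `J` below is a random stand-in for the implicit Richards-equation sensitivity, which in the real code is never formed:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n_data, m_params = 300, 400

# Random stand-in for the sensitivity; the actual implementation only ever
# exposes the two products below, never the matrix itself.
J = rng.normal(size=(n_data, m_params)) / np.sqrt(m_params)

def Jvec(v):
    """Forward sensitivity: action of J on a model perturbation."""
    return J @ v

def JTvec(w):
    """Adjoint sensitivity: action of J.T on a data-space vector."""
    return J.T @ w

# Matrix-free Gauss-Newton normal operator (J^T J + beta I)
beta = 1e-2
H = LinearOperator((m_params, m_params),
                   matvec=lambda v: JTvec(Jvec(v)) + beta * v)

residual = rng.normal(size=n_data)    # stand-in data residual
b = JTvec(residual)
step, info = cg(H, b, maxiter=2000)   # model update via conjugate gradients
```

Because CG only ever calls `matvec`, the memory cost is a handful of vectors regardless of how many distributed parameters are being estimated, which is what makes large 3D inversions tractable.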

  20. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

Image reconstruction in electrical resistivity tomography (ERT) is highly non-linear, sparse, and ill-posed. The inverse problem is especially severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that exploits the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been shown to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transformation (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, the interior-point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS effectively reconstructed the subsurface image at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
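The two building blocks named above — DCT-domain sparsity and iterative soft thresholding — can be sketched in a few lines. This uses plain IST rather than the two-step variant, and a generic random operator stands in for the ERT forward model; the threshold `tau` is an assumed tuning parameter:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(4)
n, m = 256, 120

# A model that is sparse in the DCT domain (3 nonzero coefficients)
coef_true = np.zeros(n)
coef_true[[3, 17, 40]] = [5.0, -3.0, 2.0]
model_true = idct(coef_true, norm="ortho")

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random stand-in for the forward operator
d = A @ model_true                         # noise-free observations

def ist(A, d, tau=0.05, n_iter=500):
    """Iterative soft thresholding with DCT-domain sparsity."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # stable gradient step size
    for _ in range(n_iter):
        x = x + step * A.T @ (d - A @ x)    # gradient step on the data misfit
        c = dct(x, norm="ortho")            # move to the sparse (DCT) domain
        c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)  # soft threshold
        x = idct(c, norm="ortho")           # back to model space
    return x

model_est = ist(A, d)
```

With fewer observations than model cells (120 versus 256), the sparsity prior is what makes the reconstruction possible; a smooth L2 penalty alone would blur the recovered edges.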

  1. Periodic order and defects in Ni-based inverse opal-like crystals on the mesoscopic and atomic scale

    NASA Astrophysics Data System (ADS)

    Chumakova, A. V.; Valkovskiy, G. A.; Mistonov, A. A.; Dyadkin, V. A.; Grigoryeva, N. A.; Sapoletova, N. A.; Napolskii, K. S.; Eliseev, A. A.; Petukhov, A. V.; Grigoriev, S. V.

    2014-10-01

The structure of inverse opal crystals based on nickel was probed on the mesoscopic and atomic levels by a set of complementary techniques, including scanning electron microscopy and synchrotron microradian and wide-angle diffraction. The microradian diffraction revealed mesoscopic-scale face-centered-cubic (fcc) ordering of spherical voids in the inverse opal-like structure, with a unit cell dimension of 750 ± 10 nm. The diffuse scattering data were used to map defects in the fcc structure as a function of the number of layers in the Ni inverse opal-like structure. The average lateral size of the mesoscopic domains is found to be independent of the number of layers. 3D reconstruction of the reciprocal space for inverse opal crystals of different thicknesses provided an indirect, depth-resolved study of the original opal templates. The microstructure and thermal response of the framework of the porous inverse opal crystal were examined using wide-angle powder x-ray diffraction. This artificial porous structure is built from nickel crystallites possessing the stacking faults and dislocations characteristic of nickel thin films.

  2. A machine learning approach as a surrogate of finite element analysis-based inverse method to estimate the zero-pressure geometry of human thoracic aorta.

    PubMed

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-05-09

Advances in structural finite element analysis (FEA) and medical imaging have made it possible to investigate the in vivo biomechanics of human organs such as blood vessels, for which organ geometries at the zero-pressure level need to be recovered. Although FEA-based inverse methods are available for zero-pressure geometry estimation, these methods typically require iterative computation, which is time-consuming and may not be suitable for time-sensitive clinical applications. In this study, using machine learning (ML) techniques, we developed an ML model to estimate the zero-pressure geometry of the human thoracic aorta given 2 pressurized geometries of the same patient at 2 different blood pressure levels. For the ML model development, an FEA-based method was used to generate a dataset of aorta geometries of 3125 virtual patients. The ML model, which was trained and tested on the dataset, is capable of recovering zero-pressure geometries consistent with those generated by the FEA-based method. Thus, this study demonstrates the feasibility and great potential of using ML techniques as a fast surrogate of FEA-based inverse methods to recover zero-pressure geometries of human organs. Copyright © 2018 John Wiley & Sons, Ltd.
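The surrogate idea — learn a direct map from two pressurized geometries to the zero-pressure geometry from a simulated training set — can be sketched with a toy linear model. Everything below (the virtual-patient generator, the inflation factors, the noise) is a hypothetical stand-in for the paper's 3125 FEA simulations and nonlinear ML model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the FEA-generated training set: each "geometry" is a
# vector of nodal coordinates for a virtual patient.
n_patients, n_nodes = 500, 50
zero_p = rng.normal(size=(n_patients, n_nodes))                 # zero-pressure shapes
press_a = 1.10 * zero_p + rng.normal(0.0, 0.01, zero_p.shape)   # lower-pressure state
press_b = 1.25 * zero_p + rng.normal(0.0, 0.01, zero_p.shape)   # higher-pressure state

X = np.hstack([press_a, press_b])   # inputs: two pressurized geometries
Y = zero_p                          # target: zero-pressure geometry

# Linear least-squares surrogate; a practical model would be nonlinear
# (e.g., a neural network trained on the FEA dataset)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_pred = X @ W
```

Once trained, evaluating `X @ W` for a new patient is essentially instantaneous, which is the clinical advantage over iterating the FEA-based inverse method.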

  3. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
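
    The matrix Newton iteration referred to here is compact enough to show directly. Below is a minimal sketch (not the authors' Cray-2 implementation): it refines the classical starting guess X₀ = Aᵀ/(‖A‖₁‖A‖∞) using only matrix products, which is exactly what makes the scheme so parallelizable, since each product can itself be computed with a Strassen-type algorithm.

```python
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    """Approximate A^{-1} via the quadratically convergent Newton iteration
    X_{k+1} = X_k (2I - A X_k), built almost entirely from matrix
    multiplications."""
    n = A.shape[0]
    I = np.eye(n)
    # Classical starting guess that guarantees convergence:
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X          # residual
        if np.linalg.norm(R) < tol:
            break
        X = X + X @ R          # equivalent to X (2I - A X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_inverse(A)
print(np.allclose(X @ A, np.eye(2)))   # True
```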

  4. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  5. Blending Two Major Techniques in Order to Compute [Pi]

    ERIC Educational Resources Information Center

    Guasti, M. Fernandez

    2005-01-01

    Three major techniques are employed to calculate [pi]: (i) the perimeter of polygons inscribed or circumscribed in a circle, (ii) calculus-based methods using integral representations of inverse trigonometric functions, and (iii) modular identities derived from the transformation theory of elliptic integrals. This note presents a…

  6. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent exploration targets of oil and gas resources lie in deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, detailed subsurface structures must be delineated, and the migration method is therefore an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images of complicated models even in the presence of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity models, which can be extracted in several ways. These days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of wave equations can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been used in waveform inversion, which accelerated the development of waveform inversion. In this study, we applied acoustic waveform inversion and reverse-time migration methods to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoir models. We first extracted subsurface material properties from acoustic waveform inversion and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, with the conjugate gradient method for optimization, and was performed using the frequency-selection strategy. Finally, the waveform inversion results showed that the carbonate reservoir models are clearly inverted and that migration images based on the inversion results are quite reliable. Reservoir models of different thicknesses were also examined, and the results revealed that the lower boundary of the reservoir was not delineated because of energy loss. From these results, it was noted that carbonate reservoirs can be properly imaged and interpreted by waveform inversion and reverse-time migration methods. This work was supported by the Energy Resources R&D program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2009201030001A, No. 2010T100200133) and the Brain Korea 21 project of Energy System Engineering.

  7. Break Point Distribution on Chromosome 3 of Human Epithelial Cells exposed to Gamma Rays, Neutrons and Fe Ions

    NASA Technical Reports Server (NTRS)

    Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Most of the reported studies of break point distribution on chromosomes damaged by radiation exposure were carried out with the G-banding technique or were determined based on the relative length of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the more recently developed multicolor banding in situ hybridization (mBAND) technique that is generally used for analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to either low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory, and high dose rate 600 MeV/u Fe ions at the NASA Space Radiation Laboratory. Detailed analysis of the inversion type revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosome exchanges. We further compared the distribution of break points on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustered break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.

  8. CSAMT Data Processing with Source Effect and Static Corrections, Application of Occam's Inversion, and Its Application in Geothermal System

    NASA Astrophysics Data System (ADS)

    Hamdi, H.; Qausar, A. M.; Srigutomo, W.

    2016-08-01

    Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique which uses a fixed grounded dipole as an artificial signal source. Measurement of CSAMT with a finite distance between transmitter and receiver results in a complex wave field. The shift of the electric field due to the static effect moves the resistivity curve up or down and affects the measurement results. The objective of this study was to obtain data corrected for source and static effects so as to have the same characteristics as MT data, which are assumed to exhibit plane-wave properties. The corrected CSAMT data were inverted to reveal a subsurface resistivity model. A source effect correction method was applied to eliminate the effect of the signal source, and the static effect was corrected by using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with small misfit values, meaning the models can describe subsurface conditions well. Based on the inversion results, the measurement area is predicted to consist of rock with high permeability that is rich in hot fluid.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-base preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D-line seismic data acquiredmore » at Eleven-Mile Canyon located at the Southern Dixie Valley in Nevada for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.« less

  10. Real Variable Inversion of Laplace Transforms: An Application in Plasma Physics.

    ERIC Educational Resources Information Center

    Bohn, C. L.; Flynn, R. W.

    1978-01-01

    Discusses the nature of Laplace transform techniques and explains an alternative to them: Widder's real inversion. To illustrate the power of this new technique, it is applied to a difficult inversion: the problem of Landau damping. (GA)

  11. A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.

    PubMed

    van Dongen, Koen W A; Wright, William M D

    2006-10-01

    Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
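
    The conjugate gradient reconstruction can be illustrated with a generic linearized problem. The sketch below solves the normal equations of a toy "Born operator" A with a standard CGNR loop; the real operator maps contrast profiles to acoustic field data, but the conjugate gradient machinery is the same. The matrix and profile here are invented for illustration.

```python
import numpy as np

def cg_solve(A, b, n_iter=200, tol=1e-10):
    """Conjugate-gradient solution of the normal equations A^T A x = A^T b,
    where A is a (discretized, linearized) scattering operator mapping a
    contrast profile x to measured field data b."""
    x = np.zeros(A.shape[1])
    r = A.T @ b                 # residual of the normal equations (x = 0)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy "scattering operator" and synthetic contrast profile (illustrative only).
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 40))
x_true = rng.normal(size=40)
b = A @ x_true                 # noise-free synthetic data
x_rec = cg_solve(A, b)
print(np.allclose(x_rec, x_true, atol=1e-6))   # True
```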

  12. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  13. Inverse halftoning via robust nonlinear filtering

    NASA Astrophysics Data System (ADS)

    Shen, Mei-Yin; Kuo, C.-C. Jay

    1999-10-01

    A new blind inverse halftoning algorithm based on a nonlinear filtering technique of low computational complexity and low memory requirement is proposed in this research. It is called blind since we do not require the knowledge of the halftone kernel. The proposed scheme performs nonlinear filtering in conjunction with edge enhancement to improve the quality of an inverse halftoned image. Distinct features of the proposed approach include: efficiently smoothing halftone patterns in large homogeneous areas, additional edge enhancement capability to recover the edge quality and an excellent PSNR performance with only local integer operations and a small memory buffer.

  14. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
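
    A minimal PSO sketch, assuming a simple quadratic misfit as a stand-in for the gravity anomaly misfit of a fault model (the parameters, bounds, and misfit below are invented for illustration):

```python
import numpy as np

def pso(misfit, dim, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: inertia w, cognitive weight c1,
    social weight c2. A sketch of the method the abstract applies, not the
    author's implementation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.array([misfit(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([misfit(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Stand-in misfit: squared distance to "true" fault parameters (a real
# gravity misfit would compare computed and observed anomalies).
target = np.array([1.2, -0.5, 3.0])
best, val = pso(lambda m: np.sum((m - target) ** 2), dim=3, bounds=(-5, 5))
print(np.allclose(best, target, atol=1e-2))   # True
```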

  15. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio

    2013-08-20

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 4n bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
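
    The sign-based indexing is straightforward to sketch. The example below uses a random "database" and a single concatenated profile per model instead of the four separate Stokes parameters, so the index here has n bits rather than 4n; the bucketing logic is otherwise the one the abstract describes. All sizes and values are invented for illustration.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

# Hypothetical database: each row is one model's concatenated profile (a real
# database stores the Stokes parameters I, Q, U, V separately, giving 4n bits).
n_models, n_wav = 4096, 64
db = rng.normal(size=(n_models, n_wav))

# PCA basis from the database itself (SVD of the mean-removed profiles).
mean = db.mean(axis=0)
_, _, Vt = np.linalg.svd(db - mean, full_matrices=False)
n_orders = 4                      # number of PCA orders used for indexing

def sign_index(profile):
    """Binary index built from the signs of the leading PCA coefficients."""
    coeffs = Vt[:n_orders] @ (profile - mean)
    return int("".join("1" if c > 0 else "0" for c in coeffs), 2)

# Partition the database into 2**n_orders buckets.
buckets = defaultdict(list)
for i, model in enumerate(db):
    buckets[sign_index(model)].append(i)

# Inverting an "observed" profile searches only its own bucket.
obs = db[123] + 1e-6 * rng.normal(size=n_wav)   # slightly noisy copy of model 123
cand = buckets[sign_index(obs)]
best = min(cand, key=lambda i: np.sum((db[i] - obs) ** 2))
print(best == 123, len(cand) < n_models)
```

As the abstract notes, the scheme depends on the signs being stable: larger noise can flip a near-zero coefficient and send the observation to the wrong bucket, which is what limits the usable number of orders in practice.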

  16. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.

  17. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation

    NASA Astrophysics Data System (ADS)

    Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.

    2016-06-01

    We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative for smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereupon the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is inevitable which is caused by the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.

  18. Experimental study of the dynamics of penetration of a solid body into a soil medium

    NASA Astrophysics Data System (ADS)

    Balandin, Vl. V.; Balandin, Vl. Vl.; Bragov, A. M.; Kotov, V. L.

    2016-06-01

    An experimental system is developed to determine the main parameters of the impact and penetration of a solid deformable body into a soft soil medium. This system is based on the technique of an inverse experiment with a measuring rod and the technique of a direct experiment with photo recording and the application of a shadow picture of the interaction of a striker with a soil target. To verify these techniques, the collision of a solid body with soil is studied by a numerical calculation and the time intervals in which the change of the resistance force is proportional to the penetration velocity squared are determined. The penetration resistance coefficients determined in direct and inverse experiments are shown to agree with each other in the collision velocity range 80-400 m/s, which supports the validity of the techniques and the reliability of measuring the total load.

  19. Approximated Stable Inversion for Nonlinear Systems with Nonhyperbolic Internal Dynamics. Revised

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1999-01-01

    A technique to achieve output tracking for nonminimum phase nonlinear systems with nonhyperbolic internal dynamics is presented. The present paper integrates stable inversion techniques (which achieve exact tracking) with approximation techniques (which modify the internal dynamics) to circumvent the nonhyperbolicity of the internal dynamics - this nonhyperbolicity is an obstruction to applying presently available stable inversion techniques. The theory is developed for nonlinear systems, and the method is applied to a two-cart with inverted-pendulum example.

  20. HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2015-12-01

    Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulation are used to recover the features of the estimated fields using inverse techniques. We developed a free 2D Matlab package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite difference method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
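
    The adjoint-state advantage mentioned here is easy to demonstrate on a toy 1-D flow problem: one extra linear solve yields a full row of the sensitivity (Jacobian) matrix, whereas finite differences need one forward solve per parameter. The discretization below is a hypothetical minimal stand-in for the package's finite element solver.

```python
import numpy as np

def assemble(T):
    """1-D steady-state flow: tridiagonal system A(T) h = q, where T holds
    the inter-node transmissivities (hypothetical discretization with
    fixed-head boundaries folded into q). Note assemble() is linear in T."""
    n = len(T) - 1
    A = np.zeros((n, n))
    for j in range(n):
        A[j, j] = T[j] + T[j + 1]
        if j > 0:
            A[j, j - 1] = -T[j]
        if j < n - 1:
            A[j, j + 1] = -T[j + 1]
    return A

rng = np.random.default_rng(3)
T = rng.uniform(1.0, 2.0, size=8)      # parameters (transmissivities)
q = rng.normal(size=7)                 # source terms
h = np.linalg.solve(assemble(T), q)    # heads (forward solve)

# Adjoint-state sensitivity of observation h[i]: one adjoint solve gives the
# whole Jacobian row via dh[i]/dT_j = -lam^T (dA/dT_j) h, with A^T lam = e_i.
i = 3
lam = np.linalg.solve(assemble(T).T, np.eye(7)[i])
J_adj = np.empty(len(T))
for j in range(len(T)):
    dT = np.zeros(len(T)); dT[j] = 1.0
    J_adj[j] = -lam @ (assemble(dT) @ h)   # assemble(e_j) == dA/dT_j (linearity)

# Check against central finite differences (two solves per parameter).
eps = 1e-5
J_fd = np.empty(len(T))
for j in range(len(T)):
    Tp = T.copy(); Tp[j] += eps
    Tm = T.copy(); Tm[j] -= eps
    hp = np.linalg.solve(assemble(Tp), q)
    hm = np.linalg.solve(assemble(Tm), q)
    J_fd[j] = (hp[i] - hm[i]) / (2 * eps)
print(np.allclose(J_adj, J_fd, atol=1e-6))   # True
```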

  1. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data because there often exist more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
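
    The clustering component can be sketched with plain fuzzy c-means. Note that the paper's contribution is precisely to swap the Euclidean distance for measures matched to the shape of the petrophysical relationship (point, linear, quadratic, exponential), which the plain version below does not do; the "petrophysical" point clouds are synthetic.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means with the Euclidean distance measure. The joint
    inversion described above would replace the distance by one adapted to
    the petrophysical trend shape."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two synthetic point clouds in a (density, velocity)-style crossplot.
rng = np.random.default_rng(4)
A = rng.normal([2.2, 3.0], 0.05, size=(100, 2))
B = rng.normal([2.7, 5.5], 0.05, size=(100, 2))
centers, U = fuzzy_cmeans(np.vstack([A, B]), c=2)

# Each recovered center should sit near one of the two true cluster means.
true_means = np.array([[2.2, 3.0], [2.7, 5.5]])
d = np.linalg.norm(centers[:, None] - true_means[None], axis=-1)
print(d.min(axis=1).max() < 0.1)   # True
```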

  2. Top-down estimates of methane and nitrogen oxide emissions from shale gas production regions using aircraft measurements and a mesoscale Bayesian inversion system together with a flux ratio inversion technique

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Henze, D. K.; Bousserez, N.; Liu, Z.; McDonald, B.; Peischl, J.; Ryerson, T. B.; Frost, G. J.; Trainer, M.

    2016-12-01

    Production of unconventional natural gas grew rapidly during the past ten years in the US, which led to an increase in emissions of methane (CH4) and, depending on the shale region, nitrogen oxides (NOx). In terms of radiative forcing, CH4 is the second most important greenhouse gas after CO2. NOx is a precursor of ozone (O3) in the troposphere and of nitrate particles, both of which are regulated by the US Clean Air Act. Emission estimates of CH4 and NOx from the shale regions are still highly uncertain. We present top-down estimates of CH4 and NOx surface fluxes from the Haynesville and Fayetteville shale production regions using aircraft data collected during the Southeast Nexus of Climate Change and Air Quality (SENEX) field campaign (June-July 2013) and the Shale Oil and Natural Gas Nexus (SONGNEX) field campaign (March-May 2015) within a mesoscale inversion framework. The inversion method is based on a mesoscale Bayesian inversion system using multiple transport models. EPA's 2011 National CH4 and NOx Emission Inventories are used as prior information to optimize CH4 and NOx emissions. Furthermore, the posterior CH4 emission estimates are used to constrain NOx emission estimates using a flux ratio inversion technique. Sensitivity of the posterior estimates to the use of off-diagonal terms in the error covariance matrices, the transport models, and prior estimates is discussed. Compared to the ground-based in-situ observations, the optimized CH4 and NOx inventories improve ground level CH4 and O3 concentrations calculated by the Weather Research and Forecasting mesoscale model coupled with chemistry (WRF-Chem).
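
    The flux ratio step can be illustrated with hypothetical numbers: regress the NOx enhancement against the CH4 enhancement in the plume, then scale the posterior CH4 flux by that slope (with a molar-to-mass conversion). All values below are invented for illustration and are not SENEX/SONGNEX results; the real technique involves considerably more care with backgrounds, chemistry, and uncertainties.

```python
import numpy as np

# Hypothetical plume enhancements (observed minus background) from several
# aircraft transects, in ppb.
d_ch4 = np.array([48.0, 61.0, 55.0, 70.0])   # CH4 enhancement
d_nox = np.array([0.95, 1.30, 1.08, 1.44])   # NOx enhancement

# Flux ratio idea: the molar NOx/CH4 enhancement ratio, scaled by the
# posterior CH4 flux from the Bayesian inversion, constrains the NOx flux.
ratio = np.polyfit(d_ch4, d_nox, 1)[0]       # regression slope, mol/mol
posterior_ch4_flux = 80.0                    # t/h, hypothetical posterior value
nox_flux = posterior_ch4_flux * ratio * (46.0 / 16.0)  # NO2/CH4 molar masses
print(nox_flux)
```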

  3. New Additions to the Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment.

    PubMed

    Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S

    2014-09-01

    Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.

  4. Spatially constrained Bayesian inversion of frequency- and time-domain electromagnetic data from the Tellus projects

    NASA Astrophysics Data System (ADS)

    Kiyan, Duygu; Rath, Volker; Delhaye, Robert

    2017-04-01

    The frequency- and time-domain airborne electromagnetic (AEM) data collected under the Tellus projects of the Geological Survey of Ireland (GSI) represent a wealth of information on the multi-dimensional electrical structure of Ireland's near-surface. Our project, funded by GSI under the framework of their Short Call Research Programme, aims to develop and implement inverse techniques based on various Bayesian methods for these densely sampled data. We have developed a highly flexible toolbox in the Python language for the one-dimensional inversion of AEM data along the flight lines. The computational core is based on an adapted frequency- and time-domain forward modelling core derived from the well-tested open-source code AirBeo, which was developed by CSIRO (Australia) and the AMIRA consortium. Three different inversion methods have been implemented: (i) Tikhonov-type inversion including optimal regularisation methods (Aster et al., 2012; Zhdanov, 2015), (ii) Bayesian MAP inversion in parameter and data space (e.g. Tarantola, 2005), and (iii) full Bayesian inversion with Markov chain Monte Carlo (Sambridge and Mosegaard, 2002; Mosegaard and Sambridge, 2002), all including different forms of spatial constraints. The methods have been tested on synthetic and field data. This contribution will introduce the toolbox and present case studies on the AEM data from the Tellus projects.

  5. Porosity Estimation By Artificial Neural Networks Inversion . Application to Algerian South Field

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    One of the main current challenges for geophysicists is the discovery and study of stratigraphic traps; this is a difficult task that requires very fine analysis of the seismic data. Seismic data inversion allows obtaining lithological and stratigraphic information for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANN) are used to resolve this ambiguity; this can be done by integrating different physical property data, which requires supervised learning methods. In this work, we invert the acoustic impedance of a 3D seismic cube using the colored inversion method; then, introducing the acoustic impedance volume resulting from the first step as an input to a model-based inversion method allows us to calculate the porosity volume using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the obtained results can be used for reserve estimation, permeability prediction, recovery factor, and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.

  6. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
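    The idea can be sketched with a cheap analytic stand-in for the trained neural network and a single scalar model parameter (the paper's setting is far richer). The forward functions, prior, observed datum and noise levels below are all invented assumptions; the point is that the surrogate's modeling error is quantified over the prior and folded into the likelihood before Metropolis sampling.

    ```python
    import math, random
    random.seed(0)

    def forward_expensive(m):
        """Stands in for the costly 2-D full-waveform + picking forward model."""
        return m + 0.1 * math.sin(5.0 * m)

    def forward_fast(m):
        """Cheap surrogate (a trained network in the paper; linear here)."""
        return m

    # Quantify the surrogate's modeling error over the prior and fold its
    # mean/variance into the data-noise model, as the abstract describes.
    prior_draws = [random.uniform(0.0, 1.0) for _ in range(2000)]
    diffs = [forward_expensive(m) - forward_fast(m) for m in prior_draws]
    mu_T = sum(diffs) / len(diffs)
    var_T = sum((d - mu_T) ** 2 for d in diffs) / len(diffs)

    sigma_d = 0.05
    var_tot = sigma_d ** 2 + var_T
    d_obs = 0.56                       # synthetic "observed traveltime"

    def log_post(m):
        if not 0.0 <= m <= 1.0:        # uniform prior on [0, 1]
            return -float("inf")
        r = d_obs - (forward_fast(m) + mu_T)
        return -0.5 * r * r / var_tot

    # Metropolis sampler that only ever calls the fast surrogate.
    m, lp = 0.5, log_post(0.5)
    samples = []
    for _ in range(20000):
        m_new = m + random.gauss(0.0, 0.2)
        lp_new = log_post(m_new)
        if math.log(random.random()) < lp_new - lp:
            m, lp = m_new, lp_new
        samples.append(m)
    post_mean = sum(samples[5000:]) / len(samples[5000:])
    print(round(post_mean, 3))
    ```

    Because the surrogate error variance is added to the data noise, the posterior is honestly widened rather than biased by the approximation.
    
    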

  7. Evaluation of optimal reservoir prospectivity using acoustic-impedance model inversion: A case study of an offshore field, western Niger Delta, Nigeria

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Kehinde D.; Olowokere, Mary T.; Aizebeokhai, Ahzegbobor P.

    2017-12-01

    The evaluation of economic potential of any hydrocarbon field involves the understanding of the reservoir lithofacies and porosity variations. This in turn contributes immensely towards subsequent reservoir management and field development. In this study, integrated 3D seismic data and well log data were employed to assess the quality and prospectivity of the delineated reservoirs (H1-H5) within the OPO field, western Niger Delta using a model-based seismic inversion technique. The model inversion results revealed four distinct sedimentary packages based on the subsurface acoustic impedance properties and shale contents. Low acoustic impedance model values were associated with the delineated hydrocarbon bearing units, denoting their high porosity and good quality. Application of model-based inverted velocity, density and acoustic impedance properties on the generated time slices of reservoirs also revealed a regional fault and prospects within the field.

  8. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines the system with multiple inputs and a single output, together with a vector of the initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary optimization technique, which is oriented toward the dynamical system identification problem, was applied. To improve its performance, an algorithm restart operator was implemented.
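    A one-dimensional sketch of an evolutionary strategy with a restart operator follows; the multimodal test function and the best-of-probes restart heuristic are illustrative assumptions, not the authors' algorithm, and merely show why restarts help on a multimodal fit criterion.

    ```python
    import math, random
    random.seed(7)

    def f(x):
        """Toy multimodal objective standing in for the MISO model-fit criterion."""
        return x * x + 4.0 * math.sin(2.0 * x) ** 2

    def es_with_restarts(bounds=(-5.0, 5.0), sigma=0.15, max_gens=800, patience=20):
        lo, hi = bounds
        parent = random.uniform(lo, hi)
        best_x, best_f = parent, f(parent)
        stall = 0
        for _ in range(max_gens):
            # (1+5)-ES step: keep the best of the parent and five mutants.
            children = [min(max(parent + random.gauss(0.0, sigma), lo), hi)
                        for _ in range(5)]
            cand = min(children + [parent], key=f)
            if f(cand) < best_f - 1e-12:
                best_x, best_f = cand, f(cand)
                stall = 0
            else:
                stall += 1
            parent = cand
            if stall >= patience:
                # Restart operator: re-seed the search from the best of 20 probes.
                probes = [random.uniform(lo, hi) for _ in range(20)]
                parent = min(probes, key=f)
                stall = 0
        return best_x, best_f

    x, fx = es_with_restarts()
    print(round(x, 3), round(fx, 3))
    ```

    Without the restart operator the greedy (1+5) step frequently stagnates in one of the side basins (f around 2.5); with restarts the global basin at the origin is found reliably.
    
    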

  9. Dual exposure, two-photon, conformal phasemask lithography for three dimensional silicon inverse woodpile photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shir, Daniel J.; Nelson, Erik C.; Chanda, Debashis

    2010-01-01

    The authors describe the fabrication and characterization of three dimensional silicon inverse woodpile photonic crystals. A dual exposure, two-photon, conformal phasemask technique is used to create high quality polymer woodpile structures over large areas with geometries that quantitatively match expectations based on optical simulations. Depositing silicon into these templates followed by the removal of the polymer results in silicon inverse woodpile photonic crystals for which calculations indicate a wide, complete photonic bandgap over a range of structural fill fractions. Spectroscopic measurements of normal incidence reflection from both the polymer and silicon photonic crystals reveal good optical properties.

  10. Interpretation of synthetic seismic time-lapse monitoring data for the Korea CCS project based on acoustic-elastic coupled inversion

    NASA Astrophysics Data System (ADS)

    Oh, J.; Min, D.; Kim, W.; Huh, C.; Kang, S.

    2012-12-01

    Recently, CCS (Carbon Capture and Storage) has emerged as one of the promising methods to reduce CO2 emissions. To evaluate the success of a CCS project, various geophysical monitoring techniques have been applied. Among them, time-lapse seismic monitoring is one of the effective methods to investigate the migration of the CO2 plume. To monitor the injected CO2 plume accurately, it is necessary to interpret seismic monitoring data using not only imaging techniques but also full waveform inversion, because subsurface material properties can be estimated through the inversion. However, previous works interpreting seismic monitoring data are mainly based on imaging techniques. In this study, we perform frequency-domain full waveform inversion for synthetic data obtained by acoustic-elastic coupled modeling for a geological model based on the Ulleung Basin, which is one of the CO2 storage prospects in Korea. We suppose the injection layer is located in fault-related anticlines in the Dolgorae Deformed Belt and, for a more realistic situation, we contaminate the synthetic monitoring data with random noise and outliers. We perform the time-lapse full waveform inversion in two scenarios. One scenario is that the injected CO2 plume migrates within the injection layer and is stably captured. The other scenario is that the injected CO2 plume leaks through a weak part of the cap rock. Using the inverted P- and S-wave velocities and Poisson's ratio, we were able to detect the migration of the injected CO2 plume. Acknowledgment: This work was financially supported by the Brain Korea 21 project of Energy Systems Engineering, the "Development of Technology for CO2 Marine Geological Storage" program funded by the Ministry of Land, Transport and Maritime Affairs (MLTM) of Korea and the Korea CCS R&D Center (KCRC) grant funded by the Korea government (Ministry of Education, Science and Technology) (No. 2012-0008926).

  11. Adjoint tomography of Europe

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Bozdag, E.; Peter, D. B.; Tromp, J.

    2010-12-01

    We use spectral-element and adjoint methods to image crustal and upper mantle heterogeneity in Europe. The study area involves the convergent boundaries of the Eurasian, African and Arabian plates and the divergent boundary between the Eurasian and North American plates, making the tectonic structure of this region complex. Our goal is to iteratively fit observed seismograms and improve crustal and upper mantle images by taking advantage of 3D forward and inverse modeling techniques. We use data from 200 earthquakes with magnitudes between 5 and 6 recorded by 262 stations provided by ORFEUS. Crustal model Crust2.0 combined with mantle model S362ANI comprises the initial 3D model. Before the iterative adjoint inversion, we determine earthquake source parameters in the initial 3D model by using 3D Green functions and their Fréchet derivatives with respect to the source parameters (i.e., centroid moment tensor and location). The updated catalog is used in the subsequent structural inversion. Since we concentrate on upper mantle structures which involve anisotropy, transversely isotropic (frequency-dependent) traveltime sensitivity kernels are used in the iterative inversion. Taking advantage of the adjoint method, we use as many measurements as we can obtain based on comparisons between observed and synthetic seismograms. FLEXWIN (Maggi et al., 2009) is used to automatically select measurement windows, which are analyzed based on a multitaper technique. The bandpass ranges from 15 seconds to 150 seconds. Long-period surface waves and short-period body waves are combined in source relocations and structural inversions. A statistical assessment of traveltime anomalies and logarithmic waveform differences is used to characterize the inverted sources and structure.

  12. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    NASA Astrophysics Data System (ADS)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as those of the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. As with the CG method, the number of iterations needed to reach the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
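    The projected Landweber iteration described above can be sketched on a toy cyclic-deconvolution problem. For clarity the cyclic convolution is computed directly rather than with the 1-D FFT the paper uses, and the kernel, constraint (nonnegativity) and model are invented.

    ```python
    def cconv(k, x):
        """Cyclic convolution (done with a 1-D FFT in the paper; direct here)."""
        n = len(x)
        return [sum(k[j] * x[(i - j) % n] for j in range(n)) for i in range(n)]

    def projected_landweber(k, d, tau=1.0, iters=400):
        """Iterate m <- P(m + tau * K^T (d - K m)), with P = projection onto m >= 0."""
        n = len(d)
        m = [0.0] * n
        kt = [k[(-j) % n] for j in range(n)]   # adjoint kernel (time-reversed)
        for _ in range(iters):
            r = cconv(k, m)
            r = [d[i] - r[i] for i in range(n)]
            g = cconv(kt, r)
            m = [max(0.0, m[i] + tau * g[i]) for i in range(n)]
        return m

    k = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]   # symmetric smoothing kernel
    m_true = [0, 0, 1.0, 0, 0, 2.0, 0, 0]          # nonnegative "disturbance" model
    d = cconv(k, m_true)
    m_est = projected_landweber(k, d)
    print([round(v, 3) for v in m_est])
    ```

    The step size tau must satisfy tau < 2 / ||K||^2; here the kernel's DFT lies in [0.2, 1.0], so tau = 1 converges, and the nonnegativity projection plays the role of the constraint set in the constrained least-squares formulation.
    
    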

  13. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) on the component extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, in order to have both high spatial resolution of beamformer and be able to take on multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli were also used to compare performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.

  14. Guidance of Nonlinear Nonminimum-Phase Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.

  15. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and a subsequent decrease in pressure.

  16. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to bridge the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess.
Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution starting from a coarse discretization and refining mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  17. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements. It is usually posed as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias. For this reason the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space; together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space. This can affect CIT by limiting the accuracy in resolving structures and the processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e. Tikhonov regularization), and a standard approach is implemented here using spherical harmonics as a reference against which to compare the new method. A new approach is proposed for CIT that aims to permit sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and wavelet basis functions, chosen for their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result with an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, and this is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization for estimating the coefficients. This is in particular true for an uneven observation geometry and especially for multi-resolution CIT.
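    The ℓ1-minimisation idea can be sketched with iterative soft thresholding (ISTA) on a toy underdetermined system; the "ray-path" matrix and data below are invented stand-ins for STEC measurements over wavelet coefficients, and they illustrate how the ℓ1 penalty picks the sparse explanation of the data.

    ```python
    def ista(A, b, lam=0.05, tau=0.1, iters=2000):
        """Iterative soft thresholding for min 1/2 ||Ax - b||^2 + lam * ||x||_1."""
        m, n = len(A), len(A[0])
        x = [0.0] * n
        for _ in range(iters):
            r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
            g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
            t = tau * lam
            for j in range(n):
                z = x[j] + tau * g[j]          # gradient step
                x[j] = z - t if z > t else (z + t if z < -t else 0.0)  # shrink
        return x

    # Toy "TEC" system: 3 ray-path measurements of 6 basis coefficients.
    A = [[1, 0, 0, 1, 0, 1],
         [0, 1, 0, 1, 1, 0],
         [0, 0, 1, 0, 1, 1]]
    b = [2.0, 2.0, 0.0]      # produced by the single coefficient x[3] = 2
    x_l1 = ista(A, b)
    print([round(v, 3) for v in x_l1])
    ```

    The data admit a two-coefficient explanation (x[0] = x[1] = 2) with ℓ1 norm 4, but ISTA converges to the sparser single-coefficient solution, mirroring why ℓ1 regularisation suits uneven observation geometries with localized wavelet bases.
    
    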

  18. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
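    Chahine's relaxation can be sketched on a small lower-triangular kernel with nonzero diagonal entries, matching the convergence condition stated above; the kernel and profile values are invented, not drawn from the occultation geometry.

    ```python
    def chahine_invert(K, y, iters=200):
        """Chahine relaxation: x_i <- x_i * y_i / (K x)_i, for positive y and x."""
        n = len(y)
        x = [1.0] * n                       # positive initial guess
        for _ in range(iters):
            Kx = [sum(K[i][j] * x[j] for j in range(n)) for i in range(n)]
            x = [x[i] * y[i] / Kx[i] for i in range(n)]
        return x

    # Toy lower-triangular kernel with nonzero diagonal, as in limb viewing
    # where each tangent height only samples layers at or above it.
    K = [[1.0, 0.0, 0.0, 0.0],
         [0.5, 1.0, 0.0, 0.0],
         [0.3, 0.5, 1.0, 0.0],
         [0.2, 0.3, 0.5, 1.0]]
    x_true = [1.0, 2.0, 1.5, 0.5]
    y = [sum(K[i][j] * x_true[j] for j in range(4)) for i in range(4)]
    x_est = chahine_invert(K, y)
    print([round(v, 4) for v in x_est])
    ```

    The first component is recovered exactly after one iteration, and each deeper layer then relaxes onto its fixed point, illustrating the cascaded convergence the triangular kernel guarantees.
    
    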

  19. Multi-frequency subspace migration for imaging of perfectly conducting, arc-like cracks in full- and limited-view inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2015-02-01

    Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. It is confirmed that this technique is very fast, effective, robust, and can be applied not only to full- but also to limited-view inverse problems if a suitable number of incident fields are applied and the corresponding scattered fields collected. However, in many works, the application of such techniques is heuristic. Motivated by such heuristic application, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrary-shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far-field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of the expression of the imaging functionals reveals certain properties of subspace migration and explains why multiple frequencies enhance imaging resolution. In particular, we carefully analyze the subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode.
Various results of numerical simulations performed on the far-field data affected by large amounts of random noise are similar to the analytical results derived in this study, and they provide a direction for future studies.

  20. Pioneer 10 and 11 radio occultations by Jupiter. [atmospheric temperature structure

    NASA Technical Reports Server (NTRS)

    Kliore, A. J.; Woiceshyn, P. M.; Hubbard, W. B.

    1977-01-01

    Results on the temperature structure of the Jovian atmosphere are reviewed which were obtained by applying an integral inversion technique combined with a model for the planet's shape based on gravity data to Pioneer 10 and 11 radio-occultation data. The technique applied to obtain temperature profiles from the Pioneer data consisted of defining a center of refraction based on a computation of the radius of curvature in the plane of refraction and the normal direction to the equipotential surface at the closest approach point of a ray. Observations performed during the Pioneer 10 entry and exit and the Pioneer 11 exit are analyzed, sources of uncertainty are identified, and representative pressure-temperature profiles are presented which clearly show a temperature inversion between 10 and 100 mb. Effects of zonal winds on the reliability of radio-occultation temperature profiles are briefly discussed.

  1. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from Tasmania’s South Esk hydrology model developed by CSIRO. Root-mean-squared-error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
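    The inverse distance weighting (IDW) baseline discussed above can be sketched in a few lines; the sample points and power parameter below are illustrative, not from the South Esk data set.

    ```python
    def idw(sample_points, query, power=2.0):
        """Inverse distance weighting: weights fall off as 1 / distance^power."""
        num = den = 0.0
        for (x, y, value) in sample_points:
            d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
            if d2 == 0.0:
                return value            # exact hit: interpolant honours the sample
            w = d2 ** (-power / 2.0)
            num += w * value
            den += w
        return num / den

    pts = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
    print(round(idw(pts, (0.5, 0.0)), 3))   # → 16.364
    ```

    With the reduction approach, this spatial interpolant is applied per time slice and a separate 1-D interpolation handles the temporal dimension.
    
    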

  2. SU-F-T-508: A Collimator-Based 3-Dimensional Grid Therapy Technique in a Small Animal Radiation Research Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, J; Kong, V; Zhang, H

    Purpose: Three dimensional (3D) grid therapy using MLC-based inverse planning has been proposed to achieve the features of both conformal radiotherapy and spatially fractionated radiotherapy, which may deliver very high dose in a single fraction to portions of a large tumor with relatively low normal tissue dose. However, the technique requires a relatively long delivery time. This study aims to develop a collimator-based 3D grid therapy technique. Here we report the development of the technique in a small animal radiation research platform. Methods: As in the MLC-based technique, 9 non-coplanar beams in special channeling directions were used for the 3D grid therapy technique. Two specially designed grid collimators were fabricated, and one of them was selectively used to match the corresponding gantry/couch angles so that the grid openings of all 9 beams meet in 3D space in the target. A stack of EBT3 films was used as a 3D dosimeter to demonstrate the 3D grid-like dose distribution in the target. Three 1-mm beams were delivered to the stack of films in the area outside the target for alignment when all the films were scanned to reconstruct the 3D dosimetric image. Results: 3D film dosimetry showed a lattice-like dose distribution in the 3D target as well as in the axial, sagittal and coronal planes. The dose outside the target also showed a grid-like distribution, and the average dose gradually decreased with the distance to the target. The peak-to-valley ratio was approximately 5:1. The delivery time was 7 minutes for an 18 Gy peak dose, compared to 6 minutes to deliver an 18-Gy 3D conformal plan. Conclusion: We have demonstrated the feasibility of the collimator-based 3D grid therapy technique, which can significantly reduce delivery time compared to the MLC-based inverse planning technique.

  3. Automatic 3D Moment tensor inversions for southern California earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Friberg, P.; Tromp, J.

    2008-12-01

    We present a new source mechanism (moment-tensor and depth) catalog for about 150 recent southern California earthquakes with Mw ≥ 3.5. We carefully select the initial solutions from a few available earthquake catalogs as well as our own preliminary 3D moment tensor inversion results. We pick useful data windows by assessing the quality of fits between the data and synthetics using the automatic windowing package FLEXWIN (Maggi et al., 2008). We compute the source Fréchet derivatives of moment-tensor elements and depth for a recent 3D southern California velocity model inverted from finite-frequency event kernels calculated by the adjoint method and a nonlinear conjugate gradient technique with subspace preconditioning (Tape et al., 2008). We then invert for the source mechanisms and event depths based upon the techniques introduced by Liu et al. (2005). We assess the quality of this new catalog, as well as the other existing ones, by computing the 3D synthetics for the updated 3D southern California model. We also plan to implement the moment-tensor inversion methods to automatically determine the source mechanisms for earthquakes with Mw ≥ 3.5 in southern California.

  4. Measuring soil moisture with imaging radars

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Vanzyl, Jakob; Engman, Ted

    1995-01-01

    An empirical model was developed to infer soil moisture and surface roughness from radar data. The accuracy of the inversion technique is assessed by comparing soil moisture obtained with the inversion technique to in situ measurements. The effect of vegetation on the inversion is studied and a method to eliminate the areas where vegetation impairs the algorithm is described.

  5. Full-waveform inversion of GPR data for civil engineering applications

    NASA Astrophysics Data System (ADS)

    van der Kruk, Jan; Kalogeropoulos, Alexis; Hugenschmidt, Johannes; Klotzsche, Anja; Busch, Sebastian; Vereecken, Harry

    2014-05-01

    Conventional GPR ray-based techniques are often limited in their capability to image complex structures due to the pertaining approximations. With increased computational power, it is becoming easier to use modeling and inversion tools that explicitly take into account the detailed electromagnetic wave propagation characteristics. In this way, new civil engineering application avenues are opening up that enable improved high-resolution imaging of quantitative medium properties. In this contribution, we show recent developments that enable the full-waveform inversion of off-ground, on-ground and crosshole GPR data. For a successful inversion, a proper starting model must be used that generates synthetic data overlapping the measured data within at least half a wavelength. In addition, the GPR system must be calibrated such that an effective wavelet is obtained that encompasses the complexity of the GPR source and receiver antennas. Simple geometries such as horizontal layers can be described with a limited number of model parameters, which enables a combined global and local search using the Simplex search algorithm. This approach has been implemented for the full-waveform inversion of off-ground and on-ground GPR data measured over horizontally layered media. In this way, an accurate 3D frequency-domain forward model of Maxwell's equations can be used, where the integral representation of the electric field is numerically evaluated. The full-waveform inversion (FWI) for a large number of unknowns uses gradient-based optimization methods, where a 3D to 2D conversion is used to apply this method to experimental data. Off-ground GPR data, measured over homogeneous concrete specimens, were inverted using the full-waveform inversion. In contrast to traditional ray-based techniques, we were able to obtain quantitative values for the permittivity and conductivity and in this way distinguish between moisture and chloride effects. 
For increasing chloride content, increasing frequency-dependent conductivity values were obtained. The off-ground full-waveform inversion was extended to invert for positive and negative gradients in conductivity, and the conductivity gradient direction could be correctly identified. Experimental specimens containing gradients were generated by exposing a concrete slab to controlled wetting-drying cycles using a saline solution. Full-waveform inversion of the measured data correctly identified the conductivity gradient direction, which was confirmed by destructive analysis. On-ground CMP GPR data measured over a concrete layer overlying a metal plate show interfering multiple reflections, which indicates that the structure acts as a waveguide. Calculation of the phase-velocity spectrum shows the presence of several higher-order modes. Whereas the dispersion inversion returns the thickness and height of the layer, the full-waveform inversion was also able to estimate quantitative conductivity values. This abstract is a contribution to COST Action TU1208.
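    The combined search over a small number of layer parameters can be illustrated with a toy misfit minimization using the Nelder-Mead Simplex algorithm. The Gaussian-windowed wavelet below is a hypothetical stand-in for the authors' full Maxwell forward model, with amplitude and arrival time standing in for layer permittivity and thickness; note the start model is chosen close enough that the synthetic and "measured" waveforms overlap within half a wavelength, as the abstract stresses.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 10.0, 500)          # time axis, ns

def forward(params):
    # Toy "forward model": a reflected wavelet (illustrative, not a Maxwell solver)
    amp, t0 = params
    return amp * np.exp(-0.5 * (t - t0) ** 2) * np.cos(2.0 * np.pi * (t - t0))

true_params = np.array([0.8, 4.0])
observed = forward(true_params)          # pretend this is measured data

def misfit(params):
    return np.sum((forward(params) - observed) ** 2)

# Start model within half a wavelength of the true arrival time.
result = minimize(misfit, x0=[0.5, 3.7], method="Nelder-Mead")
print(result.x)   # close to [0.8, 4.0]
```

With a start model outside the main correlation lobe of the wavelet, the Simplex search would stall in a cycle-skipped local minimum, which is why a global search stage is combined with the local one.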

  6. Distributed micro-releases of bioterror pathogens: threat characterizations and epidemiology from uncertain patient observables.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Marzouk, Youssef M.; Adams, Brian M.

    2008-10-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society.
Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.

  7. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  8. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  9. Analysis and Simulation of 3D Scattering due to Heterogeneous Crustal Structure and Surface Topography on Regional Phases; Magnitude and Discrimination

    DTIC Science & Technology

    2009-07-07

    inversion technique that is based on different weights for relatively high frequency waveform modeling of Pnl and relatively long period surface waves (Tan...et al., 2006). Pnl and surface waves are also allowed to shift in time to take into account uncertainties in velocity structure. Joint...inversion of Pnl and surface waves provides better constraints on focal depth as well as source mechanisms. The pure strike-slip mechanism of the earthquake

  10. Nonlinear compression of temporal solitons in an optical waveguide via inverse engineering

    NASA Astrophysics Data System (ADS)

    Paul, Koushik; Sarma, Amarendra K.

    2018-03-01

    We propose a novel method based on so-called shortcut-to-adiabatic-passage techniques to achieve fast compression of temporal solitons in a nonlinear waveguide. We demonstrate that soliton compression could be achieved, in principle, at an arbitrarily small distance by inverse-engineering the pulse width and the nonlinearity of the medium. The proposed scheme could possibly be exploited for various short-distance communication protocols, and perhaps even in nonlinear guided-wave optics devices and the generation of ultrashort soliton pulses.

  11. Fourier analysis and signal processing by use of the Moebius inversion formula

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.

    1990-01-01

    A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
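    The number-theoretic identity underlying this approach can be verified in a few lines. The sketch below checks the classical Möbius inversion of a divisor sum, g(n) = Σ_{d|n} f(d) implies f(n) = Σ_{d|n} μ(n/d) g(d); the sequence f here is an arbitrary test sequence, not the paper's Fourier series.

```python
def mobius(n):
    """Moebius function mu(n) via trial-division factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # squared prime factor => mu(n) = 0
                return 0
            result = -result
        p += 1
    if n > 1:                    # one leftover prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

f = lambda n: n * n + 1          # arbitrary test sequence

# Forward: divisor sums g(n); inverse: recover f via the Moebius identity.
g = {n: sum(f(d) for d in divisors(n)) for n in range(1, 61)}
recovered = {n: sum(mobius(n // d) * g[d] for d in divisors(n))
             for n in range(1, 61)}
print(all(recovered[n] == f(n) for n in range(1, 61)))   # True
```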

  12. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.

  13. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    PubMed Central

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.

    2015-01-01

    Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378

  14. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training.

    PubMed

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R; Lorenzetti, Silvio

    2015-01-01

    Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.

  15. Noncontrast-enhanced renal angiography using multiple inversion recovery and alternating TR balanced steady-state free precession.

    PubMed

    Dong, Hattie Z; Worters, Pauline W; Wu, Holden H; Ingle, R Reeve; Vasanawala, Shreyas S; Nishimura, Dwight G

    2013-08-01

    Noncontrast-enhanced renal angiography techniques based on balanced steady-state free precession avoid external contrast agents, take advantage of high inherent blood signal from the T2/T1 contrast mechanism, and have short steady-state free precession acquisition times. However, background suppression is limited; inflow times are inflexible; the labeling region is difficult to define when tagging arterial flow; and scan times are long. To overcome these limitations, we propose the use of multiple inversion recovery preparatory pulses combined with alternating pulse repetition time balanced steady-state free precession to produce renal angiograms. Multiple inversion recovery uses selective spatial saturation followed by four nonselective inversion recovery pulses to concurrently null a wide range of background T1 species while allowing for adjustable inflow times; alternating pulse repetition time steady-state free precession maintains vessel contrast and provides added fat suppression. The high level of suppression enables imaging in three-dimensional as well as projective two-dimensional formats, the latter of which has a scan time as short as one heartbeat. In vivo studies at 1.5 T demonstrate the superior vessel contrast of this technique. © 2012 Wiley Periodicals, Inc.

  16. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems and to a real aerospace problem, and compared with similar algorithms using benchmark problems.

  17. On the Duality of Forward and Inverse Light Transport.

    PubMed

    Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi

    2011-10-01

    Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
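    The forward/inverse duality can be seen in miniature. If the measured transport is T = (I - A)^{-1}, which sums interreflection bounces A^k, then T^{-1} = (I + (T - I))^{-1} expands as an alternating Neumann series whose k-th term cancels the k-th bounce, using only the measured T. The small matrix A below is a hypothetical one-bounce interreflection operator, not a real renderer's transport matrix, and relies only on matrix-vector style products as the paper advocates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = 0.08 * rng.random((n, n))            # weak reflectance => series converges
T = np.linalg.inv(np.eye(n) - A)         # forward transport: adds bounces

direct = rng.random(n)                   # unknown direct illumination
total = T @ direct                       # observed global illumination

# Inverse series: e = sum_k (-(T - I))^k t, each term cancels a bounce.
approx = total.copy()
term = total.copy()
for _ in range(30):
    term = -(T - np.eye(n)) @ term
    approx += term

print(np.max(np.abs(approx - direct)))   # residual shrinks toward zero
```

The oscillatory (alternating) character of the partial sums is exactly the convergence behavior the paper analyzes to derive conditions on material reflectance.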

  18. Inverse boundary-layer theory and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1978-01-01

    Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.

  19. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
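    The classical baseline that the wavelet-vaguelette method is compared against, regularization by truncated singular value decomposition, can be sketched as follows. The Gaussian blur operator is an illustrative ill-posed forward map, not the tokamak optics: small singular values amplify noise, so discarding them stabilizes the inversion at the cost of smoothing local features such as blobs and fronts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)  # blur operator

truth = np.zeros(n)
truth[20:30] = 1.0                        # a "blob" to recover
data = K @ truth + 0.01 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(K)
naive = Vt.T @ ((U.T @ data) / s)         # unregularized inverse: noise blows up
k = 20                                    # keep only the 20 largest modes
tsvd = Vt.T[:, :k] @ ((U.T[:k] @ data) / s[:k])

# Truncation trades a little bias for a huge variance reduction.
print(np.linalg.norm(naive - truth), np.linalg.norm(tsvd - truth))
```

The wavelet-vaguelette decomposition plays the same role as the truncated basis here, but with localized basis functions that preserve sharp features better than global singular vectors.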

  20. Tomographic reconstruction of tokamak plasma light emission from single image using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.

    2012-01-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  1. Chemically Patterned Inverse Opal Created by a Selective Photolysis Modification Process.

    PubMed

    Tian, Tian; Gao, Ning; Gu, Chen; Li, Jian; Wang, Hui; Lan, Yue; Yin, Xianpeng; Li, Guangtao

    2015-09-02

    Anisotropic photonic crystal materials have long been pursued for their broad applications. A novel method for creating chemically patterned inverse opals is proposed here. The patterning technique is based on selective photolysis of a photolabile polymer together with postmodification on released amine groups. The patterning method allows regioselective modification within an inverse opal structure, taking advantage of selective chemical reaction. Moreover, combined with the unique signal self-reporting feature of the photonic crystal, the fabricated structure is capable of various applications, including gradient photonic bandgap and dynamic chemical patterns. The proposed method provides the ability to extend the structural and chemical complexity of the photonic crystal, as well as its potential applications.

  2. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.
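    The key regularizing assumption, that the buried scatterers are sparse, can be illustrated with a much simpler sparsity-promoting solver than the paper's MT-BCS. The sketch below uses plain ISTA (iterative soft thresholding) on a random underdetermined system; the sensing matrix and "scatterer" vector are hypothetical, purely to show sparse recovery from fewer measurements than unknowns.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 100                           # fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[[7, 42, 77]] = [1.0, -0.8, 0.6]   # three sparse "scatterers"
y = A @ x_true + 0.005 * rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
lam = 0.02                               # sparsity weight
x = np.zeros(n)
for _ in range(500):                     # ISTA: gradient step + soft threshold
    x = x - (A.T @ (A @ x - y)) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

print(np.flatnonzero(np.abs(x) > 0.1))   # indices near [7, 42, 77]
```

The MT-BCS solver additionally shares statistical strength across the hopped frequencies (the "multi-task" part), which a single-frequency sketch like this cannot show.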

  3. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, allowing efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
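    The second experiment, interval velocities from RMS velocities via the Dix formula, is simple enough to sketch directly. For layers ending at two-way times t_i, the Dix relation gives v_int,i^2 = (t_i v_rms,i^2 - t_{i-1} v_rms,i-1^2) / (t_i - t_{i-1}); the velocities and times below are illustrative values, and the round trip checks the implementation.

```python
import numpy as np

def dix(t, v_rms):
    """Interval velocities from RMS velocities via the Dix formula."""
    t = np.asarray(t, float)
    v = np.asarray(v_rms, float)
    num = t[1:] * v[1:] ** 2 - t[:-1] * v[:-1] ** 2
    v_int = np.sqrt(num / (t[1:] - t[:-1]))
    return np.concatenate(([v[0]], v_int))   # first interval equals first RMS

# Round trip: build RMS velocities from known interval velocities, then invert.
v_true = np.array([1500.0, 2000.0, 3000.0])      # m/s
t = np.array([0.5, 1.0, 1.8])                    # two-way times, s
dt = np.diff(np.concatenate(([0.0], t)))
v_rms = np.sqrt(np.cumsum(v_true ** 2 * dt) / t)

print(dix(t, v_rms))   # -> [1500. 2000. 3000.]
```

In practice the difference quotient amplifies noise in the RMS picks, which is exactly why a regularized inversion such as the TV approach above is preferred over applying the bare formula.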

  4. Electromagnetic inverse scattering

    NASA Technical Reports Server (NTRS)

    Bojarski, N. N.

    1972-01-01

    A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect-angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.

  5. Polarimetric measurements in prominences and "tornadoes" observed by THEMIS

    NASA Astrophysics Data System (ADS)

    Schmieder, Brigitte; López Ariste, Arturo; Levens, Peter; Labrosse, Nicolas; Dalmasse, Kévin

    2015-10-01

    Since 2013, coordinated campaigns with the THEMIS spectropolarimeter in Tenerife and other instruments (space-based: Hinode/SOT, IRIS; ground-based: Sac Peak, Meudon) have been organized to observe prominences. THEMIS records spectropolarimetry at the He I D3 line, and we use the PCA inversion technique to derive the field strength, inclination, and azimuth.

  6. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application, not least because of the computational requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically such that it can be accounted for during inversion. The result is a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
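    The workflow, train a fast surrogate, quantify its modeling error probabilistically, then fold that error into the likelihood, can be sketched in miniature. The cubic forward function and polynomial surrogate below are illustrative stand-ins for the 2D full-waveform modeler and the trained neural network, and the posterior is evaluated on a grid rather than by Monte Carlo to keep the sketch deterministic.

```python
import numpy as np

def forward(m):                  # pretend this is the slow, accurate model
    return np.sin(m) + 0.1 * m ** 3

# "Train" a fast surrogate on a modest number of forward evaluations.
m_train = np.linspace(-2, 2, 40)
coeffs = np.polyfit(m_train, forward(m_train), deg=7)
surrogate = lambda m: np.polyval(coeffs, m)

# Quantify the surrogate's modeling error on held-out points.
m_val = np.linspace(-2, 2, 201)
sigma_model = np.std(surrogate(m_val) - forward(m_val))

# Inversion: combine data noise and modeling error in the likelihood.
m_true, sigma_data = 0.7, 0.05
d_obs = forward(m_true) + 0.02          # one noisy observation
sigma_tot2 = sigma_data ** 2 + sigma_model ** 2
grid = np.linspace(-2, 2, 2001)
log_post = -0.5 * (d_obs - surrogate(grid)) ** 2 / sigma_tot2
print(grid[np.argmax(log_post)])        # close to m_true = 0.7
```

In the paper the same error-quantification step is what keeps the fast-but-approximate surrogate from biasing the sampled posterior.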

  7. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology in the 1970s. This included early work by Aki, Christoffersson, and Husebye, who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.
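    The linearized travel-time inversions described above reduce to solving t = G s, where s holds cell slownesses and G holds ray path lengths through each cell. The sketch below is a deliberately tiny, hypothetical geometry (a 2x2 grid crossed by straight rays), not any of the cited methods; note that row and column rays alone leave an ambiguity, which the diagonal ray resolves.

```python
import numpy as np

cells = np.array([0.4, 0.5, 0.55, 0.45])     # true slownesses, s/km
r2 = np.sqrt(2.0)
G = np.array([
    [1.0, 1.0, 0.0, 0.0],    # ray along top row
    [0.0, 0.0, 1.0, 1.0],    # ray along bottom row
    [1.0, 0.0, 1.0, 0.0],    # ray down left column
    [0.0, 1.0, 0.0, 1.0],    # ray down right column
    [r2,  0.0, 0.0, r2 ],    # diagonal ray breaks the row/column ambiguity
])
times = G @ cells                             # observed travel times

# Least-squares slowness estimate from the travel times.
recovered, *_ = np.linalg.lstsq(G, times, rcond=None)
print(recovered)   # -> close to [0.4, 0.5, 0.55, 0.45]
```

Real tomography differs mainly in scale and in the need for regularization, since ray coverage is uneven and the system is ill-conditioned.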

  8. Feedback control laws for highly maneuverable aircraft

    NASA Technical Reports Server (NTRS)

    Garrard, William L.; Balas, Gary J.

    1994-01-01

    During the first half of the year, the investigators concentrated their efforts on completing the design of control laws for the longitudinal axis of the HARV. During the second half of the year, they concentrated on the synthesis of control laws for the lateral-directional axes. The longitudinal control law design efforts can be briefly summarized as follows. Longitudinal control laws were developed for the HARV using mu synthesis design techniques coupled with dynamic inversion. An inner loop dynamic inversion controller was used to simplify the system dynamics by eliminating the aerodynamic nonlinearities and inertial cross coupling. Models of the errors resulting from uncertainties in the principal longitudinal aerodynamic terms were developed and included in the model of the HARV with the inner loop dynamic inversion controller. This resulted in an inner loop transfer function model which was an integrator with the modeling errors characterized as uncertainties in gain and phase. Outer loop controllers were then designed using mu synthesis to provide robustness to these modeling errors and give the desired response to pilot inputs. Both pitch rate and angle of attack command following systems were designed. The following tasks have been accomplished for the lateral-directional controllers: inner and outer loop dynamic inversion controllers have been designed; an error model based on a linearized perturbation model of the inner loop system was derived; controllers for the inner loop system have been designed, using classical techniques, that control roll rate and Dutch roll response; the inner loop dynamic inversion and classical controllers have been implemented on the six degree of freedom simulation; and a lateral-directional control allocation scheme has been developed based on minimizing required control effort.
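    The inner-loop dynamic inversion idea can be shown for a scalar system: with dynamics xdot = f(x) + g(x)·u, choosing u = (v - f(x)) / g(x) reduces the inner loop to the pure integrator xdot = v, around which a simple outer loop is closed. The f and g below are illustrative nonlinearities, not the HARV aerodynamics.

```python
import numpy as np

f = lambda x: -0.5 * x - 0.3 * x ** 3      # nonlinear plant dynamics
g = lambda x: 2.0 + 0.5 * np.cos(x)        # control effectiveness (always > 0)

x, x_cmd, dt, k = 0.0, 1.0, 0.001, 5.0
for _ in range(5000):                      # 5 s of Euler integration
    v = k * (x_cmd - x)                    # outer loop: commanded xdot
    u = (v - f(x)) / g(x)                  # inner loop: dynamic inversion
    x += dt * (f(x) + g(x) * u)            # true plant response

print(x)   # converges to the command, ~1.0
```

When f and g are only approximately known, the cancellation is imperfect; that residual is exactly the gain/phase uncertainty the mu-synthesis outer loop is designed to tolerate.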

  9. Contribution of 3D inversion of Electrical Resistivity Tomography data applied to volcanic structures

    NASA Astrophysics Data System (ADS)

    Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe

    2016-04-01

    The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used for imaging geological structures. Such structures can present complex characteristics that conventional 2D inversion processes cannot perfectly integrate. Here we present a new 3D inversion algorithm named EResI, first developed for levee investigation and presently applied to the study of a complex lava dome (the Puy de Dôme volcano, France). The EResI algorithm is based on a conventional regularized Gauss-Newton inversion scheme and a 3D non-structured discretization of the model (double grid method based on tetrahedrons). This discretization makes it possible to accurately model the topography of the investigated structure (without a mesh deformation procedure) and also permits precise location of the electrodes. Moreover, we demonstrate that a complete 3D unstructured discretization limits the number of inversion cells and is better adapted to the resolution capacity of tomography than a structured discretization. This study shows that a 3D inversion with a non-structured parametrization has several advantages over classical 2D inversions. The first is that a 2D inversion leads to artefacts due to 3D effects (3D topography, 3D internal resistivity). The second is that the ability to align electrodes along an axis in the field (for 2D surveys) depends on the constraints of the terrain (topography...). In this case, the 2D assumption made by 2.5D inversion software prevents it from modeling electrodes outside this axis, leading to artefacts in the inversion result. The last limitation comes from the mesh deformation techniques used to accurately model topography in 2D software. This technique, used for structured discretizations (Res2dinv), is prohibited for strong topography (>60%) and leads to small computational errors.
A wide geophysical survey was carried out on the Puy de Dôme volcano, resulting in 12 ERT profiles with approximately 800 electrodes. We performed two processing stages, inverting each profile independently in 2D (RES2DINV software) and the complete data set in 3D (EResI). Comparison of the 3D inversion results with those obtained through a conventional 2D inversion process shows that EResI accurately accounts for arbitrary electrode positions and reduces the artefacts that off-axis positioning errors introduce into the inversion models. The comparison also highlights the benefit of integrating several ERT lines when computing 3D models of complex volcanic structures. Finally, the resulting 3D model allows a better interpretation of the Puy de Dôme volcano.
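
    The regularized Gauss-Newton scheme on which EResI is based can be sketched in a few lines. The exponential forward model, starting model, and damping weight below are hypothetical toy choices (not the ERT forward problem), meant only to show the structure of the update step:

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-6, n_iter=20):
    """Regularized Gauss-Newton: minimize ||F(m) - d||^2 + lam * ||m - m0||^2."""
    m = m0.copy()
    for _ in range(n_iter):
        r = forward(m) - d_obs
        J = jacobian(m)
        # Normal equations with Tikhonov damping toward the starting model m0
        A = J.T @ J + lam * np.eye(m.size)
        b = J.T @ r + lam * (m - m0)
        m = m - np.linalg.solve(A, b)
    return m

# Toy nonlinear forward model: d_i = exp(-m0 * x_i) + m1 (illustrative only)
x = np.linspace(0.0, 2.0, 50)
forward = lambda m: np.exp(-m[0] * x) + m[1]
jacobian = lambda m: np.column_stack([-x * np.exp(-m[0] * x), np.ones_like(x)])

m_true = np.array([1.5, 0.3])
d_obs = forward(m_true)
m_est = gauss_newton(forward, jacobian, d_obs, m0=np.array([1.0, 0.0]))
```

    On noiseless data the iteration converges to the true parameters; in practice the damping weight trades data fit against model stability.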

  10. Directional Slack-Based Measure for the Inverse Data Envelopment Analysis

    PubMed Central

    Abu Bakar, Mohd Rizam; Lee, Lai Soon; Jaafar, Azmi B.; Heydar, Maryam

    2014-01-01

    This research introduces a novel technique based on the directional slack-based measure for the inverse Data Envelopment Analysis (DEA). Specifically, we elucidate an inverse directional slack-based measure model within a new production possibility set, for the case in which the output (input) quantities of an efficient decision making unit (DMU) are modified. In this method, the efficient DMU is omitted from the present production possibility set and substituted by a copy of itself whose input and output quantities have been modified accordingly. The efficiency scores of all DMUs are retained in this approach, and the efficiency score of the modified unit may also improve. The proposed approach is investigated with reference to a resource allocation problem and can simultaneously consider increases (decreases) of several outputs associated with the efficient DMU. Numerical examples are presented to illustrate the significance of the proposed model. PMID:24883350

  11. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissues have different acoustic and electrical properties, and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived to link the material properties of the inverse scattering algorithms to microwave S-parameter measurements, allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts form the foundation of a formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment, and imaging results of simple targets are presented. 
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.

  12. Research and application of spectral inversion technique in frequency domain to improve resolution of converted PS-wave

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guamg-Ming; Li, Zhong

    2017-06-01

    Multi-wave exploration is an effective means of improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted-wave data are characterized by a low signal-to-noise ratio and low resolution, because conventional deconvolution technology is easily affected by frequency-band limits, leaving limited scope for improving resolution. Spectral inversion techniques can identify thin layers down to λ/8, and their breakthrough beyond conventional band limits has greatly improved seismic resolution. The difficulty with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on improving the vertical resolution of the converted PS-wave in multi-wave data processing. Based on previous research, we propose a least-squares inversion algorithm with a total variation constraint, which uses the total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. We simulate a Gaussian fit to the amplitude spectrum to obtain broadband wavelet data, which we then process to obtain a higher-resolution converted wave. We successfully apply the proposed inversion technology to the processing of high-resolution data from the Penglai region, obtaining higher-resolution converted-wave data, which we then verify in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.
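
    The total-variation-constrained least-squares idea can be illustrated with an iteratively reweighted least-squares (IRLS) solver, in which the L1 difference penalty is approximated by reweighted quadratics. The convolutional forward operator, wavelet, and parameter values below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def tv_least_squares(G, d, lam=0.05, n_iter=50, eps=1e-6):
    """IRLS sketch for min ||G m - d||^2 + lam * sum_i |m_{i+1} - m_i|."""
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)                  # first-difference operator
    m = np.linalg.lstsq(G, d, rcond=None)[0]        # unconstrained start
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ m) ** 2 + eps)       # reweighting approximates the L1 norm
        A = G.T @ G + lam * D.T @ (w[:, None] * D)
        m = np.linalg.solve(A, G.T @ d)
    return m

# Blocky reflectivity recovered from band-limited (wavelet-convolved) data
n = 60
m_true = np.zeros(n); m_true[20:30] = 1.0; m_true[40:45] = -0.5
wav = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)  # toy band-limiting wavelet
G = np.array([np.convolve(np.eye(n)[i], wav, mode="same") for i in range(n)]).T
d = G @ m_true
m_est = tv_least_squares(G, d)
```

    The total-variation weighting favours blocky (piecewise-constant) reflectivity, which is why it suits thin-layer reflection-coefficient estimation.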

  13. Modeling T1 and T2 relaxation in bovine white matter

    NASA Astrophysics Data System (ADS)

    Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.

    2015-10-01

    The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons, and intra/extracellular water) that incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow-bandwidth than for broad-bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit the results from both inversion conditions with a single set of parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components with negative amplitude coefficients that cannot be correctly modeled with non-negative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short-T1 component might be used to quantify myelin water.
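
    The 1D-NNLS analysis can be illustrated on a toy two-pool decay. The echo train, T2 values, and grid below are hypothetical, and no exchange is modeled; the point is only how a non-negative fit distributes amplitude over a fixed T2 grid:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic CPMG decay: 20% myelin water (T2 = 15 ms), 80% IE water (T2 = 80 ms)
te = np.arange(1, 33) * 10e-3                  # 32 echoes, 10 ms spacing (s)
signal = 0.2 * np.exp(-te / 15e-3) + 0.8 * np.exp(-te / 80e-3)

# 1D-NNLS: non-negative amplitudes on a fixed logarithmic T2 grid
t2_grid = np.logspace(np.log10(5e-3), np.log10(300e-3), 60)
A = np.exp(-te[:, None] / t2_grid[None, :])
amps, _ = nnls(A, signal)

# Myelin water fraction: amplitude in the short-T2 window (here < 40 ms)
mwf = amps[t2_grid < 40e-3].sum() / amps.sum()
```

    With exchange present, as the abstract argues, some components acquire negative amplitudes that such a non-negative fit cannot represent.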

  14. Contributed Review: Experimental characterization of inverse piezoelectric strain in GaN HEMTs via micro-Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Bagnall, Kevin R.; Wang, Evelyn N.

    2016-06-01

    Micro-Raman thermography is one of the most popular techniques for measuring local temperature rise in gallium nitride (GaN) high electron mobility transistors with high spatial and temporal resolution. However, accurate temperature measurements based on changes in the Stokes peak positions of the GaN epitaxial layers require properly accounting for the stress and/or strain induced by the inverse piezoelectric effect. It is common practice to use the pinched OFF state as the unpowered reference for temperature measurements because the vertical electric field in the GaN buffer that induces inverse piezoelectric stress/strain is relatively independent of the gate bias. Although this approach has yielded temperature measurements that agree with those derived from the Stokes/anti-Stokes ratio and thermal models, there has been significant difficulty in quantifying the mechanical state of the GaN buffer in the pinched OFF state from changes in the Raman spectra. In this paper, we review the experimental technique of micro-Raman thermography and derive expressions for the detailed dependence of the Raman peak positions on strain, stress, and electric field components in wurtzite GaN. We also use a combination of semiconductor device modeling and electro-mechanical modeling to predict the stress and strain induced by the inverse piezoelectric effect. Based on the insights gained from our electro-mechanical model and the best values of material properties in the literature, we analyze changes in the E2 high and A1 (LO) Raman peaks and demonstrate that there are major quantitative discrepancies between measured and modeled values of inverse piezoelectric stress and strain. We examine many of the hypotheses offered in the literature for these discrepancies but conclude that none of them satisfactorily resolves these discrepancies. 
Further research is needed to determine whether the electric field components could be affecting the phonon frequencies apart from the inverse piezoelectric effect in wurtzite GaN, which has been predicted theoretically in zinc blende gallium arsenide (GaAs).

  15. Research on Inversion Models for Forest Height Estimation Using Polarimetric SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Duan, B.; Zou, B.

    2017-09-01

    Forest height is an important forest resource parameter, commonly used in biomass estimation. Forest height extraction with PolInSAR is an active research field in imaging SAR remote sensing. SAR interferometry is a well-established technique for estimating the vertical location of the effective scattering center in each resolution cell through the phase difference between images acquired from spatially separated antennas. PolInSAR has applications ranging from climate monitoring to disaster detection, and it is of particular interest in forested areas because it is quite sensitive to the location and vertical distribution of vegetation structure components. However, some of the existing methods cannot estimate forest height accurately. Here we introduce several available inversion models and compare the precision of some classical inversion approaches using simulated data. By comparing the advantages and disadvantages of these inversion methods, researchers can more conveniently find better solutions based on them.

  16. Analysis of protein circular dichroism spectra for secondary structure using a simple matrix multiplication.

    PubMed

    Compton, L A; Johnson, W C

    1986-05-15

    Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
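
    A minimal numerical sketch of the generalized-inverse construction, with synthetic basis spectra and random structure fractions standing in for real CD training data (all matrices here are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n_wave, n_struct, n_prot = 40, 5, 16

# Synthetic stand-ins: basis spectra B and known structure fractions F
B = rng.normal(size=(n_wave, n_struct))                # one column per structure
F = rng.dirichlet(np.ones(n_struct), size=n_prot).T    # fractions sum to 1 per protein
C = B @ F                                              # "training" CD spectra

# "Inverse CD spectra": rows of X map a measured spectrum directly to fractions
X = F @ np.linalg.pinv(C)       # pinv is built from the SVD of the training set

f_new = rng.dirichlet(np.ones(n_struct))               # unseen protein
c_new = B @ f_new
f_pred = X @ c_new              # one dot product per secondary structure
```

    Each row of X plays the role of an "inverse CD spectrum": prediction is a single dot product, with no per-protein least-squares fit.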

  17. Oil core microcapsules by inverse gelation technique.

    PubMed

    Martins, Evandro; Renard, Denis; Davy, Joëlle; Marquis, Mélanie; Poncelet, Denis

    2015-01-01

    A promising technique for oil encapsulation in Ca-alginate capsules by inverse gelation was proposed by Abang et al. This method consists of emulsifying a calcium chloride solution in oil and then adding it dropwise to an alginate solution to produce Ca-alginate capsules. Spherical capsules with diameters around 3 mm were produced by this technique; however, the production of smaller capsules was not demonstrated. The objective of this study is to propose a new method of oil encapsulation in a Ca-alginate membrane by inverse gelation. Optimisation of the method leads to microcapsules with diameters around 500 μm. In the search for microcapsules with improved diffusion characteristics, size reduction is essential to broaden applications in the food, cosmetics, and pharmaceutical industries. This work contributes to a better understanding of the inverse gelation technique and allows the production of microcapsules with a well-defined shell-core structure.

  18. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Safe with respect to patient health and with no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty: solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as replacements for the linear solver, but in practice very few are capable of stable, high-quality, real-time EIT imaging, because of their low robustness to errors and inaccurate modeling or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856

  19. Electromagnetic modelling, inversion and data-processing techniques for GPR: ongoing activities in Working Group 3 of COST Action TU1208

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan

    2015-04-01

    This work presents the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, while promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modelling tools being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral-domain Cylindrical-Wave Approach (CWA). GprMax, a well-known and versatile freeware FDTD simulator, enables a realistic representation of the soil/material hosting the sought structures and of the GPR antennas; input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves, thereby taking into account multiple scattering within the medium hosting the targets. Recently, the method has been extended to deal with through-the-wall scenarios. 
One of the inversion techniques currently being improved is Full-Waveform Inversion (FWI) for on-ground, off-ground, and crosshole GPR configurations. In contrast to conventional inversion tools, which are often based on approximations and use only part of the available data, FWI uses the complete measured data and detailed modelling tools to obtain an improved estimation of medium properties. During the first year of the Action, information was collected and shared about the state of the art of the available modelling, imaging, inversion, and data-processing methods. Advancements achieved by WG3 Members were presented during the TU1208 Second General Meeting (April 30 - May 2, 2014, Vienna, Austria) and the 15th International Conference on Ground Penetrating Radar (June 30 - July 4, 2014, Brussels, Belgium). Currently, a database of numerical and experimental GPR responses from natural and manmade structures is being designed. A geometrical and physical description of the scenarios, together with the available synthetic and experimental data, will be at the disposal of the scientific community. Researchers will thus have a further opportunity to test and validate, against reliable data, their electromagnetic forward- and inverse-scattering techniques, imaging methods, and data-processing algorithms. The motivation for this database emerged during TU1208 meetings, taking inspiration from successful past initiatives in other areas, such as the Ipswich and Fresnel databases in free-space electromagnetic scattering and the Marmousi database in seismic science. Acknowledgement The Authors thank COST for funding the Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.'
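
    The flavour of FDTD modelling can be conveyed by a one-dimensional free-space update loop in normalized units. This sketch is vastly simpler than a full simulator such as gprMax, and the grid size, Courant number, and source are illustrative choices:

```python
import numpy as np

# Minimal 1D FDTD sketch in normalized units (free space, reflecting ends)
nx, nt = 400, 800
ez = np.zeros(nx)              # electric field on the grid
hy = np.zeros(nx - 1)          # magnetic field on the staggered grid
c = 0.5                        # Courant number (<= 1 for stability)
src = 100                      # source cell

for n in range(nt):
    hy += c * np.diff(ez)                              # update H from the curl of E
    ez[1:-1] += c * np.diff(hy)                        # update E from the curl of H
    ez[src] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)   # soft Gaussian source
```

    Real GPR modelling adds material parameters per cell, absorbing boundaries, and antenna models, but the leapfrog update above is the core of the method.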

  20. Ionospheric Asymmetry Evaluation using Tomography to Assess the Effectiveness of Radio Occultation Data Inversion

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Notarpietro, R.; Yin, P.; Nava, B.

    2013-12-01

    The Multi-Instrument Data Analysis System (MIDAS) algorithm is based on oceanographic imaging techniques, first applied to image 2D slices of the ionosphere. The first version of MIDAS (version 1.0) was able to deal with any line-integral data, such as GPS-ground or GPS-LEO differential-phase data or inverted ionograms. The current version extends tomography into four-dimensional (latitude, longitude, height, and time) spatio-temporal mapping that combines all observations simultaneously in a single inversion with a minimum of a priori assumptions about the form of the ionospheric electron-concentration distribution. This work investigates Radio Occultation (RO) data assimilation into MIDAS by assessing ionospheric asymmetry and its impact on RO data inversion when the Onion-peeling algorithm is used. Ionospheric RO data from the COSMIC mission, specifically data collected during the 24 September 2011 storm over mid-latitudes, have been used for the data assimilation. Using output electron density data from MIDAS (with and without RO assimilation) and ideal RO geometries, we assessed ionospheric asymmetry. The level of asymmetry increased significantly when the storm was active, due to increased ionization, which in turn produced large gradients along the occulted ray path in the ionosphere. The presence of larger gradients was better observed when MIDAS was used with RO-assimilated data. A very good correlation has been found between the evaluated asymmetry and the errors in the inversion products when the inversion is performed with standard techniques based on the assumption of spherical symmetry of the ionosphere. Errors are evaluated for the peak electron density (NmF2) estimate and the vertical TEC (VTEC). 
This work highlights the importance of having a tool able to assess the effectiveness of Radio Occultation data inversion with standard algorithms, such as Onion-peeling, that are based on the assumption of ionospheric spherical symmetry. The outcome of this work will help identify a better inversion algorithm that deals with ionospheric asymmetry in a more realistic way; this is foreseen as a task for future research. This work has been done under the framework of the TRANSMIT project (ITN Marie Curie Actions - GA No. 264476).

  1. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics: concentration root-mean-square error (RMSE), peak concentration, and total solute mass. In addition, results for POD inversion based on 3 different data densities (120, 300, and 560 data points) and varying numbers of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. 
Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
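
    The POD workflow can be sketched as follows. Gaussian "plumes" with random centres and widths stand in for the Monte Carlo flow-and-transport training runs, and a random linear operator stands in for the resistivity physics; everything here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_train = 200, 500
x = np.linspace(0.0, 1.0, n_cells)

# Training "plumes": Gaussian blobs with random centre and width
train = np.array([np.exp(-0.5 * ((x - c) / w) ** 2)
                  for c, w in zip(rng.uniform(0.2, 0.8, n_train),
                                  rng.uniform(0.05, 0.15, n_train))]).T

# POD: leading left singular vectors of the centred snapshot matrix form the basis
mean = train.mean(axis=1)
U, s, _ = np.linalg.svd(train - mean[:, None], full_matrices=False)
Phi = U[:, :20]                                    # keep 20 basis vectors

# Basis-constrained inversion of a linear "survey" (toy forward operator)
G = rng.normal(size=(80, n_cells))
m_true = np.exp(-0.5 * ((x - 0.5) / 0.08) ** 2)    # plume consistent with the training set
d = G @ m_true
a = np.linalg.lstsq(G @ Phi, d - G @ mean, rcond=None)[0]
m_est = mean + Phi @ a                             # image reconstructed in the reduced space
```

    The inversion searches only over the 20 POD coefficients rather than the 200 cell values, which is where both the regularization and the dimensionality reduction come from.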

  2. Model-based elastography: a survey of approaches to the inverse elasticity problem

    PubMed Central

    Doyley, M M

    2012-01-01

    Elastography is emerging as an imaging modality that can distinguish normal versus diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas (quasi-static, harmonic, and transient) and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839

  3. Inverse Calibration Free fs-LIBS of Copper-Based Alloys

    NASA Astrophysics Data System (ADS)

    Smaldone, Antonella; De Bonis, Angela; Galasso, Agostino; Guarnaccio, Ambra; Santagata, Antonio; Teghil, Roberto

    2016-09-01

    In this work, we present the analysis of copper-based alloys of different compositions by the Laser Induced Breakdown Spectroscopy (LIBS) technique using fs laser pulses. A Nd:Glass laser (Twinkle Light Conversion, λ = 527 nm at 250 fs) and a set of bronze and brass certified standards were used. The inverse Calibration-Free method (inverse CF-LIBS) was applied to estimate the temperature of the fs-laser-induced plasma in order to achieve quantitative elemental analysis of such materials. This approach strengthens the hypothesis that, through assessment of the plasma temperature occurring in fs-LIBS, straightforward and reliable analytical data can be provided. To this end, we show the capability of the adopted inverse CF-LIBS method, which is based on fulfilment of the Local Thermodynamic Equilibrium (LTE) condition, to indirectly determine the species excitation temperature. The estimated temperatures provide a good figure of merit between the certified and the experimentally determined compositions of the bronze and brass materials employed here, although further correction procedures, such as the use of calibration curves, may be required. The results demonstrate that the inverse CF-LIBS method can be applied with fs laser pulses, even though the plasma properties may be affected by matrix effects that restrict its full application to unknown samples, provided that a certified standard of similar composition is available.
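
    The temperature step at the heart of (inverse) CF-LIBS is a Boltzmann plot: under LTE, ln(I λ / gA) is linear in the upper-level energy with slope -1/(k_B T). The line energies, gA products, and wavelengths below are hypothetical values, not certified Cu data:

```python
import numpy as np

k_B = 8.617e-5                      # Boltzmann constant (eV/K)

# Hypothetical line set: upper-level energies (eV), gA products (s^-1), wavelengths (nm)
E   = np.array([3.82, 5.10, 6.12, 6.87, 7.74])
gA  = np.array([2.2e8, 6.1e7, 1.9e8, 8.0e7, 3.5e8])
lam = np.array([510.5, 515.3, 521.8, 529.2, 578.2])

T_true = 9500.0                     # plasma temperature (K)
intensity = (gA / lam) * np.exp(-E / (k_B * T_true))   # LTE line intensities (arb. units)

# Boltzmann plot: ln(I * lam / gA) vs E has slope -1/(k_B * T)
y = np.log(intensity * lam / gA)
slope, intercept = np.polyfit(E, y, 1)
T_est = -1.0 / (k_B * slope)
```

    In the inverse variant, the known composition of a certified standard is used to anchor this temperature determination rather than inferring composition from it.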

  4. Large-scale 3D inversion of marine controlled source electromagnetic data using the integral equation method

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.

    2009-12-01

    The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of recent developments in large-scale 3D EM forward modeling and inversion. Our approach is based on a new integral form of Maxwell's equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation of the 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional, which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of subsurface targets.

  5. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques, the resulting algorithm is well suited for large-scale problems. Furthermore, the reconstruction of the magnetization state within a permanent magnet, as well as an optimal design application, is demonstrated. PMID:28098851

  6. Inverse kinematic-based robot control

    NASA Technical Reports Server (NTRS)

    Wolovich, W. A.; Flueckiger, K. F.

    1987-01-01

    A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x,y,z) space, such as simple tracking along one or more straight line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator; i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, theta sub 1, theta sub 2, and theta sub 3, which produce a desired wrist origin position P sub xw, P sub yw, and P sub zw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
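
    For the simplest non-redundant case, a planar two-link arm, the inverse kinematic problem has the familiar closed-form solution sketched below (link lengths and the target point are arbitrary illustrative choices, not the manipulator discussed in the abstract):

```python
import numpy as np

def ik_2link(x, y, l1=1.0, l2=0.8, elbow=+1):
    """Closed-form inverse kinematics for a planar 2-link arm (elbow = +/-1 branch)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    t2 = elbow * np.arccos(c2)
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
    return t1, t2

def fk_2link(t1, t2, l1=1.0, l2=0.8):
    """Forward kinematics, used here to verify the inverse solution."""
    return (l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
            l1 * np.sin(t1) + l2 * np.sin(t1 + t2))

t1, t2 = ik_2link(1.2, 0.5)
x, y = fk_2link(t1, t2)
```

    The two `elbow` branches reflect the multiplicity of inverse kinematic solutions that any robot controller must disambiguate.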

  7. Inversion for the driving forces of plate tectonics

    NASA Technical Reports Server (NTRS)

    Richardson, R. M.

    1983-01-01

    Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
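    The "best combination of force models" step can be sketched as an ordinary linear least-squares problem solved with a generalized inverse. Everything below is synthetic and illustrative; the actual study fits the twelve major plates with variance-covariance weighting.

```python
import numpy as np

# Columns: predicted plate velocities from each candidate force model
# (e.g. ridge push, slab pull, basal drag), evaluated at the same
# points as the observations.  Values are synthetic, for illustration.
rng = np.random.default_rng(0)
G = rng.normal(size=(12, 3))            # 12 velocity components, 3 force models
true_coeff = np.array([1.0, 0.4, -0.2])
v_obs = G @ true_coeff + 0.01 * rng.normal(size=12)  # noisy "observed" velocities

# Generalized-inverse (least-squares) solution for the force-model weights
coeff, *_ = np.linalg.lstsq(G, v_obs, rcond=None)
```

    The recovered weights indicate the relative importance of each candidate force, which is the form of conclusion the abstract draws (ridge push dominant, convergent-margin forces smaller).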

  8. Forward and Inverse Predictive Model for the Trajectory Tracking Control of a Lower Limb Exoskeleton for Gait Rehabilitation: Simulation modelling analysis

    NASA Astrophysics Data System (ADS)

    Zakaria, M. A.; Majeed, A. P. P. A.; Taha, Z.; Alim, M. M.; Baarath, K.

    2018-03-01

    The movement of a lower limb exoskeleton requires a reasonably accurate control method to allow for an effective gait therapy session to transpire. Trajectory tracking is a nontrivial passive rehabilitation technique for correcting the motion of a patient's impaired limb. This paper proposes an inverse predictive model that is coupled with the forward kinematics of the exoskeleton to estimate the behaviour of the system. A conventional PID control system is used to converge the required joint angles based on the desired input from the inverse predictive model. The present study demonstrates that the inverse predictive model is capable of meeting the trajectory demand with acceptable error tolerance. The findings further suggest that the predictive model can issue correct joint-angle commands to the system.
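    A minimal sketch of the control idea, assuming a single joint modeled as a damped double integrator (a toy plant, not the paper's exoskeleton dynamics): a conventional PID loop converges the joint angle to the command supplied by an inverse model. Gains and time step are illustrative.

```python
# Toy plant and gains, chosen for illustration only.
kp, ki, kd, dt = 8.0, 2.0, 0.5, 0.01
target = 0.7                     # joint-angle command (rad) from an inverse model
theta, vel, integ, prev_err = 0.0, 0.0, 0.0, None
for _ in range(3000):            # 30 s of simulated time
    err = target - theta
    integ += err * dt
    deriv = 0.0 if prev_err is None else (err - prev_err) / dt
    torque = kp * err + ki * integ + kd * deriv
    prev_err = err
    vel += (torque - vel) * dt   # damped double-integrator joint
    theta += vel * dt
```

    In the paper's setting the same loop would run per joint, with the inverse predictive model supplying a time-varying `target` along the desired gait trajectory.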

  9. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement result, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
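    The OMP building block used in the first phase can be sketched as follows. This is generic textbook OMP on synthetic data, not the paper's vOMMP, its prior-probability weighting, or its MIF hardware implementation.

```python
import numpy as np

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily grow the support set,
    # re-fitting the coefficients by least squares at each step.
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
Phi = rng.normal(size=(64, 128)) / np.sqrt(64)   # random sensing matrix
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.5, -2.0, 0.8]           # 3-sparse signal
y = Phi @ x_true                                 # noiseless measurements
x_hat = omp(Phi, y, k=3)
```

    The least-squares re-fit at each step is where the pseudo-inverse cost arises; the paper's MIF technique replaces that step with a QR-based update.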

  10. On the Power and the Systematic Biases of the Detection of Chromosomal Inversions by Paired-End Genome Sequencing

    PubMed Central

    Lucas Lledó, José Ignacio; Cáceres, Mario

    2013-01-01

    One of the most used techniques to study structural variation at a genome level is paired-end mapping (PEM). PEM has the advantage of being able to detect balanced events, such as inversions and translocations. However, inversions are still quite difficult to predict reliably, especially from high-throughput sequencing data. We simulated realistic PEM experiments with different combinations of read and library fragment lengths, including sequencing errors and meaningful base-qualities, to quantify and track down the origin of false positives and negatives along sequencing, mapping, and downstream analysis. We show that PEM is very appropriate to detect a wide range of inversions, even with low coverage data. However, % of inversions located between segmental duplications are expected to go undetected by the most common sequencing strategies. In general, longer DNA libraries improve the detectability of inversions far better than increments of the coverage depth or the read length. Finally, we review the performance of three algorithms to detect inversions (SVDetect, GRIAL, and VariationHunter), identify common pitfalls, and reveal important differences in their breakpoint precisions. These results stress the importance of the sequencing strategy for the detection of structural variants, especially inversions, and offer guidelines for the design of future genome sequencing projects. PMID:23637806

  11. Monte Carlo uncertainty analyses of a bLS inverse-dispersion technique for measuring gas emissions from livestock operations

    USDA-ARS?s Scientific Manuscript database

    The backward Lagrangian stochastic (bLS) inverse-dispersion technique has been used to measure fugitive gas emissions from livestock operations. The accuracy of the bLS technique, as indicated by the percentages of gas recovery in various tracer-release experiments, has generally been within ± 10% o...

  12. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  13. Rainfall assimilation in RAMS by means of the Kuo parameterisation inversion: method and preliminary results

    NASA Astrophysics Data System (ADS)

    Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.

    2004-03-01

    In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique is described for separating the convective component of precipitation from the stratiform one, without requiring any ancillary measurements. The method is general purpose, but it is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both through totally simulated experiments and through experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).
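    The nudging idea, increasing the weight of the estimated precipitation 'gently' with time, can be sketched as a Newtonian relaxation toward the satellite estimate. All numbers below are synthetic, and the real scheme operates through the Kuo parameterisation inversion rather than directly on rain rates.

```python
import numpy as np

model_precip = np.array([2.0, 0.5, 4.0])   # model rain rates (mm/h), illustrative
observed     = np.array([3.0, 0.0, 5.0])   # satellite-estimated rain rates

dt, tau = 60.0, 3600.0                      # time step and relaxation time (s)
n_steps = 120
for step in range(n_steps):
    weight = (step + 1) / n_steps           # weight increased 'gently' with time
    # Newtonian relaxation (nudging) toward the observed field
    model_precip += weight * (observed - model_precip) * dt / tau
```

    Ramping the weight avoids shocking the model state at the start of the assimilation window, which is the point of the 'gentle' increase described in the abstract.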

  14. Simultaneous estimation of aquifer thickness, conductivity, and BC using borehole and hydrodynamic data with geostatistical inverse direct method

    NASA Astrophysics Data System (ADS)

    Gao, F.; Zhang, Y.

    2017-12-01

    A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of the measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitude reaching up to +/- 10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a +/- 5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness becomes less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations such as thickness, Darcy fluxes, or flow rates. Data requirement of the new inversion method is thus not much different from that of interpreting classic well tests.
Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.

  15. Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine

    2017-10-01

    In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the Tess and Plato2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements of stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core conditions indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us to further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.

  16. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.
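    A Newton-type least-squares loop of the kind discussed can be sketched generically. The forward model below is an arbitrary two-parameter exponential chosen for illustration, not the SPM backscatter model, and the stochastic weighting described in the abstract is omitted.

```python
import numpy as np

def model(p, x):
    # Toy nonlinear forward model standing in for the parameterized
    # scattering model: a * exp(-b * x).  Parameters are illustrative.
    a, b = p
    return a * np.exp(-b * x)

def gauss_newton(p, x, y, n_iter=50):
    # Gauss-Newton iteration for the nonlinear least-squares criterion
    for _ in range(n_iter):
        a, b = p
        r = y - model(p, x)                           # residuals
        J = np.column_stack([np.exp(-b * x),          # d model / d a
                             -a * x * np.exp(-b * x)])  # d model / d b
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
    return p

x = np.linspace(0, 2, 30)
y = model([2.0, 1.3], x)                              # noise-free synthetic data
p_hat = gauss_newton(np.array([1.0, 1.0]), x, y)
```

    Different Newton-type variants (Levenberg-Marquardt, quasi-Newton) differ mainly in how the step is damped or the Jacobian is approximated, which is what the abstract's convergence comparison is about.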

  17. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using Shearlet transform (ST) and spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique and an optoelectronic set-up for encryption is also proposed.
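    The Arnold-transform scrambling step can be sketched as below: a plain cat map on a small integer array. The paper applies it to a QR-code image, and details such as the iteration count are scheme-specific assumptions here.

```python
import numpy as np

def arnold(img, iterations=1):
    # Arnold cat-map scrambling of an N x N image:
    # pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N).
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold(img, 2)
# the map is periodic: for N = 4 the period is 3, so one more
# iteration returns the original image
restored = arnold(scrambled, 1)
```

    The periodicity is what makes the scrambling invertible without storing a permutation table: descrambling is just iterating the same map up to the period.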

  18. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that the presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results.
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  19. A general approach to regularizing inverse problems with regional data using Slepian wavelets

    NASA Astrophysics Data System (ADS)

    Michel, Volker; Simons, Frederik J.

    2017-12-01

    Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.
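    The core idea, regularizing an ill-posed problem through a known singular-value decomposition, can be sketched with plain truncated-SVD regularization on a synthetic smoothing operator. The Slepian regionalization and wavelet multiscale refinements of the paper are beyond this toy; all values are illustrative.

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Regularize an ill-posed linear system by keeping only the k
    # largest singular values in the pseudoinverse.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Synthetic ill-posed problem: a Gaussian smoothing operator on a 1-D grid.
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-100.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-6 * np.random.default_rng(2).normal(size=n)

x_reg = tsvd_solve(A, b, k=10)          # regularized solution
x_naive = np.linalg.solve(A, b)         # unregularized: noise-amplified
```

    Even tiny data noise destroys the naive solution because the operator's small singular values amplify it; truncation discards exactly those directions, which is the role the Slepian-based svd plays for regional data in the paper.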

  20. An inverse method for estimation of the acoustic intensity in the focused ultrasound field

    NASA Astrophysics Data System (ADS)

    Yu, Ying; Shen, Guofeng; Chen, Yazhu

    2017-03-01

    Recently, a new method based on infrared (IR) imaging was introduced. Authors (A. Shaw et al. and M. R. Myers et al.) have established the relationship between absorber surface temperature and incident intensity while the absorber is irradiated by the transducer. Theoretically, a shorter irradiation time makes the estimation more consistent with the actual results. But due to the influence of noise and performance constraints of the IR camera, it is hard to identify the difference in temperature with a short heating time. An inverse technique is developed to reconstruct the incident intensity distribution using the surface temperature with a shorter irradiation time. The algorithm is validated using surface temperature data generated numerically from a three-layer model which was developed to calculate the acoustic field in the absorber, the absorbed acoustic energy during the irradiation, and the consequent temperature elevation. To assess the effect of noisy data on the reconstructed intensity profile, different noise levels with zero mean were superposed on the exact data in the simulations. Simulation results demonstrate that the inversion technique can provide fairly reliable intensity estimation with satisfactory accuracy.

  1. Non-Contrast-Enhanced Renal Angiography Using Multiple Inversion Recovery and Alternating TR Balanced Steady State Free Precession

    PubMed Central

    Dong, Hattie Z.; Worters, Pauline W.; Wu, Holden H.; Ingle, R. Reeve; Vasanawala, Shreyas S.; Nishimura, Dwight G.

    2014-01-01

    Non-contrast enhanced renal angiography techniques based on balanced steady state free precession (SSFP) avoid external contrast agents, take advantage of high inherent blood signal from the T2/T1 contrast mechanism, and have short SSFP acquisition times. However, background suppression is limited; inflow times are inflexible; the labeling region is difficult to define when tagging arterial flow; and scan times are long. To overcome these limitations, we propose the use of multiple inversion recovery (MIR) preparatory pulses combined with alternating TR balanced SSFP (ATR-SSFP) to produce renal angiograms. MIR uses selective spatial saturation followed by four global inversion recovery pulses to concurrently null a wide range of background T1 species while allowing for adjustable inflow times; ATR-SSFP maintains vessel contrast and provides added fat suppression. The high level of suppression enables imaging in 3D as well as projective 2D formats, the latter of which has a scan time down to one heartbeat. In vivo studies at 1.5 T demonstrate the superior vessel contrast of this technique. PMID:23172805

  2. Damped regional-scale stress inversions: Methodology and examples for southern California and the Coalinga aftershock sequence

    USGS Publications Warehouse

    Hardebeck, J.L.; Michael, A.J.

    2006-01-01

    We present a new focal mechanism stress inversion technique to produce regional-scale models of stress orientation containing the minimum complexity necessary to fit the data. Current practice is to divide a region into small subareas and to independently fit a stress tensor to the focal mechanisms of each subarea. This procedure may lead to apparent spatial variability that is actually an artifact of overfitting noisy data or nonuniquely fitting data that does not completely constrain the stress tensor. To remove these artifacts while retaining any stress variations that are strongly required by the data, we devise a damped inversion method to simultaneously invert for stress in all subareas while minimizing the difference in stress between adjacent subareas. This method is conceptually similar to other geophysical inverse techniques that incorporate damping, such as seismic tomography. In checkerboard tests, the damped inversion removes the stress rotation artifacts exhibited by an undamped inversion, while resolving sharper true stress rotations than a simple smoothed model or a moving-window inversion. We show an example of a spatially damped stress field for southern California. The methodology can also be used to study temporal stress changes, and an example for the Coalinga, California, aftershock sequence is shown. We recommend use of the damped inversion technique for any study examining spatial or temporal variations in the stress field.
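    A scalar toy version of the damping idea, one unknown per subarea and a first-difference penalty between neighbours, might look like this. The data are synthetic; the real method inverts full stress tensors from focal mechanisms, and the damping weight below is an arbitrary choice.

```python
import numpy as np

# Scalar toy of the damped inversion: one unknown per subarea,
# observations are noisy per-subarea estimates, and a first-difference
# penalty couples adjacent subareas.  All values are synthetic.
n = 20
rng = np.random.default_rng(3)
true = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # one sharp "stress rotation"
data = true + 0.3 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)                     # couples neighbours

def damped_inverse(e):
    # minimize ||m - data||^2 + e^2 * ||D m||^2
    A = np.vstack([np.eye(n), e * D])
    b = np.concatenate([data, np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

m_damped = damped_inverse(1.5)
m_undamped = damped_inverse(0.0)                   # reduces to the raw data
```

    With no damping the model simply reproduces the noisy subarea estimates (the overfitting artifact described in the abstract), while damping suppresses the noise yet still resolves the one true sharp rotation.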

  3. Inverse Function: Pre-Service Teachers' Techniques and Meanings

    ERIC Educational Resources Information Center

    Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.

    2018-01-01

    Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…

  4. Elastic full waveform inversion based on the homogenization method: theoretical framework and 2-D numerical illustrations

    NASA Astrophysics Data System (ADS)

    Capdeville, Yann; Métivier, Ludovic

    2018-05-01

    Seismic imaging is an efficient tool to investigate the Earth interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.

  5. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  6. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate the spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo random binary sequence (PRBS) were produced in sequential order by an ion gate controller and utilized to control the ion gate of the IMS, and the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were obtained. A NIBOHT-IMS mobility spectrum was then obtained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR, as shown by measuring the reactant ions. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of large ion signals but also for the detection of small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.
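    The Hadamard multiplex/demultiplex cycle underlying HT-IMS can be sketched with a small S-matrix. The PRBS gate timing, the inverse-sequence pass, and the spectrum subtraction of NIBOHT-IMS are not reproduced here; the spectrum is a toy example.

```python
import numpy as np

def sylvester_hadamard(m):
    # Hadamard matrix of order 2**m via the Sylvester construction.
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

# S-matrix (0/1 gate-open pattern): delete the first row and column of
# the normalized Hadamard matrix and map {+1 -> 0, -1 -> 1}.
H = sylvester_hadamard(3)
S = (1 - H[1:, 1:]) // 2

spectrum = np.array([0, 5, 1, 0, 0, 3, 0])   # toy mobility spectrum
measured = S @ spectrum                       # multiplexed measurements
recovered = np.linalg.inv(S) @ measured       # Hadamard demultiplexing
```

    Because roughly half the gate patterns are open in each row, multiplexing collects far more ions per scan than a single narrow gate pulse, which is the source of the SNR advantage the abstract builds on.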

  7. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.

  8. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum-phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. The approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to a helicopter hover control problem with near non-hyperbolic internal dynamics to illustrate the trade-off between exact tracking and reduction of pre-actuation time.

  9. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models, which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to iteratively improve the model with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities.
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
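The adjoint-gradient iteration described above can be sketched on a linear toy problem, with a random matrix standing in for 3D wave propagation. This is an illustrative reduction, not a seismic solver: applying the transpose to the data residual plays the role of the adjoint simulation that supplies the gradient of the misfit.

```python
import numpy as np

# Adjoint-based gradient inversion on a linear toy problem:
# J(m) = 0.5*||G m - d||^2, with gradient G^T (G m - d).
rng = np.random.default_rng(1)
G = rng.standard_normal((40, 10))                 # toy forward operator
m_true = rng.standard_normal(10)                  # "slip-rate model" to recover
d = G @ m_true                                    # synthetic data

m = np.zeros(10)
step = 1.0 / np.linalg.norm(G, 2) ** 2            # stable gradient step size
for _ in range(2000):
    residual = G @ m - d                          # forward simulation + misfit
    grad = G.T @ residual                         # one "adjoint" application
    m -= step * grad                              # gradient-descent model update

misfit = np.linalg.norm(G @ m - d)
```

The key computational point of the abstract carries over: each iteration costs one forward and one adjoint application, independent of the number of stations, whereas pre-computing Green's functions scales with the number of receivers or fault cells.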

  10. Finite-fault source inversion using adjoint methods in 3-D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-07-01

    Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models, which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to iteratively improve the model with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities.
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.

  11. Goal driven kinematic simulation of flexible arm robot for space station missions

    NASA Technical Reports Server (NTRS)

    Janssen, P.; Choudry, A.

    1987-01-01

    Flexible arms offer a great degree of flexibility in maneuvering in the space environment. The problem of transporting an astronaut for extra-vehicular activity using a space station based flexible arm robot was studied. Inverse kinematic solutions of the multilink structure were developed. The technique is goal driven and can support decision making for configuration selection as required for stability and obstacle avoidance. Details of this technique and results are given.

  12. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hojin; Becker, Stephen; Lee, Rena

    2013-07-15

    Purpose: This study presents an improved technique to further simplify the fluence map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency while maintaining plan quality. Methods: First-order total-variation (TV) minimization based on the L1-norm has been proposed to reduce the complexity of the fluence map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-minimization is the ideal solution to the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-minimization technique to incorporate the benefits of the L0-norm into the tractability of L1-minimization. The weight applied to each element is inversely related to the magnitude of the corresponding element and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV minimization further improves sparsity in the fluence-map variations, ultimately enhancing delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic minimization (generally used in clinical IMRT), conventional TV minimization, and the proposed reweighted TV minimization, implemented with a large-scale L1-solver (template for first-order conic solvers), for five clinical patient datasets. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence maps than the quadratic and conventional TV based techniques.
To attain a given CN and dose sparing to the critical organs for the 5 clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35 relative to the TV minimization and quadratic minimization based plans, respectively, while MIs decrease by about 20%-30% and 40%-60% over the plans from the two existing techniques. Under these conditions, the total treatment time of the plans obtained from the proposed method is reduced by 12-30 s and 30-80 s, mainly due to much shorter multileaf collimator (MLC) travel time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution for simplifying the fluence-map variations in IMRT inverse planning. It improves delivery efficiency by reducing the number of segments and the treatment time, while maintaining plan quality in terms of target conformity and critical structure sparing.
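The reweighting idea can be sketched on a toy sparse-denoising problem. Note the simplifications: a plain weighted L1 penalty is used instead of the paper's total-variation penalty, for which the proximal step is closed-form soft thresholding, and the signal, noise level, and parameters are all illustrative assumptions.

```python
import numpy as np

# Iteratively reweighted L1 on a sparse-denoising toy:
#   min_x 0.5*||x - y||^2 + lam * sum_i w_i |x_i|
# whose exact solution is coordinate-wise soft thresholding at lam*w_i.
rng = np.random.default_rng(2)
x_true = np.zeros(100)
x_true[[5, 37, 70]] = [4.0, -3.0, 5.0]            # sparse "fluence variation"
y = x_true + 0.1 * rng.standard_normal(100)       # noisy observation

lam, eps = 0.5, 1e-2
w = np.ones(100)                                  # first pass = plain L1
for _ in range(5):
    x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)  # weighted prox step
    w = 1.0 / (np.abs(x) + eps)                   # reweight: small entries get
                                                  # penalized harder (L0-like)
```

After reweighting, near-zero entries receive very large weights and stay zero, while large entries are barely penalized, which reduces the shrinkage bias of plain L1.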

  13. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
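The tracer-ratio baseline against which the statistical inversion is compared can be sketched with a textbook Gaussian plume forward model. The power-law growth of the dispersion parameters and all numerical values below are illustrative assumptions, not the paper's optimized transport configuration.

```python
import numpy as np

# Gaussian plume forward model and the classical tracer-ratio estimate.
def plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Concentration from a point source of rate q at downwind distance x."""
    sy, sz = a * x, b * x                         # crude sigma_y, sigma_z growth
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

u, h = 3.0, 2.0                                   # wind speed (m/s), release height (m)
q_tracer = 1.0                                    # known acetylene release rate
q_ch4 = 4.2                                       # "unknown" methane rate to recover

x, y, z = 200.0, 5.0, 1.5                         # measurement point in the plume
c_tracer = plume(q_tracer, u, x, y, z, h)
c_ch4 = plume(q_ch4, u, x, y, z, h)

# Tracer release method: for perfectly collocated sources, emission rates
# scale like the measured concentrations.
q_est = q_tracer * c_ch4 / c_tracer
```

With perfectly collocated sources the ratio recovers the rate exactly; the abstract's point is that when the tracer is offset from the source (e.g. by 60 m), this ratio becomes biased, which is what the model-based statistical inversion corrects.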

  14. Predicting film dose to aid in cassette placement for radiation therapy portal verification film images.

    PubMed

    Keys, Richard A; Marks, James E; Haus, Arthur G

    2002-12-01

    EC film has improved portal localization images with better contrast and improved distinction of bony structures and air-tissue interfaces. A cassette with slower speed screens was used with EC film to image the treatment portal during the entire course of treatment (verification) instead of taking separate films after treatment. Measurements of film density vs source to film distance (SFD) were made using 15 and 25 cm thick water phantoms with both 6 and 18 MV photons from 1 to 40 cm past the phantom. A characteristic (H & D) curve was measured in air to compare dose to film density. Results show the reduction in radiation between patient and cassette more closely follows an "inverse cube law" than an inverse square law. Formulas to calculate the radiation exposure to the film and the desired SFD were based on patient tumor dose, calculation of the exit dose, and the inverse cube relationship. A table of exposure techniques based on the SFD for a given tumor dose was evaluated and compared to conventional techniques. Although the film has a high contrast, there is enough latitude that excellent films can be achieved using a fixed SFD based simply on the tumor dose and beam energy. Patient diameter has a smaller effect. The benefits of imaging portal films during the entire treatment are greater reliability in the accuracy of the portal image, the ability to detect patient motion, and a reduction in the time it takes to take portal images.
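The reported inverse-cube relationship lends itself to a small calculation. The reference SFD and exit-dose values below are illustrative numbers, not values from the paper; the sketch only shows how the exponent changes the distance needed for a given exposure.

```python
# Inverse-cube falloff of film exposure beyond the patient, as reported
# in the abstract, compared against the familiar inverse-square law.
ref_sfd = 100.0        # reference source-to-film distance (cm), illustrative
exit_dose = 50.0       # relative exposure at the reference distance, illustrative

def film_exposure(sfd, power=3):
    """Relative film exposure at a given SFD (power=3: inverse cube)."""
    return exit_dose * (ref_sfd / sfd) ** power

# To halve the exposure under the inverse-cube model, the SFD only needs
# to grow by a factor of 2**(1/3) ~ 1.26, versus sqrt(2) ~ 1.41 for
# inverse-square.
target_sfd = ref_sfd * 2 ** (1 / 3)
```

This is why a technique table indexed by SFD and tumor dose works: for a desired film exposure, the required SFD follows directly from the cube-root of the dose ratio.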

  15. A hybrid approach to determining cornea mechanical properties in vivo using a combination of nano-indentation and inverse finite element analysis.

    PubMed

    Abyaneh, M H; Wildman, R D; Ashcroft, I A; Ruiz, P D

    2013-11-01

    An analysis of the material properties of porcine corneas has been performed. A simple stress relaxation test was performed to determine the viscoelastic properties and a rheological model was built based on the Generalized Maxwell (GM) approach. A validation experiment using nano-indentation showed that an isotropic GM model was insufficient for describing the corneal material behaviour when exposed to a complex stress state. A new technique was proposed for determining the properties, using a combination of nano-indentation experiment, an isotropic and orthotropic GM model and inverse finite element method. The good agreement using this method suggests that this is a promising technique for measuring material properties in vivo and further work should focus on the reliability of the approach in practice. © 2013 Elsevier Ltd. All rights reserved.

  16. Genotyping the factor VIII intron 22 inversion locus using fluorescent in situ hybridization.

    PubMed

    Sheen, Campbell R; McDonald, Margaret A; George, Peter M; Smith, Mark P; Morris, Christine M

    2011-02-15

    The factor VIII intron 22 inversion is the most common cause of hemophilia A, accounting for approximately 40% of all severe cases of the disease. Southern hybridization and multiplex long distance PCR are the most commonly used techniques to detect the inversion in a diagnostic setting, although both have significant limitations. Here we describe our experience establishing a multicolor fluorescent in situ hybridization (FISH) based assay as an alternative to existing methods for genetic diagnosis of the inversion. Our assay was designed to apply three differentially labelled BAC DNA probes that when hybridized to interphase nuclei would exhibit signal patterns that are consistent with the normal or the inversion locus. When the FISH assay was applied to five normal and five inversion male samples, the correct genotype was assignable with p<0.001 for all samples. When applied to carrier female samples the assay could not assign a genotype to all female samples, probably due to a lower proportion of informative nuclei in female samples caused by the added complexity of a second X chromosome. Despite this complication, these pilot findings show that the assay performs favourably compared to the commonly used methods. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Aerosol properties from spectral extinction and backscatter estimated by an inverse Monte Carlo method.

    PubMed

    Ligon, D A; Gillespie, J B; Pellegrino, P

    2000-08-20

    The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.
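A zero-temperature variant of the inverse Monte Carlo idea (randomly perturb the candidate distribution, keep changes that reduce the misfit) can be sketched as follows. The smooth "extinction kernel" is a made-up stand-in for the Mie-theory forward model, and all grids and parameters are illustrative assumptions.

```python
import numpy as np

# Greedy inverse Monte Carlo sketch for a binned size distribution.
rng = np.random.default_rng(3)
r = np.linspace(0.1, 2.0, 20)                     # particle radius bins (um)
wl = np.linspace(0.4, 1.0, 15)                    # wavelengths (um)
K = np.exp(-np.subtract.outer(wl, r) ** 2)        # toy extinction kernel (not Mie)

n_true = np.exp(-((r - 0.8) / 0.3) ** 2)          # true size distribution
data = K @ n_true                                 # synthetic spectral extinction

n = np.full_like(r, 0.5)                          # flat initial guess
err = np.sum((K @ n - data) ** 2)
err0 = err                                        # initial misfit, for reference
for _ in range(20000):
    trial = n.copy()
    i = rng.integers(len(n))                      # perturb one random bin
    trial[i] = max(0.0, trial[i] + 0.05 * rng.standard_normal())
    e = np.sum((K @ trial - data) ** 2)
    if e < err:                                   # accept only improvements
        n, err = trial, e
```

A full IMC implementation would also accept some uphill moves (Metropolis style) to avoid local minima and to sample the distribution of acceptable solutions; the greedy loop above only illustrates the perturb-and-test mechanics.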

  18. Scene-based nonuniformity correction for focal plane arrays by the method of the inverse covariance form.

    PubMed

    Torres, Sergio N; Pezoa, Jorge E; Hayat, Majeed M

    2003-10-10

    What is to our knowledge a new scene-based algorithm for nonuniformity correction in infrared focal-plane array sensors has been developed. The technique is based on the inverse covariance form of the Kalman filter (KF), which has been reported previously and used in estimating the gain and bias of each detector in the array from scene data. The gain and the bias of each detector in the focal-plane array are assumed constant within a given sequence of frames, corresponding to a certain time and operational conditions, but they are allowed to randomly drift from one sequence to another following a discrete-time Gauss-Markov process. The inverse covariance form filter estimates the gain and the bias of each detector in the focal-plane array and optimally updates them as they drift in time. The estimation is performed with considerably higher computational efficiency than the equivalent KF. The ability of the algorithm in compensating for fixed-pattern noise in infrared imagery and in reducing the computational complexity is demonstrated by use of both simulated and real data.
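The inverse covariance (information) form update can be sketched for a single detector's gain and bias. Treating the scene irradiance of each frame as known is a deliberate simplification of the scene-based setting; the sketch only shows the information-filter recursion that replaces the covariance-form Kalman update.

```python
import numpy as np

# Information-filter estimation of theta = [gain, bias] for one detector,
# observed through y_k = g*T_k + b + noise (all values illustrative).
rng = np.random.default_rng(4)
g_true, b_true = 1.3, -0.2                        # true gain and bias
r = 0.05 ** 2                                     # measurement noise variance

Lam = np.zeros((2, 2))                            # information matrix (inverse covariance)
eta = np.zeros(2)                                 # information vector
for _ in range(500):
    T = rng.uniform(0.0, 1.0)                     # scene irradiance this frame
    y = g_true * T + b_true + rng.normal(0.0, 0.05)
    H = np.array([T, 1.0])                        # observation row for [g, b]
    Lam += np.outer(H, H) / r                     # Lam <- Lam + H^T R^-1 H
    eta += H * y / r                              # eta <- eta + H^T R^-1 y

theta = np.linalg.solve(Lam, eta)                 # estimate [g, b] on demand
```

The computational appeal mirrors the abstract: each frame costs only a rank-one information update, and the explicit solve is deferred until an estimate is needed.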

  19. Parameter estimation of a nonlinear Burger's model using nanoindentation and finite element-based inverse analysis

    NASA Astrophysics Data System (ADS)

    Hamim, Salah Uddin Ahmed

    Nanoindentation involves probing a hard diamond tip into a material, where the load and the displacement experienced by the tip are recorded continuously. This load-displacement data is a direct function of the material's innate stress-strain behavior. Thus, it is theoretically possible to extract mechanical properties of a material through nanoindentation. However, due to various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where the nonlinear viscoelastic behavior was incorporated using a user-defined material subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate model-based approach was used for the inverse analysis. The different factors affecting surrogate model performance are analyzed in order to optimize the performance with respect to the computational cost.
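The surrogate-model inverse analysis can be sketched with a cheap stand-in "simulator". The power-law load function and the target value are illustrative assumptions replacing the expensive finite element runs; the structure (sample the parameter space, fit a cheap surrogate, invert the surrogate) is the point.

```python
import numpy as np

# Surrogate-based inverse analysis sketch: sample an expensive simulator
# at a few design points, fit a quadratic surrogate, and solve the
# surrogate for the parameter that matches the measurement.
def simulator(E):
    """Stand-in for an ABAQUS nanoindentation run: load at a fixed depth."""
    return 2.0 * E ** 0.8                         # illustrative, not a real model

target = simulator(3.7)                           # "measured" indentation load

# Design of experiments: evaluate the simulator over the parameter range.
E_samples = np.linspace(1.0, 6.0, 9)
loads = np.array([simulator(E) for E in E_samples])

# Fit a quadratic surrogate load(E) and solve surrogate(E) = target.
coeffs = np.polyfit(E_samples, loads, 2)
roots = np.roots(coeffs - np.array([0.0, 0.0, target]))
E_est = next(rt.real for rt in roots
             if abs(rt.imag) < 1e-9 and 1.0 <= rt.real <= 6.0)
```

Once the surrogate is fitted, every candidate parameter evaluation is essentially free, which is what makes the calibration loop tractable when each true forward run is a full finite element simulation.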

  20. Optimization-Based Approach for Joint X-Ray Fluorescence and Transmission Tomographic Inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Leyffer, Sven; Wild, Stefan M.

    2016-01-01

    Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate both X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstruction of the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that compared with the single-modality inversion (i.e., the X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. Therefore, the challenges in X-ray fluorescence tomography arising mainly from the effects of self-absorption in the sample are partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.

  1. Full-Physics Inverse Learning Machine for Satellite Remote Sensing of Ozone Profile Shapes and Tropospheric Columns

    NASA Astrophysics Data System (ADS)

    Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.

    2018-04-01

    Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for the ozone information in the troposphere. A novel retrieval algorithm, called the Full-Physics Inverse Learning Machine (FP-ILM), has been developed at DLR in order to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the ability of the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results for total and tropical tropospheric ozone columns are compared with those from the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors, with their unprecedented spectral and spatial resolution and correspondingly large increases in the amount of data.

  2. Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data

    USGS Publications Warehouse

    Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.

    2003-01-01

    The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, which allows a deeper penetration depth compared to the ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies are applied in the frequency range from 0 to 5 MHz. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used with the aid of the singular value decomposition and/or the conjugate gradient method in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.

  3. Clinical knowledge-based inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Xing, Lei

    2004-11-01

    Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. 
The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.

  4. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  5. (abstract) Using an Inversion Algorithm to Retrieve Parameters and Monitor Changes over Forested Areas from SAR Data

    NASA Technical Reports Server (NTRS)

    Moghaddam, Mahta

    1995-01-01

    In this work, the application of an inversion algorithm based on a nonlinear optimization technique to retrieve forest parameters from multifrequency polarimetric SAR data is discussed. The approach discussed here allows for retrieving and monitoring changes in forest parameters in a quantitative and systematic fashion using SAR data. The parameters to be inverted directly from the data are the electromagnetic scattering properties of the forest components, such as their dielectric constants and size characteristics. Once these are known, attributes such as canopy moisture content can be obtained, which are useful in ecosystem models.

  6. Full wave two-dimensional modeling of scattering and inverse scattering for layered rough surfaces with buried objects

    NASA Astrophysics Data System (ADS)

    Kuo, Chih-Hao

    Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade reflection and transmission matrices of each rough interface and obtain the composite reflection matrix from the overall scattering medium. The validation of this method against the Method of Moments (MoM) and Small Perturbation Method (SPM) is addressed and the numerical results which investigate the potential of low frequency radar systems in estimating deep soil moisture are presented. Computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. 
Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF-band radar measurements is devised. The top-soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time-delay echoes from VHF/UHF-band radar observations. Numerical studies to investigate the accuracy of the proposed inversion technique in the presence of errors are addressed.

  7. Optically Reconfigurable Chiral Microspheres of Self-Organized Helical Superstructures with Handedness Inversion.

    PubMed

    Wang, Ling; Chen, Dong; Gutierrez-Cuevas, Karla G; Bisoyi, Hari Krishna; Fan, Jing; Zola, Rafael S; Li, Guoqiang; Urbas, Augustine M; Bunning, Timothy J; Weitz, David A; Li, Quan

    2017-01-01

    Optically reconfigurable monodisperse chiral microspheres of self-organized helical superstructures with dynamic chirality were fabricated via a capillary-based microfluidic technique. Light-driven handedness-invertible transformations between different configurations of microspheres were vividly observed and optically tunable RGB photonic cross-communications among the microspheres were demonstrated.

  8. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  9. Real-time volcano monitoring using GNSS single-frequency receivers

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.

    2015-12-01

    We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
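The final inversion step in such a strategy is simple because surface displacement is linear in the Mogi source's volume change. The sketch below is illustrative only, not the authors' filter: it assumes one common volume-change convention for the Mogi formula (conventions differ in the literature), a fixed source depth, and noise-free vertical displacements.

```python
import math

def mogi_uz(dv, depth, r, nu=0.25):
    """Vertical surface displacement of a Mogi point source.

    Uses the volume-change form u_z = (1 - nu) * dV * depth / (pi * R^3),
    one of several conventions found in the literature.
    """
    R = math.hypot(r, depth)
    return (1.0 - nu) * dv * depth / (math.pi * R ** 3)

# Synthetic "observations" at a few radial distances (m) from a source
# at 2 km depth with a true volume change of 1e6 m^3.
depth, dv_true = 2000.0, 1.0e6
radii = [0.0, 1000.0, 2000.0, 4000.0]
obs = [mogi_uz(dv_true, depth, r) for r in radii]

# Because u_z is linear in dV, the least-squares estimate is closed-form:
# dV = sum(g_i * d_i) / sum(g_i^2), with g_i the unit-volume response.
g = [mogi_uz(1.0, depth, r) for r in radii]
dv_est = sum(gi * di for gi, di in zip(g, obs)) / sum(gi * gi for gi in g)
print(dv_est)  # recovers ~1e6 for noise-free data
```

In a real-time filter the same linear estimate would be updated recursively as each displacement epoch arrives, rather than recomputed in batch.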

  10. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

    NASA Technical Reports Server (NTRS)

    Berger, B. S.; Duangudom, S.

    1973-01-01

    A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
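The idea of replacing one initial value problem over a long interval with a sequence of initial value problems over subintervals, each restarted from the previous endpoint, can be illustrated without any Laplace-transform machinery. A minimal sketch with a hand-rolled RK4 integrator (all names illustrative):

```python
import math

def rk4(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), y a tuple of components."""
    k1 = f(t, y)
    k2 = f(t + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k1)))
    k3 = f(t + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k2)))
    k4 = f(t + h, tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def solve(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4(f, t, y, h)
        t += h
    return y

# Oscillatory test problem y'' = -y, written as the system (y, y').
f = lambda t, y: (y[1], -y[0])

# One pass over [0, 10] ...
y_single = solve(f, 0.0, (1.0, 0.0), 10.0, 1000)

# ... versus a sequence of initial value problems over five subintervals,
# each restarted from the previous endpoint.
y_seq = (1.0, 0.0)
for k in range(5):
    y_seq = solve(f, 2.0 * k, y_seq, 2.0 * (k + 1), 200)
```

With identical step sizes the two results coincide, and both track cos(t); the paper's point is that inverting the Laplace transform per subinterval keeps each inversion inside its useful approximation range over many cycles.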

  11. Lithospheric architecture of NE China from joint Inversions of receiver functions and surface wave dispersion through Bayesian optimisation

    NASA Astrophysics Data System (ADS)

    Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian

    2017-04-01

The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of complex geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism and low velocity anomalies. To estimate lithospheric structures with greater detail, we chose to perform the joint inversion of independent data sets such as receiver functions and surface wave dispersion curves (group and phase velocity). We perform a joint inversion based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage of the inversion. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from joint inversions confirms the high Vp/Vs ratio (~1.98) obtained using the H-Kappa method beneath some stations. Moreover, we could confirm the existence of a lower crustal velocity beneath several stations (e.g., station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity. We compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.

  12. A model-assisted radio occultation data inversion method based on data ingestion into NeQuick

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Nava, B.; Kashcheyev, A.

    2017-01-01

The inverse Abel transform is the most common method to invert radio occultation (RO) data in the ionosphere; it is based on the assumption of spherical symmetry for the electron density distribution in the vicinity of an occultation event. This 'spherical symmetry hypothesis' can fail, above all in the presence of strong horizontal electron density gradients, so in some cases wrong electron density profiles are obtained. In this work, in order to incorporate the knowledge of horizontal gradients, we have suggested an inversion technique based on the adaptation of the empirical ionospheric model NeQuick2 to RO-derived TEC. The method relies on the minimization of a cost function involving experimental and model-derived TEC data to determine NeQuick2 input parameters (effective local ionization parameters) at specific locations and times. These parameters are then used to obtain the electron density profile along the tangent point (TP) positions associated with the relevant RO event using NeQuick2. Our research focuses on mitigating spherical-symmetry effects in RO data inversion without using external data such as global ionospheric maps (GIM). Using RO data from the Constellation Observing System for Meteorology Ionosphere and Climate (FORMOSAT-3/COSMIC) mission and manually scaled peak density data from a network of ionosondes along the Asian and American longitudinal sectors, we have obtained a global improvement of 5% (7% in the Asian longitudinal sector) in the retrieval of peak electron density (NmF2) with the model-assisted inversion as compared to the Abel inversion, for the data considered in this work. Mean NmF2 errors in the Asian longitudinal sector are much higher than in the American sector.
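The cost-function minimization described here can be sketched generically. NeQuick2 itself is not reproduced below; `model_tec` is a hypothetical stand-in for model-derived TEC as a function of a single effective ionization parameter, fitted to one "observed" TEC value by grid search:

```python
def model_tec(az):
    """Hypothetical stand-in for a model's TEC along an occultation ray as a
    function of an effective ionization parameter az (monotone for az >= 0)."""
    return 5.0 + 0.8 * az + 0.01 * az ** 2

tec_obs = model_tec(63.7)  # pretend measurement; the true parameter is 63.7

def cost(az):
    """Squared mismatch between experimental and model-derived TEC."""
    return (model_tec(az) - tec_obs) ** 2

# Coarse grid search over the parameter range; a real implementation would
# refine with a 1D minimizer and fit many rays/locations simultaneously.
az_est = min(((cost(a), a) for a in (i * 0.1 for i in range(2001))))[1]
```

Once the effective parameter is fixed, the adapted model (NeQuick2 in the paper) supplies the electron density profile along the tangent-point positions directly, with no spherical-symmetry assumption in the retrieval step.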

  13. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Chris C.; Flaska, Marek; Pozzi, Sara A.

    2016-08-14

Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.

  14. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    NASA Astrophysics Data System (ADS)

    Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.

    2016-08-01

Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.

  15. A progress report on the ARRA-funded geotechnical site characterization project

    NASA Astrophysics Data System (ADS)

    Martin, A. J.; Yong, A.; Stokoe, K.; Di Matteo, A.; Diehl, J.; Jack, S.

    2011-12-01

For the past 18 months, the 2009 American Recovery and Reinvestment Act (ARRA) has funded geotechnical site characterizations at 189 seismographic station sites in California and the central U.S. This ongoing effort applies surface-wave methods, which include the horizontal-to-vertical spectral ratio (HVSR) technique and one or more of the following: spectral analysis of surface waves (SASW), active and passive multi-channel analysis of surface waves (MASW) and passive-array microtremor techniques. From this multi-method approach, shear-wave velocity (VS) profiles and the time-averaged shear-wave velocity of the upper 30 meters (VS30) are estimated for each site. To accommodate the variability in local conditions (e.g., rural and urban soil locales, as well as weathered and competent rock sites), conventional field procedures are often modified ad hoc to fit the unanticipated complexity at each location. For the majority of sites (>80%), fundamental-mode Rayleigh wave dispersion-based techniques are deployed, and where complex geology is encountered, multiple test locations are used. Due to the presence of high-velocity layers, about five percent of the locations require multi-mode inversion of Rayleigh wave (MASW-based) data or 3-D array-based inversion of SASW dispersion data, in combination with shallow P-wave seismic refraction and/or HVSR results. Where a strong impedance contrast (i.e. soil over rock) exists at shallow depth (about 10% of sites), dominant higher modes limit the use of Rayleigh wave dispersion techniques. Here, use of the Love wave dispersion technique, along with seismic refraction and/or HVSR data, is required to model the presence of shallow bedrock. At a small percentage of the sites, surface wave techniques are found not suitable for stand-alone deployment and site characterization is limited to the use of the seismic refraction technique.
A USGS Open File Report-describing the surface geology, VS profile and the calculated VS30 for each site-will be prepared after the completion of the project in November 2011.
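The VS30 value reported for each site follows the standard time-averaged definition: 30 m divided by the vertical shear-wave travel time through the upper 30 m of the profile. A minimal sketch (the profile values are illustrative):

```python
def vs30(layers):
    """Time-averaged shear-wave velocity of the upper 30 m.

    layers: list of (thickness_m, vs_m_per_s) tuples ordered from the surface;
    only the portion of each layer within the top 30 m contributes.
    """
    remaining, travel_time = 30.0, 0.0
    for thickness, vs in layers:
        d = min(thickness, remaining)
        travel_time += d / vs          # time to cross this slice vertically
        remaining -= d
        if remaining <= 0.0:
            break
    if remaining > 0.0:
        raise ValueError("velocity profile is shallower than 30 m")
    return 30.0 / travel_time

# e.g. 5 m of soft soil over 10 m of stiff soil over weathered rock
profile = [(5.0, 180.0), (10.0, 360.0), (40.0, 760.0)]
print(vs30(profile))   # a few hundred m/s for this profile
```

Note that VS30 is a travel-time (harmonic-style) average, so slow near-surface layers pull it down far more than a thickness-weighted mean of the velocities would suggest.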

  16. Optimization of equivalent uniform dose using the L-curve criterion.

    PubMed

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-10-07

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
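The trade-off between residual norm and solution norm that the L-curve balances can be shown on a toy ill-posed least-squares problem. This sketch is not the paper's EUD optimizer: it applies Tikhonov regularization to a nearly collinear 3x2 system and uses a simple corner surrogate (minimizing the product of the two norms, in the spirit of Reginska's criterion) in place of a full curvature computation.

```python
def tikhonov(A, b, lam):
    """Minimize ||Ax - b||^2 + lam^2 ||x||^2 for a two-column A (normal equations)."""
    a11 = sum(r[0] * r[0] for r in A) + lam * lam
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A) + lam * lam
    r1 = sum(r[0] * bi for r, bi in zip(A, b))
    r2 = sum(r[1] * bi for r, bi in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det)

def norms(A, b, x):
    """Residual norm rho and solution norm eta: the two axes of the L-curve."""
    rho = sum((r[0] * x[0] + r[1] * x[1] - bi) ** 2 for r, bi in zip(A, b)) ** 0.5
    eta = (x[0] ** 2 + x[1] ** 2) ** 0.5
    return rho, eta

# Nearly collinear columns make the least-squares problem ill-posed.
A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]
x_true = (1.0, 1.0)
noise = [0.01, -0.01, 0.0]
b = [r[0] * x_true[0] + r[1] * x_true[1] + n for r, n in zip(A, noise)]

# Sweep the regularization parameter and pick the "corner" surrogate.
lams = [10.0 ** (e / 4.0) for e in range(-24, 1)]
def corner_score(lam):
    rho, eta = norms(A, b, tikhonov(A, b, lam))
    return rho * eta
lam_corner = min(lams, key=corner_score)

x_reg = tikhonov(A, b, lam_corner)     # near x_true
x_unreg = tikhonov(A, b, 0.0)          # noise blown up along the weak direction
```

The unregularized solution is wrecked by the small noise component along the near-null direction, while the corner-selected lambda suppresses it at negligible cost in residual, which is exactly the behaviour the L-curve criterion exploits in EUD-based planning.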

  17. A radiobiology-based inverse treatment planning method for optimisation of permanent l-125 prostate implants in focal brachytherapy.

    PubMed

    Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A

    2016-01-07

    Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.

  18. A radiobiology-based inverse treatment planning method for optimisation of permanent l-125 prostate implants in focal brachytherapy

    NASA Astrophysics Data System (ADS)

    Haworth, Annette; Mears, Christopher; Betts, John M.; Reynolds, Hayley M.; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A.

    2016-01-01

    Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The ‘biological optimisation’ considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.

  19. Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models

    NASA Astrophysics Data System (ADS)

    Arroabarren, Ixone; Carlosena, Alfonso

    2004-12-01

    The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.

  20. Contributed Review: Experimental characterization of inverse piezoelectric strain in GaN HEMTs via micro-Raman spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagnall, Kevin R.; Wang, Evelyn N.

    2016-06-15

Micro-Raman thermography is one of the most popular techniques for measuring local temperature rise in gallium nitride (GaN) high electron mobility transistors with high spatial and temporal resolution. However, accurate temperature measurements based on changes in the Stokes peak positions of the GaN epitaxial layers require properly accounting for the stress and/or strain induced by the inverse piezoelectric effect. It is common practice to use the pinched OFF state as the unpowered reference for temperature measurements because the vertical electric field in the GaN buffer that induces inverse piezoelectric stress/strain is relatively independent of the gate bias. Although this approach has yielded temperature measurements that agree with those derived from the Stokes/anti-Stokes ratio and thermal models, there has been significant difficulty in quantifying the mechanical state of the GaN buffer in the pinched OFF state from changes in the Raman spectra. In this paper, we review the experimental technique of micro-Raman thermography and derive expressions for the detailed dependence of the Raman peak positions on strain, stress, and electric field components in wurtzite GaN. We also use a combination of semiconductor device modeling and electro-mechanical modeling to predict the stress and strain induced by the inverse piezoelectric effect. Based on the insights gained from our electro-mechanical model and the best values of material properties in the literature, we analyze changes in the E₂ (high) and A₁ (LO) Raman peaks and demonstrate that there are major quantitative discrepancies between measured and modeled values of inverse piezoelectric stress and strain. We examine many of the hypotheses offered in the literature for these discrepancies but conclude that none of them satisfactorily resolves these discrepancies. 
Further research is needed to determine whether the electric field components could be affecting the phonon frequencies apart from the inverse piezoelectric effect in wurtzite GaN, which has been predicted theoretically in zinc blende gallium arsenide (GaAs).

  1. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
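The iterative refinement of Z toward S^(-1/2) can be illustrated with a Newton-Schulz-type iteration, Z <- Z (3I - S Z^2) / 2, which converges quadratically when ||I - S Z0^2|| < 1. This toy dense 2x2 version is illustrative only (the paper's contribution is the sparse, linear-scaling, parallel version with guesses propagated between MD steps); here the identity is an adequate starting guess.

```python
def mm(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
S = [[1.0, 0.1], [0.1, 1.0]]   # a small, well-conditioned SPD "overlap" matrix

# Newton-Schulz refinement of Z toward S^(-1/2): Z <- Z (3I - S Z^2) / 2.
# Z0 = I works here because the eigenvalues of S (0.9 and 1.1) give
# ||I - S Z0^2|| = 0.1 < 1, so the iteration converges quadratically.
Z = [row[:] for row in I]
for _ in range(20):
    SZZ = mm(S, mm(Z, Z))
    M = [[1.5 * I[i][j] - 0.5 * SZZ[i][j] for j in range(2)] for i in range(2)]
    Z = mm(Z, M)

# At convergence Z S Z = I, i.e. Z is the inverse square root of S.
ZSZ = mm(Z, mm(S, Z))
```

In the congruence transformation Z S Z = I lies the point of the method: applying Z on both sides of the generalized eigenvalue problem converts it to a standard one without ever diagonalizing S.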

  2. Recursive Factorization of the Inverse Overlap Matrix in Linear Scaling Quantum Molecular Dynamics Simulations

    DOE PAGES

    Negre, Christian F. A; Mniszewski, Susan M.; Cawkwell, Marc Jon; ...

    2016-06-06

We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive iterative refinement of an initial guess Z of the inverse square root of the overlap matrix S. The initial guess of Z is obtained beforehand either by using an approximate divide-and-conquer technique or dynamically, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under incomplete approximate iterative refinement of Z. Linear scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the direct diagonalization of the overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system we find an average speedup factor of 122 for the computation of Z in each MD step.

  3. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns; a full-colour image is then retrieved by applying an inverse cosine transform to the spectra obtained from the single-pixel detector. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
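The measure-then-invert loop can be sketched in a few lines: project orthonormal 2D DCT basis patterns, record one detector reading per pattern, and reconstruct by summing the same patterns weighted by those readings. This grayscale toy ignores the colour channels and the practical issue that a projector cannot display negative pattern values (real systems typically difference two shifted patterns):

```python
import math

N = 4  # tiny image for illustration

def phi(u, x):
    """Orthonormal 1D DCT-II basis function, index u, sample x."""
    c = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    return c * math.cos(math.pi * (2 * x + 1) * u / (2 * N))

def pattern(u, v):
    """Separable 2D DCT basis pattern projected onto the scene."""
    return [[phi(u, x) * phi(v, y) for y in range(N)] for x in range(N)]

def measure(u, v, img):
    """Single-pixel reading: total light collected under one pattern."""
    P = pattern(u, v)
    return sum(P[x][y] * img[x][y] for x in range(N) for y in range(N))

scene = [[(x + 2 * y) % 5 for y in range(N)] for x in range(N)]

# One scalar measurement per pattern: these ARE the DCT coefficients.
meas = {(u, v): measure(u, v, scene) for u in range(N) for v in range(N)}

# Inverse cosine transform: patterns are orthonormal, so the image is the
# measurement-weighted sum of the same basis functions.
recon = [[sum(meas[u, v] * phi(u, x) * phi(v, y)
              for u in range(N) for v in range(N))
          for y in range(N)] for x in range(N)]
```

The sub-Nyquist claim corresponds to keeping only the low-frequency (u, v) patterns: for natural scenes most DCT energy sits there, so truncating the pattern set degrades the reconstruction gracefully.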

  4. New Inversion and Interpretation of Public-Domain Electromagnetic Survey Data from Selected Areas in Alaska

    NASA Astrophysics Data System (ADS)

    Smith, B. D.; Kass, A.; Saltus, R. W.; Minsley, B. J.; Deszcz-Pan, M.; Bloss, B. R.; Burns, L. E.

    2013-12-01

Public-domain airborne geophysical surveys (combined electromagnetics and magnetics), mostly collected for and released by the State of Alaska, Division of Geological and Geophysical Surveys (DGGS), are a unique and valuable resource for both geologic interpretation and geophysical methods development. A new joint effort by the US Geological Survey (USGS) and the DGGS aims to add value to these data through the application of novel advanced inversion methods and through innovative and intuitive display of data: maps, profiles, voxel-based models, and displays of estimated inversion quality and confidence. Our goal is to make these data even more valuable for interpretation of geologic frameworks, geotechnical studies, and cryosphere studies, by producing robust estimates of subsurface resistivity that can be used by non-geophysicists. The datasets, all in the public domain, include 39 frequency-domain electromagnetic datasets collected since 1993, and continue to grow, with 5 more data releases pending in 2013. The majority of these datasets were flown for mineral resource purposes, with one survey designed for infrastructure analysis. In addition, several USGS datasets are included in this study. The USGS has recently developed new inversion methodologies for airborne EM data and has begun to apply these and other new techniques to the available datasets. These include a trans-dimensional Markov Chain Monte Carlo technique, laterally-constrained regularized inversions, and deterministic inversions which include calibration factors as a free parameter. Incorporation of the magnetic data as an additional constraining dataset has also improved the inversion results. Processing has been completed in several areas, including the Fortymile and Alaska Highway surveys, and continues in others such as the Styx River and Nome surveys. 
Utilizing these new techniques, we provide models beyond the apparent resistivity maps supplied by the original contractors, allowing us to produce a variety of products, such as maps of resistivity as a function of depth or elevation, cross section maps, and 3D voxel models, which have been treated consistently both in terms of processing and error analysis throughout the state. These products facilitate a more fruitful exchange between geologists and geophysicists and a better understanding of uncertainty, and the process results in iterative development and improvement of geologic models, both on small and large scales.

  5. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea-surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea-surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake is a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After applying the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides a better result in terms of waveform fitting and a source model that shows both sub-events, associated with normal and thrust faulting.
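The Green's-function source representation shared by GFTRI and LSQ inversion reduces, in a toy setting, to ordinary least squares: observed waveforms are modeled as a linear combination of per-grid-point responses. The sketch below uses hypothetical triangular-pulse Green's functions for two grid points recorded at two stations (everything here is illustrative, not real tsunami physics):

```python
def pulse(t, t0):
    """Triangular pulse centered at t0: a stand-in Green's function shape."""
    return max(0.0, 1.0 - abs(t - t0))

times = [0.25 * k for k in range(40)]
# G[s][i][k]: response at station s, time sample k, to unit initial
# displacement at source grid point i (arrival times differ per station).
G = [[[pulse(t, 2.0 + i) for t in times] for i in range(2)],
     [[pulse(t, 4.0 + 2 * i) for t in times] for i in range(2)]]

m_true = [1.5, -0.5]   # true initial-displacement amplitudes on the grid
obs = [[sum(m_true[i] * G[s][i][k] for i in range(2))
        for k in range(len(times))] for s in range(2)]

# Least-squares inversion over all stations: solve (G^T G) m = G^T d (2x2).
a11 = a12 = a22 = r1 = r2 = 0.0
for s in range(2):
    for k in range(len(times)):
        g1, g2, d = G[s][0][k], G[s][1][k], obs[s][k]
        a11 += g1 * g1; a12 += g1 * g2; a22 += g2 * g2
        r1 += g1 * d; r2 += g2 * d
det = a11 * a22 - a12 * a12
m_est = [(a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det]
```

Station selection enters through which rows of G are kept: the adjoint sensitivity idea is to rank candidate stations by how much their rows actually constrain the solution before committing to a subset.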

  6. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverses of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for reducing these errors are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting, or improving, the approximate inverse of a matrix. The results indicate that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
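
A minimal sketch of the two procedures, assuming NumPy and a tolerance matched to roughly 18 carried digits; the reinversion step shown is the classical Newton-Schulz iteration, one common choice for improving an approximate inverse (the report does not name its exact scheme):

```python
import numpy as np

def reinvert(A, X, sweeps=1):
    # Newton-Schulz refinement: X <- X (2I - A X) improves an
    # approximate inverse X of A when the residual I - A X is small.
    I = np.eye(A.shape[0])
    for _ in range(sweeps):
        X = X @ (2 * I - A @ X)
    return X

def round_small(X, tol=0.1e-12):
    # Round entries below the tolerance to zero to stop round-off creep.
    X = X.copy()
    X[np.abs(X) < tol] = 0.0
    return X
```

One refinement sweep roughly squares the residual norm, which is why a single application per computed inverse suffices in the report's setting.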

  7. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
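
The INVQR systolic formulation cannot be reconstructed from the abstract alone, but the weighted recursive least-squares update it accelerates can be sketched directly (a generic WRLS update with forgetting factor `lam`; all names are illustrative):

```python
import numpy as np

def wrls_update(w, P, x, d, lam=0.99):
    # One weighted recursive least-squares step: w are the current
    # weights, P the inverse correlation matrix, x the input vector,
    # d the desired output, lam the exponential forgetting factor.
    Px = P @ x
    k = Px / (lam + x @ Px)   # gain vector
    e = d - w @ x             # a priori output error
    w = w + k * e             # weight correction along the gain
    P = (P - np.outer(k, Px)) / lam
    return w, P
```

Streaming samples through this update drives the weights toward the least-squares solution without ever forming or inverting the full data matrix.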

  8. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
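
A minimal sketch of the conventional three parameter fit that IG fitting is compared against, using the standard Look-Locker model S(t) = A - B exp(-t/T1*) with the correction T1 = T1*(B/A - 1); the grid search over T1* is an implementation convenience here, not the paper's method:

```python
import numpy as np

def fit_molli_3param(t, s, t1_grid):
    # Three-parameter Look-Locker fit S(t) = A - B*exp(-t/T1star).
    # For each candidate T1star, A and B follow from linear least
    # squares; the candidate with the smallest residual wins.
    best_r, best = np.inf, None
    for t1s in t1_grid:
        M = np.column_stack([np.ones_like(t), -np.exp(-t / t1s)])
        coef, *_ = np.linalg.lstsq(M, s, rcond=None)
        r = np.sum((M @ coef - s) ** 2)
        if r < best_r:
            best_r, best = r, (coef[0], coef[1], t1s)
    a, b, t1s = best
    # Look-Locker correction: T1 = T1star * (B/A - 1).
    return t1s * (b / a - 1.0)
```

IG fitting replaces this single three-parameter model with two global parameters plus one amplitude per inversion grouping, which is what frees it from the complete-recovery (rest period) assumption.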

  9. MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data.

    PubMed

    Aine, C J; Sanfratello, L; Ranken, D; Best, E; MacArthur, J A; Wallace, T; Gilliam, K; Donahue, C H; Montaño, R; Bryant, J E; Scott, A; Stephen, J M

    2012-04-01

    MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have their own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes ( http://cobre.mrn.org/megsim/ ). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis.

  10. MEG-SIM: A Web Portal for Testing MEG Analysis Methods using Realistic Simulated and Empirical Data

    PubMed Central

    Aine, C. J.; Sanfratello, L.; Ranken, D.; Best, E.; MacArthur, J. A.; Wallace, T.; Gilliam, K.; Donahue, C. H.; Montaño, R.; Bryant, J. E.; Scott, A.; Stephen, J. M.

    2012-01-01

    MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have their own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis. PMID:22068921

  11. Full waveform inversion in the frequency domain using classified time-domain residual wavefields

    NASA Astrophysics Data System (ADS)

    Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan

    2017-04-01

We perform acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain in order of absolute amplitude and then separate them into several groups. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy at the lower frequency bands. However, the amplitude spectrum obtained with our strategy is regularized by the separation process, which means that the low-frequency components are emphasized. Our method therefore helps to emphasize the low-frequency components of the residual wavefields. We then generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method can better describe the sub-salt image than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm performs better than conventional gradient-based waveform inversion in the frequency domain, especially for the deeper parts of the velocity model.

  12. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical, ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
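
The core numerical step in such schemes is an explicit update of the level-set equation phi_t + v |grad phi| = 0, where the shape-derivative-based velocity v moves the facies boundary downhill on the misfit; a minimal sketch (uniform grid with spacing absorbed into the velocity scale, names illustrative):

```python
import numpy as np

def level_set_step(phi, velocity, dt):
    # One explicit step of phi_t + v*|grad phi| = 0: the zero level
    # set of phi (the facies boundary) advances along its normal with
    # speed given by `velocity` (scalar or array, per grid cell).
    gx, gy = np.gradient(phi)
    return phi - dt * velocity * np.sqrt(gx**2 + gy**2)
```

With a positive velocity the region {phi < 0} can only grow, which is how a properly signed velocity guarantees a monotone decrease of the cost functional in the referenced framework.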

  13. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
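
The basic moves such samplers operate on can be sketched as follows: a signed inversion reverses a gene segment and flips its orientation, and the breakpoint count against the identity is a standard ingredient in bounding sorting-path length (an illustrative sketch, not the MC4Inversion implementation):

```python
def apply_inversion(perm, i, j):
    # Apply a signed inversion to positions i..j: reverse the segment
    # and negate every element (orientation flip).
    seg = [-x for x in reversed(perm[i:j + 1])]
    return perm[:i] + seg + perm[j + 1:]

def breakpoints(perm):
    # Count breakpoints of a signed permutation against the identity,
    # framed by 0 and n+1: adjacency (a, b) is conserved iff b - a == 1.
    ext = [0] + perm + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)
```

A sampler proposes sequences of such inversions and accepts or rejects them so that, in the stationary distribution, all minimum-length sorting paths are equally likely.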

  14. Fat suppression with short inversion time inversion-recovery and chemical-shift selective saturation: a dual STIR-CHESS combination prepulse for turbo spin echo pulse sequences.

    PubMed

    Tanabe, Koji; Nishikawa, Keiichi; Sano, Tsukasa; Sakai, Osamu; Jara, Hernán

    2010-05-01

To test a newly developed fat suppression magnetic resonance imaging (MRI) prepulse that synergistically uses the principles of fat suppression via inversion recovery (STIR) and spectral fat saturation (CHESS), relative to pure CHESS and STIR; this new technique is termed dual fat suppression (Dual-FS). To determine whether Dual-FS could be chemically specific for fat, a phantom consisting of a fat-mimicking NiCl(2) aqueous solution, porcine fat, porcine muscle, and water was imaged with the three fat-suppression techniques. For Dual-FS and STIR, several inversion times were used. Signal intensities of the images obtained with each technique were compared. To determine whether Dual-FS could be robust to magnetic field inhomogeneities, a phantom consisting of different NiCl(2) aqueous solutions, porcine fat, porcine muscle, and water was imaged with Dual-FS and CHESS at several off-resonance frequencies. To compare fat suppression efficiency in vivo, 10 volunteer subjects were also imaged with the three fat-suppression techniques. Dual-FS could suppress fat sufficiently within an inversion time of 110-140 msec, thus enabling differentiation between fat and fat-mimicking aqueous structures. Dual-FS was as robust to magnetic field inhomogeneities as STIR and less vulnerable than CHESS. The same fat suppression results were obtained in the volunteers. Dual-FS-STIR-CHESS is an alternative and promising fat suppression technique for turbo spin echo MRI. Copyright 2010 Wiley-Liss, Inc.

  15. Estimation of flow properties using surface deformation and head data: A trajectory-based approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, D.W.

    2004-07-12

A trajectory-based algorithm provides an efficient and robust means to infer flow properties from surface deformation and head data. The algorithm is based upon the concept of an ''arrival time'' of a drawdown front, which is defined as the time corresponding to the maximum slope of the drawdown curve. The technique involves three steps: the inference of head changes as a function of position and time, the use of the estimated head changes to define arrival times, and the inversion of the arrival times for flow properties. Trajectories, computed from the output of a numerical simulator, are used to relate the drawdown arrival times to flow properties. The inversion algorithm is iterative, requiring one reservoir simulation for each iteration. The method is applied to data from a set of 14 tiltmeters located at the Raymond Quarry field site in California. Using the technique, I am able to image a high-conductivity channel which extends to the south of the pumping well. The presence of this permeable pathway is supported by an analysis of earlier cross-well transient pressure test data.
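
The arrival-time definition in the second step can be sketched directly: the arrival time is the time at which the drawdown curve is steepest (a minimal NumPy sketch on sampled data):

```python
import numpy as np

def arrival_time(t, drawdown):
    # Arrival time of the drawdown front: the sample time at which
    # the drawdown curve has its maximum slope.
    slope = np.gradient(drawdown, t)
    return t[np.argmax(slope)]
```

In the full algorithm these arrival times, rather than the raw head values, are what the trajectory-based inversion matches against the simulator output.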

  16. Predicting ozone profile shape from satellite UV spectra

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Loyola, Diego; Romahn, Fabian; Doicu, Adrian

    2017-04-01

Identifying the ozone profile shape is a critical yet challenging task for the accurate reconstruction of the vertical distribution of atmospheric ozone, which is relevant to climate change and air quality. Motivated by the need for an approach that reliably and efficiently estimates vertical ozone information, and inspired by the success of machine learning techniques, this work proposes a new algorithm for deriving ozone profile shapes from ultraviolet (UV) absorption spectra recorded by satellite instruments, e.g. the GOME series and the future Sentinel missions. The proposed algorithm formulates this particular inverse problem in a classification framework rather than a conventional inversion one and places an emphasis on effectively characterizing various profile shapes based on machine learning techniques. Furthermore, a comparison is performed between the ozone profiles estimated from real GOME-2 data by our algorithm and by the classical retrieval algorithm (the Optimal Estimation Method).

  17. System for uncollimated digital radiography

    DOEpatents

    Wang, Han; Hall, James M.; McCarrick, James F.; Tang, Vincent

    2015-08-11

    The inversion algorithm based on the maximum entropy method (MEM) removes unwanted effects in high energy imaging resulting from an uncollimated source interacting with a finitely thick scintillator. The algorithm takes as input the image from the thick scintillator (TS) and the radiography setup geometry. The algorithm then outputs a restored image which appears as if taken with an infinitesimally thin scintillator (ITS). Inversion is accomplished by numerically generating a probabilistic model relating the ITS image to the TS image and then inverting this model on the TS image through MEM. This reconstruction technique can reduce the exposure time or the required source intensity without undesirable object blurring on the image by allowing the use of both thicker scintillators with higher efficiencies and closer source-to-detector distances to maximize incident radiation flux. The technique is applicable in radiographic applications including fast neutron, high-energy gamma and x-ray radiography using thick scintillators.

  18. A Generalized Approach for the Interpretation of Geophysical Well Logs in Ground-Water Studies - Theory and Application

    USGS Publications Warehouse

    Paillet, Frederick L.; Crowder, R.E.

    1996-01-01

Quantitative analysis of geophysical logs in ground-water studies often involves at least as broad a range of applications and variation in lithology as is typically encountered in petroleum exploration, making such logs difficult to calibrate and complicating inversion problem formulation. At the same time, data inversion and analysis depend on inversion model formulation and refinement, so that log interpretation cannot be deferred to a geophysical log specialist unless active involvement with interpretation can be maintained by such an expert over the lifetime of the project. We propose a generalized log-interpretation procedure designed to guide hydrogeologists in the interpretation of geophysical logs, and in the integration of log data into ground-water models that may be systematically refined and improved in an iterative way. The procedure is designed to maximize the effective use of three primary contributions from geophysical logs: (1) the continuous depth scale of the measurements along the well bore; (2) the in situ measurement of lithologic properties and the correlation with hydraulic properties of the formations over a finite sample volume; and (3) multiple independent measurements that can potentially be inverted for multiple physical or hydraulic properties of interest. The approach is formulated in the context of geophysical inversion theory, and is designed to be interfaced with surface geophysical soundings and conventional hydraulic testing. The step-by-step procedures given in our generalized interpretation and inversion technique are based on both qualitative analysis designed to assist formulation of the interpretation model, and quantitative analysis used to assign numerical values to model parameters. The approach bases the decision as to whether quantitative inversion is statistically warranted on formulating an over-determined inversion. If no such inversion is consistent with the inversion model, quantitative inversion is judged not possible with the given data set. Additional statistical criteria, such as the statistical significance of regressions, are used to guide the subsequent calibration of geophysical data in terms of hydraulic variables in those situations where quantitative data inversion is considered appropriate.

  19. Scenario Evaluator for Electrical Resistivity survey pre-modeling tool

    USGS Publications Warehouse

    Terry, Neil; Day-Lewis, Frederick D.; Robinson, Judith L.; Slater, Lee D.; Halford, Keith J.; Binley, Andrew; Lane, John W.; Werkema, Dale D.

    2017-01-01

    Geophysical tools have much to offer users in environmental, water resource, and geotechnical fields; however, techniques such as electrical resistivity imaging (ERI) are often oversold and/or overinterpreted due to a lack of understanding of the limitations of the techniques, such as the appropriate depth intervals or resolution of the methods. The relationship between ERI data and resistivity is nonlinear; therefore, these limitations depend on site conditions and survey design and are best assessed through forward and inverse modeling exercises prior to field investigations. In this approach, proposed field surveys are first numerically simulated given the expected electrical properties of the site, and the resulting hypothetical data are then analyzed using inverse models. Performing ERI forward/inverse modeling, however, requires substantial expertise and can take many hours to implement. We present a new spreadsheet-based tool, the Scenario Evaluator for Electrical Resistivity (SEER), which features a graphical user interface that allows users to manipulate a resistivity model and instantly view how that model would likely be interpreted by an ERI survey. The SEER tool is intended for use by those who wish to determine the value of including ERI to achieve project goals, and is designed to have broad utility in industry, teaching, and research.

  20. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.

  1. Data fitting and image fine-tuning approach to solve the inverse problem in fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan

    2008-02-01

One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts of recent years, this problem has not yet been fully solved, owing to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution to the inverse problem based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and implemented via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure finds the best match between the real and virtual data, and thus provides an initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning stage.
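
The database-fitting step amounts to a nearest-match lookup over precomputed forward solutions; a minimal sketch, with each database row holding the simulated measurement vector of one virtual tumour (names illustrative):

```python
import numpy as np

def best_match(database, measured):
    # Return the index of the simulated forward solution whose
    # measurement vector is closest (L2) to the measured data.
    d = np.linalg.norm(database - measured, axis=1)
    return int(np.argmin(d))
```

The fluorophore distribution of the best-matching virtual tumour then seeds the fine-tuning iteration, which is what protects the non-linear solver from a poor starting point.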

  2. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve decoding performance to satisfy real-time requirements while maintaining reasonable area and power consumption. Several techniques, such as a slice-level pipeline, an MB (Macro-Block) level pipeline and MB-level parallelism, are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams at a 200 MHz working frequency.

  3. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
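
A minimal sketch of the decomposition idea on a toy problem: two quadratic component objectives (standing in for, say, body-wave and surface-wave misfits) are minimized separately, with consensus and multiplier updates steering the copies toward a common model. This is generic consensus ADMM, a standard augmented-Lagrangian scheme, not the authors' exact algorithm:

```python
import numpy as np

def consensus_admm(As, ds, rho=1.0, iters=500):
    # Solve min_m sum_i 0.5*||A_i m - d_i||^2 by giving each data
    # subset its own model copy m_i, constrained to a consensus z.
    n = As[0].shape[1]
    z = np.zeros(n)
    ms = [np.zeros(n) for _ in As]
    ys = [np.zeros(n) for _ in As]  # Lagrange multipliers
    for _ in range(iters):
        # Component solves: each subproblem sees only its own data.
        for i, (A, d) in enumerate(zip(As, ds)):
            ms[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                    A.T @ d + rho * z - ys[i])
        # Consensus update, then multiplier updates that steer the
        # individual copies toward the common model.
        z = np.mean([m + y / rho for m, y in zip(ms, ys)], axis=0)
        for i in range(len(As)):
            ys[i] = ys[i] + rho * (ms[i] - z)
    return z
```

At convergence the consensus variable z minimizes the sum of the component objectives, i.e. it solves the full joint problem without any subproblem ever seeing the other's data.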

  4. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.
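
When the forward model is linear (observed brightness = path lengths through atmospheric cells times their volume emission rates), such a reconstruction can be sketched as regularized least squares; an illustrative stand-in for the paper's algorithm, with `L` a hypothetical path-length matrix:

```python
import numpy as np

def invert_emission(L, b, lam=1e-2):
    # Tikhonov-regularized least squares: recover volume emission
    # rates x from line-of-sight brightnesses b = L x, where row j of
    # L holds the path lengths of view j through each cell.
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ b)
```

The regularization weight `lam` trades resolution against noise amplification, which is the accuracy/resolution balance the abstract examines with simulated data.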

  5. A Forward Glimpse into Inverse Problems through a Geology Example

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)

  6. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data has been generated and compared with literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.

  7. Understanding Methane Emission from Natural Gas Activities Using Inverse Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Abdioskouei, M.; Carmichael, G. R.

    2015-12-01

    Natural gas (NG) has been promoted as a bridge fuel that can smooth the transition from fossil fuels to zero-carbon energy sources, since it has lower carbon dioxide emissions and lower global warming impacts than other fossil fuels. However, uncertainty in estimates of methane emissions from NG systems can lead to underestimation of the climate and environmental impacts of using NG as a replacement for coal. Accurate estimates of methane emissions from NG operations are crucial for evaluating the environmental impacts of NG extraction and, at larger scale, the adoption of NG as a transitional fuel; current estimates, however, are highly inconsistent. Forward simulation of methane from oil and gas operation sites for the US is carried out based on NEI-2011 using the WRF-Chem model. Simulated values are compared against observations from different platforms, including airborne measurements (FRAPPÉ field campaign) and ground-based measurements (NOAA Earth System Research Laboratory). A novel inverse modeling technique is used in this work to improve the model fit to the observations and to constrain methane emissions from oil and gas extraction sites.

  8. 3D aquifer characterization using stochastic streamline calibration

    NASA Astrophysics Data System (ADS)

    Jang, Minchul

    2007-03-01

    In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Following the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by a multiplicative factor chosen to match the flow and transport properties of that streamline. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which the original work of Agarwal et al. could not achieve because of the large modifications made along streamlines when matching production data alone. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model. These results suggest that the proposed approach can be applied to the construction of aquifer models and the forecasting of aquifer performance.

  9. Three-dimensional full waveform inversion of short-period teleseismic wavefields based upon the SEM-DSM hybrid method

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi

    2015-08-01

    We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.

  10. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution starting from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, which distinguishes itself from other electric properties tomography techniques by its capability to also recover the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to a homogeneous unbounded background. In order to extend the method to the presence of a shield in the domain, as in the recurring case of shielded radio frequency coils, a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem, one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate the need for accurate modeling of the underlying physical problem.

  11. Infrasound Waveform Inversion and Mass Flux Validation from Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Yokoo, A.; Izbekov, P. E.; Lopez, T. M.; Prata, F.; Ahonen, P.; Kazahaya, R.; Nakamichi, H.; Iguchi, M.

    2015-12-01

    Recent advances in numerical wave propagation modeling and station coverage have permitted robust inversion of infrasound data from volcanic explosions. Complex topography and crater morphology have been shown to substantially affect the infrasound waveform, suggesting that homogeneous acoustic propagation assumptions are invalid. Infrasound waveform inversion provides an exciting tool to accurately characterize emission volume and mass flux from both volcanic and non-volcanic explosions. Mass flux, arguably the most sought-after parameter from a volcanic eruption, can be determined from the volume flux using infrasound waveform inversion if the volcanic flow is well-characterized. Thus far, infrasound-based volume and mass flux estimates have yet to be validated. In February 2015 we deployed six infrasound stations around the explosive Sakurajima Volcano, Japan for 8 days. Here we present our full waveform inversion method and volume and mass flux estimates of numerous high amplitude explosions using a high resolution DEM and 3-D Finite Difference Time Domain modeling. Application of this technique to volcanic eruptions may produce realistic estimates of mass flux and plume height necessary for volcanic hazard mitigation. Several ground-based instruments and methods are used to independently determine the volume, composition, and mass flux of individual volcanic explosions. Specifically, we use ground-based ash sampling, multispectral infrared imagery, UV spectrometry, and multigas data to estimate the plume composition and flux. Unique tiltmeter data from underground tunnels at Sakurajima also provides a way to estimate the volume and mass of each explosion. In this presentation we compare the volume and mass flux estimates derived from the different methods and discuss sources of error and future improvements.

  12. Convergence acceleration in scattering series and seismic waveform inversion using nonlinear Shanks transformation

    NASA Astrophysics Data System (ADS)

    Eftekhar, Roya; Hu, Hao; Zheng, Yingcai

    2018-06-01

    Iterative solution processes are fundamental in seismic inversion, such as in full-waveform inversion and some inverse scattering methods. However, convergence can be slow, or the iteration can even diverge, depending on the initial model used. We propose to apply the Shanks transformation (ST for short) to accelerate the convergence of the iterative solution. ST is a local nonlinear transformation which transforms a series into another series with improved convergence properties. ST works by separating the series into a smooth background trend, called the secular term, and an oscillatory transient term, and then accelerating the convergence of the secular term. Since the transformation is local, we do not need to know all the terms in the original series, which is very important in the numerical implementation. The ST performance was tested numerically for both the forward Born series and the inverse scattering series (ISS). The ST has been shown to accelerate convergence in several examples, including three examples of forward modeling using the Born series and two examples of velocity inversion based on a particular type of ISS. We observe that ST is effective in accelerating convergence and can even achieve convergence for a weakly divergent scattering series. As such, it provides a useful technique for inverting a large-contrast medium perturbation in seismic inversion.
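    The Shanks transformation itself has a simple closed form: given three consecutive partial sums A_{n-1}, A_n, A_{n+1}, the transformed term is S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n). As an illustrative sketch on a standard slowly convergent series (the alternating series for ln 2), not the seismic scattering series of the abstract:

```python
import math

def shanks(a):
    """One Shanks transform of a sequence of partial sums A_n:
    S(A_n) = (A_{n+1}*A_{n-1} - A_n**2) / (A_{n+1} + A_{n-1} - 2*A_n)."""
    out = []
    for i in range(1, len(a) - 1):
        den = a[i+1] + a[i-1] - 2.0*a[i]
        out.append((a[i+1]*a[i-1] - a[i]**2) / den if den != 0.0 else a[i])
    return out

# Partial sums of the slowly convergent series ln 2 = 1 - 1/2 + 1/3 - ...
partial, total = [], 0.0
for n in range(1, 12):
    total += (-1)**(n + 1) / n
    partial.append(total)

once = shanks(partial)    # one application of the transform
twice = shanks(once)      # repeated application accelerates further
```

Note that each transformed term needs only three neighboring terms, which is the locality property the abstract emphasizes.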

  13. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
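    The singular-value-plot idea can be illustrated on a toy smoothing problem (this is a generic truncated-SVD sketch, not the authors' surface-wave code): the singular values are sorted from large to small, the regularization level is chosen where they approach zero, and the solution keeps only the modes above that level, trading resolution against noise amplification (covariance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: a Gaussian smoothing operator G blurs the model.
n = 50
x = np.linspace(0.0, 1.0, n)
G = np.exp(-(x[:, None] - x[None, :])**2 / (2.0*0.05**2))
m_true = np.sin(2.0*np.pi*x)                       # true model
d = G @ m_true + 1e-3*rng.standard_normal(n)       # noisy data

# "Singular value plot": singular values are returned sorted large to small.
U, s, Vt = np.linalg.svd(G)
# Pick the first singular value that approaches zero (relative cutoff) as
# the regularization level: modes below it mostly amplify noise.
k = int(np.argmax(s / s[0] < 1e-3)) or len(s)
m_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])     # truncated-SVD solution
```

Keeping more modes (larger `k`) sharpens resolution but inflates the covariance of the estimate; the cutoff near the "knee" of the singular value plot is the trade-off the abstract describes.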

  14. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
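    Sparsity-constrained Tikhonov inversions of this kind are commonly solved with iterative soft thresholding. A minimal sketch under toy assumptions (a random Gaussian forward operator and three synthetic point sources, not the authors' CH4 transport setup):

```python
import numpy as np

def ista(A, d, lam, n_iter=3000):
    """Iterative soft-thresholding (ISTA) for the sparsity-regularized
    problem  min_m  0.5*||A m - d||^2 + lam*||m||_1."""
    L = np.linalg.norm(A, 2)**2                # Lipschitz constant of the gradient
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = m - A.T @ (A @ m - d) / L          # gradient step on the data misfit
        m = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return m

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # underdetermined forward operator
m_true = np.zeros(100)
m_true[[10, 50, 80]] = [2.0, -1.5, 1.0]            # three localized "point sources"
d = A @ m_true + 0.01 * rng.standard_normal(40)
m_hat = ista(A, d, lam=0.02)
```

The l1 penalty drives most grid cells exactly to zero, which is why such schemes localize hot spots that a Gaussian (l2) prior would smear out.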

  15. Color regeneration from reflective color sensor using an artificial intelligent technique.

    PubMed

    Saracoglu, Ömer Galip; Altural, Hayriye

    2010-01-01

    A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligence models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages.

  16. Visualisation of urban airborne laser scanning data with occlusion images

    NASA Astrophysics Data System (ADS)

    Hinks, Tommy; Carr, Hamish; Gharibi, Hamid; Laefer, Debra F.

    2015-06-01

    Airborne Laser Scanning (ALS) was introduced to provide rapid, high resolution scans of landforms for computational processing. More recently, ALS has been adapted for scanning urban areas. The greater complexity of urban scenes necessitates the development of novel methods to exploit urban ALS to best advantage. This paper presents occlusion images: a novel technique that exploits the geometric complexity of the urban environment to improve visualisation of small details for better feature recognition. The algorithm is based on an inversion of traditional occlusion techniques.

  17. Microseismic techniques for avoiding induced seismicity during fluid injection

    DOE PAGES

    Matzel, Eric; White, Joshua; Templeton, Dennise; ...

    2014-01-01

    The goal of this research is to develop a fundamentally better approach to geological site characterization and early hazard detection. We combine innovative techniques for analyzing microseismic data with a physics-based inversion model to forecast microseismic cloud evolution. The key challenge is that faults at risk of slipping are often too small to detect during the site characterization phase. Our objective is to devise fast-running methodologies that will allow field operators to respond quickly to changing subsurface conditions.

  18. Three-dimensional magnetotelluric inversion in practice—the electrical conductivity structure of the San Andreas Fault in Central California

    NASA Astrophysics Data System (ADS)

    Tietze, Kristina; Ritter, Oliver

    2013-10-01

    3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A `good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly `acceptable' misfits but producing significantly differing images of the conductivity structures. 
Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency-space domain.

  19. Inverting permittivity and conductivity with a structural constraint in GPR FWI based on the truncated Newton method

    NASA Astrophysics Data System (ADS)

    Ren, Qianci

    2018-04-01

    Full waveform inversion (FWI) of ground penetrating radar (GPR) data is a promising technique for quantitatively evaluating the permittivity and conductivity of the near subsurface. However, these two parameters are inverted simultaneously in GPR FWI, which increases the difficulty of obtaining accurate inversion results for both. In this study, I present a structurally constrained GPR FWI procedure to jointly invert the two parameters, aiming to enforce a structural relationship between permittivity and conductivity in the process of model reconstruction. The structural constraint is enforced by a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved by the truncated Newton method, which accounts for the effect of the Hessian operator and uses the approximate solution of the Newton equation as the perturbation model in the updating process. The joint inversion procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, yielding a better result than individual inversion.
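    The cross-gradient constraint has a simple form: in 2-D it is the scalar field t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx), which vanishes wherever the gradients of the two models are parallel, i.e. wherever the models share structure. A toy NumPy illustration (hypothetical grids, not the authors' FWI code):

```python
import numpy as np

def cross_gradient(m1, m2, h=1.0):
    """z-component of grad(m1) x grad(m2) on a 2-D grid; zero wherever
    the two models' gradients are parallel (structurally similar)."""
    g1y, g1x = np.gradient(m1, h)   # np.gradient returns axis-0 then axis-1
    g2y, g2x = np.gradient(m2, h)
    return g1x*g2y - g1y*g2x

y, x = np.mgrid[0:32, 0:32]
perm = np.exp(-((x - 16)**2 + (y - 16)**2) / 50.0)   # toy permittivity anomaly
cond_same = 3.0*perm + 1.0                  # same structure, different values
cond_diff = np.exp(-(x - 16)**2 / 50.0)     # structurally different model

t_same = cross_gradient(perm, cond_same)
t_diff = cross_gradient(perm, cond_diff)
```

Penalizing the squared cross-gradient in the objective therefore pushes the two parameter models toward shared structure without forcing any particular value relationship between them.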

  20. M-Band Analysis of Chromosome Aberrations in Human Epithelial Cells Induced By Low- and High-Let Radiations

    NASA Technical Reports Server (NTRS)

    Hada, M.; Gersey, B.; Saganti, P. B.; Wilkins, R.; Gonda, S. R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Energetic primary and secondary particles pose a health risk to astronauts in extended ISS and future Lunar and Mars missions. High-LET radiation is much more effective than low-LET radiation in the induction of various biological effects, including cell inactivation, genetic mutations, cataracts and cancer. Most of these biological endpoints are closely correlated to chromosomal damage, which can be utilized as a biomarker for radiation insult. In this study, human epithelial cells were exposed in vitro to gamma rays, 1 GeV/nucleon Fe ions and secondary neutrons whose spectrum is similar to that measured inside the Space Station. Chromosomes were condensed using a premature chromosome condensation technique and chromosome aberrations were analyzed with the multi-color banding (mBAND) technique. With this technique, individually painted chromosomal bands on one chromosome allowed the identification of both interchromosomal (translocation to unpainted chromosomes) and intrachromosomal aberrations (inversions and deletions within a single painted chromosome). Results of the study confirmed the observation of higher incidence of inversions for high-LET irradiation. However, detailed analysis of the inversion type revealed that all of the three radiation types in the study induced a low incidence of simple inversions. Half of the inversions observed in the low-LET irradiated samples were accompanied by other types of intrachromosome aberrations, but few inversions were accompanied by interchromosome aberrations. In contrast, Fe ions induced a significant fraction of inversions that involved complex rearrangements of both the inter- and intrachromosome exchanges.

  1. Expert judgement and uncertainty quantification for climate change

    NASA Astrophysics Data System (ADS)

    Oppenheimer, Michael; Little, Christopher M.; Cooke, Roger M.

    2016-05-01

    Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

  2. Inverse design of near unity efficiency perfectly vertical grating couplers

    NASA Astrophysics Data System (ADS)

    Michaels, Andrew; Yablonovitch, Eli

    2018-02-01

    Efficient coupling between integrated optical waveguides and optical fibers is essential to the success of integrated photonics. While many solutions exist, perfectly vertical grating couplers which scatter light out of a waveguide in the direction normal to the waveguide's top surface are an ideal candidate due to their potential to reduce packaging complexity. Designing such couplers with high efficiency, however, has proven difficult. In this paper, we use electromagnetic inverse design techniques to optimize a high efficiency two-layer perfectly vertical silicon grating coupler. Our base design achieves a chip-to-fiber coupling efficiency of over 99% (-0.04 dB) at 1550 nm. Using this base design, we apply subsequent constrained optimizations to achieve vertical couplers with over 96% efficiency which are fabricable using a 65 nm process.

  3. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods that are based on the acoustic approximation ignore the elastic effects in the real seismic field, making inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequency, is applied. However, the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck in FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover a low-wavenumber model with a demodulation operator (envelope operator), even though such low-frequency data do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and derived the misfit function and the corresponding gradient operator. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that CPU/GPU heterogeneous parallel computation improves performance.

  4. Optimum electrode configuration selection for electrical resistance change based damage detection in composites using an effective independence measure

    NASA Astrophysics Data System (ADS)

    Escalona, Luis; Díaz-Montiel, Paulina; Venkataraman, Satchi

    2016-04-01

    Laminated carbon fiber reinforced polymer (CFRP) composite materials are increasingly used in aerospace structures due to their superior mechanical properties and reduced weight. Assessing the health and integrity of these structures requires non-destructive evaluation (NDE) techniques to detect and measure interlaminar delamination and intralaminar matrix cracking damage. The electrical resistance change (ERC) based NDE technique uses the inherent changes in the conductive properties of the composite to characterize internal damage. Several works that have explored the ERC technique have been limited to thin cross-ply laminates with simple linear or circular electrode arrangements. This paper investigates a method for optimum selection of electrode configurations for delamination detection in thick cross-ply laminates using ERC. Inverse identification of damage requires numerical optimization of the measured response against a model-predicted response. Here, the electrical voltage field in the CFRP composite laminate is calculated using finite element analysis (FEA) models for different specified delamination sizes and locations, and for different locations of the ground and current electrodes. Reducing the number of sensor locations and measurements is needed to reduce hardware requirements and the computational effort of inverse identification. This paper explores the use of the effective independence (EI) measure, originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used to select the minimum set of resistance measurements among all possible combinations of electrode pairs among the n electrodes. To enable the use of EI in ERC, this research proposes a singular value decomposition (SVD) to obtain a spectral representation of the resistance measurements in the laminate. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using both the full set of resistance measurements and the reduced set. The investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERC-based damage detection.

  5. Selected inversion as key to a stable Langevin evolution across the QCD phase boundary

    NASA Astrophysics Data System (ADS)

    Bloch, Jacques; Schenk, Olaf

    2018-03-01

    We present new results of full QCD at nonzero chemical potential. In PRD 92, 094516 (2015) the complex Langevin method was shown to break down when the inverse coupling decreases and enters the transition region from the deconfined to the confined phase. We found that the stochastic technique used to estimate the drift term can be very unstable for indefinite matrices. This may be avoided by using the full inverse of the Dirac operator, which is, however, too costly for four-dimensional lattices. The major breakthrough in this work was achieved by realizing that the inverse elements necessary for the drift term can be computed efficiently using the selected inversion technique provided by the parallel sparse direct solver package PARDISO. In our new study we show that no breakdown of the complex Langevin method is encountered and that simulations can be performed across the phase boundary.

  6. Electron beam dispersion measurements in nitrogen using two-dimensional imaging of N2(+) fluorescence

    NASA Technical Reports Server (NTRS)

    Clapp, L. H.; Twiss, R. G.; Cattolica, R. J.

    1991-01-01

    Experimental results are presented related to the radial spread of fluorescence excited by 10 and 20 keV electron beams passing through nonflowing rarefied nitrogen at 293 K. An imaging technique for obtaining species distributions from measured beam-excited fluorescence is described, based on a signal inversion scheme mathematically equivalent to the inversion of the Abel integral equation. From the fluorescence image data, measurements of beam radius, integrated signal intensity, and spatially resolved distributions of N2(+) first-negative-band fluorescence-emitting species have been made. The data are compared with earlier measurements and with a heuristic beam spread model.
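    Abel-type inversions of line-of-sight emission data are often discretized by "onion peeling": the projection F(y) = 2 * integral from y to R of f(r) r dr / sqrt(r^2 - y^2) is written as an upper-triangular linear system over concentric shells of constant emission and solved from the outside in. A hedged toy sketch (synthetic Gaussian profile, not the original instrument processing):

```python
import numpy as np

# Onion-peeling discretization of the Abel integral equation.
N, R = 100, 4.0
r = np.linspace(0.0, R, N + 1)           # shell boundaries
mid = 0.5*(r[:-1] + r[1:])               # shell midpoints

# Synthetic projection of f(r) = exp(-r^2), for which F(y) = sqrt(pi)*exp(-y^2).
F = np.sqrt(np.pi) * np.exp(-r[:-1]**2)  # sampled at y_i = inner shell radii

A = np.zeros((N, N))
for i in range(N):
    for j in range(i, N):                # chord length of ray i through shell j
        A[i, j] = 2.0*(np.sqrt(r[j+1]**2 - r[i]**2) - np.sqrt(r[j]**2 - r[i]**2))

f_rec = np.linalg.solve(A, F)            # recovered radial emission profile
```

Because each chord at height y_i only crosses shells with r >= y_i, the matrix is upper triangular and the system can be peeled from the outermost shell inward.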

  7. Modeling and Simulation of Upset-Inducing Disturbances for Digital Systems in an Electromagnetic Reverberation Chamber

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
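    Inverse Transform Sampling draws u ~ Uniform(0, 1) and maps it through the inverse of the cumulative distribution function; for an empirical distribution the inverse CDF is a step function over the sorted observations. A minimal univariate sketch (the report's disturbance model is multi-variate; this toy uses synthetic one-dimensional data):

```python
import numpy as np

def empirical_inverse_transform(data, n_samples, rng):
    """Inverse Transform Sampling from an empirical distribution: map
    uniform variates through the (step-function) inverse empirical CDF."""
    srt = np.sort(data)
    u = rng.random(n_samples)                       # u ~ Uniform[0, 1)
    idx = np.minimum((u * len(srt)).astype(int), len(srt) - 1)
    return srt[idx]

rng = np.random.default_rng(1)
observed = rng.normal(5.0, 2.0, size=2000)   # stand-in for empirical disturbance data
samples = empirical_inverse_transform(observed, 10_000, rng)
```

Unlike rejection sampling, every uniform draw yields a sample, which is one reason inverse transform methods can track an empirical distribution closely.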

  8. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    PubMed

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated: the measurement protocol is extended with a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
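
    The Look-Locker principle underlying such techniques can be sketched with a three-parameter fit. In the toy example below, the signal model S(t) = A − B·exp(−t/T1*) is fitted and the apparent T1* is corrected via T1 = T1*(B/A − 1); all values are synthetic, and a perfect inversion pulse (B = 2A) is assumed — exactly the assumption that mapping the inversion efficiency relaxes.

```python
import numpy as np
from scipy.optimize import curve_fit

def looklocker(t, A, B, T1star):
    # Three-parameter Look-Locker recovery curve sampled after inversion.
    return A - B * np.exp(-t / T1star)

rng = np.random.default_rng(1)
T1_true = 900.0                       # ms
A, B = 1.0, 2.0                       # B = 2A corresponds to a perfect inversion pulse
T1star = T1_true / (B / A - 1.0)      # apparent relaxation time
t = np.linspace(10, 4000, 40)
signal = looklocker(t, A, B, T1star) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(looklocker, t, signal, p0=(0.5, 1.0, 500.0))
A_fit, B_fit, T1star_fit = popt
T1_fit = T1star_fit * (B_fit / A_fit - 1.0)  # Look-Locker correction
print(T1_fit)  # close to the true 900 ms
```

    An imperfect inversion pulse makes the effective B/A smaller than 2, which biases T1 if uncorrected — hence the value of measuring the inversion efficiency directly.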

  9. Getting in shape: Reconstructing three-dimensional long-track speed skating kinematics by comparing several body pose reconstruction techniques.

    PubMed

    van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J

    2018-03-01

    In gait studies, body pose reconstruction (BPR) techniques have been widely explored, but no previous protocols have been developed for speed skating, and the peculiarities of the skating posture and technique do not automatically allow for the transfer of the results of those explorations to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight body segment model together with a global optimization method with a revolute joint in the knee and in the lumbosacral joint, while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-squares error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamics method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least-squares error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Solution of the Inverse Problem for Thin Film Patterning by Electrohydrodynamic Forces

    NASA Astrophysics Data System (ADS)

    Zhou, Chengzhe; Troian, Sandra

    2017-11-01

    Micro- and nanopatterning techniques for applications ranging from optoelectronics to biofluidics have multiplied in number over the past decade to include adaptations of mature technologies as well as novel lithographic techniques based on periodic spatial modulation of surface stresses. We focus here on one such technique which relies on shape changes in nanofilms responding to a patterned counter-electrode. The interaction of a patterned electric field with the polarization charges at the liquid interface causes a patterned electrostatic pressure counterbalanced by capillary pressure, which leads to 3D protrusions whose shape and evolution can be terminated as needed. All studies to date, however, have investigated the evolution of the liquid film in response to a preset counter-electrode pattern. In this talk, we present a solution of the inverse problem for the thin film equation governing the electrohydrodynamic response by treating the system as a transient control problem. Optimality conditions are derived and an efficient corresponding solution algorithm is presented. We demonstrate such implementation of film control to achieve periodic, free surface shapes ranging from simple circular cap arrays to more complex square and sawtooth patterns.

  11. Parallel halftoning technique using dot diffusion optimization

    NASA Astrophysics Data System (ADS)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach for halftone images is proposed and implemented for images that are obtained by the Dot Diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: new versions of the class matrix are generated that contain no barons or near-barons, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, and each is designed for one of two different applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good quality in both the halftone images and the inverse halftone images obtained. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
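
    A minimal dot-diffusion sketch illustrates the role of the class matrix. The 4×4 class matrix below is an arbitrary illustration (not one of the optimized matrices of the paper), and the error is diffused with equal weights to unprocessed neighbors; cells with no higher-class neighbor (barons) simply drop their error, which is precisely the inconsistency the optimized matrices avoid.

```python
import numpy as np

# Illustrative 4x4 class matrix (tiled over the image); the paper optimizes this
# matrix so that it contains no "barons" (cells with no higher-class neighbor).
CLASS = np.array([[ 0,  8,  2, 10],
                  [12,  4, 14,  6],
                  [ 3, 11,  1,  9],
                  [15,  7, 13,  5]])

def dot_diffusion(img):
    """Minimal dot-diffusion halftoning: pixels are binarized in class-matrix
    order and the error is pushed to neighbors of higher class (equal weights)."""
    h, w = img.shape
    out = np.zeros((h, w))
    work = img.astype(float).copy()
    cls = np.tile(CLASS, (h // 4 + 1, w // 4 + 1))[:h, :w]
    for k in range(16):                      # process classes in increasing order
        for y, x in zip(*np.where(cls == k)):
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]
            nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                    and 0 <= y + dy < h and 0 <= x + dx < w
                    and cls[y + dy, x + dx] > k]
            for ny, nx in nbrs:              # diffuse error to unprocessed neighbors
                work[ny, nx] += err / len(nbrs)
    return out

img = np.full((32, 32), 0.3)                 # flat 30% gray test patch
half = dot_diffusion(img)
print(half.mean())                           # average tone stays near 0.3
```

    Because every pixel in a class block can be processed independently once the earlier classes are done, this ordering is what makes the method attractive for GPU parallelization.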

  12. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. 
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
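
    The damped normal-equations step at the heart of Levenberg-Marquardt can be sketched on a small problem. The example below fits a two-parameter exponential; the model, data, and damping schedule are illustrative assumptions, and the Krylov projection and subspace recycling that give the paper its speed-up are only noted in a comment.

```python
import numpy as np

def residuals(p, t, y):
    # Exponential decay model y = a * exp(-b t); residual r = model - data.
    a, b = p
    return a * np.exp(-b * t) - y

def jacobian(p, t):
    a, b = p
    e = np.exp(-b * t)
    return np.column_stack([e, -a * t * e])

def levenberg_marquardt(p0, t, y, lam=1e-2, iters=50):
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residuals(p, t, y)
        J = jacobian(p, t)
        # Damped normal equations; for large problems the paper projects this
        # system onto a Krylov subspace and recycles that subspace across
        # successive damping parameters instead of re-solving from scratch.
        delta = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        p_new = p + delta
        if np.sum(residuals(p_new, t, y)**2) < np.sum(r**2):
            p, lam = p_new, lam * 0.7     # accept step, relax damping
        else:
            lam *= 2.0                     # reject step, increase damping
    return p

t = np.linspace(0, 5, 60)
y = 3.0 * np.exp(-1.2 * t)
p_est = levenberg_marquardt([1.0, 0.5], t, y)
print(p_est)  # converges to approximately [3.0, 1.2]
```

    Note that only the damping parameter changes between the inner solves, which is exactly why a single Krylov subspace can be reused.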

  13. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a steppingstone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while data are being recorded. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Over the spatial extent of the expected rupture zone, we progressively build up the slip-rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity, and other quantities can be extracted later as attributes from the slip-rate inversion we perform. 
Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.

  14. Noise models for low counting rate coherent diffraction imaging.

    PubMed

    Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John

    2012-11-05

    Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of the relationship between the noise model and the inversion method used. We observe that iterative algorithms often assume a noise model implicitly. For low counting rates, each noise model behaves differently. Moreover, the optimization strategy used introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.

  15. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. 
The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.

  16. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and geologic separation within the iterative minimization through inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest probability value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the extent of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from the borehole information.
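
    The clustering step can be sketched with a plain fuzzy c-means implementation. In the toy example below, 1D "model cells" drawn around two resistivity values stand in for a recovered resistivity model; the values, noise level, and cluster count are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: soft memberships U and cluster centers V are
    updated alternately to minimize sum_ik u_ik^m ||x_i - v_k||^2."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # center update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # membership update
    return U, V

# Two "geologic units": low- and high-resistivity model cells (log10 ohm-m).
X = np.concatenate([np.full(40, 1.5), np.full(40, 3.0)])[:, None]
X = X + np.random.default_rng(1).normal(0, 0.1, X.shape)
U, V = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # assign each cell to its highest-membership cluster
print(sorted(V.ravel()))    # cluster centers near 1.5 and 3.0
```

    In the constrained tomography, the highest-membership cluster value for each cell is what gets fed back into the next inversion iteration.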

  17. The inverse problem of acoustic wave scattering by an air-saturated poroelastic cylinder.

    PubMed

    Ogam, Erick; Fellah, Z E A; Baki, Paul

    2013-03-01

    The efficient use of plastic foams in a diverse range of structural applications like in noise reduction, cushioning, and sleeping mattresses requires detailed characterization of their permeability and deformation (load-bearing) behavior. The elastic moduli and airflow resistance properties of foams are often measured using two separate techniques, one employing mechanical vibration methods and the other, flow rates of fluids based on fluid mechanics technology, respectively. A multi-parameter inverse acoustic scattering problem to recover airflow resistivity (AR) and mechanical properties of an air-saturated foam cylinder is solved. A wave-fluid saturated poroelastic structure interaction model based on the modified Biot theory and plane-wave decomposition using orthogonal cylindrical functions is employed to solve the inverse problem. The solutions to the inverse problem are obtained by constructing the objective functional given by the total square of the difference between predictions from the model and scattered acoustic field data acquired in an anechoic chamber. The value of the recovered AR is in good agreement with that of a slab sample cut from the cylinder and characterized using a method employing low frequency transmitted and reflected acoustic waves in a long waveguide developed by Fellah et al. [Rev. Sci. Instrum. 78(11), 114902 (2007)].

  18. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
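
    The local-basis idea can be sketched with toy forward models. Below, a cheap approximation with a systematic error stands in for the straight-ray solver, a dictionary of stored (approximate, detailed) run pairs is queried for the K nearest parameter values, and the model-error component is projected out of the residual; all functions and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def detailed_model(theta):   # stand-in for the expensive, accurate forward solver
    t = np.linspace(0, 1, 30)
    return np.sin(2 * np.pi * theta * t)

def approx_model(theta):     # fast approximation with a systematic model error
    t = np.linspace(0, 1, 30)
    return np.sin(2 * np.pi * theta * t) * (1 - 0.3 * t)

# Dictionary of stored model-error vectors (detailed minus approximate).
thetas = rng.uniform(0.5, 2.0, 40)
pairs = [(th, detailed_model(th) - approx_model(th)) for th in thetas]

def corrected_residual(theta, data, K=5):
    """Remove the model-error component of the residual using a local basis
    built from the K nearest dictionary entries (by parameter distance)."""
    nearest = sorted(pairs, key=lambda p: abs(p[0] - theta))[:K]
    E = np.array([e for _, e in nearest]).T          # error vectors as columns
    Q, _ = np.linalg.qr(E)                            # orthonormal local basis
    r = data - approx_model(theta)
    return r - Q @ (Q.T @ r)                          # project out model error

theta_true = 1.3
data = detailed_model(theta_true)
raw = data - approx_model(theta_true)
corr = corrected_residual(theta_true, data)
print(np.linalg.norm(raw), np.linalg.norm(corr))  # corrected norm is much smaller
```

    In the MCMC setting, the corrected residual (rather than the raw one) enters the likelihood, so the systematic error no longer masquerades as data misfit.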

  19. Inverse electrocardiographic transformations: dependence on the number of epicardial regions and body surface data points.

    PubMed

    Johnston, P R; Walker, S J; Hyttinen, J A; Kilpatrick, D

    1994-04-01

    The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.
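
    The two inversion-limiting strategies compared in the study can be sketched on a synthetic ill-conditioned system. The matrix below is a random stand-in for the forward transfer matrix (not an actual torso model); it illustrates zero-order Tikhonov regularization and retention of 60% of the singular values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned synthetic "forward transfer matrix" mapping epicardial to
# body-surface potentials, built with rapidly decaying singular values.
n = 40
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** -np.linspace(0, 6, n)            # condition number 1e6
A = U @ np.diag(s) @ V.T

x_true = np.sin(np.linspace(0, 3 * np.pi, n))   # "epicardial potentials"
b = A @ x_true + rng.normal(0, 1e-4, n)         # noisy body-surface data

# Zero-order Tikhonov: minimize ||Ax - b||^2 + alpha^2 ||x||^2.
alpha = 1e-4
x_tik = np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Truncated SVD: retain 60% of the singular values, as in the study.
k = int(0.6 * n)
x_svd = (V[:, :k] / s[:k]) @ (U[:, :k].T @ b)

x_naive = np.linalg.solve(A, b)              # unregularized inverse amplifies noise
for x in (x_naive, x_tik, x_svd):
    print(np.linalg.norm(x - x_true))
```

    Both limited inverses trade a little resolution for a large reduction in noise amplification, which is the trade-off the study quantifies as a function of lead and region counts.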

  20. Rigorous Approach in Investigation of Seismic Structure and Source Characteristicsin Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  1. Finite frequency shear wave splitting tomography: a model space search approach

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.

  2. Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.

    2017-02-01

    In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.

  3. DIRECT OBSERVATION OF SOLAR CORONAL MAGNETIC FIELDS BY VECTOR TOMOGRAPHY OF THE CORONAL EMISSION LINE POLARIZATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramar, M.; Lin, H.; Tomczyk, S., E-mail: kramar@cua.edu, E-mail: lin@ifa.hawaii.edu, E-mail: tomczyk@ucar.edu

    We present the first direct “observation” of the global-scale, 3D coronal magnetic fields of Carrington Rotation (CR) Cycle 2112 using vector tomographic inversion techniques. The vector tomographic inversion uses measurements of the Fe xiii 10747 Å Hanle effect polarization signals by the Coronal Multichannel Polarimeter (CoMP) and 3D coronal density and temperature derived from scalar tomographic inversion of Solar Terrestrial Relations Observatory (STEREO)/Extreme Ultraviolet Imager (EUVI) coronal emission lines (CELs) intensity images as inputs to derive a coronal magnetic field model that best reproduces the observed polarization signals. While independent verifications of the vector tomography results cannot be performed, we compared the tomography-inverted coronal magnetic fields with those constructed by magnetohydrodynamic (MHD) simulations based on observed photospheric magnetic fields of CR 2112 and 2113. We found that the MHD model for CR 2112 is qualitatively consistent with the tomography-inverted result for most of the reconstruction domain except for several regions. Particularly, for one of the most noticeable regions, we found that the MHD simulation for CR 2113 predicted a model that more closely resembles the vector tomography inverted magnetic fields. In another case, our tomographic reconstruction predicted an open magnetic field at a region where a coronal hole can be seen directly in a STEREO-B/EUVI image. We discuss the utilities and limitations of the tomographic inversion technique, and present ideas for future developments.

  4. Earth's core and inner-core resonances from analysis of VLBI nutation and superconducting gravimeter data

    NASA Astrophysics Data System (ADS)

    Rosat, S.; Lambert, S. B.; Gattano, C.; Calvo, M.

    2017-01-01

    Geophysical parameters of the deep Earth's interior can be evaluated through the resonance effects associated with the core and inner-core wobbles on the forced nutations of the Earth's figure axis, as observed by very long baseline interferometry (VLBI), or on the diurnal tidal waves, retrieved from the time-varying surface gravity recorded by superconducting gravimeters (SGs). In this paper, we invert data from both techniques for the rotational mode parameters in order to retrieve geophysical parameters of the deep Earth. We analyse surface gravity data from 15 SG stations and VLBI delays accumulated over the last 35 yr. We show existing correlations between several basic Earth parameters and therefore choose to invert for the rotational mode parameters. We employ a Bayesian inversion based on the Metropolis-Hastings algorithm, a Markov chain Monte Carlo method. We obtain estimates of the free core nutation resonant period and quality factor that are consistent between the two techniques. We also attempt an inversion for the free inner-core nutation (FICN) resonant period from gravity data. The most probable solution gives a period close to the annual prograde term (or S1 tide). However, the 95 per cent confidence interval extends the possible values between roughly 28 and 725 d for gravity data, and from 362 to 414 d for nutation data, depending on the prior bounds. The precisions of the estimated long-period nutation and respective small diurnal tidal constituents are hence not sufficient for a correct determination of the FICN complex frequency.
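
    The Metropolis-Hastings step of such an inversion can be sketched on a toy resonance. The Lorentzian "transfer function", noise level, prior bounds, and true period below are illustrative assumptions, not the actual free core nutation physics.

```python
import numpy as np

rng = np.random.default_rng(3)

freqs = np.linspace(1 / 600.0, 1 / 300.0, 25)   # cycles per day
gamma = 2e-4                                     # resonance width (illustrative)

def model(T):
    # Lorentzian-shaped resonance centered at frequency 1/T: a toy stand-in
    # for a nutation/tidal transfer function, not the real physics.
    return gamma**2 / ((freqs - 1.0 / T) ** 2 + gamma**2)

T_true = 430.0                                   # resonant period in days
data = model(T_true) + rng.normal(0, 0.05, freqs.size)

def log_likelihood(T):
    return -0.5 * np.sum((model(T) - data) ** 2) / 0.05**2

# Metropolis-Hastings random walk over the resonant period; the uniform prior
# bounds play the role of the paper's prior intervals.
T, logL, chain = 400.0, log_likelihood(400.0), []
for _ in range(20000):
    T_prop = T + rng.normal(0, 5.0)
    if 300.0 < T_prop < 600.0:
        logL_prop = log_likelihood(T_prop)
        if np.log(rng.uniform()) < logL_prop - logL:
            T, logL = T_prop, logL_prop
    chain.append(T)

posterior = np.array(chain[5000:])               # discard burn-in
print(posterior.mean(), posterior.std())         # mean near 430 days
```

    The posterior standard deviation printed at the end is the direct analogue of the confidence intervals quoted in the abstract, obtained without any linearization of the resonance model.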

  5. Targeted next generation sequencing for the detection of ciprofloxacin resistance markers using molecular inversion probes

    DTIC Science & Technology

    2016-07-06

    1 Targeted next-generation sequencing for the detection of ciprofloxacin resistance markers using molecular inversion probes Christopher P...development and evaluation of a panel of 44 single-stranded molecular inversion probes (MIPs) coupled to next-generation sequencing (NGS) for the...padlock and molecular inversion probes as upfront enrichment steps for use with NGS showed the specificity and multiplexability of these techniques

  6. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

    In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of the BEPS with the default parameter values. These results suggest that atmospheric CO2 data can be used to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
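
As a minimal sketch of estimating a temperature-sensitivity parameter such as Q10, one can fit the standard Q10 respiration model R(T) = R_ref * Q10**((T - T_ref)/10) to synthetic flux data by non-linear least squares. The reference temperature, parameter values, and noise level below are assumptions of the sketch, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Q10 respiration model with an assumed reference temperature of 10 deg C.
def respiration(T, R_ref, Q10):
    return R_ref * Q10 ** ((T - 10.0) / 10.0)

rng = np.random.default_rng(1)
T = np.linspace(0.0, 30.0, 60)                   # air temperature, deg C
R_obs = respiration(T, 2.0, 2.1) + 0.05 * rng.normal(size=T.size)

# Estimate R_ref and Q10 by non-linear least squares.
(R_ref_hat, Q10_hat), _ = curve_fit(respiration, T, R_obs, p0=(1.0, 1.5))
print(f"R_ref = {R_ref_hat:.2f}, Q10 = {Q10_hat:.2f}")
```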

  7. Stretch or contraction induced inversion of rectification in diblock molecular junctions

    NASA Astrophysics Data System (ADS)

    Zhang, Guang-Ping; Hu, Gui-Chao; Song, Yang; Xie, Zhen; Wang, Chuan-Kui

    2013-09-01

    Based on ab initio theory and the nonequilibrium Green's function method, the effect of stretch or contraction on the rectification in diblock co-oligomer molecular diodes is investigated theoretically. Interestingly, an inversion of the rectifying direction induced by stretching or contracting the molecular junctions, which is closely related to the number of pyrimidinyl-phenyl units, is proposed. Analysis of the molecular projected self-consistent Hamiltonian and of the evolution of the frontier molecular orbitals as well as the transmission coefficients under external biases provides insight into the observed results. It reveals that the asymmetric molecular level shift and the asymmetric evolution of orbital wave functions under biases are competing mechanisms for rectification. The stretch- or contraction-induced inversion of the rectification is due to a conversion of the dominant mechanism. This work suggests a feasible technique for manipulating the rectification performance of molecular diodes by mechanically controllable means.

  8. Radiative-conductive inverse problem for lumped parameter systems

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Nenarokomov, A. V.; Gonzalez, V. M.

    2008-11-01

    The purpose of this paper is to introduce an iterative regularization method for the study of the radiative and thermal properties of materials, with applications to the design of thermal control systems (TCS) of spacecraft. In this paper the radiative and thermal properties (emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation used as part of the TCS for prospective spacecraft, are estimated. The properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of the Inverse Heat Transfer Problem (IHTP). Physical and mathematical models of the heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the inverse heat conduction problem is presented as well. Practical testing was performed on a specimen of a real MLI.

  9. Study of multilayer thermal insulation by inverse problems method

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Nenarokomov, A. V.; Gonzalez, V. M.

    2009-11-01

    The purpose of this paper is to introduce a new method for the study of the radiative and thermal properties of materials, with further applications to the design of thermal control systems (TCS) of spacecraft. In this paper the radiative and thermal properties (emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation used as part of the TCS for prospective spacecraft, are estimated. The properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of the inverse heat transfer problem (IHTP). Physical and mathematical models of the heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the inverse heat conduction problem is presented as well. Practical validation was performed on a specimen of a real MLI.

  10. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong robustness to noise.
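
OWL-QN itself is beyond a short sketch, but the soft-thresholding idea underlying ℓ1-regularized inversion can be illustrated with the simpler proximal-gradient method (ISTA) on a toy sparse linear inverse problem. The operator, sparsity pattern, and penalty weight below are invented for the example and are not the paper's FWI setup.

```python
import numpy as np

def ista(A, y, lam, steps=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 100))             # toy underdetermined forward operator
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]     # sparse "model perturbation"
x_hat = ista(A, A @ x_true, lam=0.1)
print("non-zeros recovered at indices:", np.flatnonzero(np.abs(x_hat) > 0.5))
```

The soft-threshold line is exactly the operation OWL-QN folds into its quasi-Newton updates: it zeroes small coefficients, which is what preserves sparse, edge-like structure.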

  11. Non-cavitating propeller noise modeling and inversion

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Lee, Keunhwa; Seong, Woojae

    2014-12-01

    The marine propeller is the dominant exciter of the hull surface above it, causing high levels of noise and vibration in the ship structure. Recent successful developments have led to non-cavitating propeller designs, and thus the present focus is on the non-cavitating characteristics of the propeller, such as hydrodynamic noise and the hull excitation it induces. In this paper, an analytic source model of propeller non-cavitating noise, described by longitudinal quadrupoles and dipoles, is suggested based on the propeller hydrodynamics. To find the unknown source parameters, a multi-parameter inversion technique is adopted using pressure data obtained from a model-scale experiment and pressure field replicas calculated by the boundary element method. The inversion results show that the proposed source model is appropriate for modeling non-cavitating propeller noise. The results of this study can be utilized in the prediction of propeller non-cavitating noise and hull excitation at various stages of design and analysis.

  12. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
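
A minimal 1-D sketch of the two correction steps follows, assuming a synthetic phase-distorted complex image. The magnitude profile, error values, and bin count are invented for the example, and the magnitude weighting of the histogram is a simplification of the published scheme.

```python
import numpy as np

n = 256
x = np.arange(n)
mag = np.exp(-0.5 * ((x - 128) / 30.0) ** 2)      # magnitude of a 1-D "image"
a_true, b_true = 0.05, 0.7                        # first-/zero-order phase errors
img = mag * np.exp(1j * (a_true * x + b_true))    # phase-distorted complex image

# First-order error from the phase of the lag-1 autocorrelation.
a_hat = np.angle(np.sum(img[1:] * np.conj(img[:-1])))
img1 = img * np.exp(-1j * a_hat * x)              # first-order corrected image

# Zero-order error from the peak of the magnitude-weighted phase histogram.
hist, edges = np.histogram(np.angle(img1), bins=64, weights=np.abs(img1))
b_hat = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
corrected = img1 * np.exp(-1j * b_hat)            # zero-order corrected image

print(f"a_hat = {a_hat:.4f}, b_hat = {b_hat:.3f}")
```

Both estimates come purely from the reconstructed image, which is the point of the method: no extra measurements or pre-scan adjustments are needed.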

  13. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least-squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through application to numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimal. The effects of increasing the amount of data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
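
The classical ART update (a Kaczmarz projection onto each ray equation in turn) can be sketched as follows; the tiny consistent system below stands in for real interferometric projection data, and the paper's iterative-refinement least-squares variant differs in detail.

```python
import numpy as np

def art(A, b, sweeps=200, relax=1.0):
    """Algebraic reconstruction (Kaczmarz): cyclically project the estimate
    onto the hyperplane defined by each projection equation A[i] @ x = b[i]."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny consistent system standing in for interferometric projection data.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.1],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_hat = art(A, A @ x_true)
print(np.round(x_hat, 4))
```

For a consistent system the iterates converge to an exact solution; with noisy data the relaxation factor controls how strongly each projection is enforced.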

  14. Application of stepwise multiple regression techniques to inversion of Nimbus 'IRIS' observations.

    NASA Technical Reports Server (NTRS)

    Ohring, G.

    1972-01-01

    Exploratory studies with Nimbus-3 infrared interferometer-spectrometer (IRIS) data indicate that, in addition to temperature, meteorological parameters such as geopotential heights of pressure surfaces, tropopause pressure, and tropopause temperature can be inferred from the observed spectra with the use of simple regression equations. The technique of screening the IRIS spectral data by means of stepwise regression to obtain the best radiation predictors of meteorological parameters is validated. The simplicity of applying the technique, and of the derived linear regression equations, which contain only a few terms, suggests the usefulness of this approach. Based upon the results obtained, suggestions are made for further development and exploitation of the stepwise regression analysis technique.
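
Forward stepwise screening of this kind can be sketched as a greedy search that repeatedly adds the predictor giving the largest drop in residual sum of squares. The synthetic predictors and coefficients below are invented for the example, not derived from IRIS spectra.

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection: add the predictor that most reduces RSS."""
    n, p = X.shape
    selected = []
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            Z = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))                 # 10 candidate "radiation predictors"
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=200)
print(forward_stepwise(X, y, max_terms=2))     # indices of the chosen predictors
```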

  15. Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan. L.

    2005-01-01

    NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structural degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an inverse finite element method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.

  16. 3-D acoustic waveform simulation and inversion at Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Iezzi, A. M.; Fee, D.; Matoza, R. S.; Austin, A.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Kennedy, B.; Fitzgerald, R.; Key, N.

    2016-12-01

    Acoustic waveform inversion shows promise for improved eruption characterization that may inform volcano monitoring. Well-constrained inversion can provide robust estimates of volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real time). Previous studies have made assumptions about the multipole source mechanism, which can be thought of as the combination of pressure fluctuations from a volume change, directionality, and turbulence. Until now, this infrasound source could not be well constrained because infrasound sensors have been deployed only on the Earth's surface, so an assumption of no vertical dipole component has been made. In this study we deploy a high-density seismo-acoustic network, including multiple acoustic sensors along a tethered balloon, around Yasur Volcano, Vanuatu. Yasur has frequent strombolian eruptions from any one of its three active vents within a 400 m diameter crater. The third (vertical) dimension of pressure sensor coverage allows us to begin to constrain the acoustic source components, primarily the horizontal and vertical components and their previously uncharted contributions to volcano infrasound. The deployment also has a geochemical and visual component, including FLIR, FTIR, two scanning FLYSPECs, and a variety of visual imagery. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a digital elevation model created using structure-from-motion techniques. We then invert for the source location and source-time function, constraining the contribution of the vertical sound radiation to the source. 
The final outcome of this inversion is an infrasound-derived volume flux as a function of time, which we then compare to those derived independently from geochemical techniques as well as the inversion of seismic data. Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249

  17. Inverted Nipple Treatment and Poliglecaprone Spacer.

    PubMed

    Dessena, Lidia; Dast, Sandy; Perez, Simon; Mercut, Razvan; Herlin, Christian; Sinna, Raphael

    2018-05-01

    Nipple inversion is defined as a nipple that does not project. It is a frequent pathologic condition in which the whole nipple, or a portion of it, is buried inward towards the lactiferous ducts and lies below the plane of the areola. Numerous strategies have been described to correct nipple inversion. All procedures aim to restore a good nipple shape while preserving function and sensitivity whenever possible. To avoid recurrence and to obtain good aesthetic results, we present a modified percutaneous technique. We performed a retrospective study between 2011 and 2016 and included all cases of inverted nipples treated in our department. Our modified percutaneous technique consists of a minimal incision supported by a percutaneous suture acting as a temporary spacer to fill the defect caused by releasing the fibro-ductal bands. A total of 41 inverted nipples were corrected in 32 patients. After 1 year of follow-up, no recurrence was observed and all nipples maintained complete eversion. There was only one case of partial unilateral necrosis, in a patient who had undergone tumorectomy and radiotherapy. All patients were satisfied with the aesthetic outcomes. This is a simple, safe and inexpensive technique that should be considered a reliable method for long-term correction of nipple inversion. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  18. Cooperative inversion of magnetotelluric and seismic data sets

    NASA Astrophysics Data System (ADS)

    Markovic, M.; Santos, F.

    2012-04-01

    Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, in both academia and industry, to use two or more different data sets to obtain the subsurface property distribution. In our case, we deal with magnetotelluric and seismic data sets. In our approach, we develop an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. A separate inversion is performed at every step, and information is exchanged for model integration. Interrelationships between parameters from different models are not required in analytical form. We investigate how different numbers of clusters affect the zonation and spatial distribution of parameters. In our study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, and then a hybrid method (alternating optimization plus a quasi-Newton method). Acknowledgment: This work is supported by FCT Portugal
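
The alternating-optimization form of fuzzy c-means used in such schemes can be sketched directly from its two update equations: memberships from inverse squared distances, centers from membership-weighted means. The synthetic two-cluster data below are invented for the example and are not geophysical model parameters.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Alternating optimization of fuzzy c-means memberships U and centers V."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])      # random initial memberships
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]          # cluster centers
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)               # normalize memberships
    return U, V

# Two well-separated synthetic property clusters (e.g. resistivity vs velocity).
rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(50, 2)),
               rng.normal([5.0, 5.0], 0.3, size=(50, 2))])
U, V = fuzzy_cmeans(X, c=2)
print(np.round(V[np.argsort(V[:, 0])], 2))   # centers, sorted for readability
```

The fuzzy memberships U, rather than hard labels, are what allow information from two inversions to be exchanged gradually during model integration.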

  19. Interpreting OCO-2 Constrained CO2 Surface Flux Estimates Through the Lens of Atmospheric Transport Uncertainty.

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Jacobson, A. R.; Basu, S.; Weir, B.; Baker, D. F.; Bowman, K. W.; Chevallier, F.; Crowell, S.; Deng, F.; Denning, S.; Feng, L.; Liu, J.

    2017-12-01

    The Orbiting Carbon Observatory-2 (OCO-2) was launched in July 2014 and has collected three years of column-mean CO2 (XCO2) data. The OCO-2 model intercomparison project (MIP) was formed to provide a means of analyzing results from many different atmospheric inversion modeling systems. Certain facets of the inversion systems, such as observations and fossil fuel CO2 fluxes, were standardized to remove first-order sources of difference between the systems. Nevertheless, large variations amongst the flux results from the systems still exist. In this presentation, we explore one dimension of this uncertainty: the impact of different atmospheric transport fields, i.e. wind speeds and directions. Early results illustrate a large systematic difference between two classes of atmospheric transport, arising from winds in the parent GEOS-DAS (NASA-GMAO) and ERA-Interim (ECMWF) data assimilation models. We explore these differences and their effect on inversion-based estimates of surface CO2 flux by using a combination of simplified inversion techniques as well as the full OCO-2 MIP suite of CO2 flux estimates.

  20. Inverse Tomo-Lithography for Making Microscopic 3D Parts

    NASA Technical Reports Server (NTRS)

    White, Victor; Wiberg, Dean

    2003-01-01

    According to a proposal, basic x-ray lithography would be extended to incorporate a technique, called inverse tomography, that would enable the fabrication of microscopic three-dimensional (3D) objects. The proposed inverse tomo-lithographic process would make it possible to produce complex-shaped, submillimeter-sized parts that would be difficult or impossible to make in any other way. Examples of such shapes or parts include tapered helices, paraboloids with axes of different lengths, and even Archimedean screws that could serve as rotors in microturbines. The proposed inverse tomo-lithographic process would be based partly on a prior microfabrication process known by the German acronym LIGA (Lithographie, Galvanoformung, Abformung, meaning lithography, electroforming, molding). In LIGA, one generates a precise, high-aspect-ratio pattern by exposing a thick, x-ray-sensitive resist material to an x-ray beam through a mask that contains the pattern. One can electrodeposit metal into the developed resist pattern to form a precise metal part, then dissolve the resist to free the metal. Aspect ratios of 100:1 and patterns in resist thicknesses of several millimeters are possible.

  1. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
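
The KPCA step of such a workflow can be sketched with an RBF kernel and an eigendecomposition of the doubly centered Gram matrix. The data, kernel width, and dimensions below are invented for the example, and the paper's LMCMC sampler is not reproduced here.

```python
import numpy as np

def kpca(X, n_components=2, gamma=0.1):
    """Kernel PCA with an RBF kernel: eigendecompose the centered Gram matrix."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                    # double centering
    w, v = np.linalg.eigh(Kc)                         # ascending eigenvalues
    w = w[::-1][:n_components]
    v = v[:, ::-1][:, :n_components]
    return v * np.sqrt(np.maximum(w, 0.0))            # projected coordinates

rng = np.random.default_rng(6)
# High-dimensional samples that actually live near a 2-D subspace.
Z = rng.normal(size=(100, 2))
X = Z @ rng.normal(size=(2, 50)) + 0.01 * rng.normal(size=(100, 50))
Y = kpca(X, n_components=2, gamma=0.01)
print(Y.shape)
```

Sampling then takes place in the low-dimensional coordinates Y rather than in the original 50-dimensional parameter space, which is the dimensionality reduction the paper relies on.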

  2. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  3. Real-time Inversion of Tsunami Source from GNSS Ground Deformation Observations and Tide Gauges.

    NASA Astrophysics Data System (ADS)

    Arcas, D.; Wei, Y.

    2017-12-01

    Over the last decade, the NOAA Center for Tsunami Research (NCTR) has developed an inversion technique to constrain tsunami sources based on the use of Green's functions in combination with data reported by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems. The system has consistently proven effective in providing highly accurate tsunami forecasts of wave amplitude throughout an entire basin. However, improvement is necessary in two critical areas: reduction of data latency for near-field tsunami predictions and reduction of the maintenance cost of the network. Two types of sensors have been proposed to supplement the existing network of DART® systems: Global Navigation Satellite System (GNSS) stations and coastal tide gauges. The use of GNSS stations to provide autonomous geo-spatial positioning at specific sites during an earthquake has been proposed in recent years to supplement the DART® array in tsunami source inversion. GNSS technology has the potential to provide substantial contributions in the two critical areas of DART® technology where improvement is most necessary. The present study uses GNSS ground displacement observations of the 2011 Tohoku-Oki earthquake in combination with the NCTR operational database of Green's functions to produce a rapid estimate of the tsunami source based on GNSS observations alone. The solution is then compared with that obtained via DART® data inversion, and the difficulties in obtaining an accurate GNSS-based solution are underlined. The study also identifies the set of conditions required for source inversion from coastal tide gauges, using the degree of nonlinearity of the signal as the primary criterion. We then proceed to identify the conditions and scenarios under which a particular gauge could be used to invert a tsunami source.
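
At its core, a Green's-function-based source inversion is a linear least-squares problem for the unit-source weights. The sketch below uses random stand-ins for the precomputed Green's functions and invented weights; it is not NCTR's operational database or algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical precomputed Green's functions: each column is the response at
# the observation points to a unit source on one of 6 sub-faults.
G = rng.normal(size=(300, 6))
m_true = np.array([0.0, 1.2, 0.8, 0.0, 0.3, 0.0])   # "true" unit-source weights
d = G @ m_true + 0.05 * rng.normal(size=300)        # observed waveforms + noise

# Linear least-squares inversion for the source weights.
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_hat, 2))
```

Because the forward problem is linear in the unit-source weights, the inversion is fast enough for real-time forecasting once the Green's functions are precomputed.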

  4. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
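
The ill-posed multiexponential inversion can be sketched as a Tikhonov-regularized non-negative least-squares fit on a fixed T2 grid (a simpler stand-in for UPEN's uniform-penalty scheme). The echo times, grid, components, and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls

# Simulated CPMG decay from two T2 components.
t = np.linspace(0.001, 1.0, 200)                  # echo times, s
T2_grid = np.logspace(-3, 0.5, 80)                # candidate T2 values, s
K = np.exp(-t[:, None] / T2_grid[None, :])        # multiexponential kernel

f_true = np.zeros(80)
f_true[30], f_true[60] = 1.0, 0.5                 # fast and slow components
rng = np.random.default_rng(8)
y = K @ f_true + 0.001 * rng.normal(size=t.size)

# Tikhonov-regularized non-negative inversion:
#   min ||K f - y||^2 + lam^2 ||f||^2   subject to  f >= 0
lam = 0.05
A = np.vstack([K, lam * np.eye(80)])
b = np.concatenate([y, np.zeros(80)])
f_hat, _ = nnls(A, b)
print("recovered T2 peaks (s):", np.round(T2_grid[f_hat > 0.1], 3))
```

Rerunning this on many noisy realizations of the same true distribution is exactly the kind of simulation-based assessment the paper describes.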

  5. Part-to-itself model inversion in process compensated resonance testing

    NASA Astrophysics Data System (ADS)

    Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Aldrin, John C.; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    Process Compensated Resonance Testing (PCRT) is a non-destructive evaluation (NDE) method involving the collection and analysis of a part's resonance spectrum to characterize its material or damage state. Prior work used the finite element method (FEM) to develop forward modeling and model inversion techniques. In many cases, the inversion problem can become confounded by multiple parameters having similar effects on a part's resonance frequencies. To reduce the influence of confounding parameters and isolate the change in a part (e.g., creep), a part-to-itself (PTI) approach can be taken. A PTI approach involves inverting only the change in resonance frequencies from the before and after states of a part. This approach reduces the possible inversion parameters to only those that change in response to in-service loads and damage mechanisms. To evaluate the effectiveness of using a PTI inversion approach, creep strain and material properties were estimated in virtual and real samples using FEM inversion. Virtual and real dog bone samples composed of nickel-based superalloy Mar-M-247 were examined. Virtual samples were modeled with typically observed variations in material properties and dimensions. Creep modeling was verified with the collected resonance spectra from an incrementally crept physical sample. All samples were inverted against a model space that allowed for change in the creep damage state and the material properties but was blind to initial part dimensions. Results quantified the capabilities of PTI inversion in evaluating creep strain and material properties, as well as its sensitivity to confounding initial dimensions.

  6. Ultraviolet-infrared laser-induced domain inversion in MgO-doped congruent LiNbO3 and near stoichiometric LiTaO3 crystals

    NASA Astrophysics Data System (ADS)

    Zhi, Ya'nan; Qu, Weijuan; Liu, De'an; Sun, Jianfeng; Yan, Aimin; Liu, Liren

    2008-08-01

    Laser-induced domain inversion is a promising technique for domain engineering in LiNbO3 and LiTaO3. Ultraviolet-infrared laser-induced domain inversion in MgO-doped congruent LiNbO3 and near-stoichiometric LiTaO3 crystals is investigated for the first time here. Within the wavelength range from 351 to 799 nm, the different reductions of the nucleation field induced by focused continuous laser irradiation are systematically investigated in MgO-doped congruent LiNbO3 crystals. The investigation of ultrashort-pulse laser-induced domain inversion in MgO-doped congruent LiNbO3 is performed with 800 nm wavelength irradiation. Focused continuous ultraviolet laser-induced ferroelectric domain inversion in near-stoichiometric LiTaO3 is also investigated. Different physical explanations, based on the space charge field and on defect formation, are presented for the laser-induced domain inversion, together with solid experimental proof. The results provide feasible schemes for the further investigation of laser-induced domain engineering in MgO-doped LiNbO3 and near-stoichiometric LiTaO3 crystals. Important characteristics of domain inversion in LiNbO3 crystals, including the domain wall and the internal field, are also investigated by digital holographic interferometry with an improved reconstruction method, and several novel experimental results and conclusions are obtained.

  7. Inversion layer MOS solar cells

    NASA Technical Reports Server (NTRS)

    Ho, Fat Duen

    1986-01-01

    Inversion layer (IL) Metal Oxide Semiconductor (MOS) solar cells were fabricated. The fabrication technique and problems are discussed. A plan for modeling IL cells is presented. Future work in this area is addressed.

  8. Improved Tandem Measurement Techniques for Aerosol Particle Analysis

    NASA Astrophysics Data System (ADS)

    Rawat, Vivek Kumar

    Non-spherical, chemically inhomogeneous (complex) nanoparticles are encountered in a number of natural and engineered environments, including combustion systems (which produce highly non-spherical aggregates), reactors used in gas-phase materials synthesis of doped or multicomponent materials, and ambient air. These nanoparticles are often highly diverse in size, composition and shape, and hence require determination of property distribution functions for accurate characterization. This thesis focuses on the development of tandem mobility-mass measurement techniques coupled with appropriate data inversion routines to facilitate measurement of two-dimensional size-mass distribution functions while correcting for the non-idealities of the instruments. Chapter 1 provides the detailed background and motivation for the studies performed in this thesis. In Chapter 2, the development of an inversion routine is described which is employed to determine two-dimensional size-mass distribution functions from Differential Mobility Analyzer-Aerosol Particle Mass analyzer tandem measurements. Chapter 3 demonstrates the application of the two-dimensional distribution function to compute cumulative mass distribution functions and also evaluates the validity of this technique by comparing the calculated total mass concentrations to measured values for a variety of aerosols. In Chapter 4, this tandem measurement technique with the inversion routine is employed to analyze colloidal suspensions. Chapter 5 focuses on the application of a transverse modulation ion mobility spectrometer coupled with a mass spectrometer to study the effect of vapor dopants on the mobility shifts of sub-2 nm peptide ion clusters. These mobility shifts are then compared to models based on vapor uptake theories. Finally, Chapter 6 provides a conclusion to all the studies performed in this thesis and discusses future avenues of research.

  9. Fate of Volatile Organic Compounds in Constructed Wastewater Treatment Wetlands

    USGS Publications Warehouse

    Keefe, S.H.; Barber, L.B.; Runkel, R.L.; Ryan, J.N.

    2004-01-01

    The fate of volatile organic compounds was evaluated in a wastewater-dependent constructed wetland near Phoenix, AZ, using field measurements and solute transport modeling. Numerically based volatilization rates were determined using inverse modeling techniques and hydraulic parameters established by sodium bromide tracer experiments. Theoretical volatilization rates were calculated from the two-film method incorporating physicochemical properties and environmental conditions. Additional analyses were conducted using graphically determined volatilization rates based on field measurements. Transport (with first-order removal) simulations were performed using a range of volatilization rates and were evaluated with respect to field concentrations. The inverse and two-film reactive transport simulations demonstrated excellent agreement with measured concentrations for 1,4-dichlorobenzene, tetrachloroethene, dichloromethane, and trichloromethane, and fair agreement for dibromochloromethane, bromodichloromethane, and toluene. Wetland removal efficiencies from inlet to outlet ranged from 63% to 87% for target compounds.

  10. Kinematic inversion of the 2008 Mw7 Iwate-Miyagi (Japan) earthquake by two independent methods: Sensitivity and resolution analysis

    NASA Astrophysics Data System (ADS)

    Gallovic, Frantisek; Cirella, Antonella; Plicka, Vladimir; Piatanesi, Alessio

    2013-04-01

    On 14 June 2008, UTC 23:43, the border of the Iwate and Miyagi prefectures was hit by an Mw7 reverse-fault crustal earthquake. The event is known for the largest ground acceleration observed to date (~4g), recorded at station IWTH25. We analyze observed strong motion data with the objective of imaging the event's rupture process and the associated uncertainties. Two different slip inversion approaches are used, the difference between the two methods lying only in the parameterization of the source model. To minimize mismodeling of the propagation effects, we use a crustal model obtained by full waveform inversion of aftershock records in the frequency range 0.05-0.3 Hz. In the first method, based on a linear formulation, the parameters are samples of the slip velocity functions along the (finely discretized) fault in a time window spanning the whole rupture duration. Such a source description is very general, with no prior constraint on the nucleation point, rupture velocity, or shape of the slip velocity function. Thus the inversion can resolve very general (unexpected) features of the rupture evolution, such as multiple rupturing, rupture-propagation reversals, etc. On the other hand, due to the relatively large number of model parameters, the inversion result is highly non-unique, with the possibility of obtaining a biased solution. The second method is a non-linear global inversion technique, where each point on the fault can slip only once, following a prescribed functional form of the source time function. We invert simultaneously for peak slip velocity, slip angle, rise time and rupture time, allowing a given range of variability for each kinematic model parameter. For this reason, unlike in the linear inversion approach, the rupture process is retrieved with a smaller number of parameters and is more constrained, with proper control on the allowed range of parameter values. 
In order to test the resolution and reliability of the retrieved models, we present a thorough analysis of the performance of the two inversion approaches. Indeed, depending on the inversion strategy and the intrinsic non-uniqueness of the inverse problem, the final slip maps and distributions of rupture onset times are generally different, sometimes even incompatible with each other. Great emphasis is devoted to the uncertainty estimates of both techniques. Thus we compare not only the best-fitting models, but also their 'compatibility' in terms of the uncertainty limits.

  11. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r-solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on least squares and truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of linear shallow-water theory. As in inverse seismic problems, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic used as a source (the unknown tsunami source is represented as a truncated series of spatial harmonics in the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate the future inversion by a given observational system, which allows designing a more effective disposition of the tsunameters with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results. 
Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The source function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of tsunami wave propagation.
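    The r-solution strategy described above amounts to a truncated singular value decomposition: the unknown source is expanded in spatial harmonics, a precomputed matrix maps harmonic coefficients to tsunameter records, and the least-squares solution is restricted to the r dominant singular directions to keep observational noise from being amplified. A minimal sketch of that idea (the matrix here is random synthetic data; the paper's matrix would hold simulated shallow-water waveforms):

```python
import numpy as np

def tsvd_inversion(A, d, r):
    """r-solution of the least-squares problem min ||A c - d||:
    project onto the r dominant singular directions of A, discarding
    the poorly constrained ones that amplify observational noise.
    Column j of A holds the waveform computed for spatial harmonic j."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ d) / s[:r])

# Synthetic demo in place of real precomputed waveforms
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))      # 200 record samples, 20 harmonics
c_true = rng.standard_normal(20)
d = A @ c_true + 0.01 * rng.standard_normal(200)
c_hat = tsvd_inversion(A, d, r=15)      # truncated recovery
```

    Inspecting the singular spectrum of A before choosing r is exactly the precomputation step the abstract mentions: directions with tiny singular values are the ones a given tsunameter network cannot constrain.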

  12. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    NASA Astrophysics Data System (ADS)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions (RFs). The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The inversion of RFs represents a complex problem because the amplitude and the arrival time of the different phases depend in a non-linear way on the depth of the interfaces and the characteristics of the velocity structure. The solution we envisage for managing the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. 
Our first focus of application is the Central Alps, where a 20-year-long dataset of high-quality teleseismic events recorded at 81 stations is available, together with a high-resolution P-wave velocity model (Diehl et al., 2009). We plan to extend the 3D shear-wave velocity inversion method to the entire Alpine domain in the frame of the AlpArray project, and to apply it to other areas with a dense network of broadband seismometers.

  13. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, measurements in both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. 
Contributions A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method to TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is plugged in. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM). It suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from the adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of Green's function, even though the media involved are lossless. A Fourier Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both real and imaginary parts of the permittivity if no prior information is included. 
M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any other technique devoted to improving resolution remains to be studied. As done by some other contributors, the model of the incident field is chosen to fit the Fourier Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. It is suggested to minimize a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods, based on the second-order Born approximation, which is nonlinear in terms of contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost. 
O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior law distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to collect the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. Reliability of the data is demonstrated through comparisons between measurements and computed scattered fields in both fundamental polarizations. In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level set representation. It is assumed that the constitutive materials of the obstacles under test are known and the shape is retrieved. Two approaches are reported. In the first one the obstacles of different constitutive materials are represented in a single level set, while in the second approach several level sets are combined. 
The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum phase functions to 2D problems. In the kind of inverse problems we are concerned with, it consists of separating the contributions from the field and from the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is a solution deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors combine the fast convergence of the DTA method for generating an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. Conclusion In this special section various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from `aspect-limited' configurations. 
Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem. Acknowledgments The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.

  14. Inversion of calcite twin data for paleostress (1): improved Etchecopar technique tested on numerically-generated and natural data

    NASA Astrophysics Data System (ADS)

    Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie

    2015-04-01

    Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance for constraining results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows one to reconstruct the 5 parameters of the deviatoric paleostress tensor (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern, among others, the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid given to the user in defining the best stress tensor solution. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the Euler angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and the degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution, explaining the maximum number of twinned planes and the whole set of untwinned planes, is reached. 
This new inversion procedure is tested on monophase and polyphase numerically-generated as well as natural calcite twin data in order to more accurately define the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence to the samples, to test the impact of strain hardening through the change of the critical resolved shear stress for twinning, and to evaluate possible bias due to measurement uncertainties or clustering of grain optical axes in the samples.
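    The core computation behind such an inversion is evaluating, for each candidate reduced stress tensor (three Euler angles plus the stress ratio, the 4 parameters explored on the hypersphere), the resolved shear stress on every twin plane and comparing it against the critical resolved shear stress. A minimal sketch under standard conventions (function names and the uniaxial check values are illustrative, not from the paper):

```python
import numpy as np

def rotation_from_euler(phi1, theta, phi2):
    """Z-X-Z Euler rotation carrying principal stress axes to sample axes."""
    c, s = np.cos, np.sin
    Rz1 = np.array([[c(phi1), -s(phi1), 0.0], [s(phi1), c(phi1), 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c(theta), -s(theta)], [0.0, s(theta), c(theta)]])
    Rz2 = np.array([[c(phi2), -s(phi2), 0.0], [s(phi2), c(phi2), 0.0], [0.0, 0.0, 1.0]])
    return Rz1 @ Rx @ Rz2

def reduced_stress_tensor(phi1, theta, phi2, ratio):
    """Reduced stress tensor from the 4 explored parameters: three Euler
    angles and the stress ratio Phi = (s2 - s3)/(s1 - s3), with principal
    values normalized so that s1 = 1 and s3 = 0."""
    R = rotation_from_euler(phi1, theta, phi2)
    return R @ np.diag([1.0, ratio, 0.0]) @ R.T

def resolved_shear_stress(sigma, n, g):
    """Resolved shear stress on a twin plane (unit normal n) along the
    twinning direction g; a plane is predicted twinned where this value
    exceeds the critical resolved shear stress."""
    return g @ sigma @ n
```

    For a uniaxial tensor (ratio 0, zero angles), a plane at 45 degrees to the maximum principal stress carries the maximum resolved shear stress of 0.5, the classic Schmid-factor bound.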

  15. Validation of Spherically Symmetric Inversion by Use of a Tomographically Reconstructed Three-Dimensional Electron Density of the Solar Corona

    NASA Technical Reports Server (NTRS)

    Wang, Tongjiang; Davila, Joseph M.

    2014-01-01

    Determining the coronal electron density by the inversion of white-light polarized brightness (pB) measurements by coronagraphs is a classic problem in solar physics. An inversion technique based on spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. However, to date there is no study of the uncertainty estimation of this method. Here we present a detailed assessment of this method using as a model a three-dimensional (3D) electron density of the corona from 1.5 to 4 solar radii, reconstructed by a tomography method from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show in theory and observation that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. Then we assess the SSPA method using pB images synthesized from the 3D density model, and find that the SSPA density values are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the former has a lower peak but extends farther in both the longitudinal and latitudinal directions than the latter. We estimate that the SSPA method may resolve the coronal density structure near the POS with an angular resolution in longitude of about 50 deg. Our results confirm the suggestion that the SSI method is applicable to the solar minimum streamer (belt), as stated in some previous studies. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density, in rough agreement with the reconstruction by tomography for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to the 3D tomographic technique in some cases, given that the development of the latter is still an ongoing research effort.

  16. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more image details preserved or restored. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and look visually more pleasing.
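    Formulating super-resolution as an inverse problem means minimizing a cost built from the observation model: blur by the point spread function, then downsample. The sketch below uses plain gradient descent on the data-fit term as a simple stand-in for the paper's Hopfield-network energy minimization; the 1-D circular-convolution model, PSF, and step size are illustrative assumptions:

```python
import numpy as np

def degrade(x, h, factor):
    """Observation model: circular blur with PSF h, then downsample."""
    H = np.fft.fft(h, len(x))
    blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))
    return blurred[::factor]

def super_resolve(y, h, factor, n_iter=500, lr=0.5):
    """Minimize ||y - DHx||^2 by gradient descent, a simple stand-in for
    the Hopfield-network energy minimization described in the paper."""
    n = len(y) * factor
    x = np.repeat(y, factor).astype(float)   # initial guess: replication
    H = np.fft.fft(h, n)
    for _ in range(n_iter):
        r = degrade(x, h, factor) - y        # residual in low-res space
        up = np.zeros(n)
        up[::factor] = r                     # adjoint of downsampling
        # adjoint of the blur: correlate with the PSF (conjugate in Fourier)
        g = np.real(np.fft.ifft(np.fft.fft(up) * np.conj(H)))
        x -= lr * g
    return x

# Demo: recover a smooth signal blurred and downsampled by 2
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
x_true = np.sin(t) + 0.5 * np.sin(3 * t)
h = np.array([0.25, 0.5, 0.25])              # normalized PSF
y = degrade(x_true, h, 2)
x_hat = super_resolve(y, h, 2)
```

    Because the PSF is normalized, the step size sits below the stability bound and the data residual decreases monotonically; a regularization term would be added to this cost in the presence of noise.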

  17. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
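    The common idea behind such corrections is to resample the stratified sample with weights proportional to the inverse inclusion probabilities, so that the resample approximates the population the classifier will actually face. A minimal sketch of inverse-probability oversampling (the paper's parametric bagging variant additionally perturbs drawn points using a fitted covariance; this simplified version does not):

```python
import numpy as np

def ip_resample(X, y, incl_prob, n_out, rng=None):
    """Draw a bootstrap resample of the biased sample with weights
    proportional to 1 / inclusion probability, so that strata that were
    artificially enriched (e.g. cases in a case-control design) are
    down-weighted back toward their population frequency."""
    rng = np.random.default_rng(rng)
    w = 1.0 / np.asarray(incl_prob, float)
    w /= w.sum()
    idx = rng.choice(len(y), size=n_out, replace=True, p=w)
    return X[idx], y[idx]
```

    A classifier is then trained on the resample instead of the raw stratified data; with cases sampled at probability 0.5 and controls at 0.01, the resample's case fraction drops from 1/2 back to roughly 2 percent.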

  18. Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.

    PubMed

    Chami, Malik; Robilliard, Denis

    2002-10-20

    A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with the neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than for traditional techniques such as band-ratio algorithms. The application of GP to real satellite data [from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the retrieval error of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.

  19. The application of inverse Broyden's algorithm for modeling of crack growth in iron crystals.

    PubMed

    Telichev, Igor; Vinogradov, Oleg

    2011-07-01

    In the present paper we demonstrate the use of inverse Broyden's algorithm (IBA) in the simulation of fracture in single iron crystals. The iron crystal structure is treated as a truss system, while the forces between the atoms situated at the nodes are defined by modified Morse inter-atomic potentials. The evolution of lattice structure is interpreted as a sequence of equilibrium states corresponding to the history of applied load/deformation, where each equilibrium state is found using an iterative procedure based on IBA. The results presented demonstrate the success of applying the IBA technique for modeling the mechanisms of elastic, plastic and fracture behavior of single iron crystals.
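    Broyden's second ('inverse') method maintains an approximation of the inverse Jacobian directly, so each equilibrium search needs only matrix-vector products and rank-one updates, with no factorization inside the iteration. A minimal sketch on a small nonlinear system (the truss model and Morse potentials of the paper are not reproduced here; the demo system is illustrative):

```python
import numpy as np

def broyden_inverse(F, x0, tol=1e-10, max_iter=200):
    """Find a root of F with Broyden's second ('inverse') method: keep an
    approximation B of the inverse Jacobian and update it by a rank-one
    correction satisfying the secant condition B @ df = dx."""
    x = np.asarray(x0, float)
    B = np.eye(len(x))                 # initial inverse-Jacobian guess
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        dx = -B @ f                    # quasi-Newton step
        x_new = x + dx
        f_new = F(x_new)
        df = f_new - f
        B += np.outer(dx - B @ df, df) / (df @ df)
        x, f = x_new, f_new
    return x

# Usage: equilibrium of a tiny nonlinear two-unknown system
F = lambda x: np.array([x[0]**3 + x[1] - 1.0, x[1]**3 - x[0] + 1.0])
root = broyden_inverse(F, [0.9, 0.1])
```

    Each lattice equilibrium state in the paper's simulation would correspond to one such root-finding solve, restarted from the previous equilibrium as the load is incremented.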

  20. Determination of the rCBF in the Amygdala and Rhinal Cortex Using a FAIR-TrueFISP Sequence

    PubMed Central

    Martirosian, Petros; Klose, Uwe; Nägele, Thomas; Schick, Fritz; Ernemann, Ulrike

    2011-01-01

    Objective Brain perfusion can be assessed non-invasively by modern arterial spin labeling MRI. The FAIR (flow-sensitive alternating inversion recovery)-TrueFISP (true fast imaging in steady precession) technique was applied for regional assessment of cerebral blood flow in brain areas close to the skull base, since this approach provides low sensitivity to magnetic susceptibility effects. The investigation of the rhinal cortex and the amygdala is a potentially important feature for the diagnosis of and research on dementia in its early stages. Materials and Methods Twenty-three subjects with no structural or psychological impairment were investigated. FAIR-TrueFISP quantitative perfusion data were evaluated in the amygdala on both sides and in the pons. A frequency offset corrected inversion (FOCI) radiofrequency pulse was used for slice-selective inversion. After a time delay of 1.2 sec, data acquisition began. Imaging slice thickness was 5 mm and the inversion slab thickness for slice-selective inversion was 12.5 mm. The image matrix size for perfusion images was 64 × 64 with a field of view of 256 × 256 mm, resulting in a spatial resolution of 4 × 4 × 5 mm. Repetition time was 4.8 ms; echo time was 2.4 ms. Acquisition time for the 50 sets of FAIR images was 6:56 min. Data were compared with perfusion data from the literature. Results Perfusion values in the right amygdala, left amygdala and pons were 65.2 (± 18.2) mL/100 g/minute, 64.6 (± 21.0) mL/100 g/minute, and 74.4 (± 19.3) mL/100 g/minute, respectively. These values were higher than formerly published data using continuous arterial spin labeling but similar to 15O-PET (oxygen-15 positron emission tomography) data. Conclusion The FAIR-TrueFISP approach is feasible for the quantitative assessment of perfusion in the amygdala. Data are comparable with formerly published data from the literature. 
The applied technique provided excellent image quality, even for brain regions located at the skull base in the vicinity of marked susceptibility steps. PMID:21927556
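    For orientation, a single-compartment FAIR model turns the control-label signal difference into a CBF value. The sketch below is not the paper's exact quantification pipeline: the blood T1, the blood-brain partition coefficient, and the example signal difference are illustrative assumptions; only the 1.2 s inversion delay comes from the abstract.

```python
import math

def fair_cbf(dM_over_M0, TI, T1_blood=1.4, lam=0.9):
    """Simplified single-compartment FAIR perfusion estimate:
        f = lam * (dM / M0) / (2 * TI * exp(-TI / T1_blood))
    dM_over_M0 : control-label difference normalized by equilibrium signal
    TI         : inversion delay in seconds (1.2 s in the study)
    T1_blood   : assumed longitudinal relaxation time of blood, seconds
    lam        : assumed blood-brain partition coefficient, mL/g
    Returns CBF converted from mL/g/s to mL/100 g/min."""
    f = lam * dM_over_M0 / (2.0 * TI * math.exp(-TI / T1_blood))
    return f * 6000.0

# Example: a ~0.9% signal difference yields a CBF in the tens of
# mL/100 g/min, the order of magnitude reported above
cbf = fair_cbf(0.009, 1.2)
```

    The estimate is linear in the measured signal difference, which is why subtraction errors from susceptibility artifacts translate directly into perfusion errors, and why the TrueFISP readout's robustness near the skull base matters.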

  1. 3-D acoustic waveform simulation and inversion supplemented by infrasound sensors on a tethered weather balloon at Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Iezzi, A. M.; Fee, D.; Matoza, R. S.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Austin, A.; Kennedy, B.; Fitzgerald, R.; Gomez, C.; Key, N.

    2017-12-01

    Well-constrained acoustic waveform inversion can provide robust estimates of erupted volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real time). Previous studies have made assumptions about the multipole source mechanism, which can be represented as the combination of pressure fluctuations from a volume change, directionality, and turbulence. The vertical dipole has not been addressed due to ground-based recording limitations. In this study we deployed a high-density seismo-acoustic network around Yasur Volcano, Vanuatu, including multiple acoustic sensors along a tethered balloon that was moved every 15-60 minutes. Yasur has frequent Strombolian eruptions every 1-4 minutes from any one of three active vents within a 400 m diameter crater. Our experiment captured several explosions from each vent at 38 tether locations covering 200° in azimuth and a take-off angle range of 50° (Jolly et al., in review). Additionally, FLIR, FTIR, and a variety of visual imagery were collected during the deployment to aid in the seismo-acoustic interpretations. The third dimension (vertical) of pressure sensor coverage allows us to more completely constrain the acoustic source. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3-D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a high-resolution digital elevation model created using structure-from-motion techniques. We then invert for the source location and multipole source-time function using a grid-search approach. We perform this inversion for multiple events from vents A and C to examine the source characteristics of the vents, including an infrasound-derived volume flux as a function of time. These volume fluxes are then compared to those derived independently from geochemical and seismic inversion techniques. 
    Jolly, A., Matoza, R., Fee, D., Kennedy, B., Iezzi, A., Fitzgerald, R., Austin, A., & Johnson, R. (in review).
    Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249.
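
The inversion step described above recovers a source-time function from records that are the Green's functions convolved with that source. A single-channel, monopole-only cartoon of that step can be sketched as a regularized deconvolution; the Green's function and source below are made-up toy vectors, not Yasur data or the study's 3-D FDTD Green's functions:

```python
import numpy as np

def conv_matrix(g, n):
    """Matrix G such that G @ s equals np.convolve(g, s) for len(s) == n."""
    m = len(g) + n - 1
    G = np.zeros((m, n))
    for j in range(n):
        G[j:j + len(g), j] = g
    return G

def invert_source(g, d, eps=1e-8):
    """Tikhonov-regularized least-squares recovery of a source-time
    function s from a record d = g * s (discrete convolution)."""
    n = len(d) - len(g) + 1
    G = conv_matrix(g, n)
    return np.linalg.solve(G.T @ G + eps * np.eye(n), G.T @ d)

g = np.array([0.0, 1.0, 0.5, 0.2])            # toy Green's function
s_true = np.array([0.0, 2.0, 1.0, 0.0, 0.0])  # toy source-time function
d = np.convolve(g, s_true)                    # synthetic pressure record
s_hat = invert_source(g, d)
# s_hat ≈ s_true
```

The actual study performs a grid search over source locations and inverts for all multipole components jointly; this sketch only illustrates the linear-algebraic core of one candidate location.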

  2. Covariance specification and estimation to improve top-down Greenhouse Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions as well as their uncertainties in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on the estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth.
To achieve accuracy, we perform a sensitivity study to further tune the covariance parameters. Finally, we introduce a shrinkage-based sample covariance estimation technique for both the prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically, in a more efficient and automated way.
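
The shrinkage idea mentioned above blends a noisy sample covariance with a structured target. A minimal sketch with a fixed shrinkage intensity toward a diagonal target follows; the intensity `alpha` is a hypothetical constant here, whereas in practice it can itself be estimated (e.g. Ledoit-Wolf style):

```python
import numpy as np

def shrinkage_covariance(samples, alpha):
    """Shrink the sample covariance toward a diagonal target.

    samples: (n, p) array of n residual vectors; alpha in [0, 1] is the
    shrinkage intensity (a fixed illustrative value here).
    """
    S = np.cov(samples, rowvar=False)          # sample covariance, (p, p)
    target = np.diag(np.diag(S))               # diagonal shrinkage target
    return (1.0 - alpha) * S + alpha * target

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                   # few samples, 5 dimensions
C = shrinkage_covariance(X, alpha=0.3)
# Shrinkage keeps the diagonal but damps noisy off-diagonal entries,
# giving a well-conditioned, invertible estimate.
```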

  3. RF Tomography for Tunnel Detection: Principles and Inversion Schemes

    NASA Astrophysics Data System (ADS)

    Lo Monte, L.; Erricolo, D.; Inan, U. S.; Wicks, M. C.

    2008-12-01

    We propose a novel way to detect underground tunnels based on classical seismic tomography, Ground Penetrating Radar (GPR), inverse scattering principles, and the deployment of distributed sensors, which we call "Distributed RF Tomography". Tunnel detection has been a critical problem that cannot be considered fully solved. Presently, tunnel detection is performed by methods that include seismic sensors, electrical impedance, microgravity, boreholes, and GPR. All of these methods have drawbacks that make them not applicable for use in unfriendly environments, such as battlefields. Specifically, they do not cover wide surface areas, they are generally shallow, they are limited to vertical prospecting, and they require the user to be in situ, which may jeopardize one's safety. Additional applications of the proposed distributed RF tomography include monitoring sensitive areas (e.g. banks, power plants, military bases, prisons, national borders) and civil applications (e.g. environmental engineering, mine safety, search and rescue, speleology, archaeology and geophysics). The novelty of a Distributed RF tomography system consists of the following. 1) Sensors are scattered randomly above the ground, thus saving time and money compared to the use of boreholes. 2) The use of a lower operating frequency (around HF), which allows for deeper penetration. 3) The use of CW diffraction tomography, which increases the resolution to sub-wavelength values, independently of the sensor displacement, and increases the SNR. 4) The use of linear inversion schemes that are suited for tunnel detection. 5) The use of modulation schemes and signal processing algorithms to mitigate interference and noise. This presentation will cover: 1. Current physical limits of existing techniques for tunnel detection. 2. Concept of Distributed RF Tomography. 3. Inversion theories and strategies: a. Proper forward model for voids buried in a homogeneous medium; b. Extended matched filtering inversion; c. Near-field formulation: dyadic representation; d. Fourier approach: principles and techniques aimed at improving the reconstructed image; e. Theoretical limits; f. Super-resolution: singular value decomposition and MUSIC. 4. Propagation model and theoretical limitations. 5. Transmitting and receiving design, with signal processing and modulation. 6. Numerical simulations using FDTD tools.
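
One workhorse among the linear inversion schemes named above is regularization by singular value decomposition. A minimal truncated-SVD sketch follows; the 3x3 "scattering" operator is a toy matrix, not an RF tomography forward model:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularized solution of A x = b, keeping only the k
    largest singular values; tiny singular values, which amplify noise in
    ill-posed imaging problems, are discarded."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.zeros(A.shape[1])
    for i in range(k):
        x += (U[:, i] @ b) / s[i] * Vt[i]
    return x

# Mildly ill-conditioned toy operator: two useful modes, one near-null mode.
A = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1e-8]])
b = np.array([1.9, 1.9, 1e-8])
x_tsvd = tsvd_solve(A, b, k=2)                 # stable regularized solve
# x_tsvd ≈ [1, 1, 0]: the near-null direction is simply zeroed out
```

Keeping all singular values (`np.linalg.solve`) would divide by the 1e-8 singular value and let any measurement noise dominate the image.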

  4. Acoustic classification of zooplankton

    NASA Astrophysics Data System (ADS)

    Martin Traykovski, Linda V.

    1998-11-01

    Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well understood. This thesis describes the development of both feature-based and model-based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature-based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model-based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model-based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity.
    These feature-based and model-based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
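
The flavor of classification by similarity to class summaries can be sketched with a simple Gaussian log-likelihood scorer. This is a generic stand-in for the BPC idea, not the thesis's actual scoring; the 4-bin echo-spectrum means are invented:

```python
import numpy as np

def gaussian_loglik(obs, mean, cov):
    """Log-likelihood of an observed echo spectrum under a Gaussian
    summary (mean, covariance) of a scatterer class."""
    d = obs - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet
                   + len(d) * np.log(2 * np.pi))

def classify(obs, class_stats):
    """Assign obs to the class whose summary explains it best."""
    scores = {name: gaussian_loglik(obs, m, c)
              for name, (m, c) in class_stats.items()}
    return max(scores, key=scores.get)

# Hypothetical 4-bin spectrum summaries for the three scatterer types.
eye = np.eye(4)
classes = {
    "elastic-shelled": (np.array([1.0, 0.2, 0.8, 0.1]), eye),
    "fluid-like":      (np.array([0.3, 0.9, 0.4, 0.7]), eye),
    "gas-bearing":     (np.array([2.0, 2.0, 0.1, 0.1]), eye),
}
echo = np.array([0.35, 0.85, 0.45, 0.65])     # close to the fluid-like mean
label = classify(echo, classes)
# label == "fluid-like"
```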

  5. Retrieval of the atmospheric compounds using a spectral optical thickness information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ioltukhovski, A.A.

    A spectral inversion technique for retrieval of the atmospheric gas and aerosol contents is proposed. This technique is based upon a preliminary measurement or retrieval of the spectral optical thickness. The existence of a priori information about the spectral cross sections of some of the atmospheric components allows the relative contents of these components in the atmosphere to be retrieved. A method of smooth filtration makes it possible to estimate the contents of atmospheric aerosols with known cross sections and to filter out other aerosols, independently of their relative contribution to the optical thickness.
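
At its core this retrieval is linear: the measured optical thickness at each wavelength is a weighted sum of the known component cross sections, and the weights (relative contents) are recovered by least squares. A toy sketch with invented cross sections on a 4-wavelength grid:

```python
import numpy as np

# Columns of A are hypothetical spectral cross sections sigma_i(lambda)
# for two atmospheric components, sampled at four wavelengths.
A = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 1.0],
              [0.1, 0.7]])
true_contents = np.array([2.0, 3.0])           # relative column contents
tau = A @ true_contents                        # "measured" optical thickness

# Linear least-squares retrieval of the relative contents.
contents, *_ = np.linalg.lstsq(A, tau, rcond=None)
# contents ≈ [2.0, 3.0]
```

In a real retrieval one would add noise weighting and a positivity constraint; this sketch only shows the linear-unmixing core.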

  6. Fast image decompression for telebrowsing of images

    NASA Technical Reports Server (NTRS)

    Miaou, Shaou-Gang; Tou, Julius T.

    1993-01-01

    Progressive image transmission (PIT) is often used to reduce the transmission time of an image telebrowsing system. A side effect of the PIT is the increase of computational complexity at the viewer's site. This effect is more serious in transform domain techniques than in other techniques. Recent attempts to reduce the side effect are futile as they create another side effect, namely, the discontinuous and unpleasant image build-up. Based on a practical assumption that image blocks to be inverse transformed are generally sparse, this paper presents a method to minimize both side effects simultaneously.

  7. Ground-based mm-wave emission spectroscopy for the detection and monitoring of stratospheric ozone

    NASA Technical Reports Server (NTRS)

    Parrish, A.; Dezafra, R.; Solomon, P.

    1981-01-01

    The molecular rotational spectrum of ozone is quite rich in the mm-wave region from 50 to 300 GHz. An apparatus, which was developed primarily for the detection and measurement of stratospheric ClO and other trace molecules, is found to be well suited also for the observation of ozone lines. The collecting antenna of the apparatus is a simple mm-waveguide feedhorn. The detector is a superheterodyne mixer using a special high-frequency Schottky diode and a klystron local oscillator. The spectrometer is a 256-channel filter bank with 1 MHz resolution per channel. The apparatus is believed to be the first ground-based mm-wave instrument capable of obtaining data of sufficient quality to make use of the inversion technique. The ground-based radio technique is most sensitive to changes in vertical distribution in the region above 25 km, a region which is difficult to sample by other techniques.

  8. Identification of inelastic parameters based on deep drawing forming operations using a global-local hybrid Particle Swarm approach

    NASA Astrophysics Data System (ADS)

    Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.

    2016-04-01

    Application of optimization techniques to the identification of inelastic material parameters has substantially increased in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, of the Nelder-Mead downhill simplex algorithm, of Particle Swarm Optimization (PSO), and of a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique proved to be the best strategy, combining PSO's ability to approach the basin of attraction of the global minimum with the efficiency demonstrated by the Nelder-Mead algorithm in locating the minimum itself.
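
The global-local hybrid strategy can be sketched generically: a particle swarm explores first, then a Nelder-Mead refinement starts from the swarm's best point. The Rosenbrock function stands in for the (far more expensive) deep-drawing fitness surface; swarm coefficients are conventional textbook values, not the paper's settings:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def pso(f, lo, hi, n_particles=30, iters=100, seed=0):
    """Basic particle swarm: inertia 0.7, cognitive/social weights 1.5."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest

lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
x0 = pso(rosenbrock, lo, hi)                           # global exploration
res = minimize(rosenbrock, x0, method="Nelder-Mead")   # local refinement
# res.x ≈ [1, 1], the global minimum
```

PSO alone stalls near the minimum; the simplex polish is cheap once the right basin has been found, which is the point of the hybrid.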

  9. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images, reducing ringing and crisping artifacts over a wider frequency range. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
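
The restoration step, inverse filtering with controlled noise amplification, can be sketched in one dimension with a Wiener-type regularized inverse. The box blur and the constant k below are illustrative, not the paper's estimated CFA blur or filter design:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener inverse filter (1-D sketch).

    Plain inversion 1/H amplifies noise where H is small; the Wiener
    form conj(H) / (|H|**2 + k) damps those frequencies, which is what
    suppresses ringing artifacts."""
    H = np.fft.fft(psf, n=len(blurred))
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft(W * G))

signal = np.zeros(64); signal[20:30] = 1.0      # sharp test signal
psf = np.zeros(64); psf[:5] = 1.0 / 5.0         # uniform (defocus-like) blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf, k=1e-4)
# restored is much closer to the original step than the blurred input
```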

  10. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
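
The core identity behind partitioned inversion is the 2x2 block (Schur complement) formula: only the sub-blocks ever need to be factorized, which is how a matrix too large for core can be inverted piecewise. A single non-recursive split is sketched below (the recursion and the out-of-core bookkeeping of SOLVE are omitted):

```python
import numpy as np

def partitioned_inverse(M, k):
    """Invert a symmetric positive definite M via a k / (n-k) block split.

    With M = [[A, B], [B.T, D]], the inverse is assembled from inv(A) and
    the inverse of the Schur complement S = D - B.T inv(A) B."""
    A, B = M[:k, :k], M[:k, k:]
    D = M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - B.T @ Ainv @ B                     # Schur complement of A
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
    top_right = -Ainv @ B @ Sinv
    return np.block([[top_left, top_right],
                     [top_right.T, Sinv]])

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 6))
M = X.T @ X + 6 * np.eye(6)                    # symmetric positive definite
Minv = partitioned_inverse(M, k=3)
# Minv @ M ≈ identity
```

Applying the same split recursively to A and S yields the small-workspace behavior the abstract describes.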

  11. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  12. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  13. Iterative Inverse Modeling for Reconciliation of Emission Inventories during the 2006 TexAQS Intensive Field Campaign

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Cohan, D. S.

    2009-12-01

    Substantial uncertainties in current emission inventories have been detected by the Texas Air Quality Study 2006 (TexAQS 2006) intensive field program. These emission uncertainties have caused large inaccuracies in model simulations of air quality and its responses to management strategies. To improve the quantitative understanding of the temporal, spatial, and categorized distributions of primary pollutant emissions by utilizing the corresponding measurements collected during TexAQS 2006, we implemented both the recursive Kalman filter and a batch matrix inversion 4-D data assimilation (FDDA) method in an iterative inverse modeling framework of the CMAQ-DDM model. Equipped with the decoupled direct method, CMAQ-DDM enables simultaneous calculation of the sensitivity coefficients of pollutant concentrations to emissions to be used in the inversions. Primary pollutant concentrations measured by the multiple platforms (TCEQ ground-based, NOAA WP-3D aircraft and Ronald H. Brown vessel, and UH Moody Tower) during TexAQS 2006 have been integrated for use in the inverse modeling. First, pseudo-data analyses were conducted to assess the two methods, taking a coarse-spatial-resolution emission inventory as a test case. Model base case concentrations of isoprene and ozone at arbitrarily selected ground grid cells were perturbed to generate pseudo measurements with different assumed Gaussian uncertainties expressed by 1-sigma standard deviations. Single-species inversions have been conducted with both methods for isoprene and NOx surface emissions from eight states in the Southeastern United States by using the pseudo measurements of isoprene and ozone, respectively. Utilization of ozone pseudo data to invert for NOx emissions serves only the purpose of method assessment.
Both the Kalman filter and FDDA methods show good performance in tuning arbitrarily shifted a priori emissions to the base case “true” values within 3-4 iterations, even for the nonlinear responses of ozone to NOx emissions. While the Kalman filter performs better when observational uncertainties are very large, the batch matrix FDDA method is better suited for incorporating temporally and spatially irregular data, such as those measured by the NOAA aircraft and ship. After validating the methods with the pseudo data, the inverse technique is applied to improve emission estimates of NOx from different source sectors and regions in the Houston metropolitan area by using NOx measurements during TexAQS 2006. EPA NEI2005-based and Texas-specified emission inventories for 2006 are used as the a priori emission estimates before optimization. The inversion results will be presented and discussed. Future work will conduct inverse modeling for additional species, and then perform a multi-species inversion for emissions consistency and reconciliation with secondary pollutants such as ozone.
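
The Kalman-filter side of this framework reduces, for one measurement batch, to the standard Bayesian update of emission estimates by the sensitivity (observation) matrix. A toy sketch with invented sensitivities, not CMAQ-DDM output:

```python
import numpy as np

def kalman_update(x_prior, P, H, y, R):
    """One Kalman measurement update for emission estimates.

    H maps emissions to observed concentrations (the sensitivity matrix);
    P and R are the prior and observation error covariances."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P
    return x_post, P_post

true_x = np.array([1.0, 2.0])                  # "true" emissions
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
y = H @ true_x                                 # noise-free pseudo data
x_prior = np.array([3.0, 0.5])                 # arbitrarily shifted prior
P, R = np.eye(2), 1e-3 * np.eye(3)
x_post, P_post = kalman_update(x_prior, P, H, y, R)
# x_post is pulled close to the "true" emissions [1.0, 2.0]
```

Iterating this update over measurement batches is what tunes the shifted prior toward the truth in the pseudo-data tests.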

  14. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
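
Orthogonal matching pursuit, one of the two sparse solvers named above, greedily picks the dictionary column most correlated with the residual and re-fits on the selected support. A generic sketch on a random dictionary (the "sources" and dictionary are synthetic, not the paper's microphone-array operator):

```python
import numpy as np

def omp(A, b, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse solve of A x ≈ b."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 120))
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary columns
x_true = np.zeros(120); x_true[[7, 42]] = [2.0, -1.5]   # two sparse "sources"
b = A @ x_true
x_hat = omp(A, b, n_nonzero=2)
# with high probability x_hat recovers both source positions and amplitudes
```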

  15. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
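
The family of minimum weighted-norm solutions mentioned above has a closed form via the generalized inverse. A toy sketch follows; in the renormalization method the weights are chosen to satisfy the renormalization condition, whereas here `w` is simply uniform for illustration:

```python
import numpy as np

def min_weighted_norm(A, y, w):
    """Minimum weighted-norm solution of the underdetermined system A x = y.

    Minimizes sum_i w_i * x_i**2 subject to A x = y, using the generalized
    inverse  W^{-1} A^T (A W^{-1} A^T)^{-1}."""
    Winv = np.diag(1.0 / w)
    G = A @ Winv @ A.T
    return Winv @ A.T @ np.linalg.solve(G, y)

# Toy source-term problem: 2 measurements, 5 candidate source cells.
A = np.array([[1.0, 0.5, 0.2, 0.1, 0.05],
              [0.05, 0.1, 0.2, 0.5, 1.0]])
y = np.array([1.0, 1.0])
x = min_weighted_norm(A, y, w=np.ones(5))
# A @ x reproduces the measurements exactly
```

Changing `w` reshapes which of the infinitely many data-consistent source fields is selected, which is exactly the degree of freedom the renormalization condition fixes.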

  16. Applied Mathematics in EM Studies with Special Emphasis on an Uncertainty Quantification and 3-D Integral Equation Modelling

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexey

    2016-01-01

    Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed—uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous amount of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, if a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest for fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach.
In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
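
The simplest gradient-informed M-H variant is the Metropolis-adjusted Langevin algorithm: the gradient of the penalty (negative log-posterior) steers proposals toward high-probability regions, with an M-H correction for the asymmetric proposal. The sketch below samples a toy 1-D Gaussian, not an EM posterior, and is only an illustration of the idea:

```python
import numpy as np

def mala(u, grad_u, x0, eps, n_steps, seed=0):
    """Metropolis-adjusted Langevin sampling of the density exp(-u(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    samples = []
    for _ in range(n_steps):
        mean_fwd = x - 0.5 * eps**2 * grad_u(x)
        prop = mean_fwd + eps * rng.normal(size=x.shape)
        mean_bwd = prop - 0.5 * eps**2 * grad_u(prop)
        # M-H correction for the asymmetric Langevin proposal
        log_q_fwd = -np.sum((prop - mean_fwd)**2) / (2 * eps**2)
        log_q_bwd = -np.sum((x - mean_bwd)**2) / (2 * eps**2)
        log_alpha = u(x) - u(prop) + log_q_bwd - log_q_fwd
        if np.log(rng.random()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

u = lambda x: 0.5 * np.sum(x**2)               # standard Gaussian penalty
grad_u = lambda x: x
chain = mala(u, grad_u, x0=np.array([3.0]), eps=0.9, n_steps=5000)
# after burn-in, chain mean ≈ 0 and variance ≈ 1
```

In the paper's setting `grad_u` (and curvature information) would come from adjoint computations, and a Lanczos low-rank Hessian approximation would precondition the proposal.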

  17. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    PubMed

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness ensures that there are non-unique solutions to this problem, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed from this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.
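
Pursuing wavelet-domain sparsity boils down to: transform, shrink small coefficients, transform back. A minimal one-level sketch with the orthogonal Haar wavelet and soft thresholding (the signal and threshold are toy values; the paper's local search and wavelet choice are not reproduced):

```python
import numpy as np

def haar_step(x):
    """One level of the orthogonal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)       # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)       # detail band
    return a, d

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

x = np.array([4.0, 4.1, 3.9, 4.0, 0.1, -0.1, 0.0, 0.05])
a, d = haar_step(x)
d = soft_threshold(d, 0.1)                     # sparsify the detail band
x_sparse = np.empty_like(x)
x_sparse[0::2] = (a + d) / np.sqrt(2)          # inverse Haar step
x_sparse[1::2] = (a - d) / np.sqrt(2)
# small oscillations are removed while the step structure is preserved
```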

  18. Earth's field NMR detection of oil under arctic ice-water suppression

    NASA Astrophysics Data System (ADS)

    Conradi, Mark S.; Altobelli, Stephen A.; Sowko, Nicholas J.; Conradi, Susan H.; Fukushima, Eiichi

    2018-03-01

    Earth's field NMR has been developed to detect oil trapped under or in Arctic sea-ice. A large challenge, addressed here, is the suppression of the water signal that dominates the oil signal. Selective suppression of water is based on relaxation time T1 because of the negligible chemical shifts in the weak earth's magnetic field, making all proton signals overlap spectroscopically. The first approach is inversion-null recovery, modified for use with pre-polarization. The requirements for efficient inversion over a wide range of B1 and subsequent adiabatic reorientation of the magnetization to align with the static field are stressed. The second method acquires FIDs at two durations of pre-polarization and cancels the water component of the signal after the data are acquired. While less elegant, this technique imposes no stringent requirements. Similar water suppression is found in simulations for the two methods. Oil detection in the presence of water is demonstrated experimentally with both techniques.
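
The inversion-null timing in the first approach follows from the recovery law Mz(t) = M0 (1 - 2 exp(-t/T1)): reading out at t = T1_water ln 2 nulls the water while oil, with a different T1, still contributes. The T1 values below are illustrative, not measured Arctic values:

```python
import numpy as np

def mz(t, T1, M0=1.0):
    """Longitudinal magnetization t seconds after a 180-degree inversion."""
    return M0 * (1.0 - 2.0 * np.exp(-t / T1))

T1_water, T1_oil = 2.0, 0.5                    # seconds (hypothetical)
t_null = T1_water * np.log(2.0)                # water crosses zero here
water = mz(t_null, T1_water)                   # = 0: water suppressed
oil = mz(t_null, T1_oil)                       # = 0.875: oil still visible
```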

  19. Inverse heat transfer problem in digital temperature control in plate fin and tube heat exchangers

    NASA Astrophysics Data System (ADS)

    Taler, Dawid; Sury, Adam

    2011-12-01

    The paper addresses a steady-state inverse heat transfer problem for plate-fin and tube heat exchangers. The objective of the process control is to adjust the number of fan revolutions per minute so that the water temperature at the heat exchanger outlet is equal to a preset value. Two control techniques were developed. The first is based on the presented mathematical model of the heat exchanger while the second is a digital proportional-integral-derivative (PID) control. The first procedure is very stable. The digital PID controller becomes unstable if the water volumetric flow rate changes significantly. The developed techniques were implemented in a digital control system of the water exit temperature in a plate fin and tube heat exchanger. The measured exit temperature of the water was very close to the set value when the first method was used. The experiments showed that the PID controller also works well but frequently becomes unstable.
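
A digital PID loop of the kind compared here can be sketched in velocity (incremental) form against a first-order plant standing in for the fan/heat-exchanger dynamics. Plant gain, time constant, and PID gains below are made-up illustrative values, not parameters identified from the paper's rig:

```python
def simulate(setpoint=50.0, Ts=0.1, steps=1500,
             Kp=1.0, Ki=0.5, Kd=0.0, K=2.0, tau=5.0):
    """Discrete PID regulating y (water exit temperature) to setpoint.

    Plant: first-order lag dy/dt = (K*u - y)/tau, stepped with Euler."""
    y, u = 20.0, 0.0                           # initial temperature, control
    e_prev, e_prev2 = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        # velocity-form PID: increment the control, not the error integral
        u += (Kp * (e - e_prev) + Ki * Ts * e
              + Kd * (e - 2 * e_prev + e_prev2) / Ts)
        e_prev2, e_prev = e_prev, e
        y += Ts * (K * u - y) / tau            # first-order plant step
    return y

y_final = simulate()
# integral action drives the steady-state error to zero: y_final ≈ 50.0
```

If the plant gain K or time constant tau drifts (as with a changing water flow rate) these fixed gains can destabilize the loop, which mirrors the instability the abstract reports for the PID scheme.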

  20. Earth's field NMR detection of oil under arctic ice-water suppression.

    PubMed

    Conradi, Mark S; Altobelli, Stephen A; Sowko, Nicholas J; Conradi, Susan H; Fukushima, Eiichi

    2018-03-01

    Earth's field NMR has been developed to detect oil trapped under or in Arctic sea-ice. A large challenge, addressed here, is the suppression of the water signal that dominates the oil signal. Selective suppression of water is based on relaxation time T1 because of the negligible chemical shifts in the weak earth's magnetic field, making all proton signals overlap spectroscopically. The first approach is inversion-null recovery, modified for use with pre-polarization. The requirements for efficient inversion over a wide range of B1 and subsequent adiabatic reorientation of the magnetization to align with the static field are stressed. The second method acquires FIDs at two durations of pre-polarization and cancels the water component of the signal after the data are acquired. While less elegant, this technique imposes no stringent requirements. Similar water suppression is found in simulations for the two methods. Oil detection in the presence of water is demonstrated experimentally with both techniques.

  1. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.

  2. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To model thermal loading of a half-space surface more realistically, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and the inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement, and stress.
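
    Fourier-expansion inversion of a Laplace transform can be sketched with a Durbin-type series; this is a generic version of the idea, not the paper's specific scheme, and the contour parameters (T, aT, N) are conventional choices:

```python
import numpy as np

def laplace_invert(F, t, T=10.0, aT=6.0, N=50000):
    """Fourier-series (Durbin-type) numerical inversion of a Laplace
    transform F(s), valid for 0 < t < 2*T."""
    a = aT / T
    k = np.arange(1, N + 1)
    s = a + 1j * k * np.pi / T  # sample points on a vertical contour
    terms = F(s)[None, :] * np.exp(1j * np.outer(t, k) * np.pi / T)
    return np.exp(a * t) / T * (0.5 * F(a) + np.sum(terms.real, axis=1))

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
t = np.array([0.5, 1.0, 2.0])
f_num = laplace_invert(lambda s: 1.0 / (s + 1.0), t)
```

    The series comes from expanding the damped function f(t)e^(-at) in a Fourier series of period 2T; the damping factor a controls the aliasing error from that periodization.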

  3. The impact of approximations and arbitrary choices on geophysical images

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew P.; Trampert, Jeannot

    2016-01-01

    Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an `exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of `hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed from theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results.
We illustrate this using a simple example, based on imaging the density structure of a vibrating string.
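
    A string-density example of this kind can be sketched as a linearized inversion. Under first-order perturbation theory for a unit string with modes sin(n*pi*x), a density perturbation shifts each eigenfrequency by dfn/fn ≈ -∫ drho(x) sin²(n*pi*x) dx; the discretization and regularization below are illustrative choices, not the paper's:

```python
import numpy as np

# Discretized first-order sensitivity of string eigenfrequencies to a
# density perturbation (unit length, unit density and tension assumed).
n_modes, n_cells = 12, 40
x = (np.arange(n_cells) + 0.5) / n_cells
dx = 1.0 / n_cells
K = -np.array([np.sin(n * np.pi * x) ** 2 for n in range(1, n_modes + 1)]) * dx

# Synthetic "true" model: a single localized density anomaly.
j_true = 10
drho_true = np.zeros(n_cells)
drho_true[j_true] = 1.0
dfreq = K @ drho_true

# Minimum-norm linearized inversion (underdetermined, so lightly regularized).
m = K.T @ np.linalg.solve(K @ K.T + 1e-12 * np.eye(n_modes), dfreq)
```

    The symmetric sin² kernels cannot distinguish a point x from its mirror 1 - x, so the reconstruction peaks at the true cell and its mirror image — a small example of how the choice of data and operator shapes the recovered image.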

  4. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  5. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N³) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
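
    The local-block idea can be sketched on a toy overlap matrix. For a diagonally dominant, banded matrix the inverse decays rapidly away from the diagonal, so inverting a small principal submatrix around the target orbital recovers the corresponding entry of the full inverse to high accuracy; the matrix, target index, and localization width below are all assumptions for illustration:

```python
import numpy as np

# Toy overlap matrix: tridiagonal, diagonally dominant (localized orbitals).
n = 40
S = np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))

i, half = 20, 5  # target diagonal entry and localization half-width (assumed)
lo, hi = i - half, i + half + 1

# Invert only the principal submatrix around orbital i ...
local_inv = np.linalg.inv(S[lo:hi, lo:hi])
approx = local_inv[i - lo, i - lo]

# ... and compare with the corresponding entry of the full inverse.
exact = np.linalg.inv(S)[i, i]
```

    The truncation error shrinks exponentially with the localization width, which is what makes the O(N), nearest-neighbor scheme viable.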

  6. Center of pressure based segment inertial parameters validation

    PubMed Central

    Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane

    2017-01-01

    By proposing efficient methods for estimating Body Segment Inertial Parameters (BSIP) and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth, and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial, as deviations from the estimations can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
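
    The force-plate COP used as ground truth above follows from the measured wrench. Assuming the standard convention that the plate origin lies in the contact surface (z = 0) with z vertical, COPx = -My/Fz and COPy = Mx/Fz; the load values below are invented for illustration:

```python
import numpy as np

# Center of pressure from a force-plate wrench (origin in the contact
# surface, z vertical): COPx = -My / Fz, COPy = Mx / Fz.
def center_of_pressure(F, M):
    Fx, Fy, Fz = F
    Mx, My, Mz = M
    return np.array([-My / Fz, Mx / Fz])

# A vertical load of 700 N applied at (0.10, -0.05) m produces moments
# M = r x F = (-0.05*700, -0.10*700, 0) about the plate origin.
F = (0.0, 0.0, 700.0)
M = (-0.05 * 700.0, -0.10 * 700.0, 0.0)
cop = center_of_pressure(F, M)  # -> [0.10, -0.05]
```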

  7. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
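
    The moment-tensor inversion referred to above is linear in the six independent tensor components: the data are d = G m, where G holds Green's-function excitation coefficients. A minimal synthetic sketch (G is random here purely for illustration; in practice it is computed from Earth models):

```python
import numpy as np

# Synthetic linear moment-tensor inversion: d = G m, with m the six
# independent moment-tensor components and G the (stations x 6) matrix of
# Green's-function excitation coefficients (random stand-in values).
rng = np.random.default_rng(1)
G = rng.normal(size=(30, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])  # hypothetical source
d = G @ m_true

# Generalized (least-squares) inversion recovers the source parameters.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

    With noise-free data and a full-rank G, the least-squares solution reproduces the source exactly; the robustness for sparse data noted in the abstract corresponds to G remaining well conditioned even with few stations.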

  8. Iterative joint inversion of in-situ stress state along Simeulue-Nias Island

    NASA Astrophysics Data System (ADS)

    Agustina, Anisa; Sahara, David P.; Nugraha, Andri Dian

    2017-07-01

    In-situ stress inversion from focal mechanisms requires knowledge of which of the two nodal planes is the fault. This is challenging because of the inherent ambiguity of focal mechanisms: the fault plane and the auxiliary nodal plane cannot be distinguished. A relatively new inversion technique for estimating both stress and fault planes was developed by Vavryĉuk in 2014: the fault orientations are determined by applying a fault instability constraint, and the stress is calculated iteratively. In this study, this method is applied to a high-density earthquake region, Simeulue-Batu Island. This area is interesting to investigate because of the occurrence of two large earthquakes, the 2004 Aceh and 2005 Nias events. The inversion was based on 343 focal mechanisms with magnitude ≥ 5.5 Mw between 25 May 1977 and 25 August 2015 from the Harvard and Global Centroid Moment Tensor (GCMT) catalogs. The area is divided into several grids, and the variation in stress orientation and shape ratio is analyzed for each grid. The stress inversion results show that there are three segments along Simeulue-Batu Island based on the variation in the orientation of σ1. The stress characteristics of each segment, i.e. shape ratio, principal stress orientation, and subduction angle, are discussed. Interestingly, the highest shape ratio, 0.93, is associated with the large 2004 Aceh earthquake. This suggests that the zonation obtained in this study could also be used as a proxy for a hazard map.
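
    The shape ratio reported above is defined from the principal stresses as R = (σ1 - σ2)/(σ1 - σ3), with σ1 ≥ σ2 ≥ σ3. A minimal computation from an illustrative (not inverted) stress tensor:

```python
import numpy as np

# Shape ratio R = (s1 - s2) / (s1 - s3) from a stress tensor's principal
# values. The tensor below is illustrative, not the study's stress field.
sigma = np.array([[10.0, 2.0, 0.0],
                  [ 2.0, 6.0, 1.0],
                  [ 0.0, 1.0, 3.0]])
s3_, s2_, s1_ = np.linalg.eigvalsh(sigma)  # eigenvalues in ascending order
R = (s1_ - s2_) / (s1_ - s3_)
```

    R always lies between 0 and 1; values near 1, such as the 0.93 found here, indicate σ2 close to σ3.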

  9. Structural-change localization and monitoring through a perturbation-based inverse problem.

    PubMed

    Roux, Philippe; Guéguen, Philippe; Baillet, Laurent; Hamze, Alaa

    2014-11-01

    Structural-change detection and characterization, or structural-health monitoring, is generally based on modal analysis for the detection, localization, and quantification of changes in a structure. Classical methods combine variations in both frequencies and mode shapes, which requires accurate and spatially distributed measurements. In this study, the detection and localization of a local perturbation are assessed by analysis of frequency changes (in the fundamental mode and overtones) combined with a perturbation-based linear inverse method and a deconvolution process. This perturbation method is applied first to a bending beam, with the change considered as a local perturbation of Young's modulus, using a one-dimensional finite-element model for modal analysis. Localization is successful, even for extended and multiple changes. In a second step, the method is numerically tested under ambient-noise vibration from the beam support, with local changes that are shifted step by step along the beam. The frequency values are revealed using the random decrement technique applied to the time-evolving vibrations recorded by one sensor at the free extremity of the beam. Finally, the inversion method is experimentally demonstrated at the laboratory scale with data recorded at the free end of a Plexiglas beam attached to a metallic support.

  10. The determination of solubility and diffusion coefficient for solids in liquids by an inverse measurement technique using cylinders of amorphous glucose as a model compound

    NASA Astrophysics Data System (ADS)

    Hu, Chengyao; Huang, Pei

    2011-05-01

    The importance of sugar and sugar-containing materials is well recognized nowadays, owing to their application in industrial processes, particularly in the food, pharmaceutical, and cosmetic industries. Because of the large number of such compounds and the relatively small amount of solubility and/or diffusion-coefficient data available for each compound, it is highly desirable to measure the solubility and/or diffusion coefficient as efficiently as possible and to improve the accuracy of the methods used. In this work, a new technique was developed for measuring the diffusion coefficient of a stationary solid solute in a stagnant solvent while simultaneously measuring solubility, based on an inverse-measurement-problem algorithm using the real-time dissolved amount as a function of time. This study differs from established techniques in both the experimental method and the data analysis. In the experimental method, the dissolved amount of solid solute in the quiescent solvent was followed using a continuous weighing technique. In the data analysis, a hybrid genetic algorithm is used to minimize an objective function comparing the calculated and measured dissolved amounts over time, measured on a cylindrical sample of amorphous glucose in methanol or ethanol. The calculated dissolved amount, which is a function of the unknown physical properties of the solid solute in the solvent, is obtained by solving the two-dimensional nonlinear inverse natural convection problem. The estimated solubilities of amorphous glucose in methanol and ethanol at 293 K were 32.1 g/100 g methanol and 1.48 g/100 g ethanol, respectively, in agreement with literature values, supporting the validity of the simultaneously measured diffusion coefficient. These results show the efficiency and stability of the developed technique for simultaneously estimating the solubility and diffusion coefficient. 
    The influence of the solution density change and the initial concentration conditions on the dissolved amount was also investigated numerically using the estimated parameters. It is found that the theoretical assumption made to simplify the inverse measurement algorithm is reasonable for low solubility.
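
    A genetic-algorithm parameter fit of this kind can be sketched in a few lines. This is a deliberately simplified stand-in: the dissolution model m(t) = S*(1 - exp(-D*t)) and all parameter values are invented, whereas the paper's objective involves solving a 2-D nonlinear natural-convection problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dissolution model; S and D stand in for solubility and diffusion
# coefficient (illustrative values, not the paper's physics).
def model(p, t):
    S, D = p
    return S * (1.0 - np.exp(-D * t))

t = np.linspace(0.1, 5.0, 40)
data = model((1.2, 0.7), t)  # synthetic "measured" dissolved-amount curve

def objective(p):
    return np.mean((model(p, t) - data) ** 2)

# Minimal genetic algorithm: elitist selection, uniform crossover,
# Gaussian mutation with a shrinking step size.
pop = rng.uniform(0.01, 2.0, size=(60, 2))
for gen in range(120):
    scores = np.array([objective(p) for p in pop])
    elite = pop[np.argsort(scores)[:12]]
    pairs = elite[rng.integers(0, 12, size=(48, 2))]
    cross = np.where(rng.random((48, 2)) < 0.5, pairs[:, 0, :], pairs[:, 1, :])
    sigma = 0.3 * 0.96 ** gen
    children = np.clip(cross + rng.normal(0.0, sigma, (48, 2)), 1e-3, 2.0)
    pop = np.vstack([elite, children])

best = min(pop, key=objective)
```

    Elitism guarantees the objective never worsens between generations, which is what makes such stochastic searches usable when each forward evaluation is expensive.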

  11. EDITORIAL: Introduction to the special issue on electromagnetic inverse problems: emerging methods and novel applications Introduction to the special issue on electromagnetic inverse problems: emerging methods and novel applications

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Lesselier, D.

    2010-07-01

    Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are: Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy) October 2000 Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard) December 2001 Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler) December 2002 Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew) December 2004 Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard) December 2005 Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco) February 2009 In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. 
Also, the construction of this special issue differed somewhat from that of preceding ones. In addition to the invitations sent to specific research groups involved in electromagnetic inverse problems, the Guest Editors also solicited recommendations, from a large number of experts, of potential authors who were thereupon encouraged to contribute. Moreover, an open call for contributions was published on the homepage of Inverse Problems in order to attract as wide a scope of contributions as possible. This special issue's attempt at generality might also define its limitations: by no means could this collection of papers be exhaustive or complete, and as Guest Editors we are well aware that many exciting topics and potential contributions will be missing. This, however, also determines its very special flavor: besides addressing electromagnetic inverse problems in a broad sense, there were only a few restrictions on the contributions considered for this section. One requirement was plausible evidence of either novelty or the emergent nature of the technique or application described, judged mainly by the referees, and in some cases by the Guest Editors. The technical quality of the contributions always remained a stringent condition of acceptance, final adjudication (possibly questionable either way, not always positive) being made in most cases once a thorough revision process had been carried out. Therefore, we hope that the final result presented here constitutes an interesting collection of novel ideas and applications, properly refereed and edited, which will find its own readership and which can stimulate significant new research in the topics represented. Overall, as Guest Editors, we feel quite fortunate to have obtained such a strong response to the call for this issue and to have a really wide-ranging collection of high-quality contributions which, indeed, can be read from the first to the last page with sustained enthusiasm. 
A large number of applications and techniques is represented, overall via 16 contributions with 45 authors in total. This shows, in our opinion, that electromagnetic imaging and inversion remain amongst the most challenging and active research areas in applied inverse problems today. Below, we give a brief overview of the contributions included in this issue, ordered alphabetically by the surname of the leading author. 1. The complexity of handling potential randomness of the source in an inverse scattering problem is not minor, and the literature is far from being replete in this configuration. The contribution by G Bao, S N Chow, P Li and H Zhou, `Numerical solution of an inverse medium scattering problem with a stochastic source', exemplifies how to hybridize Wiener chaos expansion with a recursive linearization method in order to solve the stochastic problem as a set of decoupled deterministic ones. 2. In cases where the forward problem is expensive to evaluate, database methods might become a reliable method of choice, while enabling one to deliver more information on the inversion itself. The contribution by S Bilicz, M Lambert and Sz Gyimóthy, `Kriging-based generation of optimal databases as forward and inverse surrogate models', describes such a technique which uses kriging for constructing an efficient database with the goal of achieving an equidistant distribution of points in the measurement space. 3. Anisotropy remains a considerable challenge in electromagnetic imaging, which is tackled in the contribution by F Cakoni, D Colton, P Monk and J Sun, `The inverse electromagnetic scattering problem for anisotropic media', via the fact that transmission eigenvalues can be retrieved from a far-field scattering pattern, yielding, in particular, lower and upper bounds of the index of refraction of the unknown (dielectric anisotropic) scatterer. 4. So-called subspace optimization methods (SOM) have attracted a lot of interest recently in many fields. 
The contribution by X Chen, `Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium', illustrates how to address a realistic situation in which the medium containing the unknown obstacles is not homogeneous, via blending a properly developed SOM with a finite-element approach to the required Green's functions. 5. H Egger, M Hanke, C Schneider, J Schöberl and S Zaglmayr, in their contribution `Adjoint-based sampling methods for electromagnetic scattering', show how to efficiently develop sampling methods without explicit knowledge of the dyadic Green's function once an adjoint problem has been solved at much lower computational cost. This is demonstrated by examples in demanding propagative and diffusive situations. 6. Passive sensor arrays can be employed to image reflectors from ambient noise via proper migration of cross-correlation matrices into their embedding medium. This is investigated, and resolution, in particular, is considered in detail, as a function of the characteristics of the sensor array and those of the noise, in the contribution by J Garnier and G Papanicolaou, `Resolution analysis for imaging with noise'. 7. A direct reconstruction technique based on the conformal mapping theorem is proposed and investigated in depth in the contribution by H Haddar and R Kress, `Conformal mapping and impedance tomography'. This paper expands on previous work, with inclusions in homogeneous media, convergence results, and numerical illustrations. 8. 
The contribution by T Hohage and S Langer, `Acceleration techniques for regularized Newton methods applied to electromagnetic inverse medium scattering problems', focuses on a spectral preconditioner intended to accelerate regularized Newton methods as employed for the retrieval of a local inhomogeneity in a three-dimensional vector electromagnetic case, while also illustrating the implementation of a Lepskiĭ-type stopping rule outsmarting a traditional discrepancy principle. 9. Geophysical applications are a rich source of practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, `Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion, and incorporation of prior knowledge, such as in hydrocarbon recovery. 10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, `Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier--Stokes fluid flow model together with time-varying concentration distribution. 11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, `On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose. 12. 
The contribution by R Potthast, `A study on orthogonality sampling' envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media. 13. The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, `Contrast-enhanced microwave imaging of breast tumors: a computational study using 3D realistic numerical phantoms', aims at microwave medical imaging, namely the early detection of breast cancer. The use of contrast enhancing agents is discussed in detail and a number of reconstructions in three-dimensional geometry of realistic numerical breast phantoms are presented. 14. The contribution by D A Subbarayappa and V Isakov, `Increasing stability of the continuation for the Maxwell system', discusses enhanced log-type stability results for continuation of solutions of the time-harmonic Maxwell system, adding a fresh chapter to the interesting story of the study of the Cauchy problem for PDE. 15. In their contribution, `Recent developments of a monotonicity imaging method for magnetic induction tomography in the small skin-depth regime', A Tamburrino, S Ventre and G Rubinacci extend the recently developed monotonicity method toward the application of magnetic induction tomography in order to map surface-breaking defects affecting a damaged metal component. 16. 
The contribution by F Viani, P Rocca, M Benedetti, G Oliveri and A Massa, `Electromagnetic passive localization and tracking of moving targets in a WSN-infrastructured environment', contributes to what could still be seen as a niche problem, yet both useful in terms of applications, e.g., security, and challenging in terms of methodologies and experiments, in particular, in view of the complexity of environments in which this endeavor is to take place and the variability of the wireless sensor networks employed. To conclude, we would like to thank Kate Watt and Zoë Crossman, past and present Publishers of the Journal, for their able and tireless work on what was definitely a long and exciting journey (sometimes a little discouraging when reports were not arriving, or authors were late, or Guest Editors overwhelmed) that started from a thorough discussion at the `Manchester workshop on electromagnetic inverse problems' held mid-June 2009, between Kate Watt and the Guest Editors. We gratefully acknowledge the fact that W W Symes gave us his full backing to carry out this special issue and that A K Louis completed it successfully. Last, but not least, the staff of Inverse Problems should be thanked, since they work together to make it a premier journal.

  12. Nonlinear Stimulated Raman Exact Passage by Resonance-Locked Inverse Engineering

    NASA Astrophysics Data System (ADS)

    Dorier, V.; Gevorgyan, M.; Ishkhanyan, A.; Leroy, C.; Jauslin, H. R.; Guérin, S.

    2017-12-01

    We derive an exact and robust stimulated Raman process for nonlinear quantum systems driven by pulsed external fields. The external fields are designed with closed-form expressions from the inverse engineering of a given efficient and stable dynamics. This technique allows one to induce a controlled population inversion which surpasses the usual nonlinear stimulated Raman adiabatic passage efficiency.

  13. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
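
    A toy elimination example illustrates the point (the system is invented; a full algebraic-geometry treatment would use resultants or Gröbner bases). Eliminating y from {x² + y² = 5, x·y = 2} via y = 2/x yields the resultant polynomial x⁴ - 5x² + 4 = 0, whose four real roots expose the non-uniqueness of the "inverse" solution:

```python
import numpy as np

# Resultant of the polynomial system {x^2 + y^2 = 5, x*y = 2} after
# eliminating y: x^4 - 5x^2 + 4 = 0. All four roots are real, so four
# distinct models (x, y) = (x, 2/x) explain the same "data".
coeffs = [1, 0, -5, 0, 4]
xs = np.sort(np.roots(coeffs).real)
solutions = [(x, 2.0 / x) for x in xs]
```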

  14. Airglow studies using observations made with the GLO instrument on the Space Shuttle

    NASA Astrophysics Data System (ADS)

    Alfaro Suzan, Ana Luisa

    2009-12-01

    Our understanding of Earth's upper atmosphere has advanced tremendously over the last few decades due to our enhanced capacity for making remote observations from space. Space based observations of Earth's daytime and nighttime airglow emissions are very good examples of such enhancements to our knowledge. The terrestrial nighttime airglow, or nightglow, is barely discernible to the naked eye as viewed from Earth's surface. However, it is clearly visible from space, as most astronauts have been amazed to report. The nightglow consists of emissions of ultraviolet, visible and near-infrared radiation from electronically excited oxygen molecules and atoms and vibrationally excited OH molecules. It mostly emanates from a 10 km thick layer located about 100 km above Earth's surface. Various photochemical models have been proposed to explain the production of the emitting species. In this study, some unique observations of Earth's nightglow, made with the GLO instrument on NASA's Space Shuttle, are analyzed to assess the proposed excitation models. Previous analyses of these observations by Broadfoot and Gardner (2001), performed using a 1-D inversion technique, have indicated significant spatial structures and have raised serious questions about the proposed nightglow excitation models. However, the observation of such strong spatial structures calls into serious question the appropriateness of the adopted 1-D inversion technique and, therefore, the validity of the conclusions. In this study, a more rigorous 2-D tomographic inversion technique is developed and applied to the available GLO data to determine if some of the apparent discrepancies can be explained by the limitations of the previously applied 1-D inversion approach. The results of this study still reveal some potentially serious inadequacies in the proposed photochemical models. However, alternative explanations for the discrepancies between the GLO observations and the model expectations are suggested. 
These include upper atmospheric tidal effects and possible errors in the pointing of the GLO instrument.

  15. Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions

    NASA Astrophysics Data System (ADS)

    Gutiérrez, David; Nehorai, Arye; Preissl, Hubert

    2005-05-01

    Fetal magnetoencephalography (fMEG) is a non-invasive technique in which measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus's neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relation to the location and orientation of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. We then use this model and the maximum likelihood technique to solve the inverse problem, assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxis lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.

  16. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a central research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new way to address the problem of calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing methods for determining the heat source during the fatigue process. It also discusses prospects for applying inverse heat-source methods in the field of fatigue damage, laying the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  17. Imaging of the native inversion layer in Silicon-On-Insulator wafers via Scanning Surface Photovoltage: Implications for RF device performance

    NASA Astrophysics Data System (ADS)

    Dahanayaka, Daminda; Wong, Andrew; Kaszuba, Philip; Moszkowicz, Leon; Slinkman, James; IBM SPV Lab Team

    2014-03-01

    Silicon-On-Insulator (SOI) technology has proved beneficial for RF cell phone applications, achieving performance equivalent to GaAs technologies. However, there is evidence of a parasitic inversion layer under the Buried Oxide (BOX) at its interface with the high-resistivity Si substrate, inferred from capacitance-voltage measurements on MOSCAPs. The inversion layer has adverse effects on RF device performance. We present data which, for the first time, show the extent of the inversion layer in the underlying substrate. This knowledge has driven processing techniques to suppress the inversion.

  18. Synthesis of nanostructured materials in inverse miniemulsions and their applications.

    PubMed

    Cao, Zhihai; Ziener, Ulrich

    2013-11-07

    Polymeric nanogels, inorganic nanoparticles, and organic-inorganic hybrid nanoparticles can be prepared via the inverse miniemulsion technique. Hydrophilic functional cargos, such as proteins, DNA, and macromolecular fluoresceins, may be conveniently encapsulated in these nanostructured materials. In this review, the progress of inverse miniemulsions since 2000 is summarized on the basis of the types of reactions carried out in inverse miniemulsions, including conventional free radical polymerization, controlled/living radical polymerization, polycondensation, polyaddition, anionic polymerization, catalytic oxidation reaction, sol-gel process, and precipitation reaction of inorganic precursors. In addition, the applications of the nanostructured materials synthesized in inverse miniemulsions are also reviewed.

  19. Mean-Square Error Due to Gradiometer Field Measuring Devices

    DTIC Science & Technology

    1991-06-01

    …convolving the gradiometer data with the inverse transform of 1/T(α, β), applying an appropriate… Hence (2) may be expressed in the transform domain as… an inverse transform of 1/T(α, β) will not be possible because its inverse does not exist, and because it is a high-pass function its use in an inverse transform technique… ("…quency measurements," Superconductor Applications: SQUIDs and Machines, B. B. Schwartz and S. Foner, Eds. New York: Plenum Press)

  20. Control of a high beta maneuvering reentry vehicle using dynamic inversion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Alfred Chapman

    2005-05-01

    The design of flight control systems for high performance maneuvering reentry vehicles presents a significant challenge to the control systems designer. These vehicles typically have a much higher ballistic coefficient than crewed vehicles such as the Space Shuttle or proposed crew return vehicles such as the X-38. Moreover, the missions of high performance vehicles usually require a steeper reentry flight path angle, followed by a pull-out into level flight. These vehicles then must transit the entire atmosphere and robustly perform the maneuvers required for the mission. The vehicles must also be flown with small static margins in order to perform the required maneuvers, which can result in highly nonlinear aerodynamic characteristics that frequently transition from being aerodynamically stable to unstable as angle of attack increases. The control system design technique of dynamic inversion has been applied successfully to both high performance aircraft and low beta reentry vehicles. The objective of this study was to explore the application of this technique to high performance maneuvering reentry vehicles, including the basic derivation of the dynamic inversion technique, followed by the extension of that technique to the use of tabular trim aerodynamic models in the controller. The dynamic inversion equations are developed for high performance vehicles and augmented to allow the selection of a desired response for the control system. A six degree of freedom simulation is used to evaluate the performance of the dynamic inversion approach, and results for both nominal and off-nominal aerodynamic characteristics are presented.
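    The core idea of dynamic inversion can be illustrated with a minimal scalar sketch (the plant nonlinearities, gains, and reference below are hypothetical, not the report's vehicle model): for a plant x_dot = f(x) + g(x)·u, the control u = (v - f(x)) / g(x) cancels the nonlinearity and imposes a selected desired response v.

```python
import numpy as np

# Illustrative dynamic-inversion sketch for a scalar nonlinear plant
#   x_dot = f(x) + g(x) * u
# (f, g, and the gain k are hypothetical stand-ins, not the report's
# aerodynamic model).

def f(x):
    return -0.5 * x + x**3      # hypothetical aerodynamic nonlinearity

def g(x):
    return 2.0 + 0.1 * x**2     # hypothetical control effectiveness

def dynamic_inversion(x, x_ref, k=4.0):
    v = -k * (x - x_ref)        # selected desired (commanded) dynamics
    return (v - f(x)) / g(x)    # inversion control law cancels f and g

# Forward-Euler simulation of the closed loop
dt, x, x_ref = 0.001, 1.5, 0.0
for _ in range(5000):
    u = dynamic_inversion(x, x_ref)
    x += dt * (f(x) + g(x) * u)

print(abs(x - x_ref) < 1e-3)    # state converges to the reference
```

    With exact cancellation, the closed loop reduces to the chosen linear response x_dot = -k(x - x_ref); in practice (as the abstract notes) f and g come from tabular trim aerodynamic data, so the cancellation is only approximate.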

  1. Development of a coupled FLEXPART-TM5 CO2 inverse modeling system

    NASA Astrophysics Data System (ADS)

    Monteil, Guillaume; Scholze, Marko

    2017-04-01

    Inverse modeling techniques are used to derive information on surface CO2 fluxes from measurements of atmospheric CO2 concentrations. The principle is to use an atmospheric transport model to compute the CO2 concentrations corresponding to a prior estimate of the surface CO2 fluxes. From the mismatches between observed and modeled concentrations, a correction to the flux estimate is computed that represents the best statistical compromise between the prior knowledge and the new information brought in by the observations. Such "top-down" CO2 flux estimates are useful for a number of applications, such as the verification of CO2 emission inventories reported by countries in the framework of international greenhouse gas emission reduction treaties (Paris Agreement), or for the validation and improvement of the bottom-up models used in future climate predictions. Inverse modeling CO2 flux estimates are limited in spatial and temporal resolution by the lack of observational constraints and by the very heavy computational cost of high-resolution inversions. The observational limitation is, however, being lifted with the expansion of regional surface networks such as ICOS in Europe and with the launch of new satellite instruments to measure tropospheric CO2 concentrations. To make efficient use of these new observations, it is necessary to step up the resolution of atmospheric inversions. We have developed an inverse modeling system based on a coupling between the TM5 and FLEXPART transport models. The coupling follows the approach described in Rödenbeck et al. (2009): a first global, coarse-resolution inversion is performed using TM5-4DVAR and is used to provide background constraints to a second, regional, fine-resolution inversion using FLEXPART as the transport model. The inversion algorithm is adapted from the 4DVAR algorithm used by TM5, but has been developed to be model-agnostic: it would be straightforward to replace TM5 and/or FLEXPART with other transport models, making the system well suited to studying transport model uncertainties. We will present preliminary European CO2 inversions using ICOS observations, and comparisons with TM5-4DVAR and TM3-STILT inversions. Reference: Rödenbeck, C., Gerbig, C., Trusilova, K., & Heimann, M. (2009). A two-step scheme for high-resolution regional atmospheric trace gas inversions based on independent models. Atmospheric Chemistry and Physics Discussions, 9(1), 1727-1756. http://doi.org/10.5194/acpd-9-1727-2009
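    The "best statistical compromise" between prior fluxes and observations can be sketched, under a linear-Gaussian assumption, as the standard Bayesian update (the transport operator H, covariances, and dimensions below are illustrative stand-ins, not TM5 or FLEXPART output):

```python
import numpy as np

# Toy linear-Gaussian flux inversion: x_post = x_prior + K (y - H x_prior)
# with gain K = B H^T (H B H^T + R)^-1.
# H is a stand-in transport operator mapping fluxes to concentrations;
# B and R are prior-flux and observation error covariances (all invented
# for illustration).

rng = np.random.default_rng(0)
n_flux, n_obs = 10, 25
H = rng.random((n_obs, n_flux))          # stand-in transport operator
x_true = rng.normal(1.0, 0.3, n_flux)    # "true" fluxes
y = H @ x_true + rng.normal(0, 0.01, n_obs)  # noisy concentration data

x_prior = np.ones(n_flux)                # prior flux estimate
B = 0.25 * np.eye(n_flux)                # prior error covariance
R = 1e-4 * np.eye(n_obs)                 # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)       # posterior (best compromise)

print(np.abs(x_post - x_true).mean() < np.abs(x_prior - x_true).mean())
```

    Variational systems like TM5-4DVAR minimize the equivalent cost function iteratively rather than forming the gain matrix explicitly, which is what makes high-resolution inversions computationally heavy.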

  2. Las Vegas Basin Seismic Response Project: Measured Shallow Soil Velocities

    NASA Astrophysics Data System (ADS)

    Luke, B. A.; Louie, J.; Beeston, H. E.; Skidmore, V.; Concha, A.

    2002-12-01

    The Las Vegas valley in Nevada is a deep (up to 5 km) alluvial basin filled with interlayered gravels, sands, and clays. The climate is arid. The water table ranges from a few meters to many tens of meters deep. Laterally extensive thin carbonate-cemented lenses are commonly found across parts of the valley. Lenses range beyond 2 m in thickness, and occur at depths exceeding 200 m. Shallow seismic datasets have been collected at approximately ten sites around the Las Vegas valley to characterize shear and compression wave velocities in the near surface. Purposes for the surveys include modeling of ground response to dynamic loads, both natural and man-made, quantification of soil stiffness to aid structural foundation design, and non-intrusive materials identification. Borehole-based measurement techniques used include downhole and crosshole, to depths exceeding 100 m. Surface-based techniques used include refraction and three different methods involving inversion of surface-wave dispersion datasets. This latter group includes two active-source techniques, the Spectral Analysis of Surface Waves (SASW) method and the Multi-Channel Analysis of Surface Waves (MASW) method, and a new passive-source technique, the Refraction Microtremor (ReMi) method. Depths to halfspace for the active-source measurements ranged beyond 50 m. The passive-source method constrains shear wave velocities to 100 m depths. As expected, the stiff cemented layers profoundly affect local velocity gradients. Scale effects are evident in comparisons of (1) very local measurements typified by borehole methods, to (2) the broader coverage of the SASW and MASW measurements, to (3) the still broader and deeper resolution made possible by the ReMi measurements. The cemented layers appear as sharp spikes in the downhole datasets and are problematic in crosshole measurements due to refraction. The refraction method is useful only to locate the depth to the uppermost cemented layer. 
The surface-wave methods, on the other hand, can handle velocity inversions. With the broader coverage of the active-source surface wave measurements, through careful inversion that takes advantage of prior information to the greatest extent possible, multiple, shallow, stiff layers can be resolved. Data from such broader-coverage methods also provide confidence regarding continuity of the cemented layers. For the ReMi measurements, which provide the broadest coverage of all methods used, the more generalized shallow profile is sometimes characterized by a strong stiffness inversion at a depth of approximately 10 m. We anticipate that this impedance contrast represents the vertical extent of the multiple layered deposits of cemented media.

  3. Reducing uncertainties in the velocities determined by inversion of phase velocity dispersion curves using synthetic seismograms

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Mehrdad

    Characterizing the near-surface shear-wave velocity structure using Rayleigh-wave phase velocity dispersion curves is widespread in the context of reservoir characterization, exploration seismology, earthquake engineering, and geotechnical engineering. This surface seismic approach provides a feasible and low-cost alternative to borehole measurements. Phase velocity dispersion curves from Rayleigh surface waves are inverted to yield the vertical shear-wave velocity profile. A significant problem with surface wave inversion is its intrinsic non-uniqueness, and although this problem is widely recognized, there have not been systematic efforts to develop approaches that reduce the pervasive uncertainty affecting the velocity profiles determined by the inversion. Non-uniqueness cannot be easily studied in a nonlinear inverse problem such as Rayleigh-wave inversion, and the only way to understand its nature is by numerical investigation, which can become computationally expensive and time-consuming. Given the variety of parameters affecting surface wave inversion and the non-uniqueness they can induce, any remedial technique should itself be insensitive to the non-uniqueness already affecting the inversion. An efficient and repeatable technique is proposed and tested to overcome the non-uniqueness problem: multiple inverted shear-wave velocity profiles are used in a wavenumber integration technique to generate synthetic time series resembling the geophone recordings. The similarity between synthetic and observed time series is then used as an additional discriminant alongside the similarity between the theoretical and experimental dispersion curves. The proposed method is shown to be effective through synthetic and real-world examples. In these examples, the nature of the non-uniqueness is discussed and its existence is demonstrated. 
Using the proposed technique, inverted velocity profiles are estimated and the effectiveness of the technique is evaluated; in the synthetic example, the final inverted velocity profile is compared with the initial target velocity model, and in the real-world example, the final inverted shear-wave velocity profile is compared with the velocity model from independent measurements in a nearby borehole. The real-world example shows that it is possible to overcome the non-uniqueness and identify a representative velocity profile for the site that also matches well with the borehole measurements.
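    The similarity check between synthetic and recorded traces can be sketched with a simple normalized correlation (an illustrative metric and invented traces, not the author's exact implementation): candidate velocity profiles whose synthetic seismograms best match the recordings are retained, supplementing the dispersion-curve misfit.

```python
import numpy as np

# Sketch: score candidate velocity models by how well their synthetic
# traces correlate with the observed trace (Pearson correlation).
# The "traces" here are invented damped sinusoids standing in for
# geophone recordings and wavenumber-integration synthetics.

def normalized_correlation(synthetic, observed):
    s = (synthetic - synthetic.mean()) / synthetic.std()
    o = (observed - observed.mean()) / observed.std()
    return float(np.dot(s, o) / len(s))

t = np.linspace(0.0, 1.0, 500)
observed = np.sin(40 * t) * np.exp(-3.0 * t)    # stand-in recorded trace
cand_a = np.sin(40 * t) * np.exp(-3.2 * t)      # model close to the truth
cand_b = np.sin(55 * t) * np.exp(-3.0 * t)      # model with wrong dispersion

print(normalized_correlation(cand_a, observed) >
      normalized_correlation(cand_b, observed))  # cand_a is preferred
```

    Because two profiles with nearly identical dispersion curves can still produce visibly different full waveforms, this second criterion helps discriminate among the non-unique inversion results.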

  4. Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion

    NASA Astrophysics Data System (ADS)

    Witten, B.; Shragge, J. C.

    2016-12-01

    The recent increased focus on small-magnitude seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily wastewater disposal and hydraulic fracturing for oil and gas recovery and for geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small-magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse number of stations. With large-N arrays, however, we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.

  5. Program manual for the Eppler airfoil inversion program

    NASA Technical Reports Server (NTRS)

    Thomson, W. G.

    1975-01-01

    A computer program is described for calculating the profile of an airfoil as well as the boundary layer momentum thickness and energy form parameter. The theory underlying the airfoil inversion technique developed by Eppler is discussed.

  16. Evaluation of Inversion Methods Applied to Ionospheric RO Observations

    NASA Astrophysics Data System (ADS)

    Rios Caceres, Arq. Estela Alejandra; Rios, Victor Hugo; Guyot, Elia

    The new technique of radio occultation can be used to study the Earth's ionosphere. The retrieval of ionospheric profiles from radio occultation observations usually assumes spherical symmetry of the electron density distribution at the locality of occultation and uses the Abel integral transform to invert the measured total electron content (TEC) values. This paper presents a set of ionospheric profiles obtained from the SAC-C satellite with the Abel inversion technique. The effects of the ionosphere on the GPS signal during occultation, such as bending and scintillation, are examined. Electron density profiles are obtained using the Abel inversion technique. Ionospheric radio occultations are validated using vertical profiles of electron concentration from inverted ionograms, obtained from ionosonde soundings in the vicinity of the occultation. Results indicate that the Abel transform works well in the mid-latitudes during the daytime, but is less accurate during the night-time.
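    Under the spherical-symmetry assumption mentioned above, the discrete Abel inversion is often implemented by "onion peeling": electron density is taken constant within spherical shells, so the TEC along each ray becomes a triangular linear system that is solved from the top shell down. A minimal sketch (geometry and density profile are invented, not SAC-C data):

```python
import numpy as np

# Onion-peeling Abel inversion sketch. Each ray with tangent radius p
# accumulates TEC = sum_j A[i, j] * Ne[j] over the shells above it,
# where A[i, j] is twice the chord length of ray i inside shell j.
# Units are illustrative only.

r = np.linspace(450.0, 250.0, 40)        # shell boundaries, km (top-down)
r_mid = 0.5 * (r[:-1] + r[1:])
ne_true = 1e6 * np.exp(-((r_mid - 300.0) / 50.0) ** 2)  # layer peaked at 300 km

n = len(r_mid)
A = np.zeros((n, n))
for i in range(n):                       # ray with impact parameter p = r[i+1]
    p = r[i + 1]
    for j in range(i + 1):               # only shells above the tangent point
        A[i, j] = 2.0 * (np.sqrt(r[j] ** 2 - p ** 2)
                         - np.sqrt(max(r[j + 1] ** 2 - p ** 2, 0.0)))

tec = A @ ne_true                        # forward model: calibrated TEC per ray
ne_inv = np.linalg.solve(A, tec)         # lower-triangular system: peel top-down

print(np.allclose(ne_inv, ne_true))      # exact recovery for noise-free data
```

    The recovery is exact here only because the synthetic data honor the spherical-symmetry assumption; horizontal gradients in the real ionosphere are precisely what degrades the night-time and low-latitude retrievals noted in the abstract.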

  7. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor-product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
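    The underlying ill-posed problem can be sketched in one dimension with a single-parameter Tikhonov solve (2DUPEN additionally uses tensor-product 2D kernels, lower-bound constraints, and automatic multi-parameter selection; the kernel and distribution below are illustrative):

```python
import numpy as np

# Single-parameter, unconstrained Tikhonov sketch of a first-kind
# Fredholm inversion: minimize ||K f - s||^2 + lam ||f||^2 via the
# normal equations. Exponential kernel and relaxation-time grid are
# invented for illustration.

t = np.linspace(0.01, 3.0, 100)               # measurement times
T = np.logspace(-2, 1, 50)                    # relaxation-time grid
K = np.exp(-np.outer(t, 1.0 / T))             # exponential kernel K(t, T)

f_true = np.exp(-0.5 * (np.log10(T) / 0.15) ** 2)   # peaked distribution
rng = np.random.default_rng(1)
s = K @ f_true + 1e-3 * rng.normal(size=t.size)     # noisy decay data

lam = 1e-2                                    # regularization parameter
f_est = np.linalg.solve(K.T @ K + lam * np.eye(T.size), K.T @ s)

rel_res = np.linalg.norm(K @ f_est - s) / np.linalg.norm(s)
print(rel_res < 0.05)                          # data are fit closely
```

    Without the penalty term the normal equations are hopelessly ill-conditioned for exponential kernels; the regularization trades a small residual increase for a stable, smooth distribution, and algorithms like 2DUPEN choose lam (locally, per point) automatically.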

  8. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
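    The projection step at the heart of this approach can be sketched as follows (dimensions, dictionary contents, and noise levels are invented; in the actual method the dictionary is built and grown during the MCMC run):

```python
import numpy as np

# Sketch of local-basis model-error removal: project the residual onto
# an orthogonal basis built from its K nearest neighbors in a dictionary
# of model-error realizations, then subtract the projection.

rng = np.random.default_rng(3)
n_data, n_dict, K = 60, 200, 8

dictionary = rng.normal(size=(n_dict, n_data))      # stored error realizations
true_error = dictionary[17] + 0.05 * rng.normal(size=n_data)
noise = 0.05 * rng.normal(size=n_data)
residual = true_error + noise                       # model error + data noise

# K nearest dictionary entries (Euclidean distance to the residual)
d = np.linalg.norm(dictionary - residual, axis=1)
neighbors = dictionary[np.argsort(d)[:K]]

Q, _ = np.linalg.qr(neighbors.T)                    # orthogonal local basis
error_est = Q @ (Q.T @ residual)                    # projection onto the basis
cleaned = residual - error_est                      # residual minus model error

print(np.linalg.norm(cleaned) < np.linalg.norm(residual))
```

    Because the basis is rebuilt from the current residual's neighborhood at each iteration, the correction adapts locally rather than assuming a single global parametric error model.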

  9. New approach to wireless data communication in a propagation environment

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Majewski, Paweł

    2017-10-01

    This paper presents a new approach to perfect signal reconstruction in multivariable wireless communication systems with differing numbers of transmitting and receiving antennas. The proposed approach is based on the polynomial matrix S-inverse associated with Smith factorization. Crucially, this inverse provides so-called degrees of freedom. Simulation studies have confirmed that these degrees of freedom allow the negative impact of the propagation environment to be minimized, increasing the robustness of the whole signal reconstruction process. The parasitic effects of dynamic ISI and ICI can now be eliminated within a framework described by polynomial calculus. The new method therefore not only reduces cost but, more importantly, potentially yields systems with lower energy consumption than classical ones. To show the potential of the new approach, simulation studies were performed with the authors' simulator based on the well-known OFDM technique.

  10. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  11. Fast and robust estimation of ophthalmic wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Dillon, Keith

    2016-12-01

    Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach requires neither phase unwrapping nor iterative solution of inverse problems. This makes the method very fast and tolerant of image artifacts, which do not need to be detected and masked or interpolated as in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.
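    One common non-iterative route from measured slopes to a wavefront, shown here as a generic Fourier-domain integration sketch (not necessarily this paper's exact algorithm, and with an invented test wavefront): differentiation is multiplication by ik in the Fourier domain, so the two slope maps can be combined and inverted in a single pass of FFTs, with no phase unwrapping.

```python
import numpy as np

# Fourier-domain slope integration: for slopes gx = dW/dx, gy = dW/dy,
# the wavefront spectrum is  W_hat = -i (kx Gx + ky Gy) / (kx^2 + ky^2).
# Periodic test wavefront with analytic slopes (illustrative only).

n = 64
x = np.linspace(-1.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x)
w_true = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)

gx = 2 * np.pi * np.cos(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # dW/dx
gy = -2 * np.pi * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # dW/dy

dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(k, k)
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                       # avoid 0/0 at the DC term

W = -1j * (KX * np.fft.fft2(gx) + KY * np.fft.fft2(gy)) / k2
W[0, 0] = 0.0                        # piston is unrecoverable from slopes
w_rec = np.fft.ifft2(W).real

print(np.allclose(w_rec, w_true))
```

    The whole reconstruction is a fixed number of FFTs and elementwise operations, which is what makes this family of gradient-sensor algorithms fast and free of iterative inverse solves.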

  12. Solution of Inverse Kinematics for 6R Robot Manipulators With Offset Wrist Based on Geometric Algebra.

    PubMed

    Fu, Zhongtao; Yang, Wenyu; Yang, Zhen

    2013-08-01

    In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with offset wrist. Because the inverse kinematics problem is difficult to solve when the kinematics equations are complex, highly nonlinear, coupled, and admit multiple solutions, we apply the theory of geometric algebra to the kinematic modeling of 6R robot manipulators, generate closed-form kinematics equations in a simple way, reformulate the problem as a generalized eigenvalue problem using a symbolic elimination technique, and then obtain the 16 solutions. Finally, a spray-painting robot of this manipulator type is used as an implementation example to demonstrate the effectiveness and real-time performance of the method. The experimental results show that this method has a large advantage over classical methods in geometric intuition, computation, and real-time performance, can be directly extended to all serial robot manipulators, and can be completely automated, providing a new tool for the analysis and application of general robot manipulators.

  13. Joint inversion of apparent resistivity and seismic surface and body wave data

    NASA Astrophysics Data System (ADS)

    Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle

    2013-04-01

    A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface wave dispersion curves, and P-wave travel times. The algorithm works in the case of laterally varying layered sites. Surface wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset, and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are the thickness h, S-wave velocity Vs, P-wave velocity Vp, and resistivity R of each layer. The 1D models are linked to surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to refracted P-wave hodograms. A priori information can be included in the inversion, and spatial regularization is introduced as a set of constraints between model parameters of adjacent models and layers. Both a priori information and regularization are weighted by covariance matrices. We show the comparison of individual inversions and joint inversion for a synthetic dataset that presents smooth lateral variations. Performing individual inversions, the poor sensitivity to some model parameters leads to estimation errors of up to 62.5%, whereas for joint inversion the cooperation of different techniques reduces most of the model estimation errors below 5%, with few exceptions up to 39%, an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, the analysis of the results reveals physically unacceptable Vp/Vs ratios for some layers, corresponding to negative Poisson's ratio values. To further improve the inversion performance, an additional constraint is added imposing Poisson's ratio in the range 0-0.5. 
The final results are globally improved by the introduction of this constraint, which further reduces the maximum error to 30%. The same test was performed on field data acquired in a landslide-prone area near the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m-long profiles in roll-along mode using a 5-kg sledgehammer as source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot location, and surface wave dispersion curves were extracted at 8 locations for each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same locations as the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to individual inversions. Although models from both individual and joint inversions are consistent, the estimation error is smaller for joint inversion, especially for first-arrival travel times. The joint inversion exploits the different sensitivities of the methods to model parameters and therefore mitigates solution non-uniqueness and the effects of intrinsic limitations of the different techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.
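    The Poisson's ratio constraint used in the joint inversion follows from the standard elastic relation ν = (Vp² - 2Vs²) / (2(Vp² - Vs²)); requiring 0 ≤ ν < 0.5 is equivalent to requiring Vp/Vs ≥ √2. A small sketch (velocity values are illustrative):

```python
import numpy as np

# Poisson's ratio from the Vp/Vs pair, and the admissibility check
# 0 <= nu < 0.5 used as an inversion constraint.

def poisson_ratio(vp, vs):
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

def admissible(vp, vs):
    nu = poisson_ratio(vp, vs)
    return 0.0 <= nu < 0.5

print(round(poisson_ratio(1800.0, 400.0), 3))   # soft, wet sediments: 0.474
print(admissible(1800.0, 400.0))                # True
print(admissible(1500.0, 1200.0))               # False: Vp/Vs < sqrt(2), nu < 0
```

    Layers with Vp/Vs below √2, like the second pair above, are exactly the physically unacceptable cases the added constraint excludes.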

  14. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  15. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area source pollutant strength is a relevant issue for the atmospheric environment, and it characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization.
Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
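The stochastic branch of the inversion above can be sketched in a few lines: a particle swarm minimizing a regularized least-squares misfit for a toy source-receptor problem. Everything here (the transition matrix, noise level, swarm coefficients, and the simple second-difference penalty standing in for the second-order maximum-entropy regularizer) is a hypothetical stand-in, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source-receptor model: observed concentrations y = G @ s for a
# transition matrix G (hypothetical; the paper builds G with a regressive
# Lagrangian dispersion model).
n_src, n_obs = 6, 10
G = rng.uniform(0.0, 1.0, size=(n_obs, n_src))
s_true = np.array([0.0, 2.0, 5.0, 5.0, 2.0, 0.0])   # "unknown" source strengths
y = G @ s_true + 0.01 * rng.standard_normal(n_obs)  # synthetic observations

lam = 1e-3  # regularization parameter (set by the L-curve in the paper)

def objective(s):
    # squared data misfit plus a second-difference smoothness penalty
    # (a simple stand-in for second-order maximum entropy regularization)
    return np.sum((G @ s - y) ** 2) + lam * np.sum(np.diff(s, 2) ** 2)

# Minimal particle swarm optimization of the objective
n_particles, n_iter = 40, 300
pos = rng.uniform(0.0, 6.0, size=(n_particles, n_src))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)   # source strengths are non-negative
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

recovered = gbest   # PSO estimate of the area source strengths
```

A quasi-Newton run would minimize the same `objective`, which is how the paper compares the deterministic and stochastic schemes.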

  16. An asymptotic expansion approach to the inverse radiative transfer problem. [to infer concentration profiles of the atmosphere from measurements made onboard a satellite

    NASA Technical Reports Server (NTRS)

    Gomberg, R. I.; Buglia, J. J.

    1979-01-01

    An iterative technique which recovers density profiles in a nonhomogeneous absorbing atmosphere is derived. The technique is based on the concept of factoring a function of the density profile into the product of a known term and an unknown term whose power series expansion can be found. This series converges rapidly under a wide range of conditions. As a demonstration, simulated data from a high-resolution infrared heterodyne instrument are inverted. For the examples studied, the technique is shown to be capable of extracting features of ozone profiles in the troposphere and to be particularly stable.

  17. Comparison of two target classification techniques

    NASA Astrophysics Data System (ADS)

    Chen, J. S.; Walton, E. K.

    1986-01-01

    Radar target classification techniques based on backscatter measurements in the resonance region (1.0-20.0 MHz) are discussed. Attention is given to two novel methods currently being tested at the radar range of Ohio State University: (1) the nearest neighbor (NN) algorithm for the radar cross section (RCS) magnitude and range-corrected phase at various operating frequencies; and (2) an inverse Fourier transformation of the complex multifrequency radar returns into the time domain, followed by cross-correlation analysis. Comparisons are made of the performance of the two techniques as a function of signal-to-error noise ratio for different types of processing. The results of the comparison are discussed in detail.

  18. Detection of Coal Fires: A Case Study Conducted on Indian Coal Seams Using Neural Network and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Singh, B. B.

    2016-12-01

    India produces the majority of its electricity from coal, yet a huge quantity of coal burns every day due to coal fires, which also threaten the environment as a source of severe pollution. In the present study we demonstrate the use of a neural-network-based approach with an integrated Particle Swarm Optimization (PSO) inversion technique. A Self-Potential (SP) data set is used for the early detection of coal fires. The study was conducted over the East Basuria colliery, Jharia Coal Field, Jharkhand, India. The causative source was modelled as an inclined-sheet-like anomaly and synthetic data were generated. The neural network consists of an input layer, hidden layers and an output layer; the input layer corresponds to the SP data and the output layer is the estimated depth of the coal fire. A synthetic dataset was modelled with known parameters of the causative body, such as depth, conductivity, inclination angle and half-width, and gave a very low misfit error of 0.0032%. The method was therefore found accurate in predicting the depth of the source body. The technique was applied to the real data set and the model was trained until a very good coefficient of determination (R² = 0.98) was obtained. The depth of the source body was found to be 12.34 m with a misfit error of 0.242%. The inversion results were compared with the lithologs obtained from a nearby well, which correspond to the L3 coal seam. The depth of the coal fire matched the half-width of the anomaly, which suggests that the fire is widely spread. The inclination angle of the anomaly was 135.51°, which indicates the development of geometrically complex fracture planes. These fractures may develop due to anisotropic weakness of the ground, which acts as a passage for air; as a result, coal fires spread along these fracture planes.
The results obtained from the neural network were compared with the PSO inversion results and were found to be in complete agreement. PSO has already been established as a technique for modelling SP anomalies. Therefore, for successful control and mitigation, SP surveys coupled with neural network and PSO techniques prove to be a novel and economical approach alongside other existing geophysical techniques. Keywords: PSO, Coal fire, Self-Potential, Inversion, Neural Network

  19. Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.

  20. Deciding Termination for Ancestor Match- Bounded String Rewriting Systems

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2005-01-01

    Termination of a string rewriting system can be characterized by termination on suitable recursively defined languages. This kind of termination criteria has been criticized for its lack of automation. In an earlier paper we have shown how to construct an automated termination criterion if the recursion is aligned with the rewrite relation. We have demonstrated the technique with Dershowitz's forward closure criterion. In this paper we show that a different approach is suitable when the recursion is aligned with the inverse of the rewrite relation. We apply this idea to Kurth's ancestor graphs and obtain ancestor match-bounded string rewriting systems. Termination is shown to be decidable for this class. The resulting method improves upon those based on match-boundedness or inverse match-boundedness.

  1. Seafloor identification in sonar imagery via simulations of Helmholtz equations and discrete optimization

    NASA Astrophysics Data System (ADS)

    Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin

    2017-06-01

    We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
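In its simplest form, the library-lookup step described above is a discrete optimization: match the observed backscatter against each precomputed response and keep the best-fitting parameterization. The sketch below uses random vectors as a hypothetical library; in the paper the entries would come from the coupled Helmholtz/geometrical-optics simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical precomputed library: each row is a simulated backscatter
# response for one seafloor parameterization (material type, roughness).
params = [("sand", 0.1), ("sand", 0.3), ("mud", 0.1), ("rock", 0.5)]
library = rng.standard_normal((len(params), 64))

def identify(observed, library, params):
    """Discrete optimization: return the parameterization whose precomputed
    response minimizes the L2 misfit to the observed backscatter."""
    misfits = np.linalg.norm(library - observed, axis=1)
    best = int(np.argmin(misfits))
    return params[best], float(misfits[best])

# Simulated observation: library entry 2 plus measurement noise
observed = library[2] + 0.05 * rng.standard_normal(64)
best_params, misfit = identify(observed, library, params)
```

The pre-computation pays off because `identify` only costs one pass over the library, while each library entry amortizes a single expensive forward simulation.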

  2. Inferring electric fields and currents from ground magnetometer data - A test with theoretically derived inputs

    NASA Technical Reports Server (NTRS)

    Wolf, R. A.; Kamide, Y.

    1983-01-01

    Advanced techniques considered by Kamide et al. (1981) seem to have the potential for providing observation-based high time resolution pictures of the global ionospheric current and electric field patterns for interesting events. However, a reliance on the proposed magnetogram-inversion schemes for the deduction of global ionospheric current and electric field patterns requires proof that reliable results are obtained. 'Theoretical' tests of the accuracy of the magnetogram inversion schemes have, therefore, been considered. The present investigation is concerned with a test, involving the developed KRM algorithm and the Rice Convection Model (RCM). The test was successful in the sense that there was overall agreement between electric fields and currents calculated by the RCM and KRM schemes.

  3. Relatives with opposite chromosome constitutions, rec(10)dup(10p)inv(10)(p15.1q26.12) and rec(10)dup(10q)inv(10)(p15.1q26.12), due to a familial pericentric inversion.

    PubMed

    Ciuladaite, Zivile; Preiksaitiene, Egle; Utkus, Algirdas; Kučinskas, Vaidutis

    2014-01-01

    Large pericentric inversions in chromosome 10 are rare chromosomal aberrations with only a few cases of familial inheritance. Such chromosomal rearrangements may lead to the production of unbalanced gametes. As a result of a recombination event in the inversion loop, 2 recombinants with duplicated and deficient chromosome segments, including the regions distal to the inversion, may be produced. We report on 2 relatives in a family with opposite terminal chromosomal rearrangements of chromosome 10, i.e. rec(10)dup(10p)inv(10) and rec(10)dup(10q)inv(10), due to familial pericentric inversion inv(10)(p15.1q26.12). Based on array-CGH results, we characterized the exact genomic regions involved and compared the clinical features of both patients with previous reports on similar pericentric inversions and regional differences within 10p and 10q. The fact that both products of recombination are viable indicates a potentially high recurrence risk of unbalanced offspring. This report of unbalanced rearrangements in chromosome 10 in 2 generations confirms the importance of screening for terminal imbalances in patients with idiopathic intellectual disability by molecular cytogenetic techniques such as FISH, MLPA or microarrays. It also underlines the necessity of FISH to define structural characteristics of such cryptic intrachromosomal rearrangements and the underlying cytogenetic mechanisms. © 2014 S. Karger AG, Basel.

  4. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  5. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
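A minimal ABC loop of the kind described above can be sketched as follows. The forward model here is a deliberately simple smooth kernel standing in for the Mie/Lambert-Beer computation, and the bounds, colony size, and log-normal parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model standing in for Mie theory + Lambert-Beer: spectral
# "extinction" produced by a log-normal size distribution n(D; mu, sigma).
D = np.linspace(0.1, 10.0, 200)              # particle diameters (um)
wavelengths = np.linspace(0.4, 1.0, 8)       # um
K = np.exp(-np.outer(wavelengths, D) * 0.3)  # hypothetical smooth kernel

def lognormal(D, mu, sigma):
    return np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma ** 2)) / (D * sigma)

def extinction(mu, sigma):
    return K @ lognormal(D, mu, sigma)

true_mu, true_sigma = 0.5, 0.4
data = extinction(true_mu, true_sigma)       # noise-free synthetic measurement

def fitness(x):
    return np.sum((extinction(x[0], x[1]) - data) ** 2)

# Minimal artificial bee colony: employed, onlooker, and scout phases
lo, hi = np.array([-1.0, 0.1]), np.array([2.0, 1.0])  # bounds on (mu, sigma)
n_food, limit, n_cycles = 20, 20, 200
food = rng.uniform(lo, hi, size=(n_food, 2))
vals = np.array([fitness(f) for f in food])
trials = np.zeros(n_food, dtype=int)
init_best = float(vals.min())
best_x, best_v = food[vals.argmin()].copy(), init_best

def try_neighbor(i, partner):
    # move food source i toward/away from a random partner source
    phi = rng.uniform(-1, 1, 2)
    cand = np.clip(food[i] + phi * (food[i] - food[partner]), lo, hi)
    v = fitness(cand)
    if v < vals[i]:
        food[i], vals[i], trials[i] = cand, v, 0
    else:
        trials[i] += 1

for _ in range(n_cycles):
    for i in range(n_food):                              # employed bees
        try_neighbor(i, rng.integers(n_food))
    p = 1.0 / (1.0 + vals)                               # onlookers prefer
    for i in rng.choice(n_food, n_food, p=p / p.sum()):  # richer sources
        try_neighbor(i, rng.integers(n_food))
    for i in range(n_food):                              # scouts abandon
        if trials[i] > limit:                            # exhausted sources
            food[i] = rng.uniform(lo, hi)
            vals[i], trials[i] = fitness(food[i]), 0
    j = int(vals.argmin())
    if vals[j] < best_v:
        best_x, best_v = food[j].copy(), float(vals[j])
```

`best_x` is the recovered (mu, sigma) pair; with a Mie kernel in `extinction` and noisy `data`, the same loop applies unchanged.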

  6. Dependence of the forward light scattering on the refractive index of particles

    NASA Astrophysics Data System (ADS)

    Guo, Lufang; Shen, Jianqi

    2018-05-01

    In particle sizing techniques based on forward light scattering, the scattered light signal (SLS) is closely related to the relative refractive index (RRI) of the particles with respect to the surrounding medium, especially when the particles are transparent (or weakly absorbing) and small in size. Interference between the diffraction (Diff) and the multiple internal reflections (MIR) of scattered light can cause the SLS to oscillate with RRI and produce abnormal intervals, especially for narrowly-distributed small-particle systems. This makes the inverse problem more difficult. In order to improve the inverse results, a Tikhonov regularization algorithm with B-spline functions is proposed, in which the matrix elements are calculated over a range of particle sizes instead of at the mean particle diameter of each size fraction. In this way, the influence of abnormal intervals on the inverse results can be eliminated. In addition, for measurements on narrowly distributed small particles, it is suggested to detect the SLS over a wider scattering angle to include more information.
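The two ingredients above, matrix elements averaged over each size fraction and a Tikhonov-regularized solve, can be sketched with a toy kernel. The kernel, fraction edges, and regularization parameter below are hypothetical stand-ins for the Mie computation and the B-spline discretization actually used.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical smooth surrogate for the Mie scattering kernel: the signal at
# detector angle theta[i] from particles of diameter D[j].
m, n = 40, 25
theta = np.linspace(0.01, 0.5, m)
D = np.linspace(1.0, 50.0, n)

def kernel(theta, diameters):
    return np.exp(-np.outer(theta, diameters) ** 2 * 0.05) * diameters

# Matrix elements averaged over each size fraction (rather than evaluated at
# the fraction's mean diameter), which smooths kernel oscillations.
edges = np.linspace(0.5, 50.5, n + 1)
A_avg = np.empty((m, n))
for j in range(n):
    Ds = np.linspace(edges[j], edges[j + 1], 11)
    A_avg[:, j] = kernel(theta, Ds).mean(axis=1)

x_true = np.exp(-(D - 20.0) ** 2 / 50.0)          # narrow size distribution
b = A_avg @ x_true + 1e-3 * rng.standard_normal(m)

# Tikhonov regularization with a second-difference smoothing operator:
# minimize ||A x - b||^2 + lam * ||L x||^2 via the normal equations.
L = np.diff(np.eye(n), 2, axis=0)
lam = 10.0   # ad hoc here; chosen by L-curve/GCV-style criteria in practice
x = np.linalg.solve(A_avg.T @ A_avg + lam * (L.T @ L), A_avg.T @ b)
```

With an oscillatory Mie kernel, the averaging step is what removes the abnormal intervals; the regularized solve is unchanged.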

  7. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified through a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
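The broadband matched-field idea above can be sketched with free-field monopole replicas in place of the tunnel acoustics: correlate the measured pressures with replica fields over a grid of candidate positions and average the correlation incoherently over frequencies. Geometry, frequencies, and sound speed below are illustrative assumptions.

```python
import numpy as np

# Receiver array along the hull (free-field stand-in for the tunnel setup)
c = 1500.0                                    # sound speed (m/s)
receivers = np.column_stack([np.linspace(0.0, 2.0, 8), np.zeros(8)])
freqs = [5e3, 7e3, 9e3, 11e3]                 # illustrative broadband set (Hz)

def replica(pos, f):
    # normalized monopole pressure field at the array for a candidate source
    k = 2 * np.pi * f / c
    r = np.linalg.norm(receivers - pos, axis=1)
    p = np.exp(1j * k * r) / r
    return p / np.linalg.norm(p)

true_pos = np.array([0.9, 0.6])
measured = {f: replica(true_pos, f) for f in freqs}   # noise-free "data"

# Grid search maximizing the incoherently frequency-averaged correlation
best_val, best_pos = -1.0, None
for x in np.linspace(0.0, 2.0, 41):
    for y in np.linspace(0.1, 1.0, 19):
        cand = np.array([x, y])
        corr = np.mean([abs(np.vdot(replica(cand, f), measured[f])) ** 2
                        for f in freqs])
        if corr > best_val:
            best_val, best_pos = corr, cand
```

Averaging over frequencies suppresses the sidelobes that any single-frequency correlation surface would show, which is the benefit the abstract attributes to the broadband technique.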

  8. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points: applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition, which occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter enables the recovery of long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.

  9. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  10. An Inversion Analysis of Recent Variability in CO2 Fluxes Using GOSAT and In Situ Observations

    NASA Astrophysics Data System (ADS)

    Wang, J. S.; Kawa, S. R.; Baker, D. F.; Collatz, G. J.

    2016-12-01

    About one-half of the global CO2 emissions from fossil fuel combustion and deforestation accumulates in the atmosphere, where it contributes to global warming. The rest is taken up by vegetation and the ocean. The precise contribution of the two sinks and their location and year-to-year variability are not well understood. We use two different approaches, batch Bayesian synthesis inversion and variational data assimilation, to deduce the global spatiotemporal distributions of CO2 fluxes during 2009-2010. One of our objectives is to assess different sources of uncertainties in inferred fluxes, including uncertainties in prior flux estimates and observations, and differences in inversion techniques. For prior constraints, we utilize fluxes and uncertainties from the CASA-GFED model of the terrestrial biosphere and biomass burning driven by satellite observations. We also use measurement-based ocean flux estimates and fixed fossil CO2 emissions. Our inversions incorporate column CO2 measurements from the GOSAT satellite (ACOS retrieval, bias-corrected) and in situ observations (individual flask and afternoon-average continuous observations) to estimate fluxes in 108 regions over 8-day intervals for the batch inversion and at 3° x 3.75° weekly for the variational system. Relationships between fluxes and atmospheric concentrations are derived consistently for the two inversion systems using the PCTM transport model with MERRA meteorology. We compare the posterior fluxes and uncertainties derived using different data sets and the two inversion approaches, and evaluate the posterior atmospheric concentrations against independent data including aircraft measurements. The optimized fluxes generally resemble each other and those from other studies. For example, a GOSAT-only inversion suggests a shift in the global sink from the tropics/south to the north relative to the prior and to an in-situ-only inversion. 
The posterior fluxes of the GOSAT inversion are better constrained in most regions than those of the in situ inversion because of the greater spatial coverage of the GOSAT observations. The GOSAT inversion also indicates a significantly smaller terrestrial sink in higher-latitude northern regions in boreal summer of 2010 relative to 2009, consistent with observed drought conditions.

  11. Elliptical concentrators.

    PubMed

    Garcia-Botella, Angel; Fernandez-Balbuena, Antonio Alvarez; Bernabeu, Eusebio

    2006-10-10

    Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used to produce optical devices, including the use of reflective and refractive components or inverse engineering techniques. However, many of these optical components are based on translational symmetries, rotational symmetries, or free-form surfaces. We study a new family of nonimaging concentrators called elliptical concentrators. This new family of concentrators provides new capabilities and can have different configurations, either homofocal or nonhomofocal. Translational and rotational concentrators can be considered as particular cases of elliptical concentrators.

  12. Comparing interpolation techniques for annual temperature mapping across Xinjiang region

    NASA Astrophysics Data System (ADS)

    Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang

    2016-11-01

    Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty in establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN) and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
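The simplest of the five techniques, IDW, together with the leave-one-out style of validation against held-out stations, can be sketched as follows. The station layout and temperature field are synthetic stand-ins, not the Xinjiang data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stations: a smooth north-south temperature trend plus noise
# (hypothetical values standing in for the 154 real stations).
pts = rng.uniform(0.0, 10.0, size=(30, 2))
temps = (25.0 - 1.5 * pts[:, 1] + 0.5 * np.sin(pts[:, 0])
         + 0.1 * rng.standard_normal(30))

def idw(query, pts, vals, power=2.0):
    """Inverse distance weighting: weights fall off as 1/d**power."""
    d = np.linalg.norm(pts - query, axis=1)
    if d.min() < 1e-12:                 # exact hit: return the station value
        return vals[d.argmin()]
    w = 1.0 / d ** power
    return np.sum(w * vals) / np.sum(w)

# Leave-one-out cross-validation: predict each station from the others,
# the usual way interpolators are ranked against independent data
errors = [idw(pts[i], np.delete(pts, i, 0), np.delete(temps, i)) - temps[i]
          for i in range(len(pts))]
rmse = float(np.sqrt(np.mean(np.square(errors))))
```

Running the same cross-validation for each interpolator and comparing the `rmse` values is the comparison strategy the abstract describes.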

  13. On the reconstruction of the surface structure of the spotted stars

    NASA Astrophysics Data System (ADS)

    Kolbin, A. I.; Shimansky, V. V.; Sakhibullin, N. A.

    2013-07-01

    We have developed and tested a light-curve inversion technique for photometric mapping of spotted stars. The surface of a spotted star is partitioned into small area elements, over which a search is carried out for the intensity distribution providing the best agreement between the observed and model light curves within a specified uncertainty. We have tested mapping techniques based on the use of both a single light curve and several light curves obtained in different photometric bands. Surface reconstruction artifacts due to the ill-posed nature of the problem have been identified.

  14. Inversion of solar extinction data from the Apollo-Soyuz Test Project Stratospheric Aerosol Measurement (ASTP/SAM) experiment

    NASA Technical Reports Server (NTRS)

    Pepin, T. J.

    1977-01-01

    The inversion methods used to determine the vertical profile of the extinction coefficient due to stratospheric aerosols from data measured during the ASTP/SAM solar occultation experiment are reported. These include the onion-skin peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints; the latter approach involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near-simultaneous measurements made by lidar and a balloon-borne dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.

  15. Oil encapsulation in core-shell alginate capsules by inverse gelation II: comparison between dripping techniques using W/O or O/W emulsions.

    PubMed

    Martins, Evandro; Poncelet, Denis; Rodrigues, Ramila Cristiane; Renard, Denis

    2017-09-01

    The first part of this article described an innovative method of oil encapsulation by dripping-inverse gelation using water-in-oil (W/O) emulsions. It was noticed that the method of oil encapsulation differed considerably depending on the emulsion type (W/O or oil-in-water (O/W)) used, and that the emulsion structure had a high impact on the dripping technique and the capsule characteristics. The objective of this article was to elucidate the differences between the dripping techniques using both emulsions and to compare the capsule properties (mechanical resistance and release of actives). Oil encapsulation using O/W emulsions was easier to perform and did not require the use of emulsion destabilisers. However, capsules produced from W/O emulsions were more resistant to compression and showed a slower release of actives over time. The findings detailed here widen the knowledge of inverse gelation and open opportunities to develop new techniques of oil encapsulation.

  16. Mixing of thawed coagulation samples prior to testing: Is any technique better than another?

    PubMed

    Lima-Oliveira, Gabriel; Adcock, Dorothy M; Salvagno, Gian Luca; Favaloro, Emmanuel J; Lippi, Giuseppe

    2016-12-01

    This study aimed to investigate whether the mixing technique could influence the results of routine and specialized clotting tests on post-thawed specimens. The sample population consisted of 13 healthy volunteers. Venous blood was collected by evacuated system into three 3.5 mL tubes containing 0.109 mol/L buffered sodium citrate. The three blood tubes of each subject were pooled immediately after collection in a Falcon 15 mL tube, mixed by 6 gentle end-over-end inversions, and centrifuged at 1500×g for 15 min. The plasma pool of each subject was then divided into 4 identical aliquots. All aliquots were thawed after 2 days of freezing at -70°C. Immediately afterwards, the plasma of the four paired aliquots was treated using four different techniques: (a) the reference procedure, entailing 6 gentle end-over-end inversions; (b) placing the sample on a blood tube rocker (i.e., rotor mixing) for 5 min to induce agitation and mixing; (c) use of a vortex mixer for 20 s to induce agitation and mixing; and (d) no mixing. The significance of differences against the reference technique for mixing thawed plasma specimens (i.e., 6 gentle end-over-end inversions) was assessed with paired Student's t-test. Statistical significance was set at p<0.05. As compared to the reference 6-time gentle inversion technique, statistically significant differences were only observed for fibrinogen and factor VIII in plasma mixed on the tube rocker. Some trends were observed in the remaining cases, but the bias did not achieve statistical significance. We hence suggest that each laboratory should standardize the procedure for mixing thawed plasma according to a single technique. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  17. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16-bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallelism part, and a data-parallelism part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallelism part is optimized with OpenMP techniques, while the data-parallelism part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the two-dimensional (2D) serial IDWT as a one-dimensional (1D) parallel IDWT. In experiments with a 1K×1K×16-bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speedup compared to the serial CPU method.

  18. Direct vibro-elastography FEM inversion in Cartesian and cylindrical coordinate systems without the local homogeneity assumption

    NASA Astrophysics Data System (ADS)

    Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.

    2015-05-01

    To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions, so simplifying assumptions must be used. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite-element-based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement at one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 when using the proposed method instead of the local inversion with the homogeneity assumption; similarly, in the prostate phantom experiment the CNR improved from an average value of 1.6 to about 20.
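    For context, the conventional algebraic Helmholtz inversion that this paper improves upon can be sketched as follows (a 1D toy with assumed tissue parameters, not the authors' FEM formulation): under local homogeneity, ρω²u + μ∇²u = 0, so μ = −ρω²u/∇²u pointwise.

```python
import numpy as np

# Sketch of the conventional local-homogeneity (Helmholtz) inversion:
# rho * w^2 * u + mu * lap(u) = 0  =>  mu = -rho * w^2 * u / lap(u).
rho, omega = 1000.0, 2 * np.pi * 200.0      # density, 200 Hz vibration (assumed)
mu_true = 3000.0                            # Pa, assumed shear modulus
k = omega * np.sqrt(rho / mu_true)          # shear wavenumber

x = np.linspace(0.0, 0.05, 2001)
dx = x[1] - x[0]
u = np.sin(k * x)                           # single displacement component

lap = np.gradient(np.gradient(u, dx), dx)   # finite-difference Laplacian
mask = np.abs(lap) > 0.05 * np.abs(lap).max()   # avoid near-zero denominators
mu_est = np.median(-rho * omega**2 * u[mask] / lap[mask])
```

    The masking step hints at the artifact discussed above: wherever the displacement (and hence its Laplacian) is near zero, the pointwise ratio is undefined, which is one motivation for multi-frequency excitation.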

  19. WE-AB-209-02: A New Inverse Planning Framework with Principle-Based Modeling of Inter-Structural Dosimetric Tradeoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Dong, P; Xing, L

    Purpose: Traditional radiotherapy inverse planning relies on weighting factors to phenomenologically balance the conflicting criteria for different structures. The resulting manual trial-and-error determination of the weights has long been recognized as the most time-consuming part of treatment planning. The purpose of this work is to develop an inverse planning framework that parameterizes the inter-structural dosimetric tradeoff with physically more meaningful quantities to simplify the search for a clinically sensible plan. Methods: A permissible dosimetric uncertainty is introduced for each of the structures to balance their conflicting dosimetric requirements. The inverse planning is then formulated as a convex feasibility problem, which aims to generate plans with acceptable dosimetric uncertainties. A sequential procedure (SP) is derived to decompose the model into three submodels that constrain the uncertainty in the planning target volume (PTV), the critical structures, and all other structures to spare, sequentially. The proposed technique is applied to plan a liver case and a head-and-neck case and compared with a conventional approach. Results: Our results show that the strategy is able to generate clinically sensible plans with little trial-and-error. In the liver IMRT case, the fractional volumes of liver and heart above 20 Gy are found to be 22% and 10%, respectively, which are 15.1% and 33.3% lower than those of the counterpart conventional plan while maintaining the same PTV coverage. The head-and-neck IMRT planning shows the same level of success, with the DVHs for all organs at risk and the PTV very competitive with those of a counterpart plan. Conclusion: A new inverse planning framework has been established. With physically more meaningful modeling of the inter-structural tradeoff, the technique enables us to substantially reduce the need for trial-and-error adjustment of the model parameters and opens new opportunities for incorporating prior knowledge to facilitate the treatment planning process.
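    The convex feasibility idea can be illustrated with a toy projection-onto-convex-sets (POCS) iteration; the two-voxel "plan" and its constraints below are hypothetical and stand in for the authors' clinical formulation: each structure contributes a convex constraint set, and alternating projections find a dose vector in the intersection.

```python
import numpy as np

# Toy POCS sketch of a convex feasibility formulation (hypothetical
# two-voxel "plan", not the authors' clinical model): find a dose vector
# inside both a target dose window (box) and an OAR halfspace.
def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    """Project onto {x : a.x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

lo = np.array([60.0, 0.0])          # PTV voxel must receive >= 60 Gy
hi = np.array([62.0, 40.0])         # ...and <= 62 Gy; OAR capped loosely
a, b = np.array([0.2, 1.0]), 30.0   # coupled OAR constraint: 0.2*d0 + d1 <= 30

x = np.array([70.0, 35.0])          # infeasible starting "plan"
for _ in range(200):                # alternating projections (convex sets)
    x = project_halfspace(project_box(x, lo, hi), a, b)
```

    For convex sets with nonempty intersection, this iteration converges to a feasible point; the permissible-uncertainty bands in the paper play the role of widening these sets until the intersection is nonempty.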

  20. Inverse measurement of wall pressure field in flexible-wall wind tunnels using global wall deformation data

    NASA Astrophysics Data System (ADS)

    Brown, Kenneth; Brown, Julian; Patil, Mayuresh; Devenport, William

    2018-02-01

    The Kevlar-wall anechoic wind tunnel offers great value to the aeroacoustics research community, affording the capability to make simultaneous aeroacoustic and aerodynamic measurements. While the aeroacoustic potential of the Kevlar-wall test section is already being leveraged, the aerodynamic capability of these test sections is still to be fully realized. The flexibility of the Kevlar walls suggests the possibility that the internal test section flow may be characterized by precisely measuring small deflections of the flexible walls. Treating the Kevlar fabric walls as tensioned membranes with known pre-tension and material properties, an inverse stress problem arises where the pressure distribution over the wall is sought as a function of the measured wall deflection. Experimental wall deformations produced by the wind loading of an airfoil model are measured using digital image correlation and subsequently projected onto polynomial basis functions which have been formulated to mitigate the impact of measurement noise based on a finite-element study. Inserting analytic derivatives of the basis functions into the equilibrium relations for a membrane, full-field pressure distributions across the Kevlar walls are computed. These inversely calculated pressures, after being validated against an independent measurement technique, can then be integrated along the length of the test section to give the sectional lift of the airfoil. Notably, these first-time results are achieved with a non-contact technique and in an anechoic environment.
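    The membrane equilibrium relation being inverted can be sketched numerically: for a uniformly pre-tensioned membrane with tension T (N/m) and small deflection w, equilibrium gives p ≈ −T∇²w, so the pressure field follows from second derivatives of the measured deflection. The values below are illustrative, not the tunnel's.

```python
import numpy as np

# Sketch of the membrane inverse-stress relation p = -T * laplacian(w).
T = 1000.0                      # hypothetical Kevlar pre-tension, N/m
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Analytic test deflection that vanishes on the edges (clamped panel)
w = 1e-3 * np.sin(np.pi * X) * np.sin(np.pi * Y)

# Five-point finite-difference Laplacian on the interior
lap = (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:] + w[1:-1, :-2]
       - 4.0 * w[1:-1, 1:-1]) / h**2
p = -T * lap                    # recovered wall-pressure field (interior)
```

    For this test deflection ∇²w = −2π²w, so the recovered pressure should equal 2π²Tw, which is the kind of analytic check the basis-function projection in the paper is designed to make robust against measurement noise.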

  1. A radionuclide counting technique for measuring wind velocity [drag force anemometers]

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Khandelwal, G. S.; Mall, G. H.

    1981-01-01

    A technique for measuring wind velocities of meteorological interest is described. It is based on the inverse-square-law variation of counting rates as the radioactive source-to-counter distance is changed by wind drag on the source ball. Results of a feasibility study using a weak bismuth-207 radiation source and three Geiger-Muller radiation counters are reported. The use of the technique is not restricted to Martian or Mars-like environments. A description of the apparatus, typical results, and frequency response characteristics are included, along with a discussion of a double-pendulum arrangement. The measurements reported herein indicate that the proposed technique may be suitable for measuring wind speeds up to 100 m/s that are either steady or fluctuating at rates below 1 kHz.
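    The measurement principle reduces to the inverse-square relation R = R0·(d0/d)²; a minimal sketch with illustrative numbers shows how the source displacement (and hence, through a drag calibration, the wind speed) follows from the count-rate ratio.

```python
import math

# Inverse-square counting sketch (illustrative numbers): the count rate
# falls as R0 * (d0 / d)**2 as wind drag moves the source ball from its
# rest distance d0 to d, so d follows from the rate ratio.
R0, d0 = 5000.0, 0.10          # rest count rate (counts/s) and distance (m)

def distance_from_rate(R):
    """Invert the inverse-square law for the source-counter distance."""
    return d0 * math.sqrt(R0 / R)

# A drop to one quarter of the rest rate means the distance doubled.
d = distance_from_rate(R0 / 4.0)
```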

  2. A new technique for the characterization of chaff elements

    NASA Astrophysics Data System (ADS)

    Scholfield, David; Myat, Maung; Dauby, Jason; Fesler, Jonathon; Bright, Jonathan

    2011-07-01

    A new technique for the experimental characterization of electromagnetic chaff, based on Inverse Synthetic Aperture Radar, is presented. This technique allows as few as one filament of chaff to be characterized in a controlled anechoic environment, ensuring stability and repeatability of experimental results. It enables a deeper understanding of the fundamental phenomena of electromagnetic scattering from chaff through an incremental analysis approach: chaff analysis can now begin with a single element and progress through the build-up of particles into pseudo-cloud structures. This controlled incremental approach is supported by an identical incremental modeling and validation process. Additionally, the technique has the potential to produce considerable savings in financial and schedule cost and provides a stable, repeatable experiment to aid model validation.

  3. SERS-based inverse molecular sentinel (iMS) nanoprobes for multiplexed detection of microRNA cancer biomarkers in biological samples

    NASA Astrophysics Data System (ADS)

    Crawford, Bridget M.; Wang, Hsin-Neng; Fales, Andrew M.; Bowie, Michelle L.; Seewaldt, Victoria L.; Vo-Dinh, Tuan

    2017-02-01

    The development of sensitive and selective biosensing techniques is of great interest for clinical diagnostics. Here, we describe the development and application of a surface-enhanced Raman scattering (SERS) sensing technology, referred to as "inverse Molecular Sentinel (iMS)" nanoprobes, for the detection of nucleic acid biomarkers in biological samples. This iMS nanoprobe uses plasmonic-active nanostars as the sensing platform for a homogeneous assay for multiplexed detection of nucleic acid biomarkers, including DNA, RNA, and microRNA (miRNA). The "OFF-to-ON" signal switch is based on a non-enzymatic strand-displacement process and the conformational change of stem-loop (hairpin) oligonucleotide probes upon target binding. Here, we demonstrate the development of iMS nanoprobes for the detection of DNA sequences, as well as a modified design of the nanoprobe for the detection of short (22-nt) microRNA sequences. The application of iMS nanoprobes to detect miRNAs in real biological samples was performed with total small RNA extracted from breast cancer cell lines. The multiplex capability of the iMS technique was demonstrated using a mixture of two differently labeled nanoprobes to detect the miR-21 and miR-34a miRNA biomarkers for breast cancer. The results of this study demonstrate the feasibility of applying the iMS technique for multiplexed detection of nucleic acid biomarkers, including short miRNA molecules.

  4. Electro-magneto interaction in fractional Green-Naghdi thermoelastic solid with a cylindrical cavity

    NASA Astrophysics Data System (ADS)

    Ezzat, M. A.; El-Bary, A. A.

    2018-01-01

    A unified mathematical model of Green-Naghdi thermoelasticity theories (GN), based on a fractional time derivative of heat transfer, is constructed. The model is applied to solve a one-dimensional problem of a perfectly conducting unbounded body with a cylindrical cavity subjected to sinusoidal pulse heating in the presence of an axial uniform magnetic field. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the solutions in the time domain. Comparisons are made with the results predicted by the two theories, and the effects of the fractional derivative parameter on the thermoelastic fields for the different theories are discussed.
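    The Fourier-expansion numerical Laplace inversion mentioned here can be sketched with a Durbin-type series (a generic implementation, not the authors' exact scheme), checked on a transform pair with a known inverse.

```python
import numpy as np

def invert_laplace_fourier(F, t, T=10.0, a=0.5, N=4000):
    """Durbin-type Fourier-series inversion of a Laplace transform F(s).
    Valid for 0 < t < 2T; the shift a and term count N control accuracy."""
    k = np.arange(1, N + 1)
    s = a + 1j * k * np.pi / T                 # samples along Re(s) = a
    Fk = np.array([F(sk) for sk in s])
    series = (F(a).real / 2.0
              + np.sum(Fk.real * np.cos(k * np.pi * t / T)
                       - Fk.imag * np.sin(k * np.pi * t / T)))
    return np.exp(a * t) / T * series

# Sanity check on a pair with a known inverse: 1/(s+1)  <->  exp(-t)
f1 = invert_laplace_fourier(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    The series is the trapezoidal discretization of the Bromwich integral; its aliasing error decays like e^(-2aT), which is why a and T must be chosen together.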

  5. Image contrast mechanisms in dynamic friction force microscopy: Antimony particles on graphite

    NASA Astrophysics Data System (ADS)

    Mertens, Felix; Göddenhenrich, Thomas; Dietzel, Dirk; Schirmeisen, Andre

    2017-01-01

    Dynamic Friction Force Microscopy (DFFM) is a technique based on Atomic Force Microscopy (AFM) in which resonance oscillations of the cantilever are excited by lateral actuation of the sample. During this process, the AFM tip in contact with the sample undergoes a complex movement consisting of alternating periods of sticking and sliding. DFFM can therefore give access to dynamic transition effects in friction that are not accessible by alternative techniques. Using antimony nanoparticles on graphite as a model system, we analyzed how the combined influences of friction and topography can affect different experimental configurations of DFFM. Based on the experimental results, for example the contrast inversion observed between fractional-resonance and band-excitation imaging, strategies to extract reliable tribological information from DFFM images are devised.

  6. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the Earth's interior electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from the trench to the mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite-difference inversion package, we have encountered problems inverting seafloor stations in particular, due to the strong nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, in which the eight nearest electric-field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which is used to generate a new forward model during each iteration of the inversion.
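    The standard tri-linear weighting referred to above can be sketched as follows; the cross-boundary conductivity-ratio modification the authors describe is not shown.

```python
import numpy as np

# Sketch of the standard tri-linear interpolation step: a field value
# inside a cell is a weighted average of the eight corner estimates.
def trilinear(corner_vals, u, v, w):
    """corner_vals[i, j, k] is the field at corner (i, j, k) in {0,1}^3;
    (u, v, w) in [0,1]^3 is the local position inside the cell."""
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                weight = ((u if i else 1 - u)
                          * (v if j else 1 - v)
                          * (w if k else 1 - w))
                val += weight * corner_vals[i, j, k]
    return val

# Tri-linear weights reproduce any field linear in x, y, z exactly.
corners = np.array([[[2 * i + 3 * j - k + 1.0 for k in (0, 1)]
                     for j in (0, 1)] for i in (0, 1)])
```

    The conductivity-ratio re-weighting would multiply each corner weight by a factor enforcing current continuity across the cell faces.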

  7. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
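    The alternating two-step idea can be sketched with a toy linear example (hypothetical forward model; simple thresholding stands in for the learned pattern-based projection onto the feasible facies set).

```python
import numpy as np

# Toy sketch of the alternating scheme: a continuous least-squares
# calibration followed by projection onto the discrete (two-facies) set.
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 20))                 # hypothetical linear forward model
m_true = (np.arange(20) >= 10).astype(float)  # binary facies "image"
d = G @ m_true                                # noise-free flow response data

m = np.full(20, 0.5)                          # uninformed initial model
step = 1.0 / np.linalg.norm(G, 2) ** 2        # stable gradient step size
for _ in range(1000):
    m = m - step * G.T @ (G @ m - d)          # continuous calibration step
m = (m > 0.5).astype(float)                   # map onto the feasible facies set
```

    In the paper the projection is not a simple threshold but a supervised classifier enforcing the expected higher-order spatial statistics, and the two steps alternate until convergence.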

  8. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems is a significant task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities continue to make great technical progress. The development of new, efficient algorithms and computer codes for both forward and inverse modeling therefore remains relevant. ANNIT contributes to this stream as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and its response by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex; the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are sought within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are sought by generating suitable populations of individuals (models) covering the data and model spaces. The inverse mapping is approximated using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models; archive models are re-used in a suitable way, minimizing the number of forward evaluations. ANNIT is now implemented in both MATLAB and SCILAB.
Numerical tests show good performance of the algorithm. Both versions and documentation are available on the Internet for anybody to download. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in solving inverse problems.
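    Step (b), the Radial Basis Function approximation of the inverse mapping, can be sketched on a toy scalar problem; the forward map F(p) = p³ and the shape parameter below are assumptions for illustration, not ANNIT's actual code.

```python
import numpy as np

# Toy sketch of approximating the inverse mapping G with a Gaussian
# Radial Basis Function (RBF) interpolant fitted to (data, model) pairs.
d_train = np.linspace(0.2, 8.0, 12)        # forward responses of a population
p_train = d_train ** (1.0 / 3.0)           # the models that produced them
                                           # (toy forward map: F(p) = p**3)
eps = 1.0                                  # RBF shape parameter (assumed)
phi = lambda r: np.exp(-(eps * r) ** 2)    # Gaussian basis function
A = phi(np.abs(d_train[:, None] - d_train[None, :]))
weights = np.linalg.solve(A, p_train)      # enforce G(d_i) = p_i at the nodes

def G_approx(d):
    """Predicted candidate model for observed data d."""
    return phi(np.abs(d - d_train)) @ weights

p_candidate = G_approx(1.3 ** 3)           # candidate model behind d = 1.3**3
```

    In ANNIT such a predicted candidate would be evaluated with the forward solver, archived, and used to refine the subspaces in the next cycle.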

  9. INTRODUCTION: Introduction to the conference proceedings of the Workshop on Electromagnetic Inverse Problems, The University of Manchester, UK, 15-18 June 2009

    NASA Astrophysics Data System (ADS)

    Dorn, Oliver; Lionheart, Bill

    2010-11-01

    This proceeding combines selected contributions from participants of the Workshop on Electromagnetic Inverse Problems, which was hosted by the University of Manchester in June 2009. The workshop was organized by the two guest editors of this conference proceeding and ran in parallel to the 10th International Conference on Electrical Impedance Tomography, which was guided by Bill Lionheart, Richard Bayford, and Eung Je Woo. Both events shared plenary talks and several selected sessions. One reason for combining the two events was the goal of bringing together scientists from various related disciplines who normally might not attend the same conferences, and of enhancing discussions between these different groups. So, for example, one day of the workshop was dedicated to the broader area of geophysical inverse problems (including inverse problems in petroleum engineering), where participants from the EIT community and from the medical imaging community were also encouraged to participate, with great success. Other sessions concentrated on microwave medical imaging, inverse scattering, and eddy current imaging, with active feedback also from geophysically oriented scientists. Furthermore, several talks addressed such diverse topics as optical tomography, photoacoustic tomography, time reversal, and electrosensing fish. As a result of the workshop, speakers were invited to contribute extended papers to this conference proceeding. All submissions were thoroughly reviewed and, after a thoughtful revision by the authors, combined in this proceeding. The resulting set of six papers, presenting the work of 22 authors in total from 5 countries, provides a very interesting overview of several of the themes represented at the workshop. These can be divided into two important categories, namely (i) modelling and (ii) data inversion.
The first three papers of this selection, as outlined below, focus more on modelling aspects, an essential component of any successful inversion, whereas the other three papers discuss novel inversion techniques for specific applications. In the first contribution, with the title A Novel Simplified Mathematical Model for Antennas used in Medical Imaging Applications, the authors M J Fernando, M Elsdon, K Busawon and D Smith discuss a new technique for modelling the current across a monopole antenna, from which the radiation fields of the antenna can be calculated very efficiently in specific medical imaging applications. This new technique is then tested on two examples, a quarter-wavelength and a three-quarter-wavelength monopole antenna. The next contribution, with the title An investigation into the use of a mixture model for simulating the electrical properties of soil with varying effective saturation levels for sub-soil imaging using ECT by R R Hayes, P A Newill, F J W Podd, T A York, B D Grieve and O Dorn, considers the development of a new visualization tool for monitoring the soil moisture content surrounding certain seed breeder plants. An electrical capacitance tomography technique is employed to verify how efficiently each plant utilises the water and nutrients available in the surrounding soil. The goal of this study is to help develop and identify new drought-tolerant food crops. In the third contribution, Combination of Maximin and Kriging Prediction Methods for Eddy-Current Testing Database Generation by S Bilicz, M Lambert, E Vazquez and S Gyimóthy, a novel database generation technique is proposed for use in solving inverse eddy-current testing problems. To avoid expensive repeated forward simulations during the creation of this database, a kriging interpolation technique is employed to fill the data output space uniformly with sample points. Mathematically, this is achieved by using a maximin formalism.
The paper 2.5D inversion of CSEM data in a vertically anisotropic earth by C Ramananjaona and L MacGregor considers controlled-source electromagnetic techniques for imaging the earth in a marine environment. It focuses in particular on taking anisotropy effects into account in the inversion. Results of this technique are demonstrated on simulated and real field data. Furthermore, in the contribution Multiple level-sets for elliptic Cauchy problems in three-dimensional domains by A Leitão and M Marques Alves, the authors consider a TV-H1 regularization technique for multiple level-set inversion of elliptic Cauchy problems. Generalized minimizers are defined, and convergence and stability results are provided for this method, in addition to several numerical experiments. Finally, in the paper Development of in-vivo fluorescence imaging with the matrix-free method, the authors A Zacharopoulos, A Garofalakis, J Ripoll and S Arridge address a recently developed non-contact fluorescence molecular tomography technique in which the use of non-contact acquisition systems poses new challenges for computational efficiency during data processing. The matrix-free method is designed to reduce computational cost and memory requirements during the inversion. Reconstructions from a simulated mouse phantom are provided to demonstrate the performance of the proposed technique in realistic scenarios. We hope that this selection of strong and thought-provoking papers will help stimulate further cross-disciplinary research in the spirit of the workshop. We thank all authors for providing us with this excellent set of high-quality contributions. We also thank EPSRC for having provided funding for the workshop under grant EP/G065047/1. Oliver Dorn, Bill Lionheart School of Mathematics, University of Manchester, Alan Turing Building, Oxford Rd Manchester, M13 9PL, UK E-mail: oliver.dorn@manchester.ac.uk, bill.lionheart@manchester.ac.uk Guest Editors

  10. Characterizing open and non-uniform vertical heat sources: towards the identification of real vertical cracks in vibrothermography experiments

    NASA Astrophysics Data System (ADS)

    Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.

    2017-05-01

    Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation, due mainly to friction between the defect's faces. We have solved the inverse problem consisting of determining the heat source distribution produced at cracks under amplitude-modulated ultrasound excitation, which is an ill-posed inverse problem; as a consequence, the minimization of the residual is unstable. We have stabilized the algorithm by introducing a penalty term based on the total variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that their upper contours can be retrieved for shallow heat sources. The overall shape of open, homogeneous, semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, although the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from the reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen; the results are in good qualitative agreement with liquid penetrant testing.
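    The role of the total variation penalty can be illustrated on a 1D analogue (gradient descent on a smoothed TV functional; illustrative only, not the authors' full vibrothermography inversion).

```python
import numpy as np

# 1D analogue of TV stabilization: minimize
#   ||x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + beta^2)
# by gradient descent on the smoothed functional.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # compact "heat source"
y = truth + 0.1 * rng.normal(size=100)                # noisy observation

lam, beta, step = 0.5, 0.05, 0.02
x = y.copy()
for _ in range(2000):
    dx = np.diff(x)
    edge = dx / np.sqrt(dx ** 2 + beta ** 2)          # d(smoothed TV)/d(dx)
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= edge                              # left end of each edge
    tv_grad[1:] += edge                               # right end of each edge
    x -= step * (2.0 * (x - y) + lam * tv_grad)
```

    The TV term penalizes oscillation while tolerating sharp jumps, which is why it suits compact heat sources with well-defined contours.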

  11. Measuring the misfit between seismograms using an optimal transport distance: application to full waveform inversion

    NASA Astrophysics Data System (ADS)

    Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.

    2016-04-01

    Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit measure computed with an optimal transport distance. This measure accounts for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated with this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance: the associated misfit function is less prone to cycle skipping. A workflow is designed to accurately reconstruct the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data following a frequency continuation strategy; this estimation accurately explains the data. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
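    In 1D, and for nonnegative traces normalized to equal mass, the Wasserstein-1 optimal transport distance reduces to the L1 difference of the cumulative distributions; the sketch below illustrates why it tracks event shifts linearly (the paper's actual formulation relaxes positivity and mass conservation, which this toy does not handle).

```python
import numpy as np

# 1D Wasserstein-1 sketch: for nonnegative, mass-normalized traces the
# optimal transport distance is the L1 norm of the CDF difference.
def w1_distance(f, g, dt):
    f = f / (f.sum() * dt)
    g = g / (g.sum() * dt)
    cdf_f = np.cumsum(f) * dt
    cdf_g = np.cumsum(g) * dt
    return np.sum(np.abs(cdf_f - cdf_g)) * dt

t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
arrival = lambda t0: np.exp(-((t - t0) / 0.5) ** 2)   # Gaussian "event"

# Shifting an event by 2 s gives a misfit equal to the shift, whereas the
# L2 misfit between non-overlapping events saturates (hence cycle skipping).
d_shift = w1_distance(arrival(3.0), arrival(5.0), dt)
```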

  12. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise in interferometric data images can help improve the accuracy and resolution of results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) thresholding the wavelet coefficients, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures used to generate quantitative speckle correlation interferometry data in fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features that other algorithms tend to fade.
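    Steps (a)-(e) can be sketched with a single-level Haar transform and soft thresholding (illustrative only; not the FOBOEH processing chain).

```python
import numpy as np

# Steps (a)-(e) sketched with a single-level Haar transform:
def haar_forward(x):
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_inverse(a, d):
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def soft_threshold(c, thr):
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.2 * rng.normal(size=256)            # speckle-like noise

a, d = haar_forward(noisy)                            # (c) transform
sigma = np.median(np.abs(d)) / 0.6745                 # (a) noise estimate (MAD)
d_dn = soft_threshold(d, sigma * np.sqrt(2 * np.log(256)))  # (d) threshold
denoised = haar_inverse(a, d_dn)                      # (e) inverse transform
```

    The median-absolute-deviation estimate in step (a) is robust because the detail band of a smooth interferogram is dominated by noise, while step (d) suppresses those coefficients without blurring large-scale fringes.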

  13. Transect-scale imaging of root zone electrical conductivity by inversion of multiple-height EMI measurements under different salinity conditions

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Coppola, Antonio; Dragonetti, Giovanna; Comegna, Alessandro; Rodriguez, Giuseppe; Vignoli, Giulio

    2017-04-01

    The ability to determine the effects of salts on soils and plants is of great importance to agriculture. To control its harmful effects, soil salinity needs to be monitored in space and time, which requires knowledge of its magnitude, temporal dynamics, and spatial variability. Soil salinity can be evaluated by measuring the bulk electrical conductivity (σb) in the field. Measurements of σb can be made with either in situ or remote devices (Rhoades and Oster, 1986; Rhoades and Corwin, 1990; Rhoades and Miyamoto, 1990). Time Domain Reflectometry (TDR) sensors allow simultaneous measurements of water content, θ, and σb, and may be calibrated in the laboratory for estimating the electrical conductivity of the soil solution (σw). However, they have a relatively small observation volume and thus only provide local-scale measurements: the spatial range of the sensors is limited to tens of centimeters, and extending the information to a large area can be problematic. Moreover, information on the vertical distribution of the σb profile can only be obtained by installing sensors at different depths; in this sense, TDR may be considered an invasive technique. Compared to TDR, non-invasive electromagnetic induction (EMI) techniques can be used to extensively map the bulk electrical conductivity in the field. The problem is that all these techniques give depth-weighted apparent electrical conductivity (ECa) measurements, which depend on the specific depth distribution of σb as well as on the depth response function of the sensor used. In order to deduce the actual distribution of local σb in the soil profile, one may invert the signal coming from EMI sensors. Most studies use the linear model proposed by McNeill (1980), which describes the relative depth response of the ground conductivity meter. Using McNeill's forward linear model, Borchers et al.
(1997) implemented a Least Squares inverse procedure with second order Tikhonov regularization, to estimate σb vertical distribution from EMI field data. More recent studies (Hendrickx et al., 2002; Deidda et al., 2003; Deidda et al., 2014, among others), extended the approach to a more complicated non linear model of the response of a ground conductivity meter to changes with depth of σb. Noteworthy, these inverse procedures are only based on electromagnetic physics. Thus, they are only based on ECa readings, possibly taken with both the horizontal and vertical configurations and with the sensor at different heights above the ground, and do not require any further field calibration. Nevertheless, as discussed by Hendrickx et al. (2002), important issues on inverse approaches are about: i) the applicability to heterogeneous field soils of physical equations originally developed for the electromagnetic response of homogeneous media and ii) nonuniqueness and instability problems inherent to inverse procedures, even after Tikhonov regularization. Besides, as discussed by Cook and Walker (1992), these mathematical inversions procedures using layered-earth models were originally designed for interpreting porous systems with distinct layering. Where subsurface layers are not sharply defined, this type of inversion may be subject to considerable error. With these premises, the main aim of this study is estimating the vertical σb distribution by ECa measured using ground surface EMI methods under different salinity conditions and using TDR data as ground-truth data for validation of the inversion procedure. The latter is based on a regularized 1D inversion procedure designed to swiftly manage nonlinear multiple EMI-depth responses (Deidda et al., 2014). 
It is based on the coupling of the damped Gauss-Newton method with either the truncated singular value decomposition (TSVD) or the truncated generalized singular value decomposition (TGSVD), and it implements an explicit (exact) representation of the Jacobian to solve the nonlinear inverse problem. The experimental field (30 m x 15.6 m; for a total area of 468 m2) was divided into three transects 30 m long and 4.2 width, cultivated with green bean and irrigated with three different salinity levels (1 dS/m, 3 dS/m, and 6 dS/m). Each transect consisted of seven rows equipped with a sprinkler irrigation system, which supplied a water flux of 2 l/h. As for the salt application, CaCl2 were dissolved in tap water, and subsequently siphoned into the irrigation system. For each transect, 24 regularly spaced monitoring sites (1 m apart) were selected for soil measurements, using different equipments: i) a TDR100, ii), a Geonics EM-38; iii). Overall, fifteen measurement campaigns were carried out.
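The linear variant of this inversion can be sketched in a few lines. The snippet below uses McNeill's cumulative depth-response curves and zeroth-order Tikhonov regularization to recover three made-up layer conductivities from noiseless synthetic ECa readings; it is not the nonlinear damped Gauss-Newton/TGSVD scheme of Deidda et al. (2014), and all depths, sensor heights, and conductivities are illustrative assumptions:

```python
import math

def rv(z):
    # McNeill (1980) cumulative relative response below depth z,
    # vertical dipole orientation; z is depth in units of coil spacing
    return 1.0 / math.sqrt(4.0 * z * z + 1.0)

def rh(z):
    # same, horizontal dipole orientation
    return math.sqrt(4.0 * z * z + 1.0) - 2.0 * z

def forward_row(tops, height, resp):
    # weight of each layer's sigma_b in one ECa reading taken with the
    # sensor raised `height` coil spacings above the ground
    zs = [height + t for t in tops]
    row = [resp(a) - resp(b) for a, b in zip(zs[:-1], zs[1:])]
    row.append(resp(zs[-1]))          # half-space below the last boundary
    return row

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov_invert(G, d, lam):
    # minimize ||G m - d||^2 + lam * ||m||^2 (zeroth-order Tikhonov)
    n = len(G[0])
    GtG = [[sum(row[i] * row[j] for row in G) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Gtd = [sum(row[i] * di for row, di in zip(G, d)) for i in range(n)]
    return solve(GtG, Gtd)

tops = [0.0, 0.5, 1.0]            # layer tops, in coil spacings (assumed)
heights = [0.0, 0.5, 1.0]         # sensor heights above ground (assumed)
G = [forward_row(tops, h, rv) for h in heights] + \
    [forward_row(tops, h, rh) for h in heights]
sigma_true = [0.8, 0.3, 0.1]      # made-up layer conductivities, S/m
d = [sum(w * s for w, s in zip(row, sigma_true)) for row in G]
sigma_est = tikhonov_invert(G, d, 1e-8)
```

With noisy field data, the regularization parameter would have to be chosen by an L-curve or discrepancy criterion rather than fixed a priori.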

  14. Imaging through Scattering Media with Grating-Based Interferometers.

    DTIC Science & Technology

    1980-12-01

Theoretically, if the instantaneous impulse response of the scattering medium can be measured and an inverse filter [7, 8] can be created in real time, it... impulse response of a time-varying volume scattering medium. Moreover, no modulator appears to possess the required temporal and spatial bandwidth for... or optical deblurring techniques. Thirdly, since the achromatic grating interferometric system discriminates by the directions of propagation, the

  15. Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio

    2017-07-01

This paper presents a dense ray tracing reconstruction technique for single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using the multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio, and velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low-speed jet flow, which revealed that, for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.
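The MART update at the core of both reconstruction variants can be illustrated on a toy problem. The sketch below applies the standard multiplicative update to a hypothetical 2x2 voxel grid observed along five rays; it ignores everything specific to light-field cameras and dense ray tracing:

```python
def mart(weights, measurements, n_voxels, sweeps=500, mu=1.0):
    # Multiplicative ART: each ray equation rescales the voxels it touches
    v = [1.0] * n_voxels                  # strictly positive initial guess
    for _ in range(sweeps):
        for w_row, p in zip(weights, measurements):
            proj = sum(w * vj for w, vj in zip(w_row, v))
            if proj <= 0.0:
                continue
            ratio = p / proj
            # exponent mu * w keeps the update multiplicative and positive;
            # voxels with w = 0 are left unchanged (ratio ** 0 == 1)
            v = [vj * ratio ** (mu * w) for vj, w in zip(v, w_row)]
    return v

# toy 2x2 voxel grid observed along rows, columns, and one diagonal
rays = [[1, 1, 0, 0],   # top row
        [0, 0, 1, 1],   # bottom row
        [1, 0, 1, 0],   # left column
        [0, 1, 0, 1],   # right column
        [1, 0, 0, 1]]   # main diagonal
truth = [1.0, 2.0, 3.0, 4.0]
meas = [sum(w * t for w, t in zip(row, truth)) for row in rays]
recon = mart(rays, meas, 4)
```

Because the five rays determine the four voxels uniquely here, the iteration converges to the true values; in underdetermined settings MART converges to the maximum-entropy solution consistent with the data.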

  16. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. 
While mass recovery generally deteriorated with an increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy. POD is also computationally superior, requiring only 2.5 minutes to complete each inversion compared to 3 hours for CTL.
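The reduced-dimension step of a POD-constrained inversion can be sketched as follows. For simplicity the basis is obtained by Gram-Schmidt orthonormalization of three made-up training images instead of a full SVD of the snapshot matrix, and a tiny linear forward operator stands in for the ER sensitivity matrix; all numbers are illustrative:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(snapshots):
    # orthonormal basis spanning the training images (a stand-in for the
    # POD/SVD of the snapshot matrix)
    basis = []
    for s in snapshots:
        v = [float(x) for x in s]
        for q in basis:
            c = dot(q, v)
            v = [vi - c * qi for vi, qi in zip(v, q)]
        n = math.sqrt(dot(v, v))
        if n > 1e-12:
            basis.append([vi / n for vi in v])
    return basis

def solve(A, b):
    # Gauss-Jordan elimination for the small reduced system
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pod_invert(G, d, basis):
    # unknown model m = sum_k c_k * basis_k, so only len(basis) inversion
    # parameters are estimated instead of one per cell
    B = [[dot(row, phi) for phi in basis] for row in G]   # reduced operator
    k = len(basis)
    BtB = [[sum(B[r][i] * B[r][j] for r in range(len(B))) for j in range(k)]
           for i in range(k)]
    Btd = [sum(B[r][i] * d[r] for r in range(len(B))) for i in range(k)]
    c = solve(BtB, Btd)
    return [sum(ci * phi[j] for ci, phi in zip(c, basis))
            for j in range(len(basis[0]))]

# hypothetical training images of a 5-cell plume and 4 ERT-like readings
snapshots = [[1, 2, 3, 2, 1], [0, 1, 2, 3, 4], [2, 1, 0, 1, 2]]
basis = gram_schmidt(snapshots)
m_true = [0.5 * a + 0.2 * b for a, b in zip(snapshots[0], snapshots[1])]
G = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 1], [1, 1, 1, 1, 1]]
d = [sum(g * m for g, m in zip(row, m_true)) for row in G]
m_est = pod_invert(G, d, basis)
```

The bimodal-plume failure mode discussed above corresponds to a target that does not lie in the span of the training-image basis, in which case this projection cannot represent it.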

  17. Content based image retrieval for matching images of improvised explosive devices in which snake initialization is viewed as an inverse problem

    NASA Astrophysics Data System (ADS)

    Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam

    2008-02-01

    Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
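The damped normal-equations step combined with a Krylov solver can be sketched on a toy problem. Below, a two-parameter exponential fit is solved by Levenberg-Marquardt, with each step (J^T J + lam I) delta = J^T r obtained by conjugate gradients (a Krylov method); the subspace-recycling trick of the paper is not reproduced, and the model and data are made up:

```python
import math

def cg(matvec, b, iters=50, tol=1e-20):
    # conjugate gradients on the SPD system matvec(x) = b
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        denom = sum(pi * ai for pi, ai in zip(p, Ap))
        if denom == 0.0:
            break
        alpha = rs / denom
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def levmar(xs, ys, a, b, lam=1e-3, iters=50):
    # Levenberg-Marquardt for the model y = a * exp(b * x); the damped
    # normal equations are solved matrix-free by CG
    for _ in range(iters):
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        J = [[math.exp(b * x), a * x * math.exp(b * x)] for x in xs]
        def matvec(v):
            Jv = [row[0] * v[0] + row[1] * v[1] for row in J]
            return [sum(J[i][k] * Jv[i] for i in range(len(J))) + lam * v[k]
                    for k in range(2)]
        g = [sum(J[i][k] * r[i] for i in range(len(J))) for k in range(2)]
        da, db = cg(matvec, g)
        cost = sum(v * v for v in r)
        new_cost = sum((y - (a + da) * math.exp((b + db) * x)) ** 2
                       for x, y in zip(xs, ys))
        if new_cost < cost:            # accept the step, relax damping
            a, b, lam = a + da, b + db, lam * 0.5
        else:                          # reject the step, increase damping
            lam *= 10.0
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-1.0 * x) for x in xs]   # synthetic data, a=2, b=-1
a_est, b_est = levmar(xs, ys, 1.0, 0.0)
```

The paper's speed-up comes from projecting a highly parameterized problem onto such a Krylov subspace once and reusing it across damping parameters; with only two parameters, the toy above simply shows where that inner solve sits.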

  19. Multiple grid arrangement improves ligand docking with unknown binding sites: Application to the inverse docking problem.

    PubMed

    Ban, Tomohiro; Ohue, Masahito; Akiyama, Yutaka

    2018-04-01

The identification of comprehensive drug-target interactions is important in drug discovery. Although numerous computational methods have been developed over the years, a gold standard technique has not been established. Computational ligand docking and structure-based drug design allow researchers to predict the binding affinity between a compound and a target protein, and thus, they are often used to virtually screen compound libraries. In addition, docking techniques have also been applied to the virtual screening of target proteins (inverse docking) to predict target proteins of a drug candidate. Nevertheless, a more accurate docking method is currently required. In this study, we proposed a method in which a predicted ligand-binding site is covered by multiple grids, termed multiple grid arrangement. Notably, multiple grid arrangement facilitates the conformational search for a grid-based ligand docking software and can be applied to the state-of-the-art commercial docking software Glide (Schrödinger, LLC). We validated the proposed method by re-docking with the Astex diverse benchmark dataset and blind binding site situations, which improved the correct prediction rate of the top scoring docking pose from 27.1% to 34.1%; however, only a slight improvement in target prediction accuracy was observed with inverse docking scenarios. These findings highlight the limitations and challenges of current scoring functions and the need for more accurate docking methods. The proposed multiple grid arrangement method was implemented in Glide by modifying a cross-docking script for Glide, xglide.py. The script of our method is freely available online at http://www.bi.cs.titech.ac.jp/mga_glide/. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. On the recovery of missing low and high frequency information from bandlimited reflectivity data

    NASA Astrophysics Data System (ADS)

    Sacchi, M. D.; Ulrych, T. J.

    2007-12-01

During the last two decades, an important effort in the seismic exploration community has been made to retrieve broad-band seismic data by means of deconvolution and inversion. In general, the problem can be stated as a spectral reconstruction problem. In other words, given limited spectral information about the earth's reflectivity sequence, one attempts to create a broadband estimate of the Fourier spectra of the unknown reflectivity. Techniques based on the principle of parsimony can be effectively used to retrieve a sparse spike sequence and, consequently, a broad-band signal. Alternatively, continuation methods, e.g., autoregressive modeling, can be used to extrapolate the recorded bandwidth of the seismic signal. The goal of this paper is to examine under what conditions the recovery of low and high frequencies from band-limited and noisy signals is possible. At the heart of the methods we discuss is the celebrated non-Gaussian assumption so important in many modern signal processing methods, such as ICA. Spectral recovery from limited information tends to work when the reflectivity consists of a few well-isolated events. Results degrade with the number of reflectors, decreasing SNR, and decreasing bandwidth of the source wavelet. Constraints and information-based priors can be used to stabilize the recovery but, as in all inverse problems, the solution is nonunique and effort is required to understand the level of recovery that is achievable, always keeping the physics of the problem in mind. We provide in this paper a survey of methods to recover broad-band reflectivity sequences and examine the role that these techniques can play in processing and inversion as applied to exploration and global seismology.
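The parsimony-based route can be sketched with an L1-regularized deconvolution. The snippet below recovers two isolated reflectors from data convolved with a made-up short smoothing wavelet using ISTA (iterative shrinkage-thresholding); the wavelet, amplitudes, and tuning constants are all illustrative:

```python
import math

def conv(x, w):
    # full linear convolution of reflectivity x with source wavelet w
    y = [0.0] * (len(x) + len(w) - 1)
    for i, xi in enumerate(x):
        for j, wj in enumerate(w):
            y[i + j] += xi * wj
    return y

def adjoint(y, w, n):
    # correlation with the wavelet: adjoint of conv for length-n models
    return [sum(y[i + j] * wj for j, wj in enumerate(w)) for i in range(n)]

def ista(d, w, n, lam=0.01, step=0.3, iters=400):
    # iterative shrinkage-thresholding: least-squares data fit plus an
    # L1 (sparsity / parsimony) penalty on the reflectivity
    x = [0.0] * n
    for _ in range(iters):
        g = adjoint([a - b for a, b in zip(conv(x, w), d)], w, n)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # soft threshold promotes a sparse spike train
        x = [math.copysign(max(abs(xi) - lam * step, 0.0), xi) for xi in x]
    return x

wavelet = [0.3, 1.0, 0.3]          # made-up short smoothing wavelet
truth = [0.0] * 30
truth[5], truth[18] = 1.0, -0.7    # two isolated reflectors (assumed)
data = conv(truth, wavelet)
est = ista(data, wavelet, 30)
```

Consistent with the discussion above, the recovery degrades as reflectors crowd together, noise is added, or the wavelet passband narrows; this clean two-spike case is the favorable regime.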

  1. Refraction traveltime tomography based on damped wave equation for irregular topographic model

    NASA Astrophysics Data System (ADS)

    Park, Yunhui; Pyun, Sukjoon

    2018-03-01

Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time static is handled appropriately, interpretation of the subsurface structures can be easily distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must consider the irregular topography, as land seismic data are generally obtained in irregular topography. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. This technique uses unstructured meshes to express an irregular topography, and traveltimes are calculated from the frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix were adopted for preconditioning, and the principle of reciprocity was introduced to efficiently calculate the Fréchet derivative. We also included regularization to resolve the ill-posed inverse problem, and used the nonlinear conjugate gradient method to solve the inverse problem. As the damped wavefields were used, there were no issues associated with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem could be circumvented because this method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method was both robust to the initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method was shown to successfully reconstruct shallow velocity structures. To verify our method, static corrections were roughly applied to the field data using the estimated near-surface velocity.
By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.

  2. Inversion climatology at San Jose, California

    NASA Technical Reports Server (NTRS)

    Morgan, T.; Bornstein, R. D.

    1977-01-01

Month-to-month variations in the early morning surface-based and near-noon elevated inversions at San Jose, Calif., were determined from slow-rise radiosondes launched during a four-year period. A high frequency of shallow, radiative, surface-based inversions was found in winter during the early morning hours, while during the same period in summer a low frequency of deeper surface-based inversions arose from a combination of radiative and subsidence processes. The frequency of elevated inversions in the hours near noon was lowest during fall and spring, while inversion bases were highest and thicknesses least during these periods.

  3. A stress-constrained geodetic inversion method for spatiotemporal slip of a slow slip event with earthquake swarm

    NASA Astrophysics Data System (ADS)

    Hirose, H.; Tanaka, T.

    2017-12-01

Geodetic inversions have been performed using GNSS data and/or tiltmeter data in order to estimate spatio-temporal fault slip distributions. They have been applied to slow slip events (SSEs), which are episodes of fault slip lasting for days to years (e.g., Ozawa et al., 2001; Hirose et al., 2014). Although their slip distributions are important information for inferring the strain budget and frictional characteristics on a subduction plate interface, inhomogeneous station coverage generally yields spatially non-uniform slip resolution, and in the worst case, a slip distribution cannot be recovered. Some SSEs, such as the Boso Peninsula SSEs, are known to be accompanied by an earthquake swarm around the SSE slip area (e.g., Hirose et al., 2014). Some researchers hypothesize that these earthquakes are triggered by the stress change caused by the accompanying SSE (e.g., Segall et al., 2006). Based on this assumption, a geodetic inversion that imposes a constraint requiring the stress change to promote the observed earthquake activity may improve the resolution of the slip distribution. Here we develop an inversion method based on the Network Inversion Filter technique (Segall and Matthews, 1997), incorporating a constraint of a positive change in Coulomb failure stress (Delta-CFS) at the accompanying earthquakes. In addition, we apply this new method to synthetic data in order to check the effectiveness of the method and the characteristics of the inverted slip distributions. The results show that there are cases in which the reproduction of a slip distribution is better with earthquake information than without it. That is, this new inversion method can improve the reproducibility of the slip distribution of an SSE when an earthquake catalog for the accompanying activity is available and the geodetic data alone are insufficient.
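The stress constraint at the heart of the method reduces to a simple computation per swarm event. A minimal sketch, with an assumed effective friction coefficient and a one-sided penalty of the kind that could enter a constrained least-squares objective (sign conventions and all numbers are illustrative):

```python
def delta_cfs(d_shear, d_normal, mu_eff=0.4):
    # Coulomb failure stress change on a receiver fault;
    # d_normal > 0 is taken as unclamping, mu_eff is an assumed
    # effective friction coefficient
    return d_shear + mu_eff * d_normal

def positivity_penalty(dcfs_values, weight=1.0):
    # one-sided quadratic penalty: only receivers whose Delta-CFS a
    # trial slip model makes negative (i.e., where the model fails to
    # promote the observed swarm) contribute to the misfit
    return weight * sum(min(0.0, v) ** 2 for v in dcfs_values)

# a trial slip model's predicted stress changes at two swarm hypocentres
trial = [delta_cfs(0.10, -0.05), delta_cfs(-0.02, 0.01)]
penalty = positivity_penalty(trial)
```

In the full filter, a term like this penalty is balanced against the geodetic data misfit, so the earthquake catalog constrains slip wherever the stations cannot.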

  4. MO-F-CAMPUS-T-03: Continuous Dose Delivery with Gamma Knife Perfexion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Ghobadi; Li, W; Chung, C

    2015-06-15

Purpose: We propose continuous dose delivery techniques for stereotactic treatments delivered by Gamma Knife Perfexion using an inverse treatment planning system that can be applied to various tumour sites in the brain. We test the accuracy of the plans on Perfexion's planning system (GammaPlan) to ensure the obtained plans are viable. This approach introduces continuous dose delivery for Perfexion, as opposed to the currently employed step-and-shoot approaches, for different tumour sites. Additionally, this is the first realization of automated inverse planning on GammaPlan. Methods: The inverse planning approach is divided into two steps: identifying a quality path inside the target, and finding the best collimator composition for the path. To find a path, we select strategic regions inside the target volume and find a path that visits each region exactly once. This path is then passed to a mathematical model which finds the best combination of collimators and their durations. The mathematical model minimizes the dose spillage to the surrounding tissues while ensuring the prescribed dose is delivered to the target(s). Organs-at-risk and their corresponding allowable doses can also be added to the model to protect adjacent organs. Results: We tested this approach on various tumour sizes and sites. The quality of the obtained treatment plans is comparable to or better than forward plans and inverse plans that use the step-and-shoot technique. The conformity indices in the obtained continuous dose delivery plans are similar to those of forward plans while the beam-on time is improved on average (see Table 1 in supporting document). Conclusion: We employ inverse planning for continuous dose delivery in Perfexion for brain tumours. The quality of the obtained plans is similar to forward and inverse plans that use the conventional step-and-shoot technique. We tested the inverse plans on GammaPlan to verify clinical relevance.
This research was partially supported by Elekta, Sweden (vendor of Gamma Knife Perfexion).

  5. A new method for the inversion of atmospheric parameters of A/Am stars

    NASA Astrophysics Data System (ADS)

    Gebran, M.; Farah, W.; Paletou, F.; Monier, R.; Watson, V.

    2016-05-01

Context. We present an automated procedure that simultaneously derives the effective temperature Teff, surface gravity log g, metallicity [Fe/H], and equatorial projected rotational velocity vsini for "normal" A and Am stars. The procedure is based on the principal component analysis (PCA) inversion method, which we published in a recent paper. Aims: A sample of 322 high-resolution spectra of F0-B9 stars, retrieved from the Polarbase, SOPHIE, and ELODIE databases, was used to test this technique with real data. We selected the spectral region from 4400-5000 Å as it contains many metallic lines and the Balmer Hβ line. Methods: Using three data sets at resolving powers of R = 42 000, 65 000 and 76 000, about 6.6 × 10^6 synthetic spectra were calculated to build a large learning database. The online power iteration algorithm was applied to these learning data sets to estimate the principal components (PC). The projection of spectra onto the few PCs offered an efficient comparison metric in a low-dimensional space. The spectra of the well-known A0- and A1-type stars, Vega and Sirius A, were used as control spectra in the three databases. Spectra of other well-known A-type stars were also employed to characterize the accuracy of the inversion technique. Results: We inverted all of the observational spectra and derived the atmospheric parameters. After removal of a few outliers, the PCA-inversion method appeared to be very efficient in determining Teff, [Fe/H], and vsini for A/Am stars. The derived parameters agree very well with previous determinations. Using a statistical approach, deviations of around 150 K, 0.35 dex, 0.15 dex, and 2 km s^-1 were found for Teff, log g, [Fe/H], and vsini with respect to literature values for A-type stars. Conclusions: The PCA inversion proves to be a very fast, practical, and reliable tool for estimating stellar parameters of FGK and A stars and for deriving effective temperatures of M stars.
Based on data retrieved from the Polarbase, SOPHIE, and ELODIE archives. Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/589/A83
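The projection step of a PCA inversion can be sketched with a power-iteration estimate of the leading component. The toy below builds one principal component from four 4-pixel "spectra" carrying hypothetical Teff labels and inverts an observed spectrum by nearest neighbour in the 1-D PC space; a real implementation would use thousands of pixels, several PCs, and millions of synthetic spectra:

```python
import math

def leading_pc(X, iters=100):
    # power iteration for the leading principal component of the
    # mean-centred rows of X, without forming the covariance matrix
    n = len(X[0])
    mean = [sum(row[i] for row in X) / len(X) for i in range(n)]
    C = [[x - m for x, m in zip(row, mean)] for row in X]
    v = [1.0] * n
    for _ in range(iters):
        Cv = [sum(c * vi for c, vi in zip(row, v)) for row in C]
        w = [sum(C[r][i] * Cv[r] for r in range(len(C))) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def project(spec, mean, v):
    # coordinate of a spectrum in the low-dimensional PC space
    return sum((x - m) * vi for x, m, vi in zip(spec, mean, v))

# toy 4-pixel "spectra" labelled with hypothetical Teff values (K)
train = [([1.0, 0.9, 0.5, 0.1], 7000),
         ([1.0, 0.8, 0.4, 0.1], 7500),
         ([1.0, 0.7, 0.3, 0.1], 8000),
         ([1.0, 0.6, 0.2, 0.1], 8500)]
mean, v = leading_pc([s for s, _ in train])
coords = [(project(s, mean, v), t) for s, t in train]
obs = [1.0, 0.78, 0.38, 0.1]               # "observed" spectrum
c_obs = project(obs, mean, v)
teff = min(coords, key=lambda ct: abs(ct[0] - c_obs))[1]
```

Comparing spectra by their PC coordinates rather than pixel by pixel is what makes the inversion fast: the distance computation shrinks from the full spectral dimension to a handful of projections.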

  6. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computationally prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically required reducing the sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold, which enables the use of sparse methods through artificial sparsity.
To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
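The non-negativity constraint central to these spatial-domain inversions can be illustrated with a minimal projected-gradient NNLS solver; this is not the TNT algorithm itself, and the sensitivity matrix below is a made-up stand-in for the dipole field kernel:

```python
def nnls_pg(A, b, iters=500):
    # projected-gradient non-negative least squares: a gradient step on
    # ||A x - b||^2 followed by clipping at zero (the constraint TNT
    # enforces, solved here by the simplest possible method)
    m, n = len(A), len(A[0])
    step = 1.0 / sum(a * a for row in A for a in row)   # <= 1 / ||A||^2
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(a * xj for a, xj in zip(row, x)) - bi
             for row, bi in zip(A, b)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, xj - step * gj) for xj, gj in zip(x, g)]
    return x

# toy "field from moments" system with a non-negative true moment vector
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
m_true = [0.5, 0.0, 1.2]
b = [sum(a * mt for a, mt in zip(row, m_true)) for row in A]
m_est = nnls_pg(A, b)
```

Methods like TNT gain their speed by solving the same constrained problem with far better-conditioned inner solves, which is what makes full-resolution scans tractable.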

  7. A fast iterative convolution weighting approach for gridding-based direct Fourier three-dimensional reconstruction with correction for the contrast transfer function.

    PubMed

    Abrishami, V; Bilbao-Castro, J R; Vargas, J; Marabini, R; Carazo, J M; Sorzano, C O S

    2015-10-01

We describe a fast and accurate method for the reconstruction of macromolecular complexes from a set of projections. Direct Fourier inversion (in which the Fourier Slice Theorem plays a central role) is a solution for dealing with this inverse problem. Unfortunately, in the single particle field the set of projections provides a non-equidistantly sampled version of the macromolecule Fourier transform and, therefore, a direct Fourier inversion may not be an optimal solution. In this paper, we introduce a gridding-based direct Fourier method for three-dimensional reconstruction that uses a weighting technique to compute a uniformly sampled Fourier transform. Moreover, the contrast transfer function of the microscope, which is a limiting factor in pursuing a high resolution reconstruction, is corrected by the algorithm. Parallelization of this algorithm, both on threads and on multiple CPUs, makes the process of three-dimensional reconstruction even faster. The experimental results show that our proposed gridding-based direct Fourier reconstruction is slightly more accurate than similar existing methods and presents a lower computational complexity both in terms of time and memory, thereby allowing its use on larger volumes. The algorithm is fully implemented in the open-source Xmipp package and is downloadable from http://xmipp.cnb.csic.es. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Correlation of contrast-detail analysis and clinical image quality assessment in chest radiography with a human cadaver study.

    PubMed

    De Crop, An; Bacher, Klaus; Van Hoof, Tom; Smeets, Peter V; Smet, Barbara S; Vergauwen, Merel; Kiendys, Urszula; Duyck, Philippe; Verstraete, Koenraad; D'Herde, Katharina; Thierens, Hubert

    2012-01-01

    To determine the correlation between the clinical and physical image quality of chest images by using cadavers embalmed with the Thiel technique and a contrast-detail phantom. The use of human cadavers fulfilled the requirements of the institutional ethics committee. Clinical image quality was assessed by using three human cadavers embalmed with the Thiel technique, which results in excellent preservation of the flexibility and plasticity of organs and tissues. As a result, lungs can be inflated during image acquisition to simulate the pulmonary anatomy seen on a chest radiograph. Both contrast-detail phantom images and chest images of the Thiel-embalmed bodies were acquired with an amorphous silicon flat-panel detector. Tube voltage (70, 81, 90, 100, 113, 125 kVp), copper filtration (0.1, 0.2, 0.3 mm Cu), and exposure settings (200, 280, 400, 560, 800 speed class) were altered to simulate different quality levels. Four experienced radiologists assessed the image quality by using a visual grading analysis (VGA) technique based on European Quality Criteria for Chest Radiology. The phantom images were scored manually and automatically with use of dedicated software, both resulting in an inverse image quality figure (IQF). Spearman rank correlations between inverse IQFs and VGA scores were calculated. A statistically significant correlation (r = 0.80, P < .01) was observed between the VGA scores and the manually obtained inverse IQFs. Comparison of the VGA scores and the automated evaluated phantom images showed an even better correlation (r = 0.92, P < .001). The results support the value of contrast-detail phantom analysis for evaluating clinical image quality in chest radiography. © RSNA, 2011.
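The rank correlation used to compare the two quality measures is straightforward to compute. A dependency-free sketch with made-up inverse-IQF and VGA scores (the real study used four observers and many more exposure settings):

```python
import math

def ranks(xs):
    # average ranks, with ties sharing the mean of their positions
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx)
                    * sum((b - my) ** 2 for b in ry))
    return num / den

# hypothetical paired scores: inverse IQFs vs. mean VGA scores
inv_iqf = [1.2, 1.9, 2.4, 3.1, 3.3, 4.0]
vga = [1.1, 1.4, 2.2, 2.0, 3.0, 3.6]
r = spearman(inv_iqf, vga)
```

Because it works on ranks, this statistic captures any monotone relation between phantom-based and clinical quality scores without assuming the relation is linear.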

  9. Experimental evidence of mobility enhancement in short-channel ultra-thin body double-gate MOSFETs by magnetoresistance technique

    NASA Astrophysics Data System (ADS)

    Chaisantikulwat, W.; Mouis, M.; Ghibaudo, G.; Cristoloveanu, S.; Widiez, J.; Vinet, M.; Deleonibus, S.

    2007-11-01

Double-gate transistors with ultra-thin body (UTB) have proved to offer advantages over bulk devices for high-speed, low-power applications. There is thus a strong need for an accurate understanding of carrier transport and mobility in such devices. In this work, we report for the first time experimental evidence of mobility enhancement in UTB double-gate (DG) MOSFETs using the magnetoresistance mobility extraction technique. Mobility in a planar DG transistor operating in single- and double-gate mode is compared. The influence of different scattering mechanisms in the channel is also investigated by obtaining mobility values at low temperatures. The results show a clear mobility improvement in double-gate mode compared to single-gate mode at the same inversion charge density. This is explained by the role of volume inversion in ultra-thin body transistors operating in DG mode. Volume inversion is found to be especially beneficial in terms of mobility gain at low inversion densities.
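The extraction itself rests on the geometric magnetoresistance relation R(B) ≈ R(0)·(1 + (μB)²), so the mobility follows from a straight-line fit of R/R(0) − 1 against B². A minimal sketch with a made-up mobility and field sweep:

```python
import math

def mobility_from_mr(Bs, Rs):
    # geometric magnetoresistance: R(B) ~ R(0) * (1 + (mu * B)^2), so a
    # least-squares line of R/R0 - 1 against B^2 (through the origin)
    # has slope mu^2
    R0 = Rs[0]                      # assumes the first sample is at B = 0
    xs = [b * b for b in Bs[1:]]
    ys = [r / R0 - 1.0 for r in Rs[1:]]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return math.sqrt(slope)

# synthetic sweep: made-up mobility of 0.06 m^2/Vs (600 cm^2/Vs)
fields = [0.0, 1.0, 2.0, 3.0, 4.0]                          # tesla
resist = [100.0 * (1.0 + (0.06 * b) ** 2) for b in fields]  # ohms
mu = mobility_from_mr(fields, resist)
```

Repeating this fit at a series of gate voltages and temperatures is what yields the mobility-versus-inversion-density curves compared in the study.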

  10. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end-effector motion. Since the inverse dynamic problem in elastic manipulators is closely coupled to the inverse kinematic problem, the solution of the first also yields the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  11. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. The time of eruption onset is derived from the time of "failure" implied by the accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A(Ω̇)^α, where the overdot denotes the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. The rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of the time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques for applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. 
This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias toward too-early eruption times in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at the time of failure.
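For the α = 2 case, the graphical inverse-rate technique reduces to a straight-line extrapolation toward the time axis, which can be sketched as follows (the rate coefficient, failure time and event-rate series below are synthetic, not data from the paper):

```python
import numpy as np

# Synthetic accelerating precursor: for alpha = 2 the FFM relation
# d(rate)/dt = A * rate**2 integrates to rate(t) = 1 / (A * (t_f - t)),
# so the inverse rate decays linearly and crosses zero at the failure time t_f.
A, t_f = 0.05, 100.0                  # hypothetical rate coefficient, eruption time (days)
t = np.linspace(0.0, 80.0, 40)
rate = 1.0 / (A * (t_f - t))          # e.g. daily seismic event rate

# Graphical technique: straight-line fit to the inverse rate,
# extrapolated toward the time axis.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_f_est = -intercept / slope
print(round(t_f_est, 2))              # -> 100.0, recovering the failure time
```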

  12. Implementation of magnetic resonance elastography for the investigation of traumatic brain injuries

    NASA Astrophysics Data System (ADS)

    Boulet, Thomas

    Magnetic resonance elastography (MRE) is a potentially transformative imaging modality allowing local and non-invasive measurement of biological tissue mechanical properties. It uses a specific phase contrast MR pulse sequence to measure induced vibratory motion in soft material, from which material properties can be estimated. Compared to other imaging techniques, MRE is able to detect tissue pathology at early stages by quantifying the changes in tissue stiffness associated with diseases. In an effort to develop the technique and improve its capabilities, two inversion algorithms were written to evaluate viscoelastic properties from the measured displacement fields. The first one was based on a direct algebraic inversion of the differential equation of motion, which decouples under certain simplifying assumptions, and featured a spatio-temporal multi-directional filter. The second one relied on a finite element discretization of the governing equations to perform a direct inversion. Several applications of this technique have also been investigated, including the estimation of mechanical parameters in various gel phantoms and polymers, as well as the use of MRE as a diagnostic tool for brain disorders. In this respect, the particular interest was to investigate traumatic brain injury (TBI), a complex and diverse injury affecting 1.7 million Americans annually. The sensitivity of MRE to TBI was first assessed on excised rat brains subjected to a controlled cortical impact (CCI) injury, before execution of in vivo experiments in mice. MRE was also applied in vivo on mouse models of medulloblastoma tumors and multiple sclerosis. These studies showed the potential of MRE in mapping the brain mechanically and providing non-invasive in vivo imaging markers for neuropathology and pathogenesis of brain diseases. 
Furthermore, MRE can readily be translated to clinical settings; thus, while this technique may not yet be used directly to diagnose different abnormalities in the brain, it may help detect abnormalities, follow therapies, and trace macroscopic changes that are not seen by conventional clinical methods.
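The core idea of the first algorithm, algebraic inversion of the equation of motion under local-homogeneity and no-damping assumptions, can be illustrated on a synthetic 1-D harmonic displacement field (all material parameters below are illustrative, not values from the study):

```python
import numpy as np

# Direct algebraic inversion of the lossless, locally homogeneous
# equation of motion  mu * laplacian(u) + rho * omega**2 * u = 0,
# i.e.  mu = -rho * omega**2 * u / laplacian(u).
rho = 1000.0                     # tissue-like density, kg/m^3
omega = 2 * np.pi * 100.0        # 100 Hz mechanical drive
c = 2.0                          # shear wave speed, m/s -> mu = rho * c**2 = 4000 Pa
k = omega / c
x = np.linspace(0.0, 0.05, 2001)
u = np.sin(k * x)                # synthetic 1-D harmonic displacement field

lap = np.gradient(np.gradient(u, x), x)         # finite-difference Laplacian
mask = np.abs(lap) > 0.1 * np.abs(lap).max()    # avoid dividing near displacement nodes
mu = float(np.median(-rho * omega**2 * u[mask] / lap[mask]))
print(round(mu, 1))              # ~ 4000 Pa, i.e. rho * c**2
```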

  13. Analysis of the feasibility of an experiment to measure carbon monoxide in the atmosphere. [using remote platform interferometry

    NASA Technical Reports Server (NTRS)

    Bortner, M. H.; Alyea, F. N.; Grenda, R. N.; Liebling, G. R.; Levy, G. M.

    1973-01-01

    The feasibility of measuring atmospheric carbon monoxide from a remote platform using the correlation interferometry technique was considered. It has been determined that CO data can be obtained with an accuracy of 10 percent using this technique on the first overtone band of CO at 2.3 mu. That band has been found to be much more suitable than the stronger fundamental band at 4.6 mu. Calculations for both wavelengths are presented which illustrate the effects of atmospheric temperature profiles, inversion layers, ground temperature and emissivity, CO profile, reflectivity, and atmospheric pressure. The applicable radiative transfer theory on which these calculations are based is described together with the principles of the technique.

  14. Three-dimensional mosaicking of the South Korean radar network

    NASA Astrophysics Data System (ADS)

    Berenguer, Marc; Sempere-Torres, Daniel; Lee, GyuWon

    2016-04-01

    Dense radar networks offer the possibility of improved Quantitative Precipitation Estimation thanks to the additional information collected in the overlapping areas, which makes it possible to mitigate errors associated with the Vertical Profile of Reflectivity or path attenuation by intense rain. With this aim, Roca-Sancho et al. (2014) proposed a technique to generate 3-D reflectivity mosaics from the multiple radars of a network. The technique is based on an inverse method that simulates the radar sampling of the atmosphere considering the characteristics (location, frequency and scanning protocol) of each individual radar. This technique has been applied to mosaic the observations of the radar network of South Korea (composed of 14 S-band radars) and to integrate the observations of the small X-band network that is to be installed near Seoul in the framework of a project funded by the Korea Agency for Infrastructure Technology Advancement (KAIA). The evaluation of the generated 3-D mosaics has been done by comparison with point measurements (i.e. rain gauges and disdrometers) and with the observations of independent radars. Reference: Roca-Sancho, J., M. Berenguer, and D. Sempere-Torres (2014), An inverse method to retrieve 3D radar reflectivity composites, Journal of Hydrology, 519, 947-965, doi: 10.1016/j.jhydrol.2014.07.039.

  15. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
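A minimal sketch of the offline-surrogate idea, with ordinary least squares standing in for the paper's Christoffel least squares and a toy scalar forward model (all names and values below are illustrative assumptions):

```python
import numpy as np

# Offline phase: fit a polynomial surrogate of an "expensive" forward model
# over the support of a uniform prior; online phase: random-walk Metropolis
# sampling against the cheap surrogate instead of the true model.
rng = np.random.default_rng(0)
forward = lambda th: np.sin(3.0 * th) + th        # hypothetical forward model G(theta)
prior_lo, prior_hi = -1.0, 1.0

train = rng.uniform(prior_lo, prior_hi, 50)       # offline forward simulations
coeffs = np.polyfit(train, forward(train), 9)     # gPC-style polynomial surrogate
surrogate = lambda th: np.polyval(coeffs, th)

theta_true, sigma = 0.3, 0.05
y_obs = forward(theta_true)                       # noise-free datum for clarity

def log_post(th):
    if not (prior_lo <= th <= prior_hi):          # uniform prior support
        return -np.inf
    return -0.5 * ((y_obs - surrogate(th)) / sigma) ** 2

theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
print(round(float(np.mean(chain[1000:])), 2))     # posterior mean near theta_true
```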

  16. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for the equivalent field based on an analysis of scatter reduction due to the inverse square law. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), and mathematical formulas that yield the equivalent square of an irregular rectangular field are used extensively in computational dose-determination techniques. These approaches, however, involve complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula was developed to calculate the equivalent square field. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, the equivalent field of a rectangular field can be calculated to a good approximation from the concept of scatter reduction governed by the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
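As a point of comparison with such formulas, the widely used area-to-perimeter (Sterling) rule, a common stand-in for the published tables and not the authors' scatter-based formula, can be coded in a few lines:

```python
def equivalent_square(a, b):
    """Side of the equivalent square of an a x b rectangular field using
    the standard area-to-perimeter (Sterling) rule, 4*A/P = 2ab/(a+b)."""
    return 2.0 * a * b / (a + b)

print(equivalent_square(10.0, 10.0))            # -> 10.0 (a square is its own equivalent)
print(round(equivalent_square(10.0, 20.0), 2))  # -> 13.33
```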

  17. There is more than one way to turn a spherical cellular monolayer inside out: type B embryo inversion in Volvox globator

    PubMed Central

    2011-01-01

    Background Epithelial folding is a common morphogenetic process during the development of multicellular organisms. In metazoans, the biological and biomechanical processes that underlie such three-dimensional (3D) developmental events are usually complex and difficult to investigate. Spheroidal green algae of the genus Volvox are uniquely suited as model systems for studying the basic principles of epithelial folding. Volvox embryos begin life inside out and then must turn their spherical cell monolayer outside in to achieve their adult configuration; this process is called 'inversion.' There are two fundamentally different sequences of inversion processes in Volvocaceae: type A and type B. Type A inversion is well studied, but not much is known about type B inversion. How does the embryo of a typical type B inverter, V. globator, turn itself inside out? Results In this study, we investigated the type B inversion of V. globator embryos and focused on the major movement patterns of the cellular monolayer, cell shape changes and changes in the localization of cytoplasmic bridges (CBs) connecting the cells. Isolated intact, sectioned and fragmented embryos were analyzed throughout the inversion process using light microscopy, confocal laser scanning microscopy, scanning electron microscopy and transmission electron microscopy techniques. We generated 3D models of the identified cell shapes, including the localizations of CBs. We show how concerted cell-shape changes and concerted changes in the position of cells relative to the CB system cause cell layer movements and turn the spherical cell monolayer inside out. The type B inversion of V. globator is compared to the type A inversion in V. carteri. Conclusions Concerted, spatially and temporally coordinated changes in cellular shapes in conjunction with concerted migration of cells relative to the CB system are the causes of type B inversion in V. globator. 
Despite significant similarities between type A and type B inverters, differences exist in almost all details of the inversion process, suggesting analogous inversion processes that arose through parallel evolution. Based on our results and due to the cellular biomechanical implications of the involved tensile and compressive forces, we developed a global mechanistic scenario that predicts epithelial folding during embryonic inversion in V. globator. PMID:22206406

  18. On the dosimetric effect and reduction of inverse consistency and transitivity errors in deformable image registration for dose accumulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.; Hardcastle, Nicholas; Tome, Wolfgang A.

    2012-01-15

    Purpose: Deformable image registration (DIR) is necessary for accurate dose accumulation between multiple radiotherapy image sets. DIR algorithms can suffer from inverse and transitivity inconsistencies. When using deformation vector fields (DVFs) that exhibit inverse-inconsistency and are nontransitive, dose accumulation on a given image set via different image pathways will lead to different accumulated doses. The purpose of this study was to investigate the dosimetric effect of and propose a postprocessing solution to reduce inverse consistency and transitivity errors. Methods: Four MVCT images and four phases of a lung 4DCT, each with an associated calculated dose, were selected for analysis. DVFs between all four images in each data set were created using the Fast Symmetric Demons algorithm. Dose was accumulated on the fourth image in each set using DIR via two different image pathways. The two accumulated doses on the fourth image were compared. The inverse consistency and transitivity errors in the DVFs were then reduced. The dose accumulation was repeated using the processed DVFs, the results of which were compared with the accumulated dose from the original DVFs. To evaluate the influence of the postprocessing technique on DVF accuracy, the original and processed DVF accuracy was evaluated on the lung 4DCT data on which anatomical landmarks had been identified by an expert. Results: Dose accumulation to the same image via different image pathways resulted in two different accumulated dose results. After the inverse consistency errors were reduced, the difference between the accumulated doses diminished. The difference was further reduced after reducing the transitivity errors. The postprocessing technique had minimal effect on the accuracy of the DVF for the lung 4DCT images. 
Conclusions: This study shows that inverse consistency and transitivity errors in DIR have a significant dosimetric effect in dose accumulation; depending on the image pathway taken to accumulate the dose, different results may be obtained. A postprocessing technique that reduces inverse consistency and transitivity errors is presented, which allows for consistent dose accumulation regardless of the image pathway followed.
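The flavor of such a postprocessing step can be sketched in 1-D: split the round-trip (inverse-consistency) error evenly between the forward and backward DVFs and iterate. The sinusoidal fields below are illustrative, not the study's 3-D DVFs:

```python
import numpy as np

# 1-D toy: forward DVF u (A->B) and backward DVF v (B->A) that are not
# exact inverses; push half of the round-trip error into each field and
# iterate until the pair is (nearly) inverse-consistent.
x = np.linspace(0.0, 2 * np.pi, 400)
u = 0.1 * np.sin(x)                  # forward displacement field
v = -0.1 * np.sin(x)                 # backward field: close to, but not, the inverse

def roundtrip(a, b):
    # residual of x -> x + b(x) -> + a evaluated at the mapped point;
    # identically zero for an inverse-consistent pair
    return b + np.interp(x + b, x, a)

before = np.max(np.abs(roundtrip(u, v)))
for _ in range(20):
    v = v - 0.5 * roundtrip(u, v)    # correct the backward field
    u = u - 0.5 * roundtrip(v, u)    # correct the forward field
after = np.max(np.abs(roundtrip(u, v)))
print(before > 10 * after)           # -> True: inconsistency is sharply reduced
```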

  19. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
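The underlying idea can be sketched for the textbook case of O-mode reflection from a linear density ramp, where the round-trip group delay is τ(f) = 4·x_c/c and therefore linear in the cutoff density. The gradient value and probing frequencies below are illustrative, and this is a sketch of the model, not the authors' FPGA algorithm:

```python
import numpy as np

# O-mode reflection from a linear density ramp n(x) = grad * x with
# negligible scrape-off-layer density: the round-trip group delay is
# tau(f) = 4 * x_c / c, with cutoff position x_c = n_c(f) / grad and
# cutoff density n_c(f) = eps0 * m_e * (2*pi*f)**2 / e**2. tau is thus
# linear in n_c, and its slope yields the gradient analytically.
c, e, m_e, eps0 = 2.998e8, 1.602e-19, 9.109e-31, 8.854e-12
n_crit = lambda f: eps0 * m_e * (2 * np.pi * f) ** 2 / e ** 2

grad_true = 5e20                              # hypothetical pedestal gradient, m^-4
freqs = np.array([30e9, 31e9, 32e9])          # three adjacent probing frequencies, Hz
tau = 4.0 * n_crit(freqs) / (grad_true * c)   # simulated time delays

slope = np.polyfit(n_crit(freqs), tau, 1)[0]  # delay vs. cutoff density
grad_est = 4.0 / (c * slope)
print(f"{grad_est:.3e}")                      # -> 5.000e+20, matching grad_true
```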

  20. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, vXC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for vXC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit vXC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  1. A robust spatial filtering technique for multisource localization and geoacoustic inversion.

    PubMed

    Stotts, S A

    2005-07-01

    Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
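The element-beam-element round trip at the heart of such filters can be sketched with orthogonal FFT beams on a uniform half-wavelength line array, where the inverse transform is trivial (the paper's truncated pseudo-inverse generalizes this to nonuniformly spaced elements). The bearings below are chosen on beam centers for clarity:

```python
import numpy as np

# Element -> beam -> element filtering on a uniform half-wavelength line
# array: an FFT across elements forms orthogonal beams, a bearing filter
# zeroes the beams covering the unwanted arrival, and the inverse FFT
# returns element-level data with that arrival removed.
N = 16
n = np.arange(N)
v = lambda s: np.exp(1j * np.pi * n * s)   # steering vector, s = sin(bearing)

s_sig, s_int = 0.25, -0.75                 # desired and interfering arrivals (on beam centers)
x = v(s_sig) + v(s_int)                    # element-level snapshot

beams = np.fft.fft(x)                      # element -> beam
s_axis = 2.0 * np.fft.fftfreq(N)           # sin(bearing) of each beam
beams[np.abs(s_axis - s_int) < 0.2] = 0.0  # bearing filter around the interferer
x_f = np.fft.ifft(beams)                   # beam -> element

err = float(np.max(np.abs(x_f - v(s_sig))))
print(round(err, 6))                       # -> 0.0: interferer removed, signal intact
```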

  2. Detailed p- and s-wave velocity models along the LARSE II transect, Southern California

    USGS Publications Warehouse

    Murphy, J.M.; Fuis, G.S.; Ryberg, T.; Lutter, W.J.; Catchings, R.D.; Goldman, M.R.

    2010-01-01

    Structural details of the crust determined from P-wave velocity models can be improved with S-wave velocity models, and S-wave velocities are needed for model-based predictions of strong ground motion in southern California. We picked P- and S-wave travel times for refracted phases from explosive-source shots of the Los Angeles Region Seismic Experiment, Phase II (LARSE II); we developed refraction velocity models from these picks using two different inversion algorithms. For each inversion technique, we calculated ratios of P- to S-wave velocities (VP/VS) where there is coincident P- and S-wave ray coverage. We compare the two VP inverse velocity models to each other and to results from forward modeling, and we compare the VS inverse models. The VS and VP/VS models differ in structural details from the VP models. In particular, dipping, tabular zones of low VS, or high VP/VS, appear to define two fault zones in the central Transverse Ranges that could be parts of a positive flower structure to the San Andreas fault. These two zones are marginally resolved, but their presence in two independent models lends them some credibility. A plot of VS versus VP differs from recently published plots that are based on direct laboratory or down-hole sonic measurements. The difference in plots is most prominent in the range of VP = 3 to 5 km/s (or VS ~ 1.25 to 2.9 km/s), where our refraction VS is lower by a few tenths of a kilometer per second than VS based on direct measurements. Our new VS-VP curve may be useful for modeling the lower limit of VS from a VP model in calculating strong motions from scenario earthquakes.

  3. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, pending a clinically relevant study of the new dosimetric rules.

  4. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain a high-resolution, higher-frequency seismic velocity volume to be used as the velocity input for seismic pressure prediction, together with a density dataset to calculate an accurate Overburden Pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both the structural variability and the similarity of seismic waveforms are used to incorporate well log data and characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build the porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. 
The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure from the OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well-log pressure prediction and gives predicted pressure values close to pressure measurements from well testing.
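The calibration step can be sketched under the assumption of a generic linear multivariate effective-stress model (the paper's actual functional form is not reproduced here), with synthetic well data:

```python
import numpy as np

# Assume sigma = c0 + c1*Vp + c2*phi + c3*Vsh (hypothetical linear form),
# fit the coefficients by least squares on calibrated samples, then convert
# to pore pressure via P = OBP - sigma.
rng = np.random.default_rng(1)
n = 200
vp = rng.uniform(2000.0, 4000.0, n)       # velocity, m/s
phi = rng.uniform(0.05, 0.35, n)          # porosity
vsh = rng.uniform(0.0, 0.6, n)            # shale volume
c_true = np.array([5.0, 0.01, -40.0, -8.0])
sigma = c_true[0] + c_true[1] * vp + c_true[2] * phi + c_true[3] * vsh  # MPa

X = np.column_stack([np.ones(n), vp, phi, vsh])
c_fit, *_ = np.linalg.lstsq(X, sigma, rcond=None)   # regression analysis

obp = 60.0                                # overburden pressure at this sample, MPa
pore = float(obp - X[0] @ c_fit)          # predicted formation pressure, first sample
print(bool(np.allclose(c_fit, c_true)))   # -> True: coefficients recovered
```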

  5. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    Priority inversion can make real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.
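The bounded until operator can be illustrated with a tiny explicit-state stand-in for the BDD-based symbolic algorithm (the four-state system below is hypothetical):

```python
# Explicit-state check of the bounded until operator E[p U<=k q]: a state
# satisfies it if some path reaches a q-state within k steps with p holding
# along the way.
def eu_bounded(trans, p, q, k):
    sat = set(q)                                        # q-states satisfy with 0 steps
    for _ in range(k):
        sat = sat | {s for s in p if trans.get(s, set()) & sat}
    return sat

# Hypothetical 4-state chain 0 -> 1 -> 2 -> 3, with state 3 the goal.
trans = {0: {1}, 1: {2}, 2: {3}}
p, q = {0, 1, 2}, {3}
print(sorted(eu_bounded(trans, p, q, 1)))   # -> [2, 3]
print(sorted(eu_bounded(trans, p, q, 3)))   # -> [0, 1, 2, 3]
```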

  6. Waves on Thin Plates: A New (Energy Based) Method on Localization

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2016-04-01

    Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method, applicable to thin plates, which is based on energy amplitude attenuation and comparison of inverted source amplitudes. The inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival time localization, time reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass/plexiglass plate with dimensions of 80 cm x 40 cm x 1 cm, equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit on the plate (from a height of 2-3 cm) with steel, glass or polyamide balls of different sizes, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new technique, which is very versatile, works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. It is possible to obtain decent resolution (3 cm mean error) using very cheap equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by placing imaginary sources outside the plate boundaries. The proposed method can easily be extended to three-dimensional environments, to monitor industrial activities (e.g. borehole drilling/production activities) or natural brittle systems (e.g. earthquakes, volcanoes, avalanches).
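A minimal sketch of the energy-attenuation / source-amplitude-comparison idea, assuming a power-law amplitude decay A = S/r^γ (the decay exponent and geometry below are illustrative, not the paper's calibrated model):

```python
import numpy as np

# Grid-search localization on the plate: back-computed source amplitudes
# S_i = A_i * r_i**gamma agree at the true source position, so minimize
# their relative spread over a grid of candidate positions.
sensors = np.array([[0.05, 0.05], [0.75, 0.05],
                    [0.05, 0.35], [0.75, 0.35]])          # m, on an 80 x 40 cm plate
src, gamma = np.array([0.30, 0.20]), 1.0                  # hypothetical source, decay exponent
amps = 1.0 / np.linalg.norm(sensors - src, axis=1) ** gamma   # synthetic amplitudes (S = 1)

xs, ys = np.meshgrid(np.linspace(0.0, 0.8, 161), np.linspace(0.0, 0.4, 81))
best, best_xy = np.inf, None
for x0, y0 in zip(xs.ravel(), ys.ravel()):
    r = np.linalg.norm(sensors - [x0, y0], axis=1)
    if np.any(r < 1e-3):                      # skip candidates sitting on a sensor
        continue
    s = amps * r**gamma                       # implied source amplitudes
    spread = np.std(s) / np.mean(s)
    if spread < best:
        best, best_xy = spread, (x0, y0)
print(round(float(best_xy[0]), 3), round(float(best_xy[1]), 3))   # -> 0.3 0.2
```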

  7. Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain

    NASA Astrophysics Data System (ADS)

    D'Ambrogio, Walter; Fregolent, Annalisa

    2014-04-01

    The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made by a plate and a rigid mass.

  8. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.

    1999-01-01

    The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. The Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to these earth properties: S-wave velocities are the dominant influence on a dispersion curve in the high-frequency range (>5 Hz), followed by layer thickness. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity, and proved very effective in the high-frequency range. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
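
    The damped weighted update at the heart of such a Levenberg-Marquardt inversion can be sketched as follows (the forward model here is a toy exponential standing in for a Rayleigh-wave dispersion solver, and the damping factor is held fixed rather than selected adaptively):

```python
import numpy as np

def forward(m, x):
    """Stand-in nonlinear forward model (not a dispersion solver)."""
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, x):
    """Analytic Jacobian of the stand-in model."""
    J = np.empty((x.size, 2))
    J[:, 0] = np.exp(-m[1] * x)
    J[:, 1] = -m[0] * x * np.exp(-m[1] * x)
    return J

x = np.linspace(0.0, 2.0, 30)
m_true = np.array([3.0, 1.5])
d_obs = forward(m_true, x)               # noise-free "observed" data

W = np.eye(x.size)                       # weighting matrix
m = np.array([1.0, 0.5])                 # starting model
lam = 1e-2                               # Levenberg-Marquardt damping

for _ in range(100):
    r = d_obs - forward(m, x)            # data residual
    J = jacobian(m, x)
    A = J.T @ W @ J + lam * np.eye(2)    # damped weighted normal equations
    m = m + np.linalg.solve(A, J.T @ W @ r)
```

The damping term regularizes the near-singular normal equations; in the paper it is what guarantees convergence of the weighted solution.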

  9. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment; its content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the inverse-tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. A perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a model of the human visual system and is suitable for both global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already-decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using the AVC and HEVC codecs.
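
    The two-layer prediction scheme can be sketched as follows (a hypothetical global gamma tone-mapping operator stands in for the paper's non-uniform operators, and no actual entropy coding or color-space conversion is performed):

```python
import numpy as np

def tone_map(hdr):
    """Assumed global operator (simple gamma) producing the 8-bit base layer."""
    return np.clip((hdr ** (1 / 2.2)) * 255.0, 0, 255).astype(np.uint8)

def inverse_tone_map(base):
    """Decoder-side prediction of HDR content from the base layer."""
    return (base.astype(np.float64) / 255.0) ** 2.2

rng = np.random.default_rng(0)
hdr = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # toy HDR frame in [0, 1]

base = tone_map(hdr)                      # decodable on legacy displays
prediction = inverse_tone_map(base)       # inverse-tone-mapped base layer
enhancement = hdr - prediction            # residual sent in the 2nd layer

reconstructed = prediction + enhancement  # HDR decoder output
```

An HDR-capable decoder adds the enhancement residual to the prediction; a legacy decoder simply ignores the enhancement layer and displays `base`.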

  10. Parts-based geophysical inversion with application to water flooding interface detection and geological facies detection

    NASA Astrophysics Data System (ADS)

    Zhang, Junwei

    I built a parts-based and manifold-based mathematical learning model for the geophysical inverse problem and applied this approach to two problems. The first is related to the detection of the oil-water encroachment front during the water flooding of an oil reservoir. In this application, I propose a new 4D inversion approach based on the Gauss-Newton method to invert time-lapse cross-well resistance data. The goal of this study is to image the position of the oil-water encroachment front in a heterogeneous clayey sand reservoir. The approach explicitly connects the change of resistivity to the petrophysical properties controlling the position of the front (porosity and permeability) and to the saturation of the water phase, through a petrophysical resistivity model accounting for bulk and surface conductivity contributions and saturation. The distributions of permeability and porosity are also inverted using the time-lapse resistivity data in order to better reconstruct the position of the front. In our synthetic test case, we obtain a better position of the front, with the by-products of porosity and permeability inferences near the flow trajectory and close to the wells. The numerical simulations show that the position of the front is recovered well, but the distribution of the recovered porosity and permeability is only fair. A comparison with a commercial code based on a classical Gauss-Newton approach, with no information provided by the two-phase flow model, fails to recover the position of the front. The new approach could also be used for time-lapse monitoring of various processes in geothermal fields and oil and gas reservoirs using a combination of geophysical methods. A paper has been published in Geophysical Journal International on this topic, and I am its first author.
The second application is related to the detection of geological facies boundaries and their deformation to satisfy geophysical data and prior distributions. We pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem on each facies. The inversion is performed in a Bayesian framework. We demonstrate the usefulness of this strategy with a first synthetic case study, performing a joint inversion of gravity and galvanometric resistivity data with all stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable, and their shapes are inverted as well. We use the level-set approach to deform the facies boundaries while preserving prior topological properties of the facies throughout the inversion. With the additional help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The results of the inversion technique are encouraging when applied to a second synthetic case study, showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries. A paper has been submitted to Geophysics on this topic, and I am its first author. During this thesis, I also worked on the time-lapse inversion problem of gravity data in collaboration with Marios Karaoulis, and a paper was published in Geophysical Journal International on this topic.
I also worked on the time-lapse inversion of cross-well geophysical data (seismic and resistivity) using both a structural approach named the cross-gradient approach and a petrophysical approach. A paper was published in Geophysics on this topic.

  11. Study of synthesis techniques for insensitive aircraft control systems

    NASA Technical Reports Server (NTRS)

    Harvey, C. A.; Pope, R. E.

    1977-01-01

    Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization), together with the mismatch estimation and uncertainty weighting techniques, were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that minimax and uncertainty weighting were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.

  12. Structural Acoustic Characteristics of Aircraft and Active Control of Interior Noise

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.

    1998-01-01

    The reduction of aircraft cabin sound levels to acceptable values still remains a topic of much research. The use of conventional passive approaches has been extensively studied and implemented; however, the performance limits of these techniques have been reached. In this project, new techniques for understanding the structural acoustic behavior of aircraft fuselages, and the use of this knowledge in developing advanced new control approaches, are investigated. A central feature of the project is the Aircraft Fuselage Test Facility at Va Tech, which is based around a full-scale Cessna Citation III fuselage. The work is divided into two main parts: the first investigates the use of an inverse technique for identifying dominant fuselage vibrations; the second studies the development and implementation of active and active-passive techniques for controlling aircraft interior noise.

  13. Outcome of Vaginoplasty in Male-to-Female Transgenders: A Systematic Review of Surgical Techniques.

    PubMed

    Horbach, Sophie E R; Bouman, Mark-Bram; Smit, Jan Maerten; Özer, Müjde; Buncamper, Marlon E; Mullender, Margriet G

    2015-06-01

    Gender reassignment surgery is the keystone of the treatment of transgender patients. For male-to-female transgenders, this involves the creation of a neovagina. Many surgical methods for vaginoplasty have been described; the penile skin inversion technique is the method of choice for most gender surgeons. However, the optimal surgical technique for vaginoplasty in transgender women has not yet been identified, as the outcomes of the different techniques have never been compared. With this systematic review, we aim to give a detailed overview of the published outcomes of all currently available techniques for vaginoplasty in male-to-female transgenders. A PubMed and EMBASE search was performed for relevant publications (1995-present) providing data on the outcome of techniques for vaginoplasty in male-to-female transgender patients. Main outcome measures were complications, neovaginal depth and width, sexual function, patient satisfaction, and improvement in quality of life (QoL). Twenty-six studies satisfied the inclusion criteria. The majority of these studies were retrospective case series of low to intermediate quality. The outcome of the penile skin inversion technique was reported in 1,461 patients, and that of bowel vaginoplasty in 102 patients. Neovaginal stenosis was the most frequent complication with both techniques. Sexual function and patient satisfaction were overall acceptable, but many different outcome measures were used. QoL was reported in only one study. Comparison between techniques was difficult due to the lack of standardization. The penile skin inversion technique is the most researched surgical procedure. The outcome of bowel vaginoplasty has been reported less frequently but does not appear to be inferior. The available literature is heterogeneous in patient groups, surgical procedures, outcome measurement tools, and follow-up. Standardized protocols and prospective study designs are mandatory for correct interpretation and comparability of data.
© 2015 International Society for Sexual Medicine.

  14. Accelerating non-contrast-enhanced MR angiography with inflow inversion recovery imaging by skipped phase encoding and edge deghosting (SPEED).

    PubMed

    Chang, Zheng; Xiang, Qing-San; Shen, Hao; Yin, Fang-Fang

    2010-03-01

    To accelerate non-contrast-enhanced MR angiography (MRA) with inflow inversion recovery (IFIR) using a fast imaging method, Skipped Phase Encoding and Edge Deghosting (SPEED). IFIR imaging uses a preparatory inversion pulse to reduce signals from static tissue while leaving inflowing arterial blood unaffected, resulting in sparse arterial vasculature on a modest tissue background. By taking advantage of this vascular sparsity, SPEED can be simplified with a single-layer model to achieve higher efficiency in both scan time reduction and image reconstruction. SPEED can also make use of information available in multiple coils for further acceleration. The techniques are demonstrated with a three-dimensional renal non-contrast-enhanced IFIR MRA study. Images are reconstructed by SPEED based on a single-layer model to achieve an undersampling factor of up to 2.5 using one skipped phase encoding direction. By making use of information available in multiple coils, SPEED can achieve an undersampling factor of up to 8.3 with four receiver coils. The reconstructed images generally have quality comparable to that of the reference images reconstructed from full k-space data. As demonstrated with a three-dimensional renal IFIR scan, SPEED based on a single-layer model is able to reduce scan time further and achieve higher computational efficiency than the original SPEED.

  15. Use of time series and harmonic constituents of tidal propagation to enhance estimation of coastal aquifer heterogeneity

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.; Langevin, Christian D.

    2010-01-01

    A synthetic two-dimensional model of a horizontally and vertically heterogeneous confined coastal aquifer system, based on the Upper Floridan aquifer in south Florida, USA, subjected to constant recharge and a complex tidal signal, was used to generate 15-minute water-level data at select locations over a 7-day simulation period. “Observed” water-level data were generated by adding noise, representative of typical barometric pressure variations and measurement errors, to 15-minute data from the synthetic model. Permeability was calibrated using a nonlinear gradient-based parameter inversion approach with preferred-value Tikhonov regularization and (1) “observed” water-level data, (2) harmonic constituent data, or (3) a combination of “observed” water-level and harmonic constituent data. In all cases, the high-frequency data used in the parameter inversion process were able to characterize broad-scale heterogeneities; the ability to discern fine-scale heterogeneity was greater when harmonic constituent data were used. These results suggest that the combined use of highly parameterized inversion techniques and high-frequency time series and/or processed harmonic-constituent water-level data could be a useful approach to better characterize heterogeneities in coastal aquifers influenced by ocean tides.
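
    Extracting harmonic constituents from a water-level series is, for known tidal frequencies, a linear least-squares problem. A minimal sketch (toy amplitudes and noise levels, M2 and K1 constituents only, not the study's data):

```python
import numpy as np

dt_hr = 0.25                              # 15-minute samples, in hours
t = np.arange(0, 7 * 24, dt_hr)           # 7-day record
freqs = 2 * np.pi / np.array([12.42, 23.93])   # M2, K1 angular freqs (rad/hr)

rng = np.random.default_rng(1)
true_amp, true_phase = [0.50, 0.20], [0.3, 1.1]
h = sum(a * np.cos(w * t - p)
        for a, w, p in zip(true_amp, freqs, true_phase))
h = h + 1.2 + rng.normal(0, 0.02, t.size) # datum offset + measurement noise

# Design matrix: mean term plus a cos/sin pair per constituent.
G = np.column_stack([np.ones_like(t)] +
                    [f(w * t) for w in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(G, h, rcond=None)

# Constituent amplitude from each cos/sin coefficient pair.
amps = [np.hypot(coef[1], coef[2]), np.hypot(coef[3], coef[4])]
```

Amplitudes (and phases, from `arctan2` of each pair) recovered this way are the "harmonic constituent data" that feed the parameter inversion in place of the raw time series.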

  16. SORTAN: a Unix program for calculation and graphical presentation of fault slip as induced by stresses

    NASA Astrophysics Data System (ADS)

    Pascal, Christophe

    2004-04-01

    Stress inversion programs are nowadays frequently used in tectonic analysis. The purpose of this family of programs is to reconstruct the stress tensor characteristics from fault slip data acquired in the field or derived from earthquake focal mechanisms (i.e. inverse methods). Until now, little attention has been paid to direct methods (i.e. to determine fault slip directions from an inferred stress tensor). During the 1990s, the fast increase in resolution in 3D seismic reflection techniques made it possible to determine the geometry of subsurface faults with a satisfactory accuracy but not to determine precisely their kinematics. This recent improvement allows the use of direct methods. A computer program, namely SORTAN, is introduced. The program is highly portable on Unix platforms, straightforward to install and user-friendly. The computation is based on classical stress-fault slip relationships and allows for fast treatment of a set of faults and graphical presentation of the results (i.e. slip directions). In addition, the SORTAN program permits one to test the sensitivity of the results to input uncertainties. It is a complementary tool to classical stress inversion methods and can be used to check the mechanical consistency and the limits of structural interpretations based upon 3D seismic reflection surveys.
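
    The classical stress-fault slip relationship underlying such a direct method can be sketched in a few lines (hypothetical stress tensor and fault orientation, not SORTAN's implementation): the predicted slip direction is the resolved shear traction on the fault plane.

```python
import numpy as np

# Principal stress tensor in its own coordinate frame (toy values, MPa).
sigma = np.diag([60.0, 40.0, 20.0])

# Unit normal of the fault plane (arbitrary orientation).
n = np.array([0.5, 0.5, np.sqrt(0.5)])
n = n / np.linalg.norm(n)

t = sigma @ n                             # Cauchy traction vector
t_n = (t @ n) * n                         # normal component of traction
t_s = t - t_n                             # shear component on the plane
slip_dir = t_s / np.linalg.norm(t_s)      # predicted slip direction
```

Comparing `slip_dir` against striae measured on subsurface faults is what lets the direct method test the mechanical consistency of a 3D seismic interpretation.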

  17. Propeller sheet cavitation noise source modeling and inversion

    NASA Astrophysics Data System (ADS)

    Lee, Keunhwa; Lee, Jaehyuk; Kim, Dongho; Kim, Kyungseop; Seong, Woojae

    2014-02-01

    Propeller sheet cavitation is the main contributor to the high levels of noise and vibration in the after body of a ship. Full measurement of the cavitation-induced hull pressure over the entire surface of the affected area is desired but not practical. Therefore, using a few measurements on the outer hull above the propeller in a cavitation tunnel, empirical or semi-empirical techniques based on physical models have been used to predict the hull-induced pressure (or hull-induced force). In this paper, with an analytic source model for sheet cavitation, a multi-parameter inversion scheme to find the positions of noise sources and their strengths is suggested. The inversion is posed as a nonlinear optimization problem, solved with an optimization algorithm based on adaptive simplex simulated annealing. The resulting hull pressure can then be modeled with the boundary element method from the inverted cavitation noise sources. The suggested approach is applied to hull pressure data measured in a cavitation tunnel of Samsung Heavy Industries. Two monopole sources are adequate to model the propeller sheet cavitation noise. The inverted source information is consistent with the cavitation dynamics of the propeller, and the modeled hull pressure shows good agreement with the cavitation tunnel experimental data.

  18. Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models

    NASA Astrophysics Data System (ADS)

    Shen, C.; Xia, J.; Mi, B.

    2016-12-01

    A successful inversion relies on exact forward modeling methods. Accurately calculating the multi-mode dispersion curves of a given model is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. the Thomson-Haskell algorithm, Knopoff algorithm, fast vector-transfer algorithm, and so on) fail to be consistent with the dispersion spectrum at high frequencies. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer. This phenomenon conflicts with the characteristics of surface waves and results in erroneous inverted models. By comparing the theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution to accurately compute surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward modeling technique, we can achieve correct inversions for these types of models. Several synthetic examples prove the effectiveness of our method.

  19. Calibrating electromagnetic induction conductivities with time-domain reflectometry measurements

    NASA Astrophysics Data System (ADS)

    Dragonetti, Giovanna; Comegna, Alessandro; Ajeel, Ali; Piero Deidda, Gian; Lamaddalena, Nicola; Rodriguez, Giuseppe; Vignoli, Giulio; Coppola, Antonio

    2018-02-01

    This paper deals with the issue of monitoring the spatial distribution of bulk electrical conductivity, σb, in the soil root zone by using electromagnetic induction (EMI) sensors under different water and salinity conditions. To deduce the actual distribution of depth-specific σb from EMI apparent electrical conductivity (ECa) measurements, we inverted the data by using a regularized 1-D inversion procedure designed to manage nonlinear multiple EMI-depth responses. The inversion technique is based on the coupling of the damped Gauss-Newton method with truncated generalized singular value decomposition (TGSVD). The ill-posedness of the EMI data inversion is addressed by using a sharp stabilizer term in the objective function. This specific stabilizer promotes the reconstruction of blocky targets, thereby contributing to enhance the spatial resolution of the EMI results in the presence of sharp boundaries (otherwise smeared out after the application of more standard Occam-like regularization strategies searching for smooth solutions). Time-domain reflectometry (TDR) data are used as ground-truth data for calibration of the inversion results. An experimental field was divided into four transects 30 m long and 2.8 m wide, cultivated with green bean, and irrigated with water at two different salinity levels and using two different irrigation volumes. Clearly, this induces different salinity and water contents within the soil profiles. For each transect, 26 regularly spaced monitoring soundings (1 m apart) were selected for the collection of (i) Geonics EM-38 and (ii) Tektronix reflectometer data. Despite the original discrepancies in the EMI and TDR data, we found a significant correlation of the means and standard deviations of the two data series; in particular, after a low-pass spatial filtering of the TDR data. Based on these findings, this paper introduces a novel methodology to calibrate EMI-based electrical conductivities via TDR direct measurements. 
This calibration strategy consists of a linear mapping of the original inversion results into a new conductivity spatial distribution with the coefficients of the transformation uniquely based on the statistics of the two original measurement datasets (EMI and TDR conductivities).
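
    The linear mapping described above, with its two coefficients fixed by the statistics of the two measurement series, can be sketched as follows (synthetic stand-in conductivities, not the field values):

```python
import numpy as np

# Toy stand-ins for the 26 soundings per transect: TDR ground truth and
# EMI-inverted conductivities with a hypothetical gain/offset bias.
rng = np.random.default_rng(2)
sigma_tdr = rng.normal(40.0, 8.0, 26)     # TDR conductivities (mS/m)
sigma_emi = 0.6 * sigma_tdr + 5.0 + rng.normal(0, 1.0, 26)  # biased EMI

# Two-parameter linear transform matching mean and standard deviation.
a = sigma_tdr.std() / sigma_emi.std()     # gain from the two std devs
b = sigma_tdr.mean() - a * sigma_emi.mean()
sigma_cal = a * sigma_emi + b             # calibrated EMI conductivities
```

By construction the calibrated series reproduces the TDR mean and standard deviation exactly, while preserving the spatial pattern resolved by the EMI inversion.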

  20. Detection of DNA double-strand breaks and chromosome translocations using ligation-mediated PCR and inverse PCR.

    PubMed

    Singh, Sheetal; Shih, Shyh-Jen; Vaughan, Andrew T M

    2014-01-01

    Current techniques for examining the global creation and repair of DNA double-strand breaks are restricted in their sensitivity, and such techniques mask any site-dependent variations in breakage and repair rate or fidelity. We present here a system for analyzing the fate of documented DNA breaks, using the MLL gene as an example, through application of ligation-mediated PCR. Here, a simple asymmetric double-stranded DNA adapter molecule is ligated to experimentally induced DNA breaks and subjected to seminested PCR using adapter- and gene-specific primers. The rate of appearance and loss of specific PCR products allows detection of both the break and its repair. Using the additional technique of inverse PCR, the presence of misrepaired products (translocations) can be detected at the same site, providing information on the fidelity of the ligation reaction in intact cells. Such techniques may be adapted for the analysis of DNA breaks and rearrangements introduced into any identifiable genomic location. We have also applied parallel sequencing for the high-throughput analysis of inverse PCR products to facilitate the unbiased recording of all rearrangements located at a specific genomic location.

  1. Remote monitoring of environmental particulate pollution - A problem in inversion of first-kind integral equations

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1975-01-01

    The determination of the microstructure, chemical nature, and dynamical evolution of scattering particulates in the atmosphere is considered. A description is given of indirect sampling techniques which can circumvent most of the difficulties associated with direct sampling techniques, taking into account methods based on scattering, extinction, and diffraction of an incident light beam. Approaches for reconstructing the particulate size distribution from the direct and the scattered radiation are discussed. A new method is proposed for determining the chemical composition of the particulates and attention is given to the relevance of methods of solution involving first kind Fredholm integral equations.

  2. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
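
    The decimation, interpolation, and sharpening stages of this pipeline can be sketched as follows (the JPEG or wavelet codec stage is replaced by an identity stand-in, since the sketch only illustrates the reduction/expansion and sharpening steps):

```python
import numpy as np

def decimate2(img):
    """2x decimation in both dimensions by 2x2 block averaging."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def interpolate2(img):
    """Interpolate back to the original array size (nearest neighbour)."""
    return np.kron(img, np.ones((2, 2)))

def sharpen(img, amount=0.7):
    """Unsharp mask with a 3x3 box blur to re-enhance edges."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

rng = np.random.default_rng(3)
image = rng.uniform(0, 255, size=(8, 8))  # toy first image array

reduced = decimate2(image)                # then: compress, transmit, decompress
restored = interpolate2(reduced)          # decoder side: back to 8x8
final = sharpen(restored)                 # perceptual edge enhancement
```

In the patent's pipeline, `reduced` is what enters the predefined compression algorithm; everything downstream of decompression is identical to this sketch.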

  3. Optimization Parameters of Air-conditioning and Heat Insulation Systems of a Pressurized Cabins of Long-distance Airplanes

    NASA Astrophysics Data System (ADS)

    Gusev, Sergey A.; Nikolaev, Vladimir N.

    2018-01-01

    A method for determining the thermal state of an aircraft compartment, based on a mathematical model of the compartment's thermal condition, was developed. Solution techniques for direct and inverse heat-exchange problems, and for determining confidence intervals of parametric-identification estimates, were developed. The required performance of the air-conditioning and ventilation systems and the heat-insulation depth of the crew and passenger cabins were obtained.

  4. The investigation of advanced remote sensing techniques for the measurement of aerosol characteristics

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Becher, J.

    1979-01-01

    Advanced remote sensing techniques and inversion methods for the measurement of characteristics of aerosol and gaseous species in the atmosphere were investigated. Of particular interest were the physical and chemical properties of aerosols, such as their size distribution, number concentration, and complex refractive index, and the vertical distribution of these properties on a local as well as global scale. Remote sensing techniques for monitoring of tropospheric aerosols were developed as well as satellite monitoring of upper tropospheric and stratospheric aerosols. Computer programs were developed for solving multiple scattering and radiative transfer problems, as well as inversion/retrieval problems. A necessary aspect of these efforts was to develop models of aerosol properties.

  5. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  6. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional controlled-source electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.

  7. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate data residuals between observed data and demigrated data using the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate, high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method approximates the inverse Hessian indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration, and validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and converges faster than two comparison gradient methods. It may be significant for imaging complex subsurface structures in general.
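
    A toy version of the three LSM steps, with the model update delegated to an L-BFGS routine (a small random matrix stands in for the Kirchhoff demigration operator, and SciPy's L-BFGS-B implementation stands in for the authors' solver):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
L = rng.normal(size=(60, 20))             # stand-in demigration operator
m_true = np.zeros(20)
m_true[[4, 9, 15]] = [1.0, -0.5, 0.8]     # sparse "reflectivity" model
d_obs = L @ m_true                        # observed data

def misfit(m):
    r = L @ m - d_obs                     # step 1: data residual
    grad = L.T @ r                        # step 2: migrate residual (adjoint)
    return 0.5 * r @ r, grad

# Step 3: L-BFGS update, keeping only ~10 past gradients in memory.
res = minimize(misfit, np.zeros(20), jac=True, method="L-BFGS-B",
               options={"maxcor": 10})
m_inv = res.x
```

The `maxcor` option is the memory parameter m of the abstract: the inverse Hessian is never formed, only approximated from the stored gradient history.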

  8. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide-and-conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. The thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  9. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
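
    The bisection-plus-inverse-iteration combination favored in this work can be sketched with SciPy, whose tridiagonal eigenvalue driver uses Sturm-sequence bisection when eigenvalues are selected by index (a minimal serial illustration, not the thesis implementation):

```python
import numpy as np
from scipy.linalg import eigvalsh_tridiagonal, solve_banded

rng = np.random.default_rng(1)
n = 50
d = rng.standard_normal(n)          # diagonal of T
e = rng.standard_normal(n - 1)      # off-diagonal of T

# Step 1: a few eigenvalues by bisection (Sturm-sequence LAPACK driver)
evals = eigvalsh_tridiagonal(d, e, select="i", select_range=(0, 4))

# Step 2: inverse iteration for the eigenvector of one computed eigenvalue
def inverse_iteration(d, e, lam, iters=4):
    n = d.size
    ab = np.zeros((3, n))           # banded storage of T - lam*I for solve_banded
    ab[0, 1:] = e                   # superdiagonal
    ab[1, :] = d - lam + 1e-12      # tiny shift keeps the matrix nonsingular
    ab[2, :-1] = e                  # subdiagonal
    v = rng.standard_normal(n)      # random starting vector
    for _ in range(iters):
        v = solve_banded((1, 1), ab, v)
        v /= np.linalg.norm(v)
    return v

lam = evals[0]
v = inverse_iteration(d, e, lam)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
residual = np.linalg.norm(T @ v - lam * v)
```

    The random starting vector is exactly the ingredient whose statistical effect the thesis analyzes; with an accurate shift, one or two solves already amplify the desired eigen-direction far above the rest.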

  10. Effects of tillage technologies and application of biopreparations on micromycetes in the rhizosphere and rhizoplane of spring wheat

    NASA Astrophysics Data System (ADS)

    Shirokikh, I. G.; Kozlova, L. M.; Shirokikh, A. A.; Popov, F. A.; Tovstik, E. V.

    2017-07-01

    The population density and structure of complexes of soil microscopic fungi in the rhizosphere and rhizoplane of spring wheat (Triticum aestivum L.), plant damage by root rot and leaf diseases, and crop yield were determined in a stationary field experiment on a silty loamy soddy-podzolic soil (Albic Retisol (Loamic, Aric)) depending on the soil tillage technique: (a) moldboard plowing to 20-22 cm and (b) non-inversive tillage to 14-16 cm. The results were treated with the two-way ANOVA method. It was shown that the number of fungal propagules in the rhizosphere and rhizoplane of plants in the variant with non-inversive tillage was significantly smaller than that in the variant with plowing. Minimization of the impact on the soil over five years led to insignificant changes in the structure of micromycete complexes in the rhizosphere of wheat. The damage to plants from root rot and leaf diseases under non-inversive tillage did not increase in comparison with that under plowing. Wheat yield in the variant with non-inversive tillage was insignificantly lower than that in the variant with moldboard plowing. The application of biopreparations based on Streptomyces hygroscopicus A4 and Pseudomonas aureofaciens BS 1393 resulted in a significant decrease in plant damage by leaf rust.

  11. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed with the first damping parameter and recycle the subspace for subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Our new inverse modeling method is thus a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  12. Modular Approaches to Earth Science Scientific Computing: 3D Electromagnetic Induction Modeling as an Example

    NASA Astrophysics Data System (ADS)

    Tandon, K.; Egbert, G.; Siripunvaraporn, W.

    2003-12-01

    We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme and to reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for the EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment on the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data one fixes boundary conditions at the edge of the model domain and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in the specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding such a feature is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed with the first damping parameter and recycle the subspace for subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Our new inverse modeling method is thus a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
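
    The key computational idea — projecting each damped Levenberg-Marquardt system onto one Krylov subspace and recycling that subspace across damping parameters — can be sketched with a Lanczos basis (a dense toy example, not the MADS/Julia implementation; it relies on K_k(A + λI, b) = K_k(A, b), so one basis serves every λ):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 120
J = rng.standard_normal((m, n))     # Jacobian of a toy highly parameterized model
r = rng.standard_normal(m)          # data residual
A = J.T @ J                         # Gauss-Newton Hessian
b = -J.T @ r                        # right-hand side shared by all damped systems

def lanczos(A, b, k):
    # Orthonormal Krylov basis Q and tridiagonal projection T = Q^T A Q
    Q = np.zeros((b.size, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = w @ Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return Q, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

k = 60
Q, T = lanczos(A, b, k)
e1 = np.zeros(k)
e1[0] = np.linalg.norm(b)

# One basis, several damping parameters: each solve is only k x k
steps = {lam: Q @ np.linalg.solve(T + lam * np.eye(k), e1)
         for lam in (1e-2, 1e0, 1e2)}
```

    Each additional damping parameter costs only a small k-by-k tridiagonal solve instead of a fresh large linear solve, which is where the reported speed-up comes from.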

  14. An imaged-based inverse finite element method to determine in-vivo mechanical properties of the human trabecular meshwork.

    PubMed

    Pant, Anup D; Kagemann, Larry; Schuman, Joel S; Sigal, Ian A; Amini, Rouzbeh

    2017-01-01

    Previous studies have shown that the trabecular meshwork (TM) is mechanically stiffer in glaucomatous eyes than in normal eyes. It is believed that elevated TM stiffness increases resistance to aqueous humor outflow, producing increased intraocular pressure (IOP). It would be advantageous to measure TM mechanical properties in vivo, as these properties are believed to play an important role in the pathophysiology of glaucoma and could be useful for identifying potential risk factors. The purpose of this study was to develop a method to estimate in-vivo TM mechanical properties using clinically available exams and computer simulations. Design: inverse finite element simulation. A finite element model of the TM was constructed from optical coherence tomography (OCT) images of a healthy volunteer before and during IOP elevation. An axisymmetric model of the TM was then constructed. Images of the TM at a baseline IOP level of 11 mmHg and an elevated level of 23 mmHg were treated as the undeformed and deformed configurations, respectively. An inverse modeling technique was subsequently used to estimate the TM shear modulus (G): an optimization procedure found the shear modulus that minimized the difference between Schlemm's canal area in the in-vivo images and in the simulations. Upon completion of inverse finite element modeling, the simulated area of Schlemm's canal changed from 8,889 µm² to 2,088 µm², similar to the experimentally measured areal change of the canal (from 8,889 µm² to 2,100 µm²). The calculated value of the shear modulus was 1.93 kPa (implying an approximate Young's modulus of 5.75 kPa), which is consistent with previous ex-vivo measurements. The combined imaging and computational simulation technique provides a unique approach to calculate the mechanical properties of the TM in vivo without any surgical intervention. Quantification of such mechanical properties will help us examine the mechanistic role of TM biomechanics in the regulation of IOP in healthy and glaucomatous eyes.
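
    The inverse step — finding the shear modulus whose simulated canal area matches the measurement — reduces to one-dimensional root finding once a forward model is available. A minimal sketch with a purely hypothetical area-versus-stiffness response (the exponential form and its constants are illustrative, not from the study):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical stand-in for the FE simulation: Schlemm's canal area shrinks
# as the IOP step loads a TM of shear modulus G (kPa); stiffer tissue
# deforms less. The exponential response and its constants are illustrative.
A0, dP = 8889.0, 12.0                  # baseline area (um^2), IOP step (mmHg)
def simulated_area(G):
    return A0 * np.exp(-dP / (2.0 * G))

A_target = 2100.0                      # measured deformed area (um^2)

# Inverse problem: the G whose simulated area matches the measurement
G_hat = brentq(lambda G: simulated_area(G) - A_target, 0.5, 50.0)
```

    In the actual study each "function evaluation" is a full axisymmetric FE solve, so the optimizer's job is to match the measured area with as few solves as possible.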

  15. Imaging model for the scintillator and its application to digital radiography image enhancement.

    PubMed

    Wang, Qian; Zhu, Yining; Li, Hongwei

    2015-12-28

    Digital radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution. By analyzing the radiative transfer of visible light in the scintillator, scattering is recognized as the main factor leading to low contrast. The associated blurring effect is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is established. When solving the inverse problem, pre-correction of the x-ray intensity, a dark-channel-prior-based haze-removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach improves the contrast of DR images dramatically and effectively eliminates blurring. Compared with traditional contrast enhancement methods such as CLAHE, our method preserves relative absorption values well.
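
    The dark-channel-prior haze-removal step mentioned above can be sketched for a grayscale image (a simplified single-channel version on a synthetic "haze"; the paper applies the idea to DR images inside a larger pipeline, so the windows and constants here are illustrative):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    dark = minimum_filter(img, size=patch)            # dark channel (grayscale)
    A = img[dark >= np.quantile(dark, 0.999)].max()   # atmospheric-light estimate
    t = np.clip(1.0 - omega * dark / A, t0, 1.0)      # transmission map
    return (img - A) / t + A                          # invert the haze model

rng = np.random.default_rng(3)
scene = rng.uniform(0.2, 0.8, (64, 64))
hazy = scene * 0.6 + 1.0 * (1 - 0.6)    # I = J*t + A*(1 - t), with t=0.6, A=1
restored = dehaze(hazy)
```

    Because the scatter-induced veil enters multiplicatively through the transmission map, dividing it out stretches the contrast that the scintillator scattering had compressed.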

  16. Application of random seismic inversion method based on tectonic model in thin sand body research

    NASA Astrophysics Data System (ADS)

    Dianju, W.; Jianghai, L.; Qingkai, F.

    2017-12-01

    Oil and gas exploitation in the Songliao Basin, Northeast China, has already progressed to a period of high water production. Previous detailed reservoir descriptions based on seismic images, sediment cores, and borehole logging have great limitations for small-scale structural interpretation and thin sand body characterization. Thus, precise guidance for petroleum exploration badly needs a more advanced method. To this end, we derived a random seismic inversion method constrained by a tectonic model. Combined with numerical simulation techniques, it can effectively improve the ability to depict thin sand bodies and credibly reduce the blindness of reservoir analysis from the whole to the local and from the macroscopic to the microscopic. At the same time, it can reduce the limitations of studies conducted under the constraints of different geological conditions of the reservoir and accomplish a reasonably exact estimation of the effective reservoir. Based on this research, this paper has optimized the regional effective-reservoir evaluation and the applicability of productive location adjustment, combined with practical exploration and development in the Aonan oil field.

  17. Recent advances in polarized 3He-based neutron spin filter development

    NASA Astrophysics Data System (ADS)

    Chen, Wangchun; Gentile, Thomas; Erwin, Ross; Watson, Shannon; Krycka, Kathryn; Ye, Qiang; NCNR NIST Team; University of Maryland Team

    2015-04-01

    Polarized 3He neutron spin filters (NSFs) are based on the strong spin dependence of the neutron absorption cross section of 3He. NSFs can effectively polarize large-area, widely divergent, and broadband neutron beams, and they allow a neutron polarizer and a spin flipper to be combined into a single polarizing device. The latter capability utilizes 3He spin inversion based on the adiabatic fast passage (AFP) nuclear magnetic resonance technique. Polarized 3He NSFs are significantly expanding the polarized neutron measurement capabilities at the NIST Center for Neutron Research (NCNR). Here we present an overview of 3He NSF applications to small-angle neutron scattering, thermal triple-axis spectrometry, and wide-angle polarization analysis. We discuss a recent upgrade of our spin-exchange optical pumping (SEOP) systems that utilizes chirped volume holographic gratings for spectral narrowing. The new capability allows us to polarize rubidium/potassium hybrid SEOP cells over a liter in volume within a day, with 3He polarizations up to 88%. Finally, we discuss how we achieve nearly lossless 3He polarization inversion with AFP.

  18. An FPGA-Based People Detection System

    NASA Astrophysics Data System (ADS)

    Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.

    2005-12-01

    This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
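
    The partial inverse DCT idea — inverting only the low-frequency corner of a JPEG block rather than the full transform — can be sketched in a few lines (SciPy stands in for the FPGA datapath; the 4x4 cutoff is illustrative):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
block = rng.uniform(0.0, 255.0, (8, 8))     # one JPEG-style 8x8 pixel block

coeffs = dctn(block, norm="ortho")          # forward 2-D DCT

# Partial inverse DCT: keep only the 4x4 low-frequency corner, as a
# detector front end might, then invert at reduced cost.
partial = np.zeros_like(coeffs)
partial[:4, :4] = coeffs[:4, :4]
approx = idctn(partial, norm="ortho")       # low-frequency approximation

full = idctn(coeffs, norm="ortho")          # full inverse recovers the block
```

    Discarding the high-frequency rows and columns shrinks the butterfly network the hardware must implement, which is why the partial inverse takes less chip area than the full IDCT.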

  19. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    NASA Astrophysics Data System (ADS)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and it is challenging in several respects. First, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Second, in practice only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting robustness from the former and ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
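
    The ensemble update at the heart of such schemes can be sketched for a one-parameter source-localization toy problem (the Gaussian sensor-response operator is hypothetical, standing in for the expensive turbulent scalar-dispersion model; a single stochastic ensemble-Kalman update is shown, not the full ensemble-variational scheme):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical observation operator standing in for the scalar-dispersion
# model: four sensors respond to a source at position s through Gaussians.
def H(s):
    sensors = np.array([0.0, 0.5, 1.5, 2.0])
    return np.exp(-(sensors - s) ** 2)

s_true, obs_err = 1.2, 0.01
y_obs = H(s_true) + obs_err * rng.standard_normal(4)

N = 500
ens = rng.uniform(0.0, 2.0, N)               # prior ensemble of source positions
Y = np.array([H(s) for s in ens])            # predicted observations, shape (N, 4)

s_mean, Y_mean = ens.mean(), Y.mean(axis=0)
C_sy = (ens - s_mean) @ (Y - Y_mean) / (N - 1)                  # cross-covariance
C_yy = (Y - Y_mean).T @ (Y - Y_mean) / (N - 1) + obs_err ** 2 * np.eye(4)
K = np.linalg.solve(C_yy, C_sy)                                 # Kalman gain

perturbed = y_obs + obs_err * rng.standard_normal((N, 4))       # perturbed obs
ens_post = ens + (perturbed - Y) @ K
s_hat = ens_post.mean()
```

    All covariances come from the ensemble itself, so no adjoint of the dispersion model is needed — the ease-of-implementation advantage the abstract attributes to the ensemble Kalman component.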

  20. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by the NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary. The high-frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. Experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual and objective evaluation.
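
    The local-energy fusion rule used for the high-frequency subbands can be sketched directly (random arrays stand in for NSST coefficients; the window size is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_local_energy(c1, c2, win=5):
    # Keep, at each pixel, the coefficient whose neighborhood carries
    # more energy (mean of squared coefficients over a win x win window).
    e1 = uniform_filter(c1 ** 2, size=win)
    e2 = uniform_filter(c2 ** 2, size=win)
    return np.where(e1 >= e2, c1, c2)

rng = np.random.default_rng(5)
hf_pan = rng.standard_normal((32, 32))          # stand-in PAN subband
hf_ms = 0.1 * rng.standard_normal((32, 32))     # stand-in intensity subband
fused = fuse_local_energy(hf_pan, hf_ms)
```

    Averaging the squared coefficients rather than comparing them pixel by pixel makes the selection robust to isolated noisy coefficients, which is why local energy is preferred for detail subbands.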

  1. Using level set based inversion of arrival times to recover shear wave speed in transient elastography and supersonic imaging

    NASA Astrophysics Data System (ADS)

    McLaughlin, Joyce; Renzi, Daniel

    2006-04-01

    Transient elastography and supersonic imaging are promising new techniques for characterizing the elasticity of soft tissues. In these methods, an 'ultrafast imaging' system (up to 10,000 frames s^-1) follows in real time the propagation of a low-frequency shear wave. The displacement of the propagating shear wave is measured as a function of time and space. Here we develop a fast level-set-based algorithm for finding the shear wave speed from the interior positions of the propagating front. We compare the performance of the level curve methods developed here and our previously developed distance methods (McLaughlin J and Renzi D 2006 Shear wave speed recovery in transient elastography and supersonic imaging using propagating fronts Inverse Problems 22 681-706). We give reconstruction examples from synthetic data and from data obtained from a phantom experiment performed by Mathias Fink's group (the Laboratoire Ondes et Acoustique, ESPCI, Université Paris VII).
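
    The relation that front-based methods exploit is the eikonal equation |∇T| = 1/c: arrival times of the propagating front determine the local wave speed. A minimal finite-difference sketch on a synthetic plane front (an illustration of the underlying relation, not the authors' level set algorithm; units are arbitrary):

```python
import numpy as np

# Synthetic arrival times T(x, y) for a plane shear-wave front travelling in x
# through a medium whose speed jumps from 1 to 3 (a stiff inclusion).
nx, ny = 200, 50
x = np.linspace(0.0, 2.0, nx)
dx = x[1] - x[0]
c_true = np.where(x < 1.0, 1.0, 3.0)
T1d = np.concatenate(([0.0], np.cumsum(np.diff(x) / c_true[1:])))  # T = integral of dx/c
T = np.tile(T1d, (ny, 1))

# Eikonal relation |grad T| = 1/c: front positions alone give the speed
Tx = np.gradient(T, dx, axis=1)
Ty = np.gradient(T, dx, axis=0)
c_rec = 1.0 / np.sqrt(Tx ** 2 + Ty ** 2)
```

    The recovered speed is exact away from the interface; the few smeared pixels at the jump are what the level set machinery of the paper handles more carefully.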

  2. Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.

    2016-12-01

    Effective modeling of complex physical systems arising in the geosciences depends on knowing parameters that are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we describe, proposed by collaborators D. Estep, T. Butler, and others, is a novel stochastic inversion technique grounded in measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space that give rise to these observables. When viewed as a mapping between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.
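
    The set-valued flavor of the inversion can be conveyed by a crude rejection sketch: push prior samples through the parameter-to-observable map and keep those whose outputs land in the observed set (this is only a caricature of the measure-theoretic algorithm; the quadratic map and the interval are hypothetical):

```python
import numpy as np

# Parameter-to-observable map (hypothetical quantity of interest)
def qoi(theta):
    return theta ** 2

rng = np.random.default_rng(6)
prior = rng.uniform(-2.0, 2.0, 100_000)       # samples of the parameter prior

# Observations place the quantity of interest in [0.81, 1.21]; keep the
# parameter samples whose outputs land in that set.
out = qoi(prior)
accepted = prior[(out >= 0.81) & (out <= 1.21)]
```

    Note that the accepted set has two disjoint components (near +1 and -1): the inverse image of an observable set is generally a set, not a point, which is exactly the structure the measure-theoretic formulation is built to handle.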

  3. Parameter Identification Of Multilayer Thermal Insulation By Inverse Problems

    NASA Astrophysics Data System (ADS)

    Nenarokomov, Aleksey V.; Alifanov, Oleg M.; Gonzalez, Vivaldo M.

    2012-07-01

    The purpose of this paper is to introduce an iterative regularization method for the study of the radiative and thermal properties of materials, with further applications to the design of Thermal Control Systems (TCS) of spacecraft. In this paper, the radiative and thermal properties (heat capacity, emissivity, and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation used as part of the TCS for prospective spacecraft, are estimated. The properties of the materials under study are determined by processing temperature and heat-flux measurement data based on the solution of an Inverse Heat Transfer Problem (IHTP). Physical and mathematical models of the heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the IHTP, based on a sensitivity-function approach, is also presented. Practical testing was performed on a specimen of real MLI. This paper builds on recent research that developed the approach suggested in [1].

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Leyffer, Sven; Wild, Stefan M.

    Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used to reveal the internal elemental composition of a sample. Conventional X-ray transmission tomography, on the other hand, can be used to reconstruct the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate the X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstructing the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that, compared with single-modality inversion (i.e., X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. The challenges in X-ray fluorescence tomography, arising mainly from the effects of self-absorption in the sample, are therefore partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.
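
    The claim that joint inversion is better posed than either single modality can be illustrated with linear algebra: two rank-deficient operators observing the same unknown can be full rank when stacked (random toy operators, not the actual fluorescence/transmission physics):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 30
x_true = rng.uniform(0.5, 1.5, n)            # elemental map to recover

# Two toy modalities observing the same unknown; each operator alone is
# underdetermined (20 rows < 30 unknowns), but stacked they are full rank.
A_fluo = rng.standard_normal((20, n))
A_trans = rng.standard_normal((20, n))
d_fluo = A_fluo @ x_true
d_trans = A_trans @ x_true

x_single, *_ = np.linalg.lstsq(A_fluo, d_fluo, rcond=None)
x_joint, *_ = np.linalg.lstsq(np.vstack([A_fluo, A_trans]),
                              np.concatenate([d_fluo, d_trans]), rcond=None)
```

    The single-modality solve returns only the minimum-norm solution of an underdetermined system, while the joint solve pins down the unknown exactly — the linear analogue of the better-posedness argument in the abstract.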

  5. Inverse design of near unity efficiency perfectly vertical grating couplers.

    PubMed

    Michaels, Andrew; Yablonovitch, Eli

    2018-02-19

    Efficient coupling between integrated optical waveguides and optical fibers is essential to the success of silicon photonics. While many solutions exist, perfectly vertical grating couplers that scatter light out of a waveguide in the direction normal to the waveguide's top surface are an ideal candidate due to their potential to reduce packaging complexity. Designing such couplers with high efficiencies, however, has proven difficult. In this paper, we use inverse electromagnetic design techniques to optimize a high efficiency two-layer perfectly vertical silicon grating coupler. Our base design achieves a chip-to-fiber coupling efficiency of 99.2% (-0.035 dB) at 1550 nm. Using this base design as a starting point, we run subsequent constrained optimizations to realize vertical couplers with coupling efficiencies over 96% and back reflections of less than -40 dB which can be fabricated using 65 nm-resolution lithography. These results demonstrate a new path forward for designing fabrication-tolerant ultra high efficiency grating couplers.

  6. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of conventional inverse methods. In this method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed parameter identification method to a standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
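
    The Kriging-metamodel step can be sketched by hand: fit a squared-exponential interpolant to objective-function evaluations from an experimental design, then optimize the cheap surrogate instead of the simulator (all functions and numbers below are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the expensive simulation: a misfit minimized at the (here
# known) true material parameter 2.5. All numbers are illustrative.
def objective(p):
    return (p - 2.5) ** 2

x_train = rng.uniform(0.0, 5.0, 12)            # experimental design
y_train = objective(x_train)

def rbf(a, b, ell=1.0):
    # Squared-exponential correlation used as the Kriging kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)   # jitter for stability
w = np.linalg.solve(K, y_train)

# Optimize the cheap surrogate on a dense grid instead of the simulator
grid = np.linspace(0.0, 5.0, 2001)
y_surrogate = rbf(grid, x_train) @ w
p_hat = grid[np.argmin(y_surrogate)]
```

    Each training point costs one full simulation, but every surrogate evaluation is a small matrix-vector product — the efficiency gain the article reports for the identification loop.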

  7. Non-destructive evaluation of laboratory scale hydraulic fracturing using acoustic emission

    NASA Astrophysics Data System (ADS)

    Hampton, Jesse Clay

    The primary objective of this research is to develop techniques to characterize hydraulic fractures and fracturing processes using acoustic emission (AE) monitoring, based on laboratory-scale hydraulic fracturing experiments. Individual microcrack AE source characterization is performed to understand the failure mechanisms associated with small failures along pre-existing discontinuities and grain boundaries. Individual microcrack analysis methods include moment tensor inversion techniques to elucidate the mode of failure, the crack-slip and crack-normal direction vectors, and the relative volumetric deformation of an individual microcrack. The distinction between individual microcrack analysis and AE-cloud-based techniques is studied in an effort to refine discrete fracture network (DFN) creation and regional damage quantification of densely fractured media. Regional damage estimates combining individual microcrack analyses and AE cloud density plotting are used to investigate the usefulness of weighting cloud-based AE analysis techniques with microcrack source data. Two granite types were used in several sample configurations, including multi-block systems. Laboratory hydraulic fracturing was performed on samples ranging in size from 15 x 15 x 25 cm³ to 30 x 30 x 25 cm³, in both unconfined and true-triaxially confined stress states, using different types of materials. Hydraulic fracture testing in rock block systems containing a large natural fracture was investigated in terms of the AE response throughout fracture interactions. Investigations at differing scales showed the usefulness of individual microcrack characterization as well as DFN and cloud-based techniques. Individual microcrack characterization weighting cloud-based techniques correlated well with post-test damage evaluations.

  8. A harmonic analysis approach to joint inversion of P-receiver functions and wave dispersion data in high dense seismic profiles

    NASA Astrophysics Data System (ADS)

    Molina-Aguilera, A.; Mancilla, F. D. L.; Julià, J.; Morales, J.

    2017-12-01

    Joint inversion techniques for P-receiver functions and wave dispersion data implicitly assume an isotropic, radially stratified earth. The conventional approach inverts stacked radial-component receiver functions from different back-azimuths to obtain a laterally homogeneous single-velocity model. However, in the presence of strong lateral heterogeneities such as anisotropic layers and/or dipping interfaces, receiver functions are considerably perturbed, and both the radial and transverse components exhibit back-azimuthal dependence. Harmonic analysis methods exploit these azimuthal periodicities to separate the effects of the isotropic flat-layered structure from those caused by lateral heterogeneities. We implement a harmonic analysis method based on the radial and transverse receiver function components and carry out a synthetic study to illuminate the capabilities of the method in isolating the isotropic flat-layered part of the receiver functions and constraining the geometry and strength of lateral heterogeneities. The back-azimuth-independent P receiver functions are jointly inverted with phase and group dispersion curves using a linearized inversion procedure. We apply this approach to dense seismic profiles (2 km inter-station distance, see figure) located in the central Betics (western Mediterranean region), a region which has experienced complex geodynamic processes and exhibits strong variations in Moho topography. The technique presented here is robust and can be applied systematically to construct a 3-D model of the crust and uppermost mantle across large networks.
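
    The harmonic decomposition itself is ordinary least squares against cosine/sine terms in back-azimuth; the constant term is the baz-independent part that enters the joint inversion. A minimal sketch on synthetic single-lag amplitudes (the coefficients and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(10)
baz = np.deg2rad(np.arange(0.0, 360.0, 15.0))   # back-azimuths of stacked RFs

# Synthetic single-lag receiver-function amplitude: an isotropic constant
# plus a degree-1 harmonic from dip/anisotropy (coefficients illustrative)
a0, a1, b1 = 0.30, 0.08, -0.05
r = (a0 + a1 * np.cos(baz) + b1 * np.sin(baz)
     + 0.005 * rng.standard_normal(baz.size))

# Harmonic regression separates the baz-independent (isotropic) term
G = np.column_stack([np.ones_like(baz),
                     np.cos(baz), np.sin(baz),
                     np.cos(2 * baz), np.sin(2 * baz)])
coef, *_ = np.linalg.lstsq(G, r, rcond=None)
a0_est = coef[0]            # this is the part passed to the joint inversion
```

    In practice this regression is done lag by lag on both radial and transverse components, and the degree-1 and degree-2 coefficients constrain the dip and anisotropy rather than being discarded.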

  9. Inversion of Airborne Electromagnetic Data: Application to Oil Sands Exploration

    NASA Astrophysics Data System (ADS)

    Cristall, J.; Farquharson, C. G.; Oldenburg, D. W.

    2004-05-01

    In general, three-dimensional inversion of airborne electromagnetic data for models of the conductivity variation in the Earth is currently impractical because of the large amount of computation time that it requires. At the other extreme, one-dimensional imaging techniques based on transforming the observed data as a function of measurement time or frequency at each location to values of conductivity as a function of depth are very fast. Such techniques can provide an image that, in many circumstances, is a fair, qualitative representation of the subsurface. However, this is not the same as a model that is known to reproduce the observations to a level considered appropriate for the noise in the data. This makes it hard to assess the quality and reliability of the images produced by the transform techniques until other information such as bore-hole logs is obtained. A compromise between these two interpretation strategies is to retain the approximation of a one-dimensional variation of conductivity beneath each observation location, but to invert the corresponding data as functions of time or frequency, taking advantage of all available aspects of inversion methodology. For example, using an automatic method such as the GCV or L-curve criteria for determining how well to fit a set of data when the actual amount of noise is not known, even when there are clear multi-dimensional effects in the data; using something other than a sum-of-squares measure for the misfit, for example the Huber M-measure, which affords a robust fit to data that contain non-Gaussian noise; and using an l1-norm or similar measure of model structure that enables piecewise constant, blocky models to be constructed. 
These features, as well as the basic concepts of minimum-structure inversion, result in a flexible and powerful interpretation procedure that, because of the one-dimensional approximation, is sufficiently rapid to be a viable alternative to the imaging techniques presently in use. We provide an example that involves the interpretation of an airborne time-domain electromagnetic data-set from an oil sands exploration project in Alberta. The target is the layer that potentially contains oil sands. This layer is relatively resistive, with its resistivity increasing with increasing hydrocarbon content, and is sandwiched between two more conductive layers. This is quite different from the classical electromagnetic geophysics scenario of looking for a conductive mineral deposit in resistive shield rocks. However, inverting the data enabled the depth, thickness and resistivity of the target layer to be well determined. As a consequence, it is concluded that airborne electromagnetic surveys, when combined with inversion procedures, can be a very cost-effective way of mapping even fairly subtle conductivity variations over large areas.
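The Huber M-measure mentioned above can be sketched in a few lines: it is quadratic for small residuals and linear for large ones, so a single non-Gaussian outlier does not dominate the misfit the way it does for a sum of squares. The residual values below are hypothetical:

```python
import numpy as np

def huber_misfit(residuals, c=1.0):
    """Huber M-measure: quadratic for |r| <= c, linear beyond c."""
    r = np.abs(residuals)
    return np.sum(np.where(r <= c, 0.5 * r**2, c * (r - 0.5 * c)))

# Hypothetical normalised residuals with one non-Gaussian outlier
res = np.array([0.1, -0.2, 0.15, 5.0])

print(huber_misfit(res))     # the outlier contributes only linearly ...
print(0.5 * np.sum(res**2))  # ... while the l2 misfit is dominated by it
```

In a minimum-structure inversion this measure simply replaces the sum-of-squares data misfit; the trade-off against the model-structure norm is otherwise unchanged.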

  10. FORGE Newberry 3D Gravity Density Model for Newberry Volcano

    DOE Data Explorer

    Alain Bonneville

    2016-03-11

    These data are Pacific Northwest National Lab inversions of an amalgamation of two surface gravity datasets: Davenport-Newberry gravity collected prior to 2012 stimulations and Zonge International gravity collected for the project "Novel use of 4D Monitoring Techniques to Improve Reservoir Longevity and Productivity in Enhanced Geothermal Systems" in 2012. Inversions of surface gravity recover a 3D distribution of density contrast from which intrusive igneous bodies are identified. Each record specifies a body name, body type, point type, UTM X and Y coordinates, Z in meters below sea level (negative values thus indicate elevations above sea level), thickness of the body in meters, susceptibility, density anomaly in g/cc, background density in g/cc, and density in g/cc. The model was created using a commercial gravity inversion software called ModelVision 12.0 (http://www.tensor-research.com.au/Geophysical-Products/ModelVision). The initial model is based on the seismic tomography interpretation (Beachly et al., 2012). All the gravity data used to constrain this model are on the GDR: https://gdr.openei.org/submissions/760.

  11. A Mathematical Relationship for Hydromorphone Loading into Liposomes with Trans-Membrane Ammonium Sulfate Gradients

    PubMed Central

    TU, SHENG; MCGINNIS, TAMARA; KRUGNER-HIGBY, LISA; HEATH, TIMOTHY D.

    2014-01-01

    We have studied the loading of the opioid hydromorphone into liposomes using ammonium sulfate gradients. Unlike other drugs loaded with this technique, hydromorphone is freely soluble as the sulfate salt and, consequently, does not precipitate in the liposomes after loading. We have derived a mathematical relationship that can predict the extent of loading based on the ammonium ion content of the liposomes and the amount of drug added for loading. We have adapted and used the Berthelot indophenol assay to measure the amount of ammonium ions in the liposomes. Plots of the inverse of the fraction of hydromorphone loaded versus the amount of hydromorphone added are linear, and the slope should be the inverse of the amount of ammonium ions present in the liposomes. The inverses of the slopes obtained closely correspond to the amounts of ammonium ions in the liposomes measured with the Berthelot indophenol assay. We also show that loading can be less than optimal under conditions where osmotically driven loss of ammonium ions or leakage of drug after loading may occur. PMID:20014429

  12. A mathematical relationship for hydromorphone loading into liposomes with trans-membrane ammonium sulfate gradients.

    PubMed

    Tu, Sheng; McGinnis, Tamara; Krugner-Higby, Lisa; Heath, Timothy D

    2010-06-01

    We have studied the loading of the opioid hydromorphone into liposomes using ammonium sulfate gradients. Unlike other drugs loaded with this technique, hydromorphone is freely soluble as the sulfate salt and, consequently, does not precipitate in the liposomes after loading. We have derived a mathematical relationship that can predict the extent of loading based on the ammonium ion content of the liposomes and the amount of drug added for loading. We have adapted and used the Berthelot indophenol assay to measure the amount of ammonium ions in the liposomes. Plots of the inverse of the fraction of hydromorphone loaded versus the amount of hydromorphone added are linear, and the slope should be the inverse of the amount of ammonium ions present in the liposomes. The inverses of the slopes obtained closely correspond to the amounts of ammonium ions in the liposomes measured with the Berthelot indophenol assay. We also show that loading can be less than optimal under conditions where osmotically driven loss of ammonium ions or leakage of drug after loading may occur. (c) 2009 Wiley-Liss, Inc. and the American Pharmacists Association
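The linear relationship described in the two records above can be sketched numerically. The exact functional form from the paper is not reproduced here; assuming an illustrative relation 1/f = 1 + D/N, where f is the fraction loaded, D the amount of drug added, and N the ammonium ion content, the inverse of the fitted slope recovers N:

```python
import numpy as np

# Hypothetical ammonium ion content of the liposome batch (micromoles)
N = 2.0

# Amounts of hydromorphone added (micromoles) and the fraction loaded,
# following the illustrative linear relation 1/f = 1 + D/N
D = np.array([0.5, 1.0, 2.0, 4.0])
f = 1.0 / (1.0 + D / N)

# Fit the inverse of the fraction loaded against the amount added
slope, intercept = np.polyfit(D, 1.0 / f, 1)

print(1.0 / slope)  # the inverse slope recovers the ammonium content
```

This mirrors the paper's procedure of comparing the inverse slope with the ammonium content measured independently by the Berthelot indophenol assay.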

  13. Recovery of time-dependent volatility in option pricing model

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Hon, Y. C.; Isakov, V.

    2016-11-01

    In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely the integral equation method and the Landweber iterations, are adopted to obtain a stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. CityU 101112) and grants from the NNSF of China (Nos. 11261029, 11461039), and NSF grants DMS 10-08902 and 15-14886 and by the Emylou Keith and Betty Dutcher Distinguished Professorship at Wichita State University (USA).
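The Landweber iteration mentioned above is gradient descent on the data misfit of a linear(ised) operator equation A m = d. A minimal sketch on a hypothetical dense matrix standing in for the linearised volatility-to-price map (the operator in the paper is integral-based):

```python
import numpy as np

# Hypothetical linear inverse problem A m = d
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
m_true = np.array([1.0, -0.5, 0.3, 0.8, -0.2])
d = A @ m_true

# Landweber iterations: m_{k+1} = m_k + w * A^T (d - A m_k),
# with step size w < 2 / ||A||^2 to guarantee convergence
w = 1.0 / np.linalg.norm(A, 2) ** 2
m = np.zeros(5)
for _ in range(2000):
    m = m + w * A.T @ (d - A @ m)

print(np.round(m, 3))  # converges toward m_true for noise-free data
```

For noisy data the iteration count acts as the regularisation parameter (early stopping), which is what makes the method stable for ill-posed problems like the one in the abstract.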

  14. Kolmogorov complexity, statistical regularization of inverse problems, and Birkhoff's formalization of beauty

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha

    1998-09-01

    Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative esthetic measure introduced by G. Birkhoff as order/complexity.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nurhandoko, Bagus Endar B.; Wely, Woen; Setiadi, Herlan

    It is already known that tomography has a great impact on analyzing and mapping unknown objects, based on travel-time as well as waveform inversion. Therefore, tomography has been used in a wide range of areas, not only in medicine but also in petroleum and mining. Recently, the tomography method has been applied in several mining industries. A case study of tomography imaging has been carried out in the DOZ (Deep Ore Zone) block caving mine, Tembagapura, Papua. Many researchers are working to investigate the properties of the DOZ cave, not only outside but also inside, which is unknown; tomography takes a part in achieving this objective. The sources are natural, coming from seismic events caused by mining-induced seismicity and rock deformation activity; the method is therefore called passive seismic. These microseismic travel-time data are processed by the Simultaneous Iterative Reconstruction Technique (SIRT). The result of the inversion can be used for DOZ cave monitoring. This information can be used for identifying weak zones inside the cave. In addition, these tomography results can be used to determine DOZ and cave information to support mine activity at PT. Freeport Indonesia.
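SIRT, as named above, back-projects normalised travel-time residuals along all rays simultaneously and averages the updates per cell. A minimal sketch on a hypothetical toy geometry (the function name `sirt` and the tiny 4-cell model are illustrative, not from the study):

```python
import numpy as np

def sirt(A, t, n_iter=200, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique for travel times t = A s,
    where A holds ray lengths per cell and s is the slowness model."""
    row_sum = A.sum(axis=1)  # total ray length per travel-time pick
    col_sum = A.sum(axis=0)  # total length of rays crossing each cell
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        resid = (t - A @ s) / row_sum            # normalised residuals
        s = s + relax * (A.T @ resid) / col_sum  # back-project, average per cell
    return s

# Hypothetical 4-cell model crossed by 3 straight rays (lengths per cell)
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
s_true = np.array([0.5, 0.4, 0.6, 0.3])
t = A @ s_true

print(np.round(sirt(A, t), 3))  # a slowness model that fits the travel times
```

With fewer rays than cells the system is underdetermined, so SIRT converges to a model that reproduces the travel times rather than to the unique true model, which is the usual situation with passive microseismic sources whose ray coverage cannot be chosen.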

  16. Supercritical phase inversion of starch-poly(epsilon-caprolactone) for tissue engineering applications.

    PubMed

    Duarte, Ana Rita C; Mano, João F; Reis, Rui L

    2010-02-01

    In this work, a starch-based polymer, namely a blend of starch-poly(epsilon-caprolactone) (SPCL), was processed by a supercritical assisted phase inversion process. This processing technique has been proposed for the development of 3D structures with potential applications in tissue engineering, as scaffolds. The use of carbon dioxide as the non-solvent in the phase inversion process leads to the formation of a porous and interconnected structure, dry and free of any residual solvent. Different processing conditions such as pressure (from 80 up to 150 bar) and temperature (45 and 55 degrees C) were studied, and their effect on the morphological features of the scaffolds was evaluated by scanning electron microscopy and micro-computed tomography. The mechanical properties of the SPCL scaffolds prepared were also studied. Additionally, the in vitro biological performance of the scaffolds was studied. Cell adhesion and morphology, viability and proliferation were assessed, and the results suggest that the materials prepared allow cell attachment and promote cell proliferation, thus having potential for biomedical applications.

  17. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
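One standard Maximum-Likelihood deconvolution for detector count spectra is the EM (Richardson-Lucy) iteration, sketched below on a hypothetical two-component response matrix; the actual Geant4-derived response of the paper is not reproduced:

```python
import numpy as np

# Hypothetical response matrix R (measured channel x source component) and
# a simple two-component source spectrum
R = np.array([[0.7, 0.2],
              [0.2, 0.5],
              [0.1, 0.3]])
x_true = np.array([100.0, 50.0])  # counts in the two source components
d = R @ x_true                    # noise-free measured spectrum

# EM / Richardson-Lucy iteration for the Poisson Maximum-Likelihood solution
x = np.ones_like(x_true)
sens = R.sum(axis=0)  # per-component detection sensitivity
for _ in range(2000):
    x = x * (R.T @ (d / (R @ x))) / sens

print(np.round(x, 1))  # converges toward x_true for noise-free data
```

The multiplicative update keeps the estimated counts non-negative by construction, which is why this family of deconvolutions is popular for low-count spectra where masking nuclides such as 40K or 226Ra contribute overlapping continua.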

  18. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigen-features are then combined and reconstructed for use in a composite filter, which is in turn utilized for automatic target detection of the same class of targets. The results associated with the testing of the current technique are evaluated using the peak-correlation and peak-to-correlation energy metrics and are presented in this work. The inverse-transformed eigen-bases of the current technique may be thought of as an injected sparsity that minimizes the data needed to represent the skeletal structural information associated with the set of targets under consideration.
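The covariance diagonalization and truncation described above can be sketched as follows; the data matrix here is random and purely illustrative, and the transform-domain filtering step of the paper is omitted:

```python
import numpy as np

# Hypothetical data matrix: rows are vectorised training samples
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 16))
Xc = X - X.mean(axis=0)

# Diagonalise the data covariance matrix to obtain an orthogonal basis
cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(evals)[::-1]      # sort descending by eigenvalue
evecs = evecs[:, order]

# Truncate: keep only the k strongest eigen-features, then reconstruct
k = 4
V = evecs[:, :k]
X_recon = Xc @ V @ V.T               # rank-k representation of the data

print(X_recon.shape)
```

The reconstruction has rank k by construction, so the truncation acts as the "injected sparsity" the abstract refers to: the retained eigen-features carry the dominant structure of the target set.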

  19. Reconstruction of Atmospheric Tracer Releases with Optimal Resolution Features: Concentration Data Assimilation

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali

    2015-04-01

    Fast-growing urbanization, industrialization and military development increase the risk to the human environment and ecology. This risk has been realized in several past incidents, for instance the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India), and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat of and exposure to hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air-quality analysis. Often, early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. It requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradients and the dissipative nature of the concentration field. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within a Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. 
This is based on an adjoint representation of the source-receptor relationship and the utilization of a weight function which exhibits a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide an optimal data resolution and model resolution for the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994), and the Fusion Field Trials (2007). In the case of point-release experiments, the retrieved source parameters are mostly close to the true source parameters, with small errors. Primarily, the proposed technique overcomes two major difficulties incurred in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge about the measurement and background errors required by Bayesian-inference-based techniques, which is hypothetically assumed when no prior knowledge is available.
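The adjoint-plus-weight-function retrieval described above can be sketched as a weighted minimum-norm solution consistent with the measurements. Everything below is hypothetical: the sensitivity matrix is random, and the weight choice (column norms of the sensitivity matrix) is only one common option, not necessarily the paper's:

```python
import numpy as np

# Hypothetical adjoint sensitivity matrix G (sensors x source grid cells):
# G[i, j] is the concentration at sensor i per unit release in cell j
rng = np.random.default_rng(2)
G = np.abs(rng.standard_normal((4, 20)))
s_true = np.zeros(20)
s_true[7] = 3.0      # a point release in cell 7
mu = G @ s_true      # measured concentrations

# Weight function: a priori visibility of each cell to the network
w = np.linalg.norm(G, axis=0)
W = np.diag(w)

# Weighted minimum-norm source estimate that reproduces the measurements
s_est = W @ G.T @ np.linalg.solve(G @ W @ G.T, mu)

print(np.allclose(G @ s_est, mu))  # the estimate fits the measurements
```

For a point release, the maximum of the retrieved estimate is then taken as the candidate source cell, mirroring the point-source identification step in the abstract; with only a few sensors the estimate is smeared over the visible cells, which is exactly the resolution limitation the weight function is designed to characterise.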

  20. Robust state preparation in quantum simulations of Dirac dynamics

    NASA Astrophysics Data System (ADS)

    Song, Xue-Ke; Deng, Fu-Guo; Lamata, Lucas; Muga, J. G.

    2017-02-01

    A nonrelativistic system such as an ultracold trapped ion may perform a quantum simulation of Dirac equation dynamics under specific conditions. The resulting Hamiltonian and dynamics are highly controllable, but the coupling between momentum and internal levels makes it difficult to manipulate the internal states of wave packets accurately. We use invariants of motion to inverse engineer robust population inversion processes with a homogeneous, time-dependent simulated electric field. This exemplifies the usefulness of inverse-engineering techniques in improving the performance of quantum simulation protocols.

Top