Sample records for inversion techniques applied

  1. Approximated Stable Inversion for Nonlinear Systems with Nonhyperbolic Internal Dynamics. Revised

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1999-01-01

    A technique to achieve output tracking for nonminimum phase nonlinear systems with non-hyperbolic internal dynamics is presented. The present paper integrates stable inversion techniques (that achieve exact tracking) with approximation techniques (that modify the internal dynamics) to circumvent the nonhyperbolicity of the internal dynamics; this nonhyperbolicity is an obstruction to applying presently available stable inversion techniques. The theory is developed for nonlinear systems and the method is applied to a two-cart with inverted-pendulum example.

  2. Real Variable Inversion of Laplace Transforms: An Application in Plasma Physics.

    ERIC Educational Resources Information Center

    Bohn, C. L.; Flynn, R. W.

    1978-01-01

    Discusses the nature of Laplace transform techniques and explains an alternative to them: Widder's real inversion. To illustrate the power of this new technique, it is applied to a difficult inversion: the problem of Landau damping. (GA)

  3. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion for near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics to illustrate the trade-off between exact tracking and reduction of pre-actuation time.

  4. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
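    The GA-based inversion described above searches for distribution parameters that minimize the misfit between modeled and measured angular scattering. A minimal, illustrative Python sketch of that idea follows; the toy forward model scatter() and all parameter values are hypothetical stand-ins for the actual Mie-scattering kernel and size-distribution parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(5, 175, 60)          # scattering angles (degrees)

def scatter(mean, sigma):
    """Toy forward model: angular scattering of a size distribution.
    Stands in for the real Mie kernel used in the paper."""
    return mean * np.exp(-((angles - 90.0) / (40.0 * sigma)) ** 2)

truth = (1.2, 0.8)
observed = scatter(*truth) + 0.01 * rng.standard_normal(angles.size)  # noisy data

def misfit(p):
    return np.sum((scatter(*p) - observed) ** 2)

# Minimal genetic algorithm: tournament selection, blend crossover, mutation, elitism
pop = rng.uniform([0.1, 0.1], [3.0, 2.0], size=(50, 2))      # initial population
for generation in range(100):
    fitness = np.array([misfit(p) for p in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))           # tournament pairs
    parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
    alpha = rng.random((len(pop), 1))                          # blend crossover
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += 0.02 * rng.standard_normal(children.shape)     # Gaussian mutation
    children[0] = pop[np.argmin(fitness)]                      # keep the best individual
    pop = children

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered (mean, sigma):", best, "truth:", truth)
```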

  5. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
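    The data-reduction step described above can be illustrated with a small sketch: a random Gaussian "sketching" matrix compresses the observation space of a linearized inverse problem before a least-squares solve. This is not the RGA/PCGA code itself; the operator, dimensions, and noise level below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par, n_sketch = 50_000, 100, 400    # many observations, few parameters

G = rng.standard_normal((n_obs, n_par))      # linearized forward operator (placeholder)
m_true = rng.standard_normal(n_par)
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)

# Gaussian sketching matrix: compresses the observation space from n_obs to n_sketch
S = rng.standard_normal((n_sketch, n_obs)) / np.sqrt(n_sketch)

# Solve the reduced least-squares problem (S G) m = S d instead of G m = d
m_sketch, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
m_full, *_ = np.linalg.lstsq(G, d, rcond=None)

print("sketched-solution error:", np.linalg.norm(m_sketch - m_true))
print("full-solution error:    ", np.linalg.norm(m_full - m_true))
```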

  6. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than the constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have as an explicit constraint the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  7. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    Satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
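    The training/operational split of FP-ILM can be conveyed with a toy example: synthetic "radiances" are generated by a stand-in forward model, a regularized regression operator is learned from them, and the operator is then applied to a new measurement. The forward model, the linear-regression "learning machine", and all numbers below are placeholders, not the actual FP-ILM components.

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.linspace(300.0, 340.0, 50)     # nm, illustrative UV window

def forward_model(plume_height_km):
    """Stand-in 'full-physics' forward model: maps a state (plume height) to a
    synthetic spectrum. A real application would call a radiative-transfer code."""
    depth = np.exp(-((wavelengths - 310.0) ** 2) / 20.0)
    return 1.0 - depth * np.exp(-plume_height_km / 10.0)

# Training phase: smart-sampled states -> synthetic radiances with instrument noise
heights = rng.uniform(1.0, 20.0, 2000)
spectra = np.array([forward_model(h) for h in heights])
spectra += 0.002 * rng.standard_normal(spectra.shape)

# Inversion operator: ridge-regularized linear map from spectrum to state
X = np.hstack([spectra, np.ones((len(heights), 1))])        # add bias column
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ heights)

# Operational phase: apply the trained operator to a new "measurement"
measurement = forward_model(7.5) + 0.002 * rng.standard_normal(wavelengths.size)
estimate = np.hstack([measurement, 1.0]) @ W
print("retrieved plume height (km):", round(float(estimate), 2))
```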

  8. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.

  9. A simple approach to the joint inversion of seismic body and surface waves applied to the southwest U.S.

    NASA Astrophysics Data System (ADS)

    West, Michael; Gao, Wei; Grand, Stephen

    2004-08-01

    Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.

  10. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, K. D.

    1985-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  11. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  12. Guidance of Nonlinear Nonminimum-Phase Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (b) to reduce the large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics to illustrate the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transients. Previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches); such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.

  13. Mean-Square Error Due to Gradiometer Field Measuring Devices

    DTIC Science & Technology

    1991-06-01

    Only fragments of the abstract are recoverable: the gradiometer data are convolved with the inverse transform of 1/T(α, β); this inverse transform may not be possible because the inverse does not exist, and because 1/T(α, β) is a high-pass function its use in an inverse transform technique is limited. (Cites Superconductor Applications: SQUIDs and Machines, B. B. Schwartz and S. Foner, Eds., New York: Plenum Press.)

  14. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
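    At its core, a linearized moment-tensor inversion of this kind is a least-squares fit of the six independent moment-tensor components to waveform data through precomputed Green's functions. A schematic numpy version is given below; the Green's-function matrix is filled with random numbers purely as a placeholder, and the grid-testing remark is a comment only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_mt = 600, 6          # waveform samples, independent moment-tensor elements

# G[:, k] = station waveform produced by a unit k-th moment-tensor component.
# In practice these columns come from Green's functions for an assumed source depth.
G = rng.standard_normal((n_samples, n_mt))

m_true = np.array([1.0, -0.6, -0.4, 0.3, 0.1, -0.2])      # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)     # observed waveform + noise

# Generalized (least-squares) inversion for the moment tensor
m_est, residual, rank, _ = np.linalg.lstsq(G, d, rcond=None)
print("estimated moment tensor:", np.round(m_est, 2))

# In a grid-testing scheme, the inversion is repeated over a grid of trial source
# depths (each with its own G) and the depth with the smallest residual is retained.
```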

  15. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.

  16. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse methods of heat source estimation into the parameter identification of fatigue dissipation energy models is a new approach to calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods of heat source and in regularization techniques for solving the inverse problem, as well as existing heat source solution methods for the fatigue process, discusses the prospects for applying heat source inverse methods in the fatigue damage field, and lays the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  17. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need of prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows derivation of an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.
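    The mechanics of the McMC sampling can be illustrated with a stripped-down Metropolis-Hastings sampler for a one-parameter refraction problem. The forward model, prior bounds, noise level, and proposal width below are placeholders; the published method additionally uses trans-dimensional Voronoi/triangulated parameterizations, hierarchical noise estimation, and an eikonal solver.

```python
import numpy as np

rng = np.random.default_rng(4)
offsets = np.linspace(100.0, 2000.0, 40)              # source-receiver offsets (m)

def traveltimes(v):
    """Toy forward model: direct-wave traveltimes through a half-space of velocity v."""
    return offsets / v

v_true = 2500.0
data = traveltimes(v_true) + 0.005 * rng.standard_normal(offsets.size)
sigma = 0.005                                           # assumed data noise (s)

def log_likelihood(v):
    r = data - traveltimes(v)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis-Hastings sampling with a uniform prior on [1500, 4000] m/s
samples, v = [], 2000.0
for _ in range(20000):
    v_prop = v + 50.0 * rng.standard_normal()           # random-walk proposal
    if 1500.0 <= v_prop <= 4000.0 and \
       np.log(rng.random()) < log_likelihood(v_prop) - log_likelihood(v):
        v = v_prop                                       # accept
    samples.append(v)

post = np.array(samples[5000:])                         # discard burn-in
print("posterior mean +/- std: %.0f +/- %.0f m/s" % (post.mean(), post.std()))
```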

  18. Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine

    2017-10-01

    In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the TESS and PLATO 2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements of stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core conditions indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us to further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.

  19. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data because there often exist more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
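    The clustering component can be illustrated with a plain fuzzy c-means update in a two-dimensional physical-property cross-plot. The paper's contribution is to replace the Euclidean distance used below with measures adapted to point, linear, quadratic, or exponential petrophysical relationships; this sketch shows only the standard Euclidean variant, with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic cross-plot samples: two petrophysical clusters in (density, velocity) space
cluster_a = rng.normal([2.3, 3.5], 0.05, (200, 2))
cluster_b = rng.normal([2.7, 5.0], 0.05, (200, 2))
X = np.vstack([cluster_a, cluster_b])

n_clusters, m = 2, 2.0                        # number of clusters, fuzziness exponent
centers = X[rng.choice(len(X), n_clusters, replace=False)]

for _ in range(100):
    # Squared Euclidean distances to each center (the part the paper generalizes)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    # Fuzzy membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    inv = 1.0 / d2 ** (1.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)
    # Center update weighted by memberships
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]

print("recovered cluster centers:\n", np.round(centers, 2))
```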

  20. Genotyping the factor VIII intron 22 inversion locus using fluorescent in situ hybridization.

    PubMed

    Sheen, Campbell R; McDonald, Margaret A; George, Peter M; Smith, Mark P; Morris, Christine M

    2011-02-15

    The factor VIII intron 22 inversion is the most common cause of hemophilia A, accounting for approximately 40% of all severe cases of the disease. Southern hybridization and multiplex long distance PCR are the most commonly used techniques to detect the inversion in a diagnostic setting, although both have significant limitations. Here we describe our experience establishing a multicolor fluorescent in situ hybridization (FISH) based assay as an alternative to existing methods for genetic diagnosis of the inversion. Our assay was designed to apply three differentially labelled BAC DNA probes that when hybridized to interphase nuclei would exhibit signal patterns that are consistent with the normal or the inversion locus. When the FISH assay was applied to five normal and five inversion male samples, the correct genotype was assignable with p<0.001 for all samples. When applied to carrier female samples the assay could not assign a genotype to all female samples, probably due to a lower proportion of informative nuclei in female samples caused by the added complexity of a second X chromosome. Despite this complication, these pilot findings show that the assay performs favourably compared to the commonly used methods. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.

  2. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
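    The abstract does not spell out the approximate transformable technique itself. As a hedged illustration of the general idea behind a Chebyshev-polynomial-based model learning an inverse plant map, the following toy fits an inverse of a static nonlinear plant from input/output data; it is not the CPBUM architecture, and all functions and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def plant(u):
    """Toy static nonlinear plant: output y as a function of the control input u."""
    return np.tanh(1.5 * u) + 0.1 * u

# Collect input/output pairs, then learn the inverse map y -> u
u_train = rng.uniform(-2.0, 2.0, 500)
y_train = plant(u_train)
y_scale = np.abs(y_train).max()                           # normalize outputs to [-1, 1]

degree = 8
Phi = np.polynomial.chebyshev.chebvander(y_train / y_scale, degree)
coeffs, *_ = np.linalg.lstsq(Phi, u_train, rcond=None)    # least-squares "training"

# Inverse control: to reach a desired output, feed it through the learned inverse model
y_desired = 0.7
phi = np.polynomial.chebyshev.chebvander(np.atleast_1d(y_desired / y_scale), degree)
u_cmd = float((phi @ coeffs)[0])
print("commanded input:", round(u_cmd, 3), "-> achieved output:", round(float(plant(u_cmd)), 3))
```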

  3. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent oil and gas exploration targets lie in deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, detailed subsurface structures must be delineated. Accordingly, the choice of migration method is an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images of complicated models even in the presence of significant velocity contrasts. The reliability of seismic migration images is dependent on the subsurface velocity models, which can be extracted in several ways. These days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of wave equations can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been used in waveform inversion, which accelerated the development of waveform inversion. In this study, we applied acoustic waveform inversion and reverse-time migration methods to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoir models. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, and the conjugate gradient method was used for optimization. The inversion was performed using the frequency-selection strategy. The results showed that the carbonate reservoir models are clearly recovered by waveform inversion and that migration images based on the inversion results are quite reliable. Reservoir models of different thicknesses were also examined, and the results revealed that the lower boundary of the reservoir was not delineated because of energy loss. From these results, it was noted that carbonate reservoirs can be properly imaged and interpreted by waveform inversion and reverse-time migration methods. This work was supported by the Energy Resources R&D program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2009201030001A, No. 2010T100200133) and the Brain Korea 21 project of Energy System Engineering.

  4. Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models

    NASA Astrophysics Data System (ADS)

    Arroabarren, Ixone; Carlosena, Alfonso

    2004-12-01

    The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.

  5. Noise suppression in surface microseismic data

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael

    2012-01-01

    We introduce a passive noise suppression technique, based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.
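    A bare-bones linear τ-p (slant-stack) pair illustrates the separation idea: forward slant-stack the gather, mute the slowness band containing the flat surface noise, and transform back. The synthetic gather, the slowness mask, and the use of the adjoint as an approximate inverse are all illustrative simplifications (np.roll wrap-around is ignored for brevity).

```python
import numpy as np

rng = np.random.default_rng(7)
dt, dx, nt, nx = 0.004, 10.0, 500, 48
x = np.arange(nx) * dx

# Synthetic gather: a dipping microseismic arrival (slowness 0.0008 s/m) plus flat noise
data = np.zeros((nt, nx))
for ix, xi in enumerate(x):
    data[int((0.4 + 0.0008 * xi) / dt), ix] += 1.0        # dipping event (signal)
    data[int(0.8 / dt), ix] += 1.0                        # flat (p = 0) surface noise
data += 0.05 * rng.standard_normal(data.shape)

p = np.linspace(-0.002, 0.002, 81)                        # trial slownesses (s/m)

def tau_p(d):
    """Forward slant stack: sum traces along t = tau + p*x."""
    out = np.zeros((nt, p.size))
    for ip, pi in enumerate(p):
        shifts = np.rint(pi * x / dt).astype(int)
        for ix in range(nx):
            out[:, ip] += np.roll(d[:, ix], -shifts[ix])
    return out

def inverse_tau_p(m):
    """Adjoint (approximate inverse) slant stack back to the t-x domain."""
    out = np.zeros((nt, nx))
    for ip, pi in enumerate(p):
        shifts = np.rint(pi * x / dt).astype(int)
        for ix in range(nx):
            out[:, ix] += np.roll(m[:, ip], shifts[ix])
    return out / p.size

m = tau_p(data)
mask = (np.abs(p) > 0.0004)[None, :]          # keep only dipping (non-surface-noise) slownesses
cleaned = inverse_tau_p(m * mask)

def snr(d):
    """Ratio of energy along the dipping event trajectory to energy on the flat-noise row."""
    ev = sum(d[int((0.4 + 0.0008 * xi) / dt), ix] ** 2 for ix, xi in enumerate(x))
    return ev / np.sum(d[int(0.8 / dt)] ** 2)

print("event/noise energy ratio before: %.1f  after: %.1f" % (snr(data), snr(cleaned)))
```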

  6. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
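    The key computational trick, solving analytically for the linearly entering parameters at each Monte Carlo sample of the nonlinear ones, can be shown on a toy interseismic problem in which slip enters linearly through a Green's function while locking depth enters nonlinearly. The forward model, priors, and the use of the conditional least-squares slip (a profiling simplification of the full analytic treatment) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
x_obs = np.linspace(-50.0, 50.0, 60)                  # station positions (km)

def greens(depth):
    """Toy Green's function: surface velocity per unit slip for a given locking depth."""
    return np.arctan(x_obs / depth) / np.pi

slip_true, depth_true = 30.0, 12.0                    # mm/yr, km
data = slip_true * greens(depth_true) + 0.5 * rng.standard_normal(x_obs.size)
sigma = 0.5

def profile(depth):
    """Conditional least-squares slip given depth, plus the resulting data misfit."""
    g = greens(depth)
    slip_hat = (g @ data) / (g @ g)                   # analytic solve for the linear parameter
    chi2 = np.sum((data - slip_hat * g) ** 2) / (2 * sigma ** 2)
    return chi2, slip_hat

# Random-walk Metropolis over the nonlinear parameter (locking depth) only
samples, depth = [], 5.0
chi_cur, slip_cur = profile(depth)
for _ in range(10000):
    depth_prop = abs(depth + 1.0 * rng.standard_normal())
    chi_prop, slip_prop = profile(depth_prop)
    if np.log(rng.random()) < chi_cur - chi_prop:
        depth, chi_cur, slip_cur = depth_prop, chi_prop, slip_prop
    samples.append((depth, slip_cur))

post = np.array(samples[2000:])
print("posterior mean depth (km), slip (mm/yr):", post.mean(axis=0).round(1))
```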

  7. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computation cost and the tendency to generate solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique can dramatically reduce the cost of FWI but is subject to a fixed-spread acquisition requirement and slow convergence owing to the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. However, an isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of the shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. A stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.

  8. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique suffers from meshing difficulties and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.

  9. Evaluation of Inversion Methods Applied to Ionospheric RO Observations

    NASA Astrophysics Data System (ADS)

    Rios Caceres, Arq. Estela Alejandra; Rios, Victor Hugo; Guyot, Elia

    The new technique of radio occultation can be used to study the Earth's ionosphere. The retrieval processes of ionospheric profiling from radio occultation observations usually assume spherical symmetry of the electron density distribution at the locality of occultation and use the Abel integral transform to invert the measured total electron content (TEC) values. This paper presents a set of ionospheric profiles obtained from the SAC-C satellite with the Abel inversion technique. The effects of the ionosphere on the GPS signal during occultation, such as bending and scintillation, are examined. Electron density profiles are obtained using the Abel inversion technique. Ionospheric radio occultations are validated using vertical profiles of electron concentration from inverted ionograms, obtained from ionosonde soundings in the vicinity of the occultation. Results indicate that the Abel transform works well in the mid-latitudes during the daytime, but is less accurate during the night-time.
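    A minimal "onion-peeling" discretization of the Abel inversion conveys the retrieval step: under the spherical-symmetry assumption, each slant TEC value is a triangular linear combination of shell densities, which can be solved from the top shell down. The geometry, the Chapman-type density profile, and the noise level below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(9)
Re = 6371.0
r = Re + np.linspace(100.0, 600.0, 51)               # shell boundaries (km), bottom to top
r_mid = 0.5 * (r[:-1] + r[1:])

# "True" electron density profile (Chapman-like layer peaking at 300 km), in el/cm^3
z = (r_mid - Re - 300.0) / 75.0
ne_true = 1e6 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Path-length matrix: ray with tangent radius r[i] crossing shell j >= i (spherical symmetry)
n = r_mid.size
L = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        L[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2) - np.sqrt(r[j] ** 2 - r[i] ** 2))

tec = L @ ne_true * 1e5                              # slant TEC (el/cm^2), km -> cm
tec += 0.01 * tec.mean() * rng.standard_normal(n)    # measurement noise

# Onion peeling: solve the upper-triangular system from the topmost shell downward
ne_est = np.linalg.solve(L * 1e5, tec)
print("peak density true/estimated (el/cm^3):", ne_true.max().round(0), ne_est.max().round(0))
```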

  10. Preview-Based Stable-Inversion for Output Tracking

    NASA Technical Reports Server (NTRS)

    Zou, Qing-Ze; Devasia, Santosh

    1999-01-01

    Stable inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems, the inverse is non-causal; hence the inverse has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview-time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.

  11. The analysis of a rocket tomography measurement of the N2+3914A emission and N2 ionization rates in an auroral arc

    NASA Technical Reports Server (NTRS)

    Mcdade, Ian C.

    1991-01-01

    Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+3914A rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914A volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three preprints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
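    The ART core, a Kaczmarz-style row-by-row relaxation toward each measured line integral, can be written in a few lines. The emission structure, the scan geometry (simple row and column sums rather than rocket spin-scan rays), and the relaxation factor below are placeholders, not the flight analysis.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 32                                    # image is n x n volume-emission-rate cells
img = np.zeros((n, n))
img[10:22, 14:18] = 1.0                   # toy "auroral arc" emission structure

# Projection matrix: each row is one line-integral measurement (here: rows and columns)
rays = []
for k in range(n):
    row = np.zeros((n, n)); row[k, :] = 1.0; rays.append(row.ravel())    # horizontal scans
    col = np.zeros((n, n)); col[:, k] = 1.0; rays.append(col.ravel())    # vertical scans
A = np.array(rays)
b = A @ img.ravel() + 0.05 * rng.standard_normal(A.shape[0])             # noisy brightness data

# ART / Kaczmarz iterations with a relaxation factor < 1 to damp noise amplification
x = np.zeros(n * n)
relax = 0.2
for sweep in range(50):
    for i in range(A.shape[0]):
        ai = A[i]
        x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    x = np.clip(x, 0.0, None)             # emission rates are non-negative
print("reconstruction rms error:", np.sqrt(np.mean((x - img.ravel()) ** 2)).round(3))
```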

  12. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    Probabilistic inversion techniques are superior to the classical optimization-based approach in all but one aspect: they require quite exhaustive computations, which prohibits their use in huge inverse problems like global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make large inverse tasks such as those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploring the internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  13. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  14. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    Treesearch

    Shuguang Liu; Pamela Anderson; Guoyi Zhou; Boone Kauffman; Flint Hughes; David Schimel; Vicente Watson; Joseph Tosi

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in...

  15. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations for seismic sources and structures in the Northeast Asia. We use Bayesian inversion methods, which enable statistical estimations of models and their uncertainties based on data information. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence reliable estimation of model parameters and their uncertainties is possible, thus avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. By the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.

  16. Getting in shape: Reconstructing three-dimensional long-track speed skating kinematics by comparing several body pose reconstruction techniques.

    PubMed

    van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J

    2018-03-01

    In gait studies, body pose reconstruction (BPR) techniques have been widely explored, but no previous protocols have been developed for speed skating, and the peculiarities of the skating posture and technique do not automatically allow for the transfer of the results of those explorations to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight body segment model together with a global optimization method with a revolute joint in the knee and in the lumbosacral joint, while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-square error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamic method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least square error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Inverse analysis of aerodynamic loads from strain information using structural models and neural networks

    NASA Astrophysics Data System (ADS)

    Wada, Daichi; Sugimoto, Yohei

    2017-04-01

    Aerodynamic loads on aircraft wings are one of the key parameters to be monitored for reliable and effective aircraft operations and management. Flight data on the aerodynamic loads would be used onboard to control the aircraft, and accumulated data would be used for condition-based maintenance and as feedback for fatigue and critical load modeling. Effective sensing techniques such as fiber optic distributed sensing have been developed and have demonstrated promising capability for monitoring structural responses, i.e., strains on the surface of aircraft wings. By using the developed techniques, load identification methods for structural health monitoring are expected to be established. The typical inverse analysis for load identification using strains calculates the loads in a discrete form of concentrated forces; however, the distributed form of the loads is essential for accurate and reliable estimation of the critical stress at structural parts. In this study, we demonstrate an inverse analysis to identify the distributed loads from measured strain information. The introduced inverse analysis technique calculates aerodynamic loads not in a discrete but in a distributed manner based on a finite element model. In order to verify the technique through numerical simulations, we apply static aerodynamic loads to a flat panel model and conduct the inverse identification of the load distributions. We take two approaches to build the inverse system between loads and strains. The first one uses structural models and the second one uses neural networks. We compare the performance of the two approaches and discuss the effect of the amount of strain sensing information.
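    In the structural-model approach, distributed-load identification reduces to solving strain = (influence matrix) x (nodal loads) in a regularized least-squares sense. A small synthetic version follows; the influence matrix is generated randomly instead of being assembled from a finite element model, and the smoothness penalty is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(11)
n_strain, n_nodes = 40, 25                # strain sensors, load discretization points

# Influence matrix: strain response at each sensor to a unit load at each node.
# In practice this would come from the finite element model of the wing.
A = rng.standard_normal((n_strain, n_nodes))

# Smooth "distributed" aerodynamic load along the span
s = np.linspace(0.0, 1.0, n_nodes)
load_true = np.sin(np.pi * s) * (1.0 - 0.3 * s)

strain = A @ load_true + 0.02 * rng.standard_normal(n_strain)

# Tikhonov-regularized least squares: penalize rough (second-difference) load shapes
D = np.diff(np.eye(n_nodes), n=2, axis=0)           # second-difference operator
lam = 1e-1
lhs = A.T @ A + lam * D.T @ D
load_est = np.linalg.solve(lhs, A.T @ strain)

print("max relative error:", np.max(np.abs(load_est - load_true)) / load_true.max())
```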

  18. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.

  19. Improving Estimates Of Phase Parameters When Amplitude Fluctuates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1989-01-01

    Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.

  20. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  1. A Novel Instructional Approach to the Design of Standard Controllers: Using Inversion Formulae

    ERIC Educational Resources Information Center

    Ntogramatzidis, Lorenzo; Zanasi, Roberto; Cuoghi, Stefania

    2014-01-01

    This paper describes a range of design techniques for standard compensators (Lead-Lag networks and PID controllers) that have been applied to the teaching of many undergraduate control courses throughout Italy over the last twenty years, but that have received little attention elsewhere. These techniques hinge upon a set of simple formulas--herein…

  2. Pioneer 10 and 11 radio occultations by Jupiter. [atmospheric temperature structure

    NASA Technical Reports Server (NTRS)

    Kliore, A. J.; Woiceshyn, P. M.; Hubbard, W. B.

    1977-01-01

    Results on the temperature structure of the Jovian atmosphere are reviewed which were obtained by applying an integral inversion technique combined with a model for the planet's shape based on gravity data to Pioneer 10 and 11 radio-occultation data. The technique applied to obtain temperature profiles from the Pioneer data consisted of defining a center of refraction based on a computation of the radius of curvature in the plane of refraction and the normal direction to the equipotential surface at the closest approach point of a ray. Observations performed during the Pioneer 10 entry and exit and the Pioneer 11 exit are analyzed, sources of uncertainty are identified, and representative pressure-temperature profiles are presented which clearly show a temperature inversion between 10 and 100 mb. Effects of zonal winds on the reliability of radio-occultation temperature profiles are briefly discussed.

  3. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points. Applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not derived directly from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition, which occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is derived directly from the objective function, while retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses a parameterization in terms of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2-D seismic line acquired at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.

  5. Measuring Two Decades of Ice Mass Loss using GRACE and SLR

    NASA Astrophysics Data System (ADS)

    Bonin, J. A.; Chambers, D. P.

    2016-12-01

    We use Satellite Laser Ranging (SLR) to extend the time series of ice mass change back to 1994. The SLR series has far lower spatial resolution than GRACE, so we apply a constrained inversion technique to better localize the signal. We approximate the likely errors arising from SLR's measurement errors combined with the inversion errors introduced by using a low-resolution series, and then estimate the interannual mass change over Greenland and Antarctica.

  6. Multi-frequency subspace migration for imaging of perfectly conducting, arc-like cracks in full- and limited-view inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2015-02-01

    Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. It is confirmed that this technique is very fast, effective, robust, and can be applied not only to full- but also to limited-view inverse problems if a suitable number of incident fields and corresponding scattered fields are applied and collected. However, in many works, the application of such techniques is heuristic. Motivated by this heuristic application, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrary-shaped, arc-like perfectly conducting cracks located in a two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of the expression of the imaging functionals gives us certain properties of subspace migration and explains why multiple frequencies enhance imaging resolution. In particular, we carefully analyze the subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode. Various results of numerical simulations performed on far-field data affected by large amounts of random noise are similar to the analytical results derived in this study, and they provide a direction for future studies.

  7. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
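    As a rough illustration of the two ideas named above, the sketch below builds a small Krylov basis for the Gauss-Newton normal equations once and then reuses (recycles) it while scanning several damping parameters. It is a hypothetical stand-in, not the MADS/Julia implementation, and the Jacobian here is random.

```python
# Hedged sketch of Krylov projection plus subspace recycling for
# Levenberg-Marquardt damping-parameter scans. Illustrative only.
import numpy as np

def krylov_basis(J, r, k):
    """Orthonormal basis of span{g, Hg, ..., H^(k-1) g} with g = J^T r, H = J^T J."""
    n = J.shape[1]
    V = np.zeros((n, k))
    v = J.T @ r
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, k):
        w = J.T @ (J @ V[:, j - 1])
        w -= V[:, :j] @ (V[:, :j].T @ w)      # Gram-Schmidt orthogonalization
        V[:, j] = w / np.linalg.norm(w)
    return V

def lm_step_recycled(J, r, V, lam):
    """Damped step restricted to the recycled subspace: p = V y with
    (V^T J^T J V + lam I) y = -V^T J^T r."""
    JV = J @ V
    A = JV.T @ JV + lam * np.eye(V.shape[1])
    b = -V.T @ (J.T @ r)
    return V @ np.linalg.solve(A, b)

# One subspace, several damping parameters (the recycling idea)
rng = np.random.default_rng(1)
J = rng.normal(size=(200, 1000))
r = rng.normal(size=200)
V = krylov_basis(J, r, k=20)
steps = {lam: lm_step_recycled(J, r, V, lam) for lam in (1e-2, 1e-1, 1.0)}
```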

  8. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  9. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. In the fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique and the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied in the inversion of noise-contaminated synthetic data to demonstrate its applicability to the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times that of a serial program using a 2.0 GHz Central Processing Unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for the fast inversion of 3D FTG data.
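    For orientation, the sketch below shows a conjugate gradient solver preconditioned with SSOR for a symmetric positive-definite system, the serial building block the abstract combines with incomplete Cholesky and a GPU implementation. The test matrix and relaxation factor are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: SSOR-preconditioned conjugate gradients for A x = b (A SPD).
import numpy as np
from scipy.linalg import solve_triangular

def ssor_apply(A, r, omega=1.5):
    """Apply M^{-1} r for the SSOR preconditioner
    M = (D + wL) D^{-1} (D + wL)^T / (w (2 - w))."""
    d = np.diag(A)
    M1 = np.diag(d) + omega * np.tril(A, -1)          # D + wL (lower triangular)
    z = solve_triangular(M1, r, lower=True)
    z = d * z
    z = solve_triangular(M1.T, z, lower=False)
    return omega * (2.0 - omega) * z

def pcg_ssor(A, b, omega=1.5, tol=1e-8, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = ssor_apply(A, r, omega)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = ssor_apply(A, r, omega)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD test system
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
A = B @ B.T + 50.0 * np.eye(50)
x = pcg_ssor(A, rng.normal(size=50))
```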

  10. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for the initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.

  11. CSAMT Data Processing with Source Effect and Static Corrections, Application of Occam's Inversion, and Its Application in Geothermal System

    NASA Astrophysics Data System (ADS)

    Hamdi, H.; Qausar, A. M.; Srigutomo, W.

    2016-08-01

    Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique which uses a fixed grounded dipole as an artificial signal source. Because the distance between the transmitter and receiver is finite, the measured CSAMT fields exhibit complex, non-plane-wave behavior. In addition, shifts of the electric field caused by the static effect move the apparent-resistivity curves up or down and affect the measurement results. The objective of this study was to obtain data corrected for source and static effects so as to have the same characteristics as MT data, which are assumed to exhibit plane-wave properties. The corrected CSAMT data were inverted to reveal a subsurface resistivity model. A source-effect correction method was applied to eliminate the effect of the signal source, and the static effect was corrected by using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with small misfit values, which means the models can describe subsurface conditions well. Based on the inversion result, the surveyed area is interpreted to consist of rock with high permeability values that is rich in hot fluid.

  12. A robust spatial filtering technique for multisource localization and geoacoustic inversion.

    PubMed

    Stotts, S A

    2005-07-01

    Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
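    A toy illustration of the element-to-beam-to-element filtering idea is sketched below; the uniform line array, plane-wave steering matrix, and pseudo-inverse back-transform are illustrative assumptions, not the paper's stabilized filter for nonuniformly spaced elements.

```python
# Hedged sketch: transform element-level data to beams, apply a bearing filter,
# and transform back to element-level data via a (rank-limited) pseudo-inverse.
import numpy as np

def bearing_filter(x, positions, wavelength, keep_deg):
    """x: complex element-level snapshot; keep only beams whose look angle
    (degrees) lies inside keep_deg = (lo, hi)."""
    angles = np.linspace(-90.0, 90.0, 181)
    k = 2.0 * np.pi / wavelength
    # Steering matrix: columns are plane-wave replicas for each look angle.
    A = np.exp(1j * k * np.outer(positions, np.sin(np.radians(angles))))
    beams = A.conj().T @ x                       # element -> beam domain
    mask = (angles >= keep_deg[0]) & (angles <= keep_deg[1])
    beams[~mask] = 0.0                           # bearing filter
    return np.linalg.pinv(A.conj().T) @ beams    # beam -> element domain

# Illustrative 16-element half-wavelength line array, source at 20 degrees
pos = 0.5 * np.arange(16)
x = np.exp(1j * 2.0 * np.pi * pos * np.sin(np.radians(20.0)))
x_filtered = bearing_filter(x, pos, wavelength=1.0, keep_deg=(10.0, 30.0))
```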

  13. Rapid inverse planning for pressure-driven drug infusions in the brain.

    PubMed

    Rosenbluth, Kathryn H; Martin, Alastair J; Mittermeyer, Stephan; Eschermann, Jan; Dickinson, Peter J; Bankiewicz, Krystof S

    2013-01-01

    Infusing drugs directly into the brain is advantageous over oral or intravenous delivery for large molecules or drugs requiring high local concentrations with low off-target exposure. However, surgeons manually planning the cannula position for drug delivery in the brain face a challenging three-dimensional visualization task. This study presents an intuitive inverse-planning technique to identify the optimal placement that maximizes coverage of the target structure while minimizing the potential for leakage outside the target. The technique was retrospectively validated using intraoperative magnetic resonance imaging of infusions into the striatum of non-human primates and into a tumor in a canine model, and applied prospectively to upcoming human clinical trials.

  14. Inversion of Zeeman polarization for solar magnetic field diagnostics

    NASA Astrophysics Data System (ADS)

    Derouich, M.

    2017-05-01

    The topic of magnetic field diagnostics with the Zeeman effect is currently the subject of lively discussion. Several testable inversion codes are available to the spectropolarimetry community, and their application has allowed a better understanding of the magnetism of the solar atmosphere. In this context, we propose an inversion technique associated with a new numerical code. The inversion procedure is promising and particularly successful for interpreting the Stokes profiles in a quick and sufficiently precise way. In our inversion, we fit a part of each Stokes profile around a target wavelength, and then determine the magnetic field as a function of wavelength, which is equivalent to obtaining the magnetic field as a function of the height of line formation. To test the performance of the new numerical code, we employed a "hare and hound" approach, comparing an exact solution (called the input) with the solution obtained by the code (called the output). The precision of the code is also checked by comparing our results to those obtained with the HAO MERLIN code. The inversion code has been applied to synthetic Stokes profiles of the Na D1 line available in the literature. We investigated the limitations in recovering the input field in the case of noisy data. As an application, we applied our inversion code to the polarization profiles of the Fe I λ6302.5 Å line observed at IRSOL in Locarno.

  15. The application of inverse Broyden's algorithm for modeling of crack growth in iron crystals.

    PubMed

    Telichev, Igor; Vinogradov, Oleg

    2011-07-01

    In the present paper we demonstrate the use of inverse Broyden's algorithm (IBA) in the simulation of fracture in single iron crystals. The iron crystal structure is treated as a truss system, while the forces between the atoms situated at the nodes are defined by modified Morse inter-atomic potentials. The evolution of lattice structure is interpreted as a sequence of equilibrium states corresponding to the history of applied load/deformation, where each equilibrium state is found using an iterative procedure based on IBA. The results presented demonstrate the success of applying the IBA technique for modeling the mechanisms of elastic, plastic and fracture behavior of single iron crystals.
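    The core update the abstract relies on is Broyden's rank-one correction of an approximate inverse Jacobian. Below is a minimal, hypothetical sketch on a generic two-equation residual (the truss assembly and Morse potentials are replaced by a stand-in), started from the inverse of an initial analytic Jacobian in the spirit of an initial tangent stiffness.

```python
# Hedged sketch of the inverse ("good") Broyden update for solving R(x) = 0
# without refactorizing the Jacobian at every step. Illustrative residual only.
import numpy as np

def broyden_inverse_solve(residual, x0, H0, tol=1e-10, maxit=100):
    """Track R(x) = 0 while updating an approximate inverse Jacobian H."""
    x = np.array(x0, dtype=float)
    H = np.array(H0, dtype=float)
    f = residual(x)
    for _ in range(maxit):
        if np.linalg.norm(f) < tol:
            break
        dx = -H @ f
        x_new = x + dx
        f_new = residual(x_new)
        df = f_new - f
        Hdf = H @ df
        # Sherman-Morrison rank-one update of the inverse Jacobian
        H += np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
        x, f = x_new, f_new
    return x

# Stand-in residual with root near (1, 1); H0 from the analytic Jacobian at x0
res = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]**2])
x0 = np.array([1.2, 0.9])
J0 = np.array([[2.0 * x0[0], 1.0], [1.0, -2.0 * x0[1]]])
sol = broyden_inverse_solve(res, x0, H0=np.linalg.inv(J0))
```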

  16. Invariant-Based Inverse Engineering of Crane Control Parameters

    NASA Astrophysics Data System (ADS)

    González-Resines, S.; Guéry-Odelin, D.; Tobalina, A.; Lizuain, I.; Torrontegui, E.; Muga, J. G.

    2017-11-01

    By applying invariant-based inverse engineering in the small-oscillation regime, we design the time dependence of the control parameters of an overhead crane (trolley displacement and rope length) to transport a load between two positions at different heights with minimal final-energy excitation for a microcanonical ensemble of initial conditions. The analogy between ion transport in multisegmented traps or neutral-atom transport in moving optical lattices and load manipulation by cranes opens a route for a useful transfer of techniques among very different fields.

  17. Control of a high beta maneuvering reentry vehicle using dynamic inversion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Alfred Chapman

    2005-05-01

    The design of flight control systems for high performance maneuvering reentry vehicles presents a significant challenge to the control systems designer. These vehicles typically have a much higher ballistic coefficient than crewed vehicles such as the Space Shuttle or proposed crew return vehicles such as the X-38. Moreover, the missions of high performance vehicles usually require a steeper reentry flight path angle, followed by a pull-out into level flight. These vehicles then must transit the entire atmosphere and robustly perform the maneuvers required for the mission. The vehicles must also be flown with small static margins in order to perform the required maneuvers, which can result in highly nonlinear aerodynamic characteristics that frequently transition from being aerodynamically stable to unstable as angle of attack increases. The control system design technique of dynamic inversion has been applied successfully to both high performance aircraft and low beta reentry vehicles. The objective of this study was to explore the application of this technique to high performance maneuvering reentry vehicles, including the basic derivation of the dynamic inversion technique, followed by the extension of that technique to the use of tabular trim aerodynamic models in the controller. The dynamic inversion equations are developed for high performance vehicles and augmented to allow the selection of a desired response for the control system. A six degree of freedom simulation is used to evaluate the performance of the dynamic inversion approach, and results for both nominal and off nominal aerodynamic characteristics are presented.
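    The basic dynamic-inversion idea referenced above can be sketched for an input-affine model xdot = f(x) + g(x) u: the known dynamics are cancelled and replaced by a selected desired response. The model, gains, and full control authority below are illustrative assumptions, not the report's vehicle or tabular trim aerodynamics.

```python
# Hedged sketch of a dynamic-inversion control law u = g(x)^{-1} (v - f(x)),
# with the desired response chosen as v = K (x_cmd - x). Illustrative model only.
import numpy as np

def dynamic_inversion_control(x, x_cmd, f, g, K):
    """Cancel f(x) and impose the desired first-order response toward x_cmd."""
    v = K @ (x_cmd - x)
    return np.linalg.solve(g(x), v - f(x))

# Toy two-state model with full control authority (hypothetical numbers)
f = lambda x: np.array([-0.5 * x[0] + x[1], 0.2 * x[0] - 0.8 * x[1]])
g = lambda x: np.array([[0.0, 1.0], [1.0, 0.1]])
K = np.diag([2.0, 2.0])
u = dynamic_inversion_control(np.array([0.1, 0.0]), np.array([0.3, 0.0]), f, g, K)
```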

  18. Detection of DNA double-strand breaks and chromosome translocations using ligation-mediated PCR and inverse PCR.

    PubMed

    Singh, Sheetal; Shih, Shyh-Jen; Vaughan, Andrew T M

    2014-01-01

    Current techniques for examining the global creation and repair of DNA double-strand breaks are restricted in their sensitivity, and such techniques mask any site-dependent variations in breakage and repair rate or fidelity. We present here a system for analyzing the fate of documented DNA breaks, using the MLL gene as an example, through application of ligation-mediated PCR. Here, a simple asymmetric double-stranded DNA adapter molecule is ligated to experimentally induced DNA breaks and subjected to seminested PCR using adapter- and gene-specific primers. The rate of appearance and loss of specific PCR products allows detection of both the break and its repair. Using the additional technique of inverse PCR, the presence of misrepaired products (translocations) can be detected at the same site, providing information on the fidelity of the ligation reaction in intact cells. Such techniques may be adapted for the analysis of DNA breaks and rearrangements introduced into any identifiable genomic location. We have also applied parallel sequencing for the high-throughput analysis of inverse PCR products to facilitate the unbiased recording of all rearrangements located at a specific genomic location.

  19. Active and passive electrical and seismic time-lapse monitoring of earthen embankments

    NASA Astrophysics Data System (ADS)

    Rittgers, Justin Bradley

    In this dissertation, I present research involving the application of active and passive geophysical data collection, data assimilation, and inverse modeling for the purpose of earthen embankment infrastructure assessment. Throughout the dissertation, I identify several data characteristics, and several challenges intrinsic to characterization and imaging of earthen embankments and anomalous seepage phenomena, from both a static and time-lapse geophysical monitoring perspective. I begin with the presentation of a field study conducted on a seeping earthen dam, involving static and independent inversions of active tomography data sets, and self-potential modeling of fluid flow within a confined aquifer. Additionally, I present results of active and passive time-lapse geophysical monitoring conducted during two meso-scale laboratory experiments involving the failure and self-healing of embankment filter materials via induced vertical cracking. Identified data signatures and trends, as well as 4D inversion results, are discussed as an underlying motivation for conducting subsequent research. Next, I present a new 4D acoustic emissions source localization algorithm that is applied to passive seismic monitoring data collected during a full-scale embankment failure test. Acoustic emissions localization results are then used to help spatially constrain 4D inversion of collocated self-potential monitoring data. I then turn to time-lapse joint inversion of active tomographic data sets applied to the characterization and monitoring of earthen embankments. Here, I develop a new technique for applying spatiotemporally varying structural joint inversion constraints. The new technique, referred to as Automatic Joint Constraints (AJC), is first demonstrated on a synthetic 2D joint model space, and is then applied to real geophysical monitoring data sets collected during a full-scale earthen embankment piping-failure test. Finally, I discuss some non-technical issues related to earthen embankment failures from a Science, Technology, Engineering, and Policy (STEP) perspective. Here, I discuss how the proclaimed scientific expertise and shifting of responsibility (Responsibilization) by governing entities tasked with operating and maintaining water storage and conveyance infrastructure throughout the United States tends to create barriers for 1) public voice and participation in relevant technical activities and outcomes, 2) meaningful discussions with the public and media during crisis communication, and 3) public perception of risk and the associated resilience of downhill communities.

  20. Geophysical assessments of renewable gas energy compressed in geologic pore storage reservoirs.

    PubMed

    Al Hagrey, Said Attia; Köhn, Daniel; Rabbel, Wolfgang

    2014-01-01

    Renewable energy resources can indisputably minimize the threat of global warming and climate change. However, they are intermittent and need buffer storage to bridge the time gap between production (off-peak) and demand peaks. For geologic and geochemical reasons, the North German Basin has a very large capacity for compressed air/gas energy storage (CAES) in porous saltwater aquifers and salt cavities. Replacing pore-reservoir brine with CAES causes changes in physical properties (elastic moduli, density and electrical properties) and justifies the application of integrative geophysical methods for monitoring this energy storage. Here we apply techniques of elastic full waveform inversion (FWI), electrical resistivity tomography (ERT) and gravity to map and quantify a gradually saturated gas plume injected into a thin deep saline aquifer within the North German Basin. For this subsurface model scenario we generated different synthetic data sets, with and without added random noise, in order to test the robustness of the applied techniques for real field applications. Data sets are inverted by posing different constraints on the initial model. The results principally reveal the capability of the applied integrative geophysical approach to resolve the CAES targets (plume, host reservoir, and cap rock). Constrained inversion models from elastic FWI and ERT are even able to recover well the gradual gas desaturation with depth. The spatial parameters accurately recovered from each technique are applied in the appropriate petrophysical equations to yield precise quantifications of gas saturations. The resulting models of gas saturations independently determined from the elastic FWI and ERT techniques are in accordance with each other and with the input (true) saturation model. Moreover, the gravity technique shows high sensitivity to the mass deficit resulting from the gas storage and can resolve saturations and temporal saturation changes down to ±3% after reducing any shallow fluctuation such as that of the groundwater table.

  1. Research applied to transonic compressors in numerical fluid mechanics of inviscid flow and viscous flow

    NASA Technical Reports Server (NTRS)

    Thompkins, W. T., Jr.

    1985-01-01

    A streamline Euler solver that combines high accuracy and good convergence rates with capabilities for inverse or direct solution modes was developed, along with an analysis technique for finite difference models of hyperbolic partial differential equations.

  2. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transverse relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods used for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence to collect transverse relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies in different windows, depending on prior knowledge or customer requirements. We use an entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard small singular values that cause inversion instability. A hybrid algorithm combining the iterative TSVD and a simultaneous iterative reconstruction technique is implemented to achieve global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in the future.
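    The TSVD ingredient mentioned above can be illustrated on a synthetic multi-exponential echo train: the decay is expressed through a kernel of exponentials over a grid of T2 bins and the small singular values are discarded before inversion. This is a plain TSVD sketch under assumed test values, not the authors' entropy-based truncation or hybrid TSVD/SIRT algorithm.

```python
# Hedged sketch: truncated-SVD inversion of a CPMG-style decay for a T2 spectrum.
import numpy as np

def t2_spectrum_tsvd(t, decay, T2_bins, rank):
    """Solve decay ~= K a with K[i, j] = exp(-t[i] / T2_bins[j]), keeping only
    the `rank` largest singular values to limit noise amplification."""
    K = np.exp(-np.outer(t, 1.0 / T2_bins))
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < rank, 1.0 / s, 0.0)
    a = Vt.T @ (s_inv * (U.T @ decay))
    return np.clip(a, 0.0, None)               # amplitudes are nonnegative

# Illustrative two-component synthetic decay
t = np.linspace(0.001, 1.0, 500)
decay = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.4)
T2_bins = np.logspace(-3, 1, 64)
spectrum = t2_spectrum_tsvd(t, decay, T2_bins, rank=10)
```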

  3. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique of orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier transform and Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
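    For reference, orthogonal matching pursuit itself is a short greedy routine; the sketch below recovers a sparse vector from a random measurement matrix. The matrix and sparsity level are illustrative assumptions, not the solver's radiographic projection operator or wavelet dictionary.

```python
# Hedged sketch of orthogonal matching pursuit (OMP) for y ~= A x with sparse x.
import numpy as np

def omp(A, y, n_nonzero):
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Illustrative random sensing matrix and 3-sparse signal
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 200))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -0.7, 0.4]
x_rec = omp(A, A @ x_true, n_nonzero=3)
```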

  4. Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain

    NASA Astrophysics Data System (ADS)

    D'Ambrogio, Walter; Fregolent, Annalisa

    2014-04-01

    The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made by a plate and a rigid mass.

  5. Four dimensional data assimilation (FDDA) impacts on WRF performance in simulating inversion layer structure and distributions of CMAQ-simulated winter ozone concentrations in Uintah Basin

    NASA Astrophysics Data System (ADS)

    Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik

    2018-03-01

    Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.

  6. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  7. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. These methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  8. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  9. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.

  10. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Uhlmann, Gunther

    2008-07-01

    This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA) which was held in Vancouver, Canada, June 25 29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology, Finland), Masahiro Yamamoto (University of Tokyo, Japan), Gunther Uhlmann (University of Washington) and Jun Zou (Chinese University of Hong Kong). IPIA is a recently formed organization that intends to promote the field of inverse problem at all levels. See http://www.inverse-problems.net/. IPIA awarded the first Calderón prize at the opening of the conference to Matti Lassas (see first article in the Proceedings). There was also a general meeting of IPIA during the workshop. This was probably the largest conference ever on IP with 350 registered participants. The program consisted of 18 invited speakers and the Calderón Prize Lecture given by Matti Lassas. Another integral part of the program was the more than 60 mini-symposia that covered a broad spectrum of the theory and applications of inverse problems, focusing on recent developments in medical imaging, seismic exploration, remote sensing, industrial applications, numerical and regularization methods in inverse problems. Another important related topic was image processing in particular the advances which have allowed for significant enhancement of widely used imaging techniques. For more details on the program see the web page: http://www.pims.math.ca/science/2007/07aip. These proceedings reflect the broad spectrum of topics covered in AIP 2007. The conference and these proceedings would not have happened without the contributions of many people. 
I thank all my fellow organizers, the invited speakers, the speakers and organizers of mini-symposia for making this an exciting and vibrant event. I also thank PIMS, NSF and MITACS for their generous financial support. I take this opportunity to thank the PIMS staff, particularly Ken Leung, for making the local arrangements. Also thanks are due to Stephen McDowall for his help in preparing the schedule of the conference and Xiaosheng Li for the help in preparing these proceedings. I also would like to thank the contributors of this volume and the referees. Finally, many thanks are due to Graham Douglas and Elaine Longden-Chapman for suggesting publication in Journal of Physics: Conference Series.

  11. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment, and it characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed to be unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by a delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.

  12. Lightcurves for Shape Modeling: 852 Wladilena, 1089 Tama, and 1180 Rita

    NASA Astrophysics Data System (ADS)

    Polishook, David

    2012-10-01

    The folded lightcurves and synodic periods of 852 Wladilena, 1089 Tama, and 1180 Rita are reported. The data are used by Hanus et al. (2012) to derive the rotation axis and to construct a shape model by applying the inversion lightcurve technique.

  13. Bayesian inversion of data from effusive volcanic eruptions using physics-based models: Application to Mount St. Helens 2004--2008

    USGS Publications Warehouse

    Anderson, Kyle; Segall, Paul

    2013-01-01

    Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov-Chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
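    The Markov-chain Monte Carlo ingredient named above can be illustrated with a plain random-walk Metropolis sampler; the forward model and flat prior below are stand-ins, not the physics-based eruption model or the study's priors.

```python
# Hedged sketch of random-walk Metropolis sampling of a posterior distribution.
import numpy as np

def metropolis(log_posterior, theta0, step, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.normal(size=theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept / reject
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain

# Toy linear "forward model" with two parameters and a flat prior (assumed)
x = np.linspace(0.0, 1.0, 50)
data = 2.0 * x + 0.5 + 0.05 * np.random.default_rng(3).normal(size=50)

def log_post(theta):
    resid = data - (theta[0] * x + theta[1])
    return -0.5 * np.sum(resid**2) / 0.05**2

samples = metropolis(log_post, theta0=[1.0, 0.0], step=0.05, n_samples=5000)
```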

  14. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    NASA Astrophysics Data System (ADS)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
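    The projected Landweber iteration mentioned above has a compact form; the sketch below applies it to a generic constrained least-squares problem with a nonnegativity constraint chosen purely for illustration (the geodetic kernel, FFT structure, and constraint set of the paper are not reproduced).

```python
# Hedged sketch of projected Landweber: x <- P(x + w A^T (b - A x)), w < 2/||A||^2.
import numpy as np

def projected_landweber(A, b, n_iter=200, project=lambda v: np.clip(v, 0.0, None)):
    omega = 1.0 / np.linalg.norm(A, 2) ** 2      # safe relaxation parameter
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project(x + omega * (A.T @ (b - A @ x)))
    return x

# Illustrative overdetermined system with a nonnegative solution
rng = np.random.default_rng(4)
A = rng.normal(size=(80, 40))
x_true = np.abs(rng.normal(size=40))
x_est = projected_landweber(A, A @ x_true)
```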

  15. Comparison of data inversion techniques for remotely sensed wide-angle observations of Earth emitted radiation

    NASA Technical Reports Server (NTRS)

    Green, R. N.

    1981-01-01

    The shape factor, parameter estimation, and deconvolution data analysis techniques were applied to the same set of Earth emitted radiation measurements to determine the effects of different techniques on the estimated radiation field. All three techniques are defined and their assumptions, advantages, and disadvantages are discussed. Their results are compared globally, zonally, regionally, and on a spatial spectrum basis. The standard deviations of the regional differences in the derived radiant exitance varied from 7.4 W/m2 to 13.5 W/m2.

  16. An algorithm for deriving core magnetic field models from the Swarm data set

    NASA Astrophysics Data System (ADS)

    Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko

    2013-11-01

    In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the use of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smooths the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation for describing the rotations. It has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.

  17. Anisotropic three-dimensional inversion of CSEM data using finite-element techniques on unstructured grids

    NASA Astrophysics Data System (ADS)

    Wang, Feiyan; Morten, Jan Petter; Spitzer, Klaus

    2018-05-01

    In this paper, we present a recently developed anisotropic 3-D inversion framework for interpreting controlled-source electromagnetic (CSEM) data in the frequency domain. The framework integrates a high-order finite-element forward operator and a Gauss-Newton inversion algorithm. Conductivity constraints are applied using a parameter transformation. We discretize the continuous forward and inverse problems on unstructured grids for a flexible treatment of arbitrarily complex geometries. Moreover, an unstructured mesh is more desirable in comparison to a single rectilinear mesh for multisource problems because local grid refinement will not significantly influence the mesh density outside the region of interest. The non-uniform spatial discretization facilitates parametrization of the inversion domain at a suitable scale. For a rapid simulation of multisource EM data, we opt to use a parallel direct solver. We further accelerate the inversion process by decomposing the entire data set into subsets with respect to frequencies (and transmitters if memory requirement is affordable). The computational tasks associated with each data subset are distributed to different processes and run in parallel. We validate the scheme using a synthetic marine CSEM model with rough bathymetry, and finally, apply it to an industrial-size 3-D data set from the Troll field oil province in the North Sea acquired in 2008 to examine its robustness and practical applicability.

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
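
    The abstract above describes projecting the damped Levenberg-Marquardt subproblem onto a Krylov subspace. The sketch below (a toy one-parameter problem, not the authors' MADS/Julia implementation; the subspace recycling across damping parameters is omitted) shows one way such a step can be written, solving the damped least-squares update with LSQR instead of a QR or SVD factorization.

```python
# Minimal sketch: one Levenberg-Marquardt update in which the damped
# least-squares subproblem is solved iteratively with LSQR (a Krylov method)
# rather than with a dense QR/SVD factorization.
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step_krylov(J, residual, damping):
    """Solve min ||J dx + r||^2 + damping ||dx||^2 via LSQR."""
    return lsqr(J, -residual, damp=np.sqrt(damping))[0]

# Hypothetical model: y = exp(-k t); invert for k from noise-free data.
t = np.linspace(0.0, 5.0, 50)
k_true, k = 0.7, 0.1
y_obs = np.exp(-k_true * t)
for damping in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):   # decreasing damping parameters
    r = np.exp(-k * t) - y_obs                    # residual vector
    J = (-t * np.exp(-k * t))[:, None]            # Jacobian dr/dk
    k += lm_step_krylov(J, r, damping)[0]
print("estimated k:", round(k, 4))
```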

  19. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distances provided proper meteorological specifications are available.
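
    As a rough illustration of the inversion step described above (recovering a source time function given a modeled Green's function), the following sketch uses damped least squares on a discrete convolution matrix; the signals and damping value are made up, and this is not the authors' finite-difference workflow.

```python
# Minimal sketch: recover a source time function s(t) from an observed waveform
# d(t) = g(t) * s(t), given a modeled Green's function g(t), by damped least
# squares on the convolution matrix.
import numpy as np

def invert_source_time_function(g, d, damping=1e-3):
    n = len(d)
    # Build the convolution matrix G so that (G @ s)[i] = sum_j g[i-j] s[j].
    G = np.zeros((n, n))
    for i in range(n):
        G[i, : i + 1] = g[i::-1]
    # Damped least squares: (G^T G + damping*I) s = G^T d.
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ d)

# Hypothetical synthetic Green's function and source pulse.
t = np.arange(0, 2.0, 0.01)
g = np.exp(-5 * t) * np.sin(2 * np.pi * 4 * t)          # modeled Green's function
s_true = np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))    # source time function
d = np.convolve(g, s_true)[: len(t)]                    # synthetic "observed" data
s_est = invert_source_time_function(g, d, damping=1e-2)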

  20. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
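
    The gradient-descent flavour of network inversion mentioned above can be sketched as follows: the error between the network output and a desired target is backpropagated to the input rather than to the weights. The tiny network and all values below are hypothetical; this is the generic idea, not the HYPINV algorithm itself.

```python
# Minimal sketch of network inversion by gradient descent on the *input*:
# given a fixed "trained" network f and a target output y*, adjust x to
# minimize (f(x) - y*)^2.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # hypothetical trained weights
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def invert(y_target, x0, lr=0.05, steps=500):
    x = x0.copy()
    for _ in range(steps):
        y, h = forward(x)
        err = y - y_target
        # Backpropagate the output error through the network to the input.
        grad_h = W2.T @ err
        grad_x = W1.T @ (grad_h * (1.0 - h ** 2))
        x -= lr * grad_x
    return x

x_found = invert(y_target=np.array([0.5]), x0=np.zeros(2))
print(x_found, forward(x_found)[0])
```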

  1. Waveform inversion of acoustic waves for explosion yield estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Rodgers, A. J.

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distances provided proper meteorological specifications are available.

  2. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    PubMed

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.

  3. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of performing quantitative conductivity image reconstruction effectively for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capability of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  4. Determining the metallicity of the solar envelope using seismic inversion techniques

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.

    2017-11-01

    The solar metallicity issue is a long-lasting problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy element abundance, helioseismologists have attempted to provide a seismic determination of the metallicity in the solar convective envelope. However, the puzzle remains, since two independent groups provided two radically different values for this crucial astrophysical parameter. We aim to provide an independent seismic measurement of the solar metallicity in the convective envelope. Our main goal is to help provide new information to break the current stalemate amongst seismic determinations of the solar heavy element abundance. We start by presenting the kernels, the inversion technique and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy. We then apply our technique to solar data using calibrated solar models and determine an interval of seismic measurements for the solar metallicity. Our hare-and-hounds exercises show that our inversion can indeed be used to estimate the solar metallicity. However, we also show that further dependencies on the physical ingredients of solar models lead to a low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.

  5. Analyzing the performance of PROSPECT model inversion based on different spectral information for leaf biochemical properties retrieval

    NASA Astrophysics Data System (ADS)

    Sun, Jia; Shi, Shuo; Yang, Jian; Du, Lin; Gong, Wei; Chen, Biwu; Song, Shalei

    2018-01-01

    Leaf biochemical constituents provide useful information about major ecological processes. As fast and nondestructive methods, remote sensing techniques are critical for retrieving leaf biochemistry via models. The PROSPECT model has been widely applied to retrieve leaf traits from hemispherical reflectance and transmittance. However, the process of measuring both reflectance and transmittance can be time-consuming and laborious. In contrast to using the reflectance spectrum alone in PROSPECT model inversion, as adopted by many researchers, this study proposes to use the transmission spectrum alone, given its increasing availability through various remote sensing techniques. We then analyzed the performance of PROSPECT model inversion with (1) only the transmission spectrum, (2) only the reflectance spectrum and (3) both reflectance and transmittance, using synthetic datasets (with varying levels of random and systematic noise) and two experimental datasets (LOPEX and ANGERS). The results show that (1) PROSPECT-5 model inversion based solely on the transmission spectrum is viable, with results generally better than those based solely on the reflectance spectrum; and (2) leaf dry matter can be better estimated using only transmittance or reflectance than with both reflectance and transmittance spectra.

  6. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and the subsequent decrease in pressure.

  7. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
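
    The decoupling described above can be mimicked in a few lines: alternate a Tikhonov-type least-squares update with a total-variation denoising step (solved below with scikit-image's Chambolle solver rather than split-Bregman). The matrix, weights and model are illustrative only.

```python
# Minimal sketch of a Tikhonov + TV splitting, in the spirit of the MTV scheme.
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # solves the TV subproblem

rng = np.random.default_rng(1)
n = 64
m_true = np.zeros(n); m_true[20:40] = 1.0        # blocky "velocity" anomaly
A = rng.normal(size=(80, n)) / np.sqrt(n)        # hypothetical sensitivity matrix
d = A @ m_true + 0.01 * rng.normal(size=80)      # synthetic traveltime data

lam, m, u = 1.0, np.zeros(n), np.zeros(n)
for _ in range(20):
    # Subproblem 1: Tikhonov-type update pulling m toward the data and toward u.
    m = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d + lam * u)
    # Subproblem 2: TV denoising of m (keeps sharp contrasts, removes oscillations).
    u = denoise_tv_chambolle(m, weight=0.1)
```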

  8. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we compare source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in the spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space-ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
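
    The ordering that defines source-space ICA (beamformer first, then SVD + ICA) can be outlined as below. The beamformer weights and data are random placeholders standing in for LCMV-type weights and real MEG recordings.

```python
# Minimal sketch of the source-space ICA ordering: beamform, then SVD + ICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_sensors, n_voxels, n_times = 32, 200, 1000
sensor_data = rng.normal(size=(n_sensors, n_times))   # placeholder MEG data
W = rng.normal(size=(n_voxels, n_sensors))            # placeholder beamformer weights

# Step 1: project sensor data into source space with the beamformer.
source_data = W @ sensor_data                          # (n_voxels, n_times)

# Step 2: reduce dimensionality with an SVD, then unmix with ICA.
U, s, Vt = np.linalg.svd(source_data, full_matrices=False)
k = 10                                                 # retained components
ica = FastICA(n_components=k, random_state=0)
time_courses = ica.fit_transform(Vt[:k].T)             # (n_times, k)
spatial_maps = (U[:, :k] * s[:k]) @ ica.mixing_        # (n_voxels, k)
```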

  9. Contribution of 3D inversion of Electrical Resistivity Tomography data applied to volcanic structures

    NASA Astrophysics Data System (ADS)

    Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe

    2016-04-01

    The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used for imaging geological structures. Such structures can present complex characteristics that conventional 2D inversion processes cannot perfectly integrate. Here we present a new 3D inversion algorithm named EResI, first developed for levee investigation and applied here to the study of a complex lava dome (the Puy de Dôme volcano, France). The EResI algorithm is based on a conventional regularized Gauss-Newton inversion scheme and a 3D unstructured discretization of the model (a double-grid method based on tetrahedra). This discretization allows the topography of the investigated structure to be modeled accurately (without a mesh deformation procedure) and also permits precise location of the electrodes. Moreover, we demonstrate that a fully 3D unstructured discretization limits the number of inversion cells and is better adapted to the resolution capacity of tomography than a structured discretization. This study shows that a 3D inversion with an unstructured parametrization has several advantages over classical 2D inversions. The first is that a 2D inversion leads to artefacts due to 3D effects (3D topography, 3D internal resistivity). The second is that the ability to align electrodes along an axis in the field (for 2D surveys) depends on the field constraints (topography, etc.); in this case, the 2D assumption made by 2.5D inversion software prevents electrodes located off this axis from being modeled, leading to artefacts in the inversion result. The last limitation comes from the mesh deformation techniques used to model the topography in 2D software: this technique, used for structured discretizations (Res2dinv), is prohibited for strong topography (>60%) and leads to small computational errors. A wide geophysical survey was carried out on the Puy de Dôme volcano, resulting in 12 ERT profiles with approximately 800 electrodes. We performed two processing stages, inverting each profile independently in 2D (RES2DINV software) and the complete data set in 3D (EResI). The comparison of the 3D inversion results with those obtained through a conventional 2D inversion process showed that EResI accurately takes into account arbitrary electrode positions and reduces artefacts in the inversion models caused by positioning errors off the profile axis. This comparison also highlighted the advantages of integrating several ERT lines to compute 3D models of complex volcanic structures. Finally, the resulting 3D model allows a better interpretation of the Puy de Dôme volcano.

  10. MO-F-CAMPUS-T-03: Continuous Dose Delivery with Gamma Knife Perfexion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghobadi; Li, W; Chung, C

    2015-06-15

    Purpose: We propose continuous dose delivery techniques for stereotactic treatments delivered by Gamma Knife Perfexion using an inverse treatment planning system that can be applied to various tumour sites in the brain. We test the accuracy of the plans on Perfexion’s planning system (GammaPlan) to ensure the obtained plans are viable. This approach introduces continuous dose delivery for Perfexion, as opposed to the currently employed step-and-shoot approaches, for different tumour sites. Additionally, this is the first realization of automated inverse planning on GammaPlan. Methods: The inverse planning approach is divided into two steps: identifying a quality path inside the target, and finding the best collimator composition for the path. To find a path, we select strategic regions inside the target volume and find a path that visits each region exactly once. This path is then passed to a mathematical model which finds the best combination of collimators and their durations. The mathematical model minimizes the dose spillage to the surrounding tissues while ensuring the prescribed dose is delivered to the target(s). Organs-at-risk and their corresponding allowable doses can also be added to the model to protect adjacent organs. Results: We test this approach on various tumour sizes and sites. The quality of the obtained treatment plans is comparable to or better than that of forward plans and inverse plans that use the step-and-shoot technique. The conformity indices of the obtained continuous dose delivery plans are similar to those of forward plans, while the beam-on time is improved on average (see Table 1 in supporting document). Conclusion: We employ inverse planning for continuous dose delivery in Perfexion for brain tumours. The quality of the obtained plans is similar to forward and inverse plans that use the conventional step-and-shoot technique. We tested the inverse plans on GammaPlan to verify clinical relevance. This research was partially supported by Elekta, Sweden (vendor of Gamma Knife Perfexion).

  11. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  13. Prolongation structures of nonlinear evolution equations

    NASA Technical Reports Server (NTRS)

    Wahlquist, H. D.; Estabrook, F. B.

    1975-01-01

    A technique is developed for systematically deriving a 'prolongation structure' - a set of interrelated potentials and pseudopotentials - for nonlinear partial differential equations in two independent variables. When this is applied to the Korteweg-de Vries equation, a new infinite set of conserved quantities is obtained. Known solution techniques are shown to result from the discovery of such a structure: related partial differential equations for the potential functions, linear 'inverse scattering' equations for auxiliary functions, Backlund transformations. Generalizations of these techniques will result from the use of irreducible matrix representations of the prolongation structure.

  14. Estimating soil hydraulic parameters from transient flow experiments in a centrifuge using parameter optimization technique

    USGS Publications Warehouse

    Šimůnek, Jirka; Nimmo, John R.

    2005-01-01

    A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time‐variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using equilibrium analysis and steady state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.
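
    The parameter-optimization idea used above can be illustrated, in a much-simplified form, by fitting van Genuchten retention parameters to measured water contents with nonlinear least squares; the retention model, fixed residual and saturated water contents, and data below are assumptions, and this is not the Hydrus inverse solver.

```python
# Minimal sketch: estimate van Genuchten retention parameters (alpha, n)
# from water-content observations by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, alpha, n, theta_r=0.05, theta_s=0.40):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(0, 3, 15)                          # suction heads [cm], hypothetical
theta_obs = van_genuchten(h, 0.02, 1.8) + 0.005 * np.random.default_rng(0).normal(size=h.size)
(alpha_fit, n_fit), _ = curve_fit(van_genuchten, h, theta_obs, p0=[0.05, 1.5])
print("alpha =", alpha_fit, "n =", n_fit)
```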

  15. Inversion for the driving forces of plate tectonics

    NASA Technical Reports Server (NTRS)

    Richardson, R. M.

    1983-01-01

    Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.

  16. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
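
    Steps (a)-(e) above map naturally onto a few lines of PyWavelets; the wavelet family, noise estimator and universal threshold used below are common defaults, not necessarily those of the cited algorithms.

```python
# Minimal sketch of DWT-based speckle noise reduction on an interferogram.
import numpy as np
import pywt

def denoise_interferogram(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)            # (c) forward DWT
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745           # (a) noise estimate
    thr = sigma * np.sqrt(2 * np.log(img.size))                  # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)  # (d) thresholding
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)                    # (e) inverse DWT

# Hypothetical noisy fringe pattern.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
fringes = 0.5 + 0.5 * np.cos(40 * np.pi * x * y)
noisy = fringes + 0.2 * np.random.default_rng(0).normal(size=fringes.shape)
clean = denoise_interferogram(noisy)
```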

  17. Airglow studies using observations made with the GLO instrument on the Space Shuttle

    NASA Astrophysics Data System (ADS)

    Alfaro Suzan, Ana Luisa

    2009-12-01

    Our understanding of Earth's upper atmosphere has advanced tremendously over the last few decades due to our enhanced capacity for making remote observations from space. Space based observations of Earth's daytime and nighttime airglow emissions are very good examples of such enhancements to our knowledge. The terrestrial nighttime airglow, or nightglow, is barely discernible to the naked eye as viewed from Earth's surface. However, it is clearly visible from space - as most astronauts have been amazed to report. The nightglow consists of emissions of ultraviolet, visible and near-infrared radiation from electronically excited oxygen molecules and atoms and vibrationally excited OH molecules. It mostly emanates from a 10 km thick layer located about 100 km above Earth's surface. Various photochemical models have been proposed to explain the production of the emitting species. In this study some unique observations of Earth's nightglow made with the GLO instrument on NASA's Space Shuttle, are analyzed to assess the proposed excitation models. Previous analyses of these observations by Broadfoot and Gardner (2001), performed using a 1-D inversion technique, have indicated significant spatial structures and have raised serious questions about the proposed nightglow excitation models. However, the observation of such strong spatial structures calls into serious question the appropriateness of the adopted 1-D inversion technique and, therefore, the validity of the conclusions. In this study a more rigorous 2-D tomographic inversion technique is developed and applied to the available GLO data to determine if some of the apparent discrepancies can be explained by the limitations of the previously applied 1-D inversion approach. The results of this study still reveal some potentially serious inadequacies in the proposed photochemical models. However, alternative explanations for the discrepancies between the GLO observations and the model expectations are suggested. These include upper atmospheric tidal effects and possible errors in the pointing of the GLO instrument.

  18. Estimation of Regional Carbon Balance from Atmospheric Observations

    NASA Astrophysics Data System (ADS)

    Denning, S.; Uliasz, M.; Skidmore, J.

    2002-12-01

    Variations in the concentration of CO2 and other trace gases in time and space contain information about sources and sinks at regional scales. Several methods have been developed to quantitatively extract this information from atmospheric measurements. Mass-balance techniques depend on the ability to repeatedly sample the same mass of air, which involves careful attention to airmass trajectories. Inverse and adjoint techniques rely on decomposition of the source field into quasi-independent "basis functions" that are propagated through transport models and then used to synthesize optimal linear combinations that best match observations. A recently proposed method for regional flux estimation from continuous measurements at tall towers relies on time-mean vertical gradients, and requires careful trajectory analysis to map the estimates onto regional ecosystems. Each of these techniques is likely to be applied to measurements made during the North American Carbon Program. We have also explored the use of Bayesian synthesis inversion at regional scales, using a Lagrangian particle dispersion model driven by mesoscale transport fields. Influence functions were calculated for each hypothetical observation in a realistic diurnally-varying flow. These influence functions were then treated as basis functions for the purpose of separate inversions for daytime photosynthesis and 24-hour mean ecosystem respiration. Our results highlight the importance of estimating CO2 fluxes through the lateral boundaries of the model. Respiration fluxes were well constrained by one or two hypothetical towers, regardless of inflow fluxes. Time-varying assimilation fluxes were less well constrained, and much more dependent on knowledge of inflow fluxes. The small net difference between respiration and photosynthesis was the most difficult to determine, being extremely sensitive to knowledge of inflow fluxes. Finally, we explored the feasibility of directly incorporating mid-day concentration values measured at surface-layer flux towers in global inversions for regional surface fluxes. We found that such data would substantially improve the observational constraint on current carbon cycle models, especially if applied selectively to a well-designed subset of the current network of flux towers.

  19. Three-Dimensional Anisotropic Acoustic and Elastic Full-Waveform Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Warner, M.; Morgan, J. V.

    2013-12-01

    Three-dimensional full-waveform inversion is a high-resolution, high-fidelity, quantitative, seismic imaging technique that has advanced rapidly within the oil and gas industry. The method involves the iterative improvement of a starting model using a series of local linearized updates to solve the full non-linear inversion problem. During the inversion, forward modeling employs the full two-way three-dimensional heterogeneous anisotropic acoustic or elastic wave equation to predict the observed raw field data, wiggle-for-wiggle, trace-by-trace. The method is computationally demanding; it is highly parallelized, and runs on large multi-core multi-node clusters. Here, we demonstrate what can be achieved by applying this newly practical technique to several high-density 3D seismic datasets that were acquired to image four contrasting sedimentary targets: a gas cloud above an oil reservoir, a radially faulted dome, buried fluvial channels, and collapse structures overlying an evaporite sequence. We show that the resulting anisotropic p-wave velocity models match in situ measurements in deep boreholes, reproduce detailed structure observed independently on high-resolution seismic reflection sections, accurately predict the raw seismic data, simplify and sharpen reverse-time-migrated reflection images of deeper horizons, and flatten Kirchhoff-migrated common-image gathers. We also show that full-elastic 3D full-waveform inversion of pure pressure data can generate a reasonable shear-wave velocity model for one of these datasets. For two of the four datasets, the inclusion of significant transversely isotropic anisotropy with a vertical axis of symmetry was necessary in order to fit the kinematics of the field data properly. For the faulted dome, the full-waveform-inversion p-wave velocity model recovers the detailed structure of every fault that can be seen on coincident seismic reflection data. Some of the individual faults represent high-velocity zones, some represent low-velocity zones, some have more-complex internal structure, and some are visible merely as offsets between two regions with contrasting velocity. Although this has not yet been demonstrated quantitatively for this dataset, it seems likely that at least some of this fine structure in the recovered velocity model is related to the detailed lithology, strain history and fluid properties within the individual faults. We have here applied this technique to seismic data that were acquired by the extractive industries; however, this inversion scheme is immediately scalable and applicable to a much wider range of problems given sufficient quality and density of observed data. Potential targets range from shallow magma chambers beneath active volcanoes, through whole-crustal sections across plate boundaries, to regional and whole-Earth models.

  20. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines the system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal; for this reason, the proposed evolutionary optimization technique, which is oriented toward dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.

  1. Inverse Calibration Free fs-LIBS of Copper-Based Alloys

    NASA Astrophysics Data System (ADS)

    Smaldone, Antonella; De Bonis, Angela; Galasso, Agostino; Guarnaccio, Ambra; Santagata, Antonio; Teghil, Roberto

    2016-09-01

    In this work, the analysis by the Laser Induced Breakdown Spectroscopy (LIBS) technique of copper-based alloys having different compositions, performed with fs laser pulses, is presented. A Nd:Glass laser (Twinkle Light Conversion, λ = 527 nm at 250 fs) and a set of certified bronze and brass standards were used. The inverse Calibration-Free method (inverse CF-LIBS) was applied to estimate the temperature of the fs-laser-induced plasma in order to achieve quantitative elemental analysis of such materials. This approach strengthens the hypothesis that, through the assessment of the plasma temperature occurring in fs-LIBS, straightforward and reliable analytical data can be provided. To this end, we show the capability of the adopted inverse CF-LIBS method, which is based on the fulfilment of the Local Thermodynamic Equilibrium (LTE) condition, for an indirect determination of the species excitation temperature. The estimated temperatures occurring during the process provide good agreement between the certified and the experimentally determined compositions of the bronze and brass materials employed here, although further correction procedures, such as the use of calibration curves, may be required. The reported results demonstrate that the inverse CF-LIBS method can be applied when fs laser pulses are used, even though the plasma properties can be affected by matrix effects, restricting its full application to unknown samples for which a certified standard of similar composition is available.
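
    The core of any CF-LIBS-style temperature assessment is a Boltzmann-plot fit of measured line intensities against upper-level energies. The sketch below shows that relation only; the line data are illustrative, and the full inverse CF-LIBS procedure (which exploits the certified composition of a standard) is not reproduced.

```python
# Minimal illustration of a Boltzmann-plot excitation-temperature estimate.
import numpy as np

k_B = 8.617e-5                      # Boltzmann constant [eV/K]
# Hypothetical Cu I lines: wavelength [nm], upper-level energy [eV],
# degeneracy g, transition probability A [1/s], measured intensity (a.u.).
lam = np.array([510.5, 515.3, 521.8])
E_up = np.array([3.82, 6.19, 6.19])
g = np.array([4.0, 4.0, 6.0])
A = np.array([2.0e6, 6.0e7, 7.5e7])
I = np.array([1.0, 0.35, 0.50])     # illustrative intensities

# Boltzmann plot: ln(I*lam/(g*A)) = -E_up/(k_B*T) + const.
y = np.log(I * lam / (g * A))
slope, intercept = np.polyfit(E_up, y, 1)
T = -1.0 / (k_B * slope)
print(f"estimated excitation temperature: {T:.0f} K")
```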

  2. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  3. The Effect of Flow Velocity on Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Lee, D.; Shin, S.; Chung, W.; Ha, J.; Lim, Y.; Kim, S.

    2017-12-01

    Waveform inversion is a velocity modeling technique that reconstructs accurate subsurface physical properties, such that the final updated model generates synthetic data that match the observed data. Flow velocity, like several other factors, affects observed data in seismic exploration. Despite this, there is insufficient research on its relationship with waveform inversion. In this study, synthetic data generated with flow velocity taken into account were used in waveform inversion, and the influence of flow velocity on the inversion was analyzed. Measuring the flow velocity generally requires additional equipment. However, for situations where only seismic data are available, the flow velocity was calculated by a fixed-point iteration method using the direct wave in the observed data. Further, a new waveform inversion was proposed which can incorporate the calculated flow velocity. We used a wave equation that can accommodate flow velocity, following the study by Käser and Dumbser. Further, we enhanced the efficiency of computation by applying the back-propagation method. To verify the proposed algorithm, six different data sets were generated using the Marmousi2 model; each of these data sets used a different flow velocity in the range 0-50, i.e., 0, 2, 5, 10, 25, and 50. Thereafter, the inversion results from these data sets, along with the results obtained without the use of flow velocity, were compared and analyzed. In this study, we analyzed the results of waveform inversion after factoring in the flow velocity. It was demonstrated that the waveform inversion is not significantly affected when the flow velocity is small; however, when the flow velocity is large, factoring it into the waveform inversion produces superior results. This research was supported by the Basic Research Project (17-3312, 17-3313) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
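
    As a toy illustration of the fixed-point idea mentioned above (and only that: the traveltime relation, offsets and relaxation update below are assumptions, not the authors' scheme), a flow velocity can be iterated until the modeled direct-wave traveltime matches the observed one.

```python
# Toy fixed-point iteration: the direct wave covers offset L at speed c + v,
# and v is updated until the modeled traveltime matches the observed one.
L, c = 2000.0, 1500.0              # offset [m] and acoustic speed [m/s] (assumed)
v_true = 25.0
t_obs = L / (c + v_true)           # "observed" direct-wave traveltime

v = 0.0
for _ in range(20):
    t_model = L / (c + v)
    v = v + (t_model - t_obs) * c ** 2 / L   # relaxation / fixed-point update
print(round(v, 2))                  # converges to ~25.0
```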

  4. Spectral line inversion for sounding of stratospheric minor constituents by infrared heterodyne technique from balloon altitudes

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Shapiro, G. L.; Allario, F.; Alvarez, J. M.

    1981-01-01

    A combination of two different techniques for the inversion of infrared laser heterodyne measurements of tenuous gases in the stratosphere by solar occultation is presented which incorporates the advantages of each technique. An experimental approach and inversion technique are developed which optimize the retrieval of concentration profiles by incorporating the onion-peel collection scheme into the spectral inversion technique. A description of an infrared heterodyne spectrometer and the mode of observations for solar occultation measurement is presented, and the results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb-scans of the stratosphere are examined. A comparison between the new techniques and one of the current techniques indicates that considerable improvement in the accuracy of the retrieved profiles can be achieved. It is found that noise affects the accuracy of both techniques but not in a straightforward manner since there is interaction between the noise level, noise propagation through inversion, and the number of scans leading to an optimum retrieval.

  5. Review of inversion techniques using analysis of different tests

    NASA Astrophysics Data System (ADS)

    Smaglichenko, T. A.

    2012-04-01

    Tomographic techniques are tools which estimate the Earth's deep interior by inverting seismic data. Reliable imaging provides an adequate understanding of geodynamic processes for the prediction of natural hazards and the protection of the environment. This presentation focuses on two interrelated factors which affect reliability, namely the particularities of the geophysical medium and the strategy for choosing an inversion method. Three main techniques are reviewed. First, the standard LSQR algorithm, which is derived directly from the Lanczos algebraic approach; double-difference tomography widely incorporates this algorithm and its extensions. Next, the CSSA technique, or method of subtraction, which was introduced into seismology by Nikolaev et al. in 1985; this method was further developed in 2003 (Smaglichenko et al.) as the coordinate method of possible directions, already known in the theory of numerical methods. And finally, the new Differentiated Approach (DA) tomography, which has recently been developed by the author for seismology and introduced into applied mathematics as a modification of Gaussian elimination. Different test models are presented, probing various properties of the medium and having value for the mining sector as well as for the prediction of seismic activity. They are: 1) a checkerboard resolution test; 2) a single anomalous block surrounded by a uniform zone; 3) a large-size structure; 4) the most complicated case, in which the model consists of contrast layers and the observation response is equal to zero. The geometry of the experiment for all models is given in the note of Leveque et al., 1993. It was assumed that errors in the experimental data lie within the limits of a pre-assigned accuracy. The testing showed that LSQR is effective when the small-size structure (1) is retrieved, while CSSA works faster for the reconstruction of the separated anomaly (2). The large-size structure (3) can be reconstructed by applying DA, which uses both Lanczos's method and CSSA as components of the inversion process. The difficulty of the model of contrast layers (4) can be overcome with a priori information that allows the DA implementation. The testing leads us to the following conclusion: careful analysis and weighted assumptions about the characteristics of the medium under investigation should be made before starting the data inversion. The choice of a suitable technique will provide reliability of the solution. Nevertheless, DA is preferred in the case of noisy and large data sets.
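
    For readers unfamiliar with the first technique reviewed above, a minimal LSQR call on a synthetic tomography-style system looks as follows; the sparse matrix and data are random placeholders.

```python
# Minimal example of the LSQR algorithm on a tiny synthetic system G m = t.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_rays, n_cells = 120, 60
# Sparse, random "ray" matrix standing in for a tomographic sensitivity matrix.
G = csr_matrix(rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.1))
m_true = rng.normal(size=n_cells)                    # true slowness perturbations
t = G @ m_true + 0.01 * rng.normal(size=n_rays)      # noisy traveltime residuals

m_est, istop, itn = lsqr(G, t, damp=0.1)[:3]         # damped LSQR inversion
print(istop, itn, np.linalg.norm(m_est - m_true))
```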

  6. Evaluation of concrete cover by surface wave technique: Identification procedure

    NASA Astrophysics Data System (ADS)

    Piwakowski, Bogdan; Kaczmarek, Mariusz; Safinowski, Paweł

    2012-05-01

    Concrete cover degradation is induced by aggressive agents in the ambient environment, such as moisture, chemicals or temperature variations. Due to degradation, usually a thin (a few millimeters thick) surface layer has a porosity slightly higher than that of the deeper sound material. The non-destructive evaluation of the concrete cover is vital to monitor the integrity of concrete structures and prevent their irreversible damage. In this paper, the methodology of the classical technique used for ground structure recovery, called Multichannel Analysis of Surface Waves, is discussed as an NDT tool in the civil engineering domain to characterize the concrete cover. In order to obtain the velocity as a function of sample depth, the dispersion of surface waves is used as input for solving the inverse problem. The paper describes the inversion procedure and provides a practical example of the use of the developed system.

  7. Using the dGEMRIC technique to evaluate cartilage health in the presence of surgical hardware at 3T: comparison of inversion recovery and saturation recovery approaches.

    PubMed

    d'Entremont, Agnes G; Kolind, Shannon H; Mädler, Burkhard; Wilson, David R; MacKay, Alexander L

    2014-03-01

    To evaluate the effect of metal artifact reduction techniques on dGEMRIC T(1) calculation with surgical hardware present. We examined the effect of stainless-steel and titanium hardware on dGEMRIC T(1) maps. We tested two strategies to reduce metal artifact in dGEMRIC: (1) using saturation recovery (SR) instead of inversion recovery (IR) and (2) applying the metal artifact reduction sequence (MARS), in a gadolinium-doped agarose gel phantom and in vivo with titanium hardware. T(1) maps were obtained using custom curve-fitting software and phantom ROIs were defined to compare conditions (metal, MARS, IR, SR). A large area of artifact appeared in phantom IR images with metal when TI ≤ 700 ms. IR maps with metal had additional artifact both in vivo and in the phantom (shifted null points, increased mean T(1) (+151 % IR ROI(artifact)) and decreased mean inversion efficiency (f; 0.45 ROI(artifact), versus 2 for perfect inversion)) compared to the SR maps (ROI(artifact): +13 % T(1) SR, 0.95 versus 1 for perfect excitation); however, SR produced noisier T(1) maps than IR (phantom SNR: 118 SR, 212 IR). MARS subtly reduced the extent of artifact in the phantom (IR and SR). dGEMRIC measurement in the presence of surgical hardware at 3T is possible with appropriately applied strategies. Measurements may work best in the presence of titanium and are severely limited with stainless steel. For regions near hardware where IR produces large artifacts making dGEMRIC analysis impossible, SR-MARS may allow dGEMRIC measurements. The position and size of the IR artifact is variable, and must be assessed for each implant/imaging set-up.
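
    The IR and SR fits compared above can be sketched with simple signal models: an inversion-recovery curve with an inversion-efficiency factor f (f = 2 for a perfect inversion in this parameterization) and a saturation-recovery curve (f = 1 for perfect excitation). The models, recovery times and noise level below are assumptions, not the authors' fitting software.

```python
# Minimal sketch: T1 estimation from IR and SR data with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def ir_model(TI, a, T1, f):      # f ~ 2 for a perfect inversion
    return np.abs(a * (1.0 - f * np.exp(-TI / T1)))

def sr_model(TS, a, T1, f):      # f ~ 1 for perfect saturation/excitation
    return a * (1.0 - f * np.exp(-TS / T1))

T1_true = 500.0                                  # ms, hypothetical
times = np.array([50, 100, 200, 400, 800, 1600, 3200.0])
rng = np.random.default_rng(0)
sig_ir = ir_model(times, 1.0, T1_true, 1.9) + 0.01 * rng.normal(size=times.size)
sig_sr = sr_model(times, 1.0, T1_true, 0.95) + 0.01 * rng.normal(size=times.size)

p_ir, _ = curve_fit(ir_model, times, sig_ir, p0=[1.0, 800.0, 2.0])
p_sr, _ = curve_fit(sr_model, times, sig_sr, p0=[1.0, 800.0, 1.0])
print("IR T1 =", p_ir[1], "SR T1 =", p_sr[1])
```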

  8. Research and application of spectral inversion technique in frequency domain to improve resolution of converted PS-wave

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guamg-Ming; Li, Zhong

    2017-06-01

    Multi-wave exploration is an effective means for improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted wave data are characterized by a low signal-to-noise ratio and low resolution, because the conventional deconvolution technology is easily affected by frequency range limits, and there is limited scope for improving its resolution. The spectral inversion technique is used to identify λ/8 thin layers, and its breakthrough beyond band range limits has greatly improved seismic resolution. The difficulty associated with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on how to improve the vertical resolution of the converted PS-wave for multi-wave data processing. Based on previous research, we propose a least squares inversion algorithm with a total variation constraint, in which we use the total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. Here, we simulate the Gaussian fitting amplitude spectrum to obtain broadband wavelet data, which we then process to obtain a higher resolution converted wave. We successfully apply the proposed inversion technology in the processing of high-resolution data from the Penglai region to obtain higher resolution converted wave data, which we then verify in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.

  9. Modeling T1 and T2 relaxation in bovine white matter

    NASA Astrophysics Data System (ADS)

    Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.

    2015-10-01

    The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D-non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model which was based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons and intra/extracellular water) and incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow bandwidth than for broad bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fits results from both inversion conditions using the same parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components having negative amplitude coefficients that cannot be correctly modeled with nonnegative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short T1 component might be used to quantify myelin water.
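
    The 1D-NNLS step referred to above amounts to fitting a multi-exponential decay with a non-negative amplitude spectrum over a grid of T2 values; a minimal version with illustrative numbers is sketched below.

```python
# Minimal sketch of a 1D-NNLS T2 spectrum fit for a CPMG-type decay.
import numpy as np
from scipy.optimize import nnls

TE = np.arange(10, 320, 10, dtype=float)           # echo times [ms]
T2_grid = np.logspace(0, 3, 120)                    # candidate T2 values [ms]
A = np.exp(-TE[:, None] / T2_grid[None, :])         # dictionary of decays

# Hypothetical two-pool signal: short-T2 "myelin water" + longer-T2 water.
signal = 0.15 * np.exp(-TE / 15.0) + 0.85 * np.exp(-TE / 80.0)
signal += 0.002 * np.random.default_rng(0).normal(size=TE.size)

amplitudes, rnorm = nnls(A, signal)                 # non-negative amplitude spectrum
myelin_water_fraction = amplitudes[T2_grid < 40].sum() / amplitudes.sum()
print(round(myelin_water_fraction, 3))
```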

  10. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allow for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  11. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  12. Analysing 21cm signal with artificial neural network

    NASA Astrophysics Data System (ADS)

    Shimabukuro, Hayato; Semelin, Benoit

    2018-05-01

    The 21cm signal from the epoch of reionization (EoR) should be observed within the next decade. The cosmic 21cm signal at the EoR is expected to provide both cosmological and astrophysical information. In order to extract useful information from the observed data, we need to develop an inversion method. For this purpose, we introduce the artificial neural network (ANN), one of the machine learning techniques. We apply the ANN to the inverse problem of constraining astrophysical parameters from the 21cm power spectrum. We train the neural network with 70 training datasets and apply it to 54 test datasets with different parameter values. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameter sets at a given redshift, and that the accuracy of the reconstruction improves as the number of redshifts used increases. We conclude that the ANN is a viable inversion method whose main strength is that it requires only sparse sampling of the parameter space and thus should be usable with full simulations.
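
    The sketch below illustrates the ANN inversion idea with a small multilayer perceptron mapping power spectra to parameters; the "power spectra" are generated from an arbitrary smooth toy function rather than EoR simulations, and only the 70/54 train/test split is taken from the abstract.

```python
# Hedged sketch of ANN-based parameter inversion from 21cm power spectra.
# The training data come from a toy forward function, not EoR simulations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_train, n_test, n_k, n_params = 70, 54, 20, 3      # 70/54 split as in the abstract

def toy_power_spectrum(params):
    """Placeholder smooth mapping from parameters to a 'power spectrum'."""
    k = np.linspace(0.1, 2.0, n_k)
    return params[0] * np.exp(-k * params[1]) + params[2] * k

theta_train = rng.uniform(0.5, 2.0, (n_train, n_params))
theta_test = rng.uniform(0.5, 2.0, (n_test, n_params))
ps_train = np.array([toy_power_spectrum(t) for t in theta_train])
ps_test = np.array([toy_power_spectrum(t) for t in theta_test])

scaler = StandardScaler().fit(ps_train)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
ann.fit(scaler.transform(ps_train), theta_train)    # learn spectra -> parameters

theta_pred = ann.predict(scaler.transform(ps_test))
print("mean absolute error per parameter:",
      np.round(np.abs(theta_pred - theta_test).mean(axis=0), 3))
```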

  13. A stochastic approach for model reduction and memory function design in hydrogeophysical inversion

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Kellogg, A.; Terry, N.

    2009-12-01

    Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, it is usually problematic to have too many unknown parameters in the modeling domain. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework that integrates the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to the geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function that stores all the information collected to date about the distributions of soil/field attributes/properties; the memory function is then used as a new prior, and samples are generated from it for further updating when more geophysical data become available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.

  14. Resolving Isotropic Components from Regional Waves using Grid Search and Moment Tensor Inversion Methods

    NASA Astrophysics Data System (ADS)

    Ichinose, G. A.; Saikia, C. K.

    2007-12-01

    We applied the moment tensor (MT) analysis scheme to identify seismic sources using regional seismograms based on the representation theorem for the elastic wave displacement field. This method is applied to estimate the isotropic (ISO) and deviatoric MT components of earthquake, volcanic, and isotropic sources within the Basin and Range Province (BRP) and western US. The ISO components from Hoya, Bexar, Montello and Junction were compared to those of recent, well-recorded earthquakes near Little Skull Mountain, Scotty's Junction, Eureka Valley, and Fish Lake Valley within southern Nevada. We also examined "dilatational" sources near Mammoth Lakes Caldera and two mine collapses including the August 2007 event in Utah recorded by US Array. Using our formulation we have first implemented the full MT inversion method on long-period filtered regional data. We also applied a grid-search technique to solve for the percent deviatoric and %ISO moments. By using the grid-search technique, high-frequency waveforms are used with calibrated velocity models. We modeled the ISO and deviatoric components (spall and tectonic release) as separate events delayed in time or offset in space. Calibrated velocity models improved the resolution of the ISO components and decreased the variance relative to the average, initial, or background velocity models. The centroid location and time shifts are velocity model dependent. Models can be improved as was done in previously published work in which we used an iterative waveform inversion method with regional seismograms from four well recorded and constrained earthquakes. The resulting velocity models reduced the variance between observations and predicted synthetics by about 50 to 80% for frequencies up to 0.5 Hz. Tests indicate that the individual path-specific models perform better at recovering the earthquake MT solutions, even with a sparser distribution of stations, than the average or initial models.

  15. Evaluation of Süleymanköy (Diyarbakir, Eastern Turkey) and Seferihisar (Izmir, Western Turkey) Self Potential Anomalies with Multilayer Perceptron Neural Networks

    NASA Astrophysics Data System (ADS)

    Kaftan, Ilknur; Sindirgi, Petek

    2013-04-01

    Self-potential (SP) is one of the oldest geophysical methods and provides important information about near-surface structures. Several methods have been developed to interpret SP data using simple geometries. This study investigated the inverse solution of the SP anomaly of a buried, polarized sphere using Multilayer Perceptron Neural Networks (MLPNN). The polarization angle (α) and the depth to the centre of the sphere (h) were estimated. The MLPNN was applied to synthetic and field SP data. In order to assess the capability of the method in detecting the number of sources, the MLPNN was applied to different spherical models at different depths and locations. Additionally, the performance of the MLPNN was tested by adding random noise to the same synthetic test data. Similar model parameters were successfully recovered under different S/N ratios, and the technique gave satisfactory results after the addition of 5% and 10% Gaussian noise. The MLPNN method was then applied to two field examples: a cross-section taken from the SP anomaly map of the Ergani-Süleymanköy (Turkey) copper mine, and SP data from the Seferihisar (Izmir, Western Turkey) geothermal field. The MLPNN results showed good agreement with the original synthetic data set and were compared to other SP interpretation techniques, such as the Normalized Full Gradient (NFG), inverse solution, and nomogram methods; all of the techniques showed strong similarity. Consequently, the synthetic and field applications of this study show that the MLPNN provides reliable evaluation of self-potential data modelled with a sphere.
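
    A rough sketch of the MLPNN workflow under the sphere model follows: synthetic anomalies are generated from a commonly quoted closed-form expression for a buried polarized sphere and used to train a network that returns the polarization angle and depth; the profile geometry, dipole-moment constant and network settings are illustrative assumptions.

```python
# Hedged sketch of MLPNN inversion of sphere-model self-potential anomalies.
# The closed-form anomaly expression and all settings are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(-100.0, 100.0, 101)          # profile coordinates (m)

def sphere_sp(alpha_deg, h, K=-100.0, x0=0.0):
    """SP anomaly of a buried polarized sphere (assumed standard form)."""
    a = np.radians(alpha_deg)
    num = (x - x0) * np.cos(a) + h * np.sin(a)
    return K * num / ((x - x0) ** 2 + h ** 2) ** 1.5

rng = np.random.default_rng(2)
alphas = rng.uniform(10.0, 80.0, 500)        # polarization angles (degrees)
depths = rng.uniform(5.0, 40.0, 500)         # depths to sphere centre (m)
X = np.array([sphere_sp(a, h) for a, h in zip(alphas, depths)])
y = np.column_stack([alphas, depths])

net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000, random_state=0)
net.fit(X, y)

# Test on one anomaly with 5% Gaussian noise, as in the study
test = sphere_sp(45.0, 20.0)
test += rng.normal(0.0, 0.05 * np.abs(test).max(), test.size)
print("estimated [alpha (deg), h (m)]:", np.round(net.predict(test[None, :])[0], 1))
```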

  16. Full-Physics Inverse Learning Machine for Satellite Remote Sensing of Ozone Profile Shapes and Tropospheric Columns

    NASA Astrophysics Data System (ADS)

    Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.

    2018-04-01

    Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for the ozone information in the troposphere. A novel retrieval algorithm called the Full-Physics Inverse Learning Machine (FP-ILM) has been developed at DLR to estimate ozone profile shapes using machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the ability of the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results of total and tropical tropospheric ozone columns are compared with those from the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors with their unprecedented spectral and spatial resolution and corresponding large increases in the amount of data.

  17. Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data

    USGS Publications Warehouse

    Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.

    2003-01-01

    The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, and it allows greater penetration depth than ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies in the range from 0 to 5 MHz are applied. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used with the aid of the singular value decomposition and/or the conjugate gradient method in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and to calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.

  18. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
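
    The sketch below illustrates Tikhonov regularization with an L-curve scan for a generic line-integration geometry; the geometry matrix, noise level and identity regularization operator are placeholders, and the corner detection is a crude maximum-curvature rule, not the SXRIS implementation.

```python
# Hedged sketch of Tikhonov-regularized inversion with an L-curve parameter scan.
# G, the data and the regularization operator are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_cells = 200, 80
G = rng.random((n_pixels, n_cells))                 # line-integration geometry (assumed)
emission_true = np.exp(-((np.arange(n_cells) - 40) / 10.0) ** 2)
data = G @ emission_true + rng.normal(0.0, 0.05, n_pixels)

L = np.eye(n_cells)                                 # identity regularization operator
lambdas = np.logspace(-4, 2, 40)
res_norm, sol_norm, solutions = [], [], []
for lam in lambdas:
    # minimize ||G e - d||^2 + lam^2 ||L e||^2 via the normal equations
    e = np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T @ data)
    solutions.append(e)
    res_norm.append(np.linalg.norm(G @ e - data))
    sol_norm.append(np.linalg.norm(L @ e))

# Crude L-curve corner: point of maximum curvature in log-log coordinates
lx, ly = np.log(res_norm), np.log(sol_norm)
dx, dy = np.gradient(lx), np.gradient(ly)
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
best = int(np.argmax(curvature[2:-2])) + 2          # ignore the noisy endpoints
print(f"chosen lambda ~ {lambdas[best]:.3g}, "
      f"reconstruction error {np.linalg.norm(solutions[best] - emission_true):.3f}")
```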

  19. A general approach to regularizing inverse problems with regional data using Slepian wavelets

    NASA Astrophysics Data System (ADS)

    Michel, Volker; Simons, Frederik J.

    2017-12-01

    Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.

  20. A harmonic analysis approach to joint inversion of P-receiver functions and wave dispersion data in high dense seismic profiles

    NASA Astrophysics Data System (ADS)

    Molina-Aguilera, A.; Mancilla, F. D. L.; Julià, J.; Morales, J.

    2017-12-01

    Joint inversion techniques for P-receiver functions and wave dispersion data implicitly assume an isotropic, radially stratified earth. The conventional approach inverts stacked radial-component receiver functions from different back-azimuths to obtain a laterally homogeneous single velocity model. However, in the presence of strong lateral heterogeneities such as anisotropic layers and/or dipping interfaces, receiver functions are considerably perturbed and both the radial and transverse components exhibit back-azimuthal dependences. Harmonic analysis methods exploit these azimuthal periodicities to separate the effects of the isotropic flat-layered structure from those caused by lateral heterogeneities. We implement a harmonic analysis method based on the radial and transverse receiver-function components and carry out a synthetic study to illustrate the capability of the method to isolate the isotropic flat-layered part of the receiver functions and to constrain the geometry and strength of lateral heterogeneities. The back-azimuth-independent P receiver functions are jointly inverted with phase and group dispersion curves using a linearized inversion procedure. We apply this approach to densely spaced seismic profiles (about 2 km inter-station distance) located in the central Betics (western Mediterranean region), a region which has experienced complex geodynamic processes and exhibits strong variations in Moho topography. The technique presented here is robust and can be applied systematically to construct a 3-D model of the crust and uppermost mantle across large networks.
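
    A minimal sketch of the back-azimuth harmonic decomposition follows: receiver functions are fitted at every time sample with constant, cos/sin(baz) and cos/sin(2 baz) terms, and the constant term is kept as the baz-independent trace used in the joint inversion; the synthetic section and its cos(baz) perturbation are placeholders.

```python
# Hedged sketch of harmonic (back-azimuth) decomposition of receiver functions.
# Data dimensions and the synthetic perturbation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_events, n_samples = 60, 400
baz = rng.uniform(0.0, 2.0 * np.pi, n_events)       # back-azimuths (rad)

# Fake receiver-function section: isotropic pulse plus a cos(baz) perturbation
t = np.linspace(0.0, 20.0, n_samples)
iso_part = np.exp(-((t - 5.0) / 0.5) ** 2)
rf = iso_part[None, :] * (1.0 + 0.3 * np.cos(baz)[:, None])
rf += rng.normal(0.0, 0.02, rf.shape)

# Design matrix of harmonic terms (constant, 1*baz and 2*baz periodicities)
H = np.column_stack([np.ones_like(baz),
                     np.cos(baz), np.sin(baz),
                     np.cos(2 * baz), np.sin(2 * baz)])

# Least-squares fit of all time samples at once: rf ~ H @ coeffs
coeffs, *_ = np.linalg.lstsq(H, rf, rcond=None)
baz_independent_rf = coeffs[0]                       # constant term for joint inversion
print("recovery error of the isotropic part:",
      round(float(np.linalg.norm(baz_independent_rf - iso_part)), 3))
```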

  1. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856

  2. Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure

    NASA Astrophysics Data System (ADS)

    Jia, Tianxia

    2011-12-01

    This thesis includes three major parts: (1) body wave analysis of mantle structure under the Calabria slab, (2) Spatial Average Coherency (SPAC) analysis of microtremor to characterize the subsurface structure in urban areas, and (3) surface wave dispersion inversion for shear wave velocity structure. Although these three projects apply different techniques and investigate different parts of the Earth, their aim is the same: to better understand and characterize the subsurface Earth structure by analyzing complex seismic waveforms recorded at the Earth's surface. My first project is body wave analysis of mantle structure under the Calabria slab. Its aim is to better understand the subduction structure of the Calabria slab by analyzing seismograms generated by natural earthquakes. The rollback and subduction of the Calabrian Arc beneath the southern Tyrrhenian Sea is a case study of slab morphology and slab-mantle interactions at short spatial scale. I analyzed seismograms recorded during the PASSCAL CAT/SCAN experiment that traverse the Calabrian slab and upper mantle wedge under the southern Tyrrhenian Sea, examining body wave dispersion, scattering and attenuation. Compressional body waves exhibit dispersion correlating with slab paths, with high-frequency components arriving later than low-frequency components. Body wave scattering and attenuation are also spatially correlated with slab paths. I used this correlation to estimate the positions of slab boundaries, and further suggested that the observed spatial variation in near-slab attenuation could be ascribed to mantle flow patterns around the slab. My second project is Spatial Average Coherency (SPAC) analysis of microtremors for subsurface structure characterization. Shear-wave velocity (Vs) information in soil and rock has been recognized as a critical parameter for site-specific ground-motion prediction studies, which are especially needed in urban areas located in seismically active zones. SPAC analysis of microtremors provides an efficient way to estimate Vs structure. Compared with other Vs estimation methods, SPAC is noninvasive and does not require any active sources, and therefore it is especially useful in big cities. I applied the SPAC method in two urban areas. The first is the historic city of Charleston, South Carolina, where high levels of seismic hazard lead to great public concern; accurate Vs information is therefore critical for seismic site classification and site response studies. The second SPAC study is in Manhattan, New York City, where the depths to the high-velocity contrast and to bedrock vary along the island. The two experiments show that Vs structure can be estimated with good accuracy using the SPAC method compared with borehole and other techniques. SPAC proved to be an effective technique for Vs estimation in urban areas. One important issue in seismology is the inversion of subsurface structures from surface recordings of seismograms. My third project focuses on solving this complex geophysical inverse problem, specifically the inversion of surface-wave phase-velocity dispersion curves for shear-wave velocity. In addition to standard linear inversion, I developed advanced inversion techniques including joint inversion using borehole data as constraints, and nonlinear inversion using Monte Carlo and simulated annealing algorithms. One innovative way of solving the inverse problem is to make inferences from the ensemble of all acceptable models. The statistical features of the ensemble provide a better way to characterize the Earth model.

  3. Application of 2-D travel-time inversion of seismic refraction data to the mid-continent rift beneath Lake Superior

    USGS Publications Warehouse

    Lutter, William J.; Tréhu, Anne M.; Nowack, Robert L.

    1993-01-01

    The inversion technique of Nowack and Lutter (1988a) and Lutter et al. (1990) has been applied to first arrival seismic refraction data collected along Line A of the 1986 Lake Superior GLIMPCE experiment, permitting comparison of the inversion image with an independently derived forward model (Trehu et al., 1991; Shay and Trehu, in press). For this study, the inversion method was expanded to allow variable grid spacing for the bicubic spline parameterization of velocity. The variable grid spacing improved model delineation and data fit by permitting model parameters to be clustered at features of interest. Over 800 first-arrival travel-times were fit with a final RMS error of 0.045 s. The inversion model images a low velocity central graben and smaller flanking half-grabens of the Midcontinent Rift, and higher velocity regions (+0.5 to +0.75 km/s) associated with the Isle Royale and Keweenaw faults, which bound the central graben. Although the forward modeling interpretation gives finer details associated with the near surface expression of the two faults because of the inclusion of secondary reflections and refractions that were not included in the inversion, the inversion model reproduces the primary features of the forward model.

  4. Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo Lopez, J.; Gallardo, L. A.

    2016-12-01

    Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions, we need to provide extra information. In a geological context, this extra information can be a priori knowledge, for example geological information, well-log data, or smoothness constraints, or it can come from measurements of other data types. Joint inversion provides an approach to improve the solution and reduce the errors due to the assumptions of each method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years. For example, Gallardo and Meju (2003), Gallardo and Meju (2004, 2011), and Gallardo et al. (2012) used the gradient directions of the properties to measure the structural similarity between models by minimizing their cross-gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may better characterize specific Earth systems because they account for the relation between the properties. We implemented a code in Fortran to perform a two-dimensional inversion of magnetotelluric and gravity data, which are two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversions. Finally, we applied this technique to magnetotelluric and gravity data from the geothermal zone located in Cerro Prieto, México.
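
    For reference, the cross-gradient measure used in the earlier joint-inversion work cited above can be sketched as follows; the grid spacing and the two test models sharing a boxcar structure are illustrative assumptions.

```python
# Minimal sketch of the cross-gradient structural-similarity measure:
# t = grad(m1) x grad(m2) vanishes where two 2-D models change in parallel.
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Out-of-plane component of the cross product of the two model gradients."""
    d1z, d1x = np.gradient(m1, dz, dx)
    d2z, d2x = np.gradient(m2, dz, dx)
    return d1x * d2z - d1z * d2x

# Two models sharing the same structure give a (numerically) zero cross-gradient
z, x = np.mgrid[0:50, 0:80]
structure = ((x > 30) & (x < 50) & (z > 20) & (z < 35)).astype(float)
resistivity = 100.0 + 50.0 * structure        # e.g. a resistive body
density = 2.4 + 0.3 * structure               # same geometry, different property
print("max |cross-gradient|:", np.abs(cross_gradient(resistivity, density)).max())
```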

  5. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
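
    The two safeguards described in the report can be sketched as below: a tolerance that rounds tiny computed values to zero, and one refinement pass of an approximate inverse. The Newton-Schulz step is a standard choice for such a refinement; the report does not specify which reinversion scheme it uses, so treat this as an assumption.

```python
# Hedged sketch of tolerance rounding plus one refinement pass of an
# approximate inverse (Newton-Schulz step X <- X(2I - AX), a standard choice).
import numpy as np

TOL = 1e-13          # tolerance factor (report suggests ~0.1e-12 for 18 digits)

def round_small(M, tol=TOL):
    """Zero out entries whose magnitude falls below the tolerance."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def refine_inverse(A, X):
    """One Newton-Schulz refinement pass of an approximate inverse X of A."""
    return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

rng = np.random.default_rng(5)
A = rng.random((6, 6)) + 6.0 * np.eye(6)
X = np.linalg.inv(A) + 1e-6 * rng.random((6, 6))     # deliberately perturbed inverse

for label, Xi in [("perturbed", X), ("refined", refine_inverse(A, X))]:
    err = np.linalg.norm(A @ round_small(Xi) - np.eye(6))
    print(f"{label:9s} ||A X - I|| = {err:.2e}")
```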

  6. Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

    Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
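
    A minimal sketch of the inverse-iteration stage is given below for the generalized problem Aq = lambda Bq, assuming the Sturm sequence step has already isolated the root near the chosen shift; dense solves stand in for the banded solver exploited by the original program, and the small test matrices are arbitrary.

```python
# Hedged sketch of shifted inverse iteration for A q = lambda B q (B SPD).
# Dense solves replace the banded solver of the original algorithm.
import numpy as np
from scipy.linalg import eigh

def inverse_iteration(A, B, sigma, n_iter=50, tol=1e-12):
    n = A.shape[0]
    q = np.ones(n) / np.sqrt(n)
    shifted = A - sigma * B
    lam = sigma
    for _ in range(n_iter):
        y = np.linalg.solve(shifted, B @ q)      # (A - sigma B) y = B q
        q_new = y / np.sqrt(y @ (B @ y))         # B-normalize the iterate
        lam_new = (q_new @ (A @ q_new)) / (q_new @ (B @ q_new))
        if abs(lam_new - lam) < tol * abs(lam_new):
            break
        q, lam = q_new, lam_new
    return lam_new, q_new

rng = np.random.default_rng(6)
M = rng.random((5, 5))
A = M + M.T + 5.0 * np.eye(5)                    # symmetric test matrix
B = np.diag(rng.uniform(1.0, 2.0, 5))            # symmetric positive-definite

reference = eigh(A, B, eigvals_only=True)        # for comparison only
lam, q = inverse_iteration(A, B, sigma=reference[0] - 0.05)
print(f"smallest root: reference {reference[0]:.6f}, inverse iteration {lam:.6f}")
```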

  7. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.

  8. Non-destructive testing of ceramic materials using mid-infrared ultrashort-pulse laser

    NASA Astrophysics Data System (ADS)

    Sun, S. C.; Qi, Hong; An, X. Y.; Ren, Y. T.; Qiao, Y. B.; Ruan, Liming M.

    2018-04-01

    The non-destructive testing (NDT) of ceramic materials using a mid-infrared ultrashort-pulse laser is investigated in this study. The discrete ordinate method is applied to solve the transient radiative transfer equation in a 2D semitransparent medium, and the emerging radiative intensity on the boundary serves as input for the inverse analysis. The sequential quadratic programming algorithm is employed as the inverse technique to optimize the objective function, in which the gradient of the objective function with respect to the reconstruction parameters is calculated using the adjoint model. Two reticulated porous ceramics, partially stabilized zirconia and oxide-bonded silicon carbide, are tested. The retrieval results show that the main characteristics of defects, such as optical properties, geometric shapes and positions, can be accurately reconstructed by the present model. The proposed technique is effective and robust for NDT of ceramics even in the presence of measurement errors.

  9. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation

    NASA Astrophysics Data System (ADS)

    Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.

    2016-06-01

    We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative of smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereby the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is unavoidable owing to the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.

  10. Application of mass spectrometer-inverse gas chromatography to study polymer-solvent diffusivity and solubility.

    PubMed

    Galdámez, J Román; Danner, Ronald P; Duda, J Larry

    2007-07-20

    The application of a mass spectrometer detector in capillary column inverse gas chromatography is shown to be a valuable tool in the measurement of diffusion and solubility in polymer-solvent systems. The component specific detector provides excellent results for binary polymer-solvent systems, but it is particularly valuable because it can be readily applied to multicomponent systems. Results for a number of infinitely dilute solvents in poly(vinyl acetate) (PVAc) are reported over a range of temperature from 60 to 150 degrees C. Results are also reported for finite concentrations of toluene and methanol in PVAc from 60 to 110 degrees C. Finally, the technique was applied to study the effect of finite concentrations of toluene on the diffusion coefficients of THF and cyclohexane in PVAc. The experimental data compare well with literature values for both infinite and finite concentrations, indicating that the experimental protocol described in this work is sound.

  11. Formal verification of AI software

    NASA Technical Reports Server (NTRS)

    Rushby, John; Whitehurst, R. Alan

    1989-01-01

    The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.

  12. Unscented Kalman filter assimilation of time-lapse self-potential data for monitoring solute transport

    NASA Astrophysics Data System (ADS)

    Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong

    2017-08-01

    Monitoring the extent and evolution of contaminant plumes in local and regional groundwater systems from existing landfills is critical for contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we built a Kalman filtering cycle to conduct time-lapse data assimilation for monitoring solute transport, tested on a solute transport experiment using a bench-scale physical model. The data assimilation combines state evolution described by a random-walk model with observation correction based on the self-potential forward model. Monitored self-potential data can thus be inverted by the data assimilation technique. As a result, we can reconstruct the dynamic process of the contaminant plume instead of using traditional frame-to-frame static inversion, which may cause inversion artifacts. The data assimilation inversion algorithm was evaluated on noise-added synthetic time-lapse self-potential data. The numerical experiment demonstrates the validity, accuracy and noise tolerance of the dynamic inversion. To validate the proposed algorithm, we also conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimic a real-world contaminant monitoring setup. The results of the physical experiment support the idea that unscented Kalman filter (UKF) data assimilation applied to field time-lapse self-potential data is a potentially useful approach for characterizing the transport of contaminant plumes.
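
    The core of the UKF cycle is the unscented transform; the sketch below propagates a two-parameter state (source position and depth) through a toy potential-like observation operator using sigma points. The operator and the scaling parameters are illustrative assumptions, not the authors' self-potential forward model.

```python
# Hedged sketch of the unscented transform used inside a UKF cycle.
# The toy observation operator below is a placeholder, not the SP forward model.
import numpy as np

def unscented_transform(mean, cov, func, alpha=1.0, beta=2.0, kappa=1.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    sigma_pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    w_m = np.full(2 * n + 1, 0.5 / (n + lam))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    y = np.array([func(p) for p in sigma_pts])       # propagate sigma points
    y_mean = w_m @ y
    y_cov = (w_c[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Toy nonlinear "observation": potentials at electrodes from a source at (x, depth)
electrodes = np.linspace(-5.0, 5.0, 11)
def observe(state):
    return 1.0 / np.sqrt((electrodes - state[0]) ** 2 + state[1] ** 2)

state_mean = np.array([0.0, 2.0])                    # source position and depth
state_cov = np.diag([0.5, 0.2])
pred_mean, pred_cov = unscented_transform(state_mean, state_cov, observe)
print("predicted observation mean:", np.round(pred_mean, 3))
```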

  13. Geo-Acoustic Doppler Spectroscopy: A Novel Acoustic Technique For Surveying The Seabed

    NASA Astrophysics Data System (ADS)

    Buckingham, Michael J.

    2010-09-01

    An acoustic inversion technique, known as Geo-Acoustic Doppler Spectroscopy, has recently been developed for estimating the geo-acoustic parameters of the seabed in shallow water. The technique is unusual in that it utilizes a low-flying, propeller-driven light aircraft as an acoustic source. Both the engine and propeller produce sound and, since they are rotating sources, the acoustic signature of each takes the form of a sequence of narrow-band harmonics. Although the coupling of the harmonics across the air-sea interface is inefficient, due to the large impedance mismatch between air and water, sufficient energy penetrates the sea surface to provide a useable underwater signal at sensors either in the water column or buried in the sediment. The received signals, which are significantly Doppler shifted due to the motion of the aircraft, will have experienced a number of reflections from the seabed and thus they contain information about the sediment. A geo-acoustic inversion of the Doppler-shifted modes associated with each harmonic yields an estimate of the sound speed in the sediment; and, once the sound speed has been determined, the known correlations between it and the remaining geo-acoustic parameters allow all of the latter to be computed. This inversion technique has been applied to aircraft data collected in the shallow water north of Scripps pier, returning values of the sound speed, shear speed, porosity, density and grain size that are consistent with the known properties of the sandy sediment in the channel.

  14. Inversion of calcite twin data for paleostress orientations and magnitudes: A new technique tested and calibrated on numerically-generated and natural data

    NASA Astrophysics Data System (ADS)

    Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc

    2018-01-01

    The inversion of calcite twin data is a powerful tool to reconstruct paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensors from both monophase and polyphase twin datasets. The uncertainties in the parameters of the stress tensors reconstructed by this new technique are evaluated on numerically-generated datasets. The technique not only reliably defines the 5 parameters of the deviatoric stress tensor, but also reliably separates very close superimposed stress tensors (30° of difference in maximum principal stress orientation or switch between σ3 and σ2 axes). The technique is further shown to be robust to sampling bias and to slight variability in the critical resolved shear stress. Due to our still incomplete knowledge of the evolution of the critical resolved shear stress with grain size, our results show that it is recommended to analyze twin data subsets of homogeneous grain size to minimize possible errors, mainly those concerning differential stress values. The methodological uncertainty in principal stress orientations is about ± 10°; it is about ± 0.1 for the stress ratio. For differential stresses, the uncertainty is lower than ± 30%. Applying the technique to vein samples within Mesozoic limestones from the Monte Nero anticline (northern Apennines, Italy) demonstrates its ability to reliably detect and separate tectonically significant paleostress orientations and magnitudes from naturally deformed polyphase samples, hence to fingerprint the regional paleostresses of interest in tectonic studies.

  15. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code for tomographic reconstruction, based on the iterative refinement method for least-squares solutions, is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
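
    For orientation, a classic additive ART (Kaczmarz) sweep for line-integral data is sketched below; the projection matrix, relaxation factor and phantom are synthetic placeholders, and this plain row-action form stands in for the paper's iterative-refinement least-squares variant.

```python
# Hedged sketch of an additive ART (Kaczmarz) reconstruction from line integrals.
# The sparse ray/cell matrix and phantom are synthetic stand-ins.
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5):
    """Row-action ART: repeatedly project the estimate onto each ray equation."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A**2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

rng = np.random.default_rng(7)
A = (rng.random((120, 64)) < 0.2).astype(float)      # ray i intersects cell j
phantom = rng.random(64)
b = A @ phantom + rng.normal(0.0, 0.01, 120)         # noisy line integrals

recon = art(A, b)
print("relative reconstruction error:",
      round(float(np.linalg.norm(recon - phantom) / np.linalg.norm(phantom)), 3))
```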

  16. 3D Acoustic Full Waveform Inversion for Engineering Purpose

    NASA Astrophysics Data System (ADS)

    Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.

    2017-12-01

    Seismic waveform inversion is one of the most actively researched seismic data processing techniques. In recent years, with an increase in marine development projects, seismic surveys have become common for engineering purposes; however, research on applying waveform inversion to such surveys remains limited. Waveform inversion updates the subsurface physical properties by minimizing the difference between modeled and observed data. It can be used to generate an accurate subsurface image; however, the technique consumes substantial computational resources. Its most compute-intensive step is the calculation of the gradient and Hessian values, and this cost is far greater in 3D than in 2D. This paper introduces a new method for calculating gradient and Hessian values that reduces the computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited, so it is inefficient to construct the Hessian and gradient for the entire region (Figure 1). To tackle this problem, we calculate the gradient and the Hessian for a single shot only within the range of the relevant source and receivers, and then sum these contributions over all shots (Figure 2). We demonstrate that restricting the calculation area of the Hessian and gradient for each shot reduces the overall amount of computation and therefore the computation time, and that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to determine an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.

  17. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion (the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation) is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  18. Teleseismic tomography for imaging Earth's upper mantle

    NASA Astrophysics Data System (ADS)

    Aktas, Kadircan

    Teleseismic tomography is an important imaging tool in earthquake seismology, used to characterize lithospheric structure beneath a region of interest. In this study I investigate three different tomographic techniques applied to real and synthetic teleseismic data, with the aim of imaging the velocity structure of the upper mantle. First, by applying well established traveltime tomographic techniques to teleseismic data from southern Ontario, I obtained high-resolution images of the upper mantle beneath the lower Great Lakes. Two salient features of the 3D models are: (1) a patchy, NNW-trending low-velocity region, and (2) a linear, NE-striking high-velocity anomaly. I interpret the high-velocity anomaly as a possible relict slab associated with ca. 1.25 Ga subduction, whereas the low-velocity anomaly is interpreted as a zone of alteration and metasomatism associated with the ascent of magmas that produced the Late Cretaceous Monteregian plutons. The next part of the thesis is concerned with adaptation of existing full-waveform tomographic techniques for application to teleseismic body-wave observations. The method used here is intended to be complementary to traveltime tomography, and to take advantage of efficient frequency-domain methodologies that have been developed for inverting large controlled-source datasets. Existing full-waveform acoustic modelling and inversion codes have been modified to handle plane waves impinging from the base of the lithospheric model at a known incidence angle. A processing protocol has been developed to prepare teleseismic observations for the inversion algorithm. To assess the validity of the acoustic approximation, the processing procedure and modelling-inversion algorithm were tested using synthetic seismograms computed using an elastic Kirchhoff integral method. These tests were performed to evaluate the ability of the frequency-domain full-waveform inversion algorithm to recover topographic variations of the Moho under a variety of realistic scenarios. Results show that frequency-domain full-waveform tomography is generally successful in recovering both sharp and discontinuous features. Thirdly, I developed a new method for creating an initial background velocity model for the inversion algorithm, which is sufficiently close to the true model so that convergence is likely to be achieved. I adapted a method named Deformable Layer Tomography (DLT), which adjusts interfaces between layers rather than velocities within cells. I applied this method to a simple model comprising a single uniform crustal layer and a constant-velocity mantle, separated by an irregular Moho interface. A series of tests was performed to evaluate the sensitivity of the DLT algorithm; the results show that my algorithm produces useful results within a realistic range of incident-wave obliquity, incidence angle and signal-to-noise level. Keywords. Teleseismic tomography, full waveform tomography, deformable layer tomography, lower Great Lakes, crust and upper mantle.

  19. Dynamics of Mount Somma-Vesuvius edifice: from stress field inversion to analogue and numerical modelling

    NASA Astrophysics Data System (ADS)

    De Matteo, Ada; Massa, Bruno; D'Auria, Luca; Castaldo, Raffaele

    2017-04-01

    Geological processes are generally very complex and too slow to be directly observed in their completeness; modelling procedures overcome this limit. The state of stress in the upper lithosphere is mainly responsible for driving geodynamical processes; in order to retrieve the active stress field in a rock volume, stress inversion techniques can be applied to both seismological and structural datasets. This approach has been successfully applied to active tectonic as well as volcanic areas. In this context, the best approach to managing heterogeneous datasets in volcanic environments is to analyse spatial variations of the stress field using robust inversion techniques. The study of volcanic seismicity is an efficient tool to retrieve the spatial and temporal patterns of the pre-, syn- and inter-eruptive stress field: magma migration, as well as the dynamics of the magma chamber and hydrothermal system, are directly connected to volcanic seismicity. Additionally, analysis of the temporal variations of the stress field pattern in volcanoes could be a useful monitoring tool. Recently the stress field acting on several active volcanoes has been investigated by using stress inversion techniques on seismological datasets (Massa et al., 2016). The Bayesian Right Trihedra Method (BRTM; D'Auria and Massa, 2015) is able to successfully manage heterogeneous datasets, allowing the identification of regional fields locally overridden by the stress field due to volcano-specific dynamics. In particular, the analysis of seismicity and stress field inversion at Somma-Vesuvius highlighted the presence of two superposed volumes characterized by different behaviour and stress field patterns: a top volume dominated by an extensional stress field, in accordance with a gravitational-spreading style of deformation, and a bottom volume related to a regional extensional stress field. In addition, in order to evaluate the dynamics of deformation, both analogue and numerical modelling are being performed. Scaled analogue models of Somma-Vesuvius are being built in accordance with the actual geometrical asymmetry of the volcano, varying just a few parameters connected to the uncertainty in the depth and thickness of a buried decoupling layer. Experiments are monitored by an optical stereo imaging system used to build 3D time-lapse models from which the model deformations are retrieved. Simultaneously, a time-dependent 3D finite element model is being developed in a fluid-dynamic context using the same parameters as the analogue model. Finally, a comparative analysis is being made between the model deformations and the DInSAR measurements derived from satellite data in order to estimate the uncertain parameters (i.e., the thickness and viscosity of the ductile layer). Preliminary results of the analogue models are consistent with the hypothesis of active spreading deformation at Somma-Vesuvius.

  20. Damage Diagnosis in Semiconductive Materials Using Electrical Impedance Measurements

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.; Hinton, Yolanda L.

    2008-01-01

    Recent aerospace industry trends have resulted in an increased demand for real-time, effective techniques for in-flight structural health monitoring. A promising technique for damage diagnosis uses electrical impedance measurements of semiconductive materials. By applying a small electrical current into a material specimen and measuring the corresponding voltages at various locations on the specimen, changes in the electrical characteristics due to the presence of damage can be assessed. An artificial neural network uses these changes in electrical properties to provide an inverse solution that estimates the location and magnitude of the damage. The advantage of the electrical impedance method over other damage diagnosis techniques is that it uses the material as the sensor. Simple voltage measurements can be used instead of discrete sensors, resulting in a reduction in weight and system complexity. This research effort extends previous work by employing finite element method models to improve accuracy of complex models with anisotropic conductivities and by enhancing the computational efficiency of the inverse techniques. The paper demonstrates a proof of concept of a damage diagnosis approach using electrical impedance methods and a neural network as an effective tool for in-flight diagnosis of structural damage to aircraft components.

  1. Crustal and Upper Mantle Velocity and Q Structures of Mainland China

    DTIC Science & Technology

    1979-11-01

    ...with identical source-receiver geometry. The generalized surface wave inversion technique was applied...in the recent past. A particularly unusual crustal and upper mantle structure is found underlying the Tibet plateau. Prepared for the Air Force Office of Scientific Research by the Geophysical Laboratory, University of Southern California. Contractor: University of Southern California.

  2. OCT structure, COB location and magmatic type of the S Angolan & SE Brazilian margins from integrated quantitative analysis of deep seismic reflection and gravity anomaly data

    NASA Astrophysics Data System (ADS)

    Cowie, Leanne; Kusznir, Nick; Horn, Brian

    2014-05-01

    Integrated quantitative analysis using deep seismic reflection data and gravity inversion has been applied to the S Angolan and SE Brazilian margins to determine OCT structure, COB location and magmatic type. Knowledge of these margin parameters is of critical importance for understanding rifted continental margin formation processes and in evaluating petroleum systems in deep-water frontier oil and gas exploration. The OCT structure, COB location and magmatic type of the S Angolan and SE Brazilian rifted continental margins are much debated; exhumed and serpentinised mantle have been reported at these margins. Gravity anomaly inversion, incorporating a lithosphere thermal gravity anomaly correction, has been used to determine Moho depth, crustal basement thickness and continental lithosphere thinning. Residual Depth Anomaly (RDA) analysis has been used to investigate OCT bathymetric anomalies with respect to expected oceanic bathymetries, and subsidence analysis has been used to determine the distribution of continental lithosphere thinning. These techniques have been validated for profiles Lusigal 12 and ISE-01 on the Iberian margin. In addition, a joint inversion technique using deep seismic reflection and gravity anomaly data has been applied to the ION-GXT BS1-575 SE Brazil and ION-GXT CS1-2400 S Angola deep seismic reflection lines. The joint inversion method solves for coincident seismic and gravity Moho in the time domain and calculates the lateral variations in crustal basement densities and velocities along the seismic profiles. Gravity inversion, RDA and subsidence analysis along the ION-GXT BS1-575 profile, which crosses the Sao Paulo Plateau and Florianopolis Ridge of the SE Brazilian margin, predict the COB to be located SE of the Florianopolis Ridge. Integrated quantitative analysis shows no evidence for exhumed mantle on this margin profile. The joint inversion technique predicts oceanic crustal thicknesses of between 7 and 8 km with normal oceanic basement seismic velocities and densities. Beneath the Sao Paulo Plateau and Florianopolis Ridge, joint inversion predicts crustal basement thicknesses of 10-15 km, with high values of basement density and seismic velocities under the Sao Paulo Plateau, which are interpreted as indicating a significant magmatic component within the crustal basement. The Sao Paulo Plateau and Florianopolis Ridge are separated by a thin region of crustal basement beneath the salt, interpreted as a regional transtensional structure. Sediment-corrected RDAs and gravity-derived "synthetic" RDAs are of a similar magnitude on oceanic crust, implying negligible mantle dynamic topography. Gravity inversion, RDA and subsidence analysis along the S Angolan ION-GXT CS1-2400 profile suggests that exhumed mantle, corresponding to a magma-poor margin, is absent. The thickness of the earliest oceanic crust, derived from gravity and deep seismic reflection data, is approximately 7 km, consistent with the global average oceanic crustal thickness. The joint inversion predicts a small difference between oceanic and continental crustal basement density and seismic velocity, with the change in basement density and velocity corresponding to the COB independently determined from RDA and subsidence analysis. The difference between the sediment-corrected RDA and that predicted from the gravity inversion crustal thickness variation implies that this margin is experiencing approximately 500 m of anomalous uplift attributed to mantle dynamic uplift.

  3. Estimation of VOC emissions from produced-water treatment ponds in Uintah Basin oil and gas field using modeling techniques

    NASA Astrophysics Data System (ADS)

    Tran, H.; Mansfield, M. L.; Lyman, S. N.; O'Neil, T.; Jones, C. P.

    2015-12-01

    Emissions from produced-water treatment ponds are poorly characterized sources in oil and gas emission inventories that play a critical role in studying elevated winter ozone events in the Uintah Basin, Utah, U.S. Information gaps include unquantified amounts and compositions of gases emitted from these facilities. The emitted gases, commonly referred to as volatile organic compounds (VOCs), are, together with nitrogen oxides (NOX), major precursors for ozone formation in the near-surface layer. Field measurement campaigns using the flux-chamber technique have been performed to measure VOC emissions from a limited number of produced-water ponds in the Uintah Basin of eastern Utah. Although the flux chamber provides accurate measurements at the point of sampling, it covers just a limited area of the ponds and is prone to altering environmental conditions (e.g., temperature, pressure). This raises the need to validate flux-chamber measurements. In this study, we apply an inverse-dispersion modeling technique with evacuated canister sampling to validate the flux-chamber measurements. This modeling technique applies an initial, arbitrary emission rate to estimate pollutant concentrations at pre-defined receptors, and adjusts the emission rate until the estimated pollutant concentrations approximate the measured concentrations at the receptors. The derived emission rates are then compared with flux-chamber measurements and the differences are analyzed. Additionally, we investigate the applicability of the WATER9 wastewater emission model for the estimation of VOC emissions from produced-water ponds in the Uintah Basin. WATER9 estimates the emission of each gas based on properties of the gas, its concentration in the wastewater, and the characteristics of the influent and treatment units. Results of VOC emission estimations using the inverse-dispersion and WATER9 modeling techniques will be reported.
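
    Because receptor concentrations predicted by a dispersion model scale linearly with the source strength for a fixed meteorology, the adjustment loop described above reduces, in the simplest single-source case, to rescaling the trial emission rate to best fit the canister data. The sketch below illustrates only that idea; the function name, receptor values and single-source assumption are hypothetical and not taken from the study, which re-runs the dispersion model and treats each compound separately.

      import numpy as np

      def estimate_emission_rate(q_trial, modeled_conc, measured_conc):
          """Rescale an arbitrary trial emission rate so that modeled receptor
          concentrations best match the canister measurements (least squares).

          q_trial       : emission rate used in the trial dispersion run
          modeled_conc  : model-predicted concentrations at the receptors for q_trial
          measured_conc : measured concentrations at the same receptors
          """
          modeled = np.asarray(modeled_conc, dtype=float)
          measured = np.asarray(measured_conc, dtype=float)
          # Receptor concentrations are linear in source strength for fixed
          # meteorology, so the best-fit rate is a one-parameter projection.
          scale = measured @ modeled / (modeled @ modeled)
          return q_trial * scale

      # Hypothetical numbers: 1 g/s trial run, three receptors
      q_hat = estimate_emission_rate(1.0, [2.1, 0.8, 1.5], [4.0, 1.7, 3.1])
      print(round(q_hat, 2), "g/s")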

  4. Magnetic topology of Co-based inverse opal-like structures

    NASA Astrophysics Data System (ADS)

    Grigoryeva, N. A.; Mistonov, A. A.; Napolskii, K. S.; Sapoletova, N. A.; Eliseev, A. A.; Bouwman, W.; Byelov, D. V.; Petukhov, A. V.; Chernyshov, D. Yu.; Eckerlebe, H.; Vasilieva, A. V.; Grigoriev, S. V.

    2011-08-01

    The magnetic and structural properties of a cobalt inverse opal-like crystal have been studied by a combination of complementary techniques ranging from polarized neutron scattering and superconducting quantum interference device (SQUID) magnetometry to x-ray diffraction. Microradian small-angle x-ray diffraction shows that the inverse opal-like structure (OLS) synthesized by the electrochemical method fully duplicates the three-dimensional net of voids of the template artificial opal. The inverse OLS has a face-centered cubic (fcc) structure with a lattice constant of 640±10 nm and with a clear tendency to a random hexagonal close-packed structure along the [111] axes. Wide-angle x-ray powder diffraction shows that the atomic cobalt structure is described by coexistence of 95% hexagonal close-packed and 5% fcc phases. The SQUID measurements demonstrate that the inverse OLS film possesses easy-plane magnetization geometry with a coercive field of 14.0 ± 0.5 mT at room temperature. The detailed picture of the transformation of the magnetic structure under an in-plane applied field was detected with the help of small-angle diffraction of polarized neutrons. In the demagnetized state the magnetic system consists of randomly oriented magnetic domains. A complex magnetic structure appears upon application of the magnetic field, with nonhomogeneous distribution of magnetization density within the unit element of the OLS. This distribution is determined by the combined effect of the easy-plane geometry of the film and the crystallographic geometry of the opal-like structure with respect to the applied field direction.

  5. Iterative joint inversion of in-situ stress state along Simeulue-Nias Island

    NASA Astrophysics Data System (ADS)

    Agustina, Anisa; Sahara, David P.; Nugraha, Andri Dian

    2017-07-01

    In-situ stress inversion from focal mechanisms requires knowledge of which of the two nodal planes is the fault. This is challenging because of the inherent ambiguity of focal mechanisms: the fault plane and the auxiliary nodal plane cannot be distinguished. A relatively new inversion technique for estimating both the stress and the fault planes was developed by Vavryĉuk in 2014. The fault orientations are determined by applying the fault instability constraint, and the stress is calculated in iterations. In this study, this method is applied to a region of high earthquake density, Simeulue-Batu Island. This area is of particular interest because of the occurrence of two large earthquakes, the 2004 Aceh and 2005 Nias events. The inversion was based on 343 focal mechanisms with magnitude ≥ 5.5 Mw between 25 May 1977 and 25 August 2015 from the Harvard and Global Centroid Moment Tensor (GCMT) catalogs. The area is divided into grids, and the variation of stress orientation and shape ratio is analysed for each grid. The stress inversion results show that there are three segments along Simeulue-Batu Island based on the variation of the σ1 stress orientation. The stress characteristics of each segment are discussed, i.e. shape ratio, principal stress orientation and subduction angle. Interestingly, the highest shape ratio, 0.93, is associated with the large 2004 Aceh earthquake. This suggests that the zonation obtained in this study could also be used as a proxy for the hazard map.

  6. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; the final subsurface velocity model is then generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. The elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite-difference method was applied to simulate an OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the gradient direction was computed using the back-propagation technique and scaled using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.

  7. Phase Inversion: Inferring Solar Subphotospheric Flow and Other Asphericity from the Distortion of Acoustic Waves

    NASA Technical Reports Server (NTRS)

    Gough, Douglas; Merryfield, William J.; Toomre, Juri

    1998-01-01

    A method is proposed for analyzing an almost monochromatic train of waves propagating in a single direction in an inhomogeneous medium that is not otherwise changing in time. An effective phase is defined in terms of the Hilbert transform of the wave function, which is related, via the JWKB approximation, to the spatial variation of the background state against which the wave is propagating. The contaminating effect of interference between the truly monochromatic components of the train is eliminated using its propagation properties. Measurement errors, provided they are uncorrelated, are manifest as rapidly varying noise; although that noise can dominate the raw phase-processed signal, it can largely be removed by low-pass filtering. The intended purpose of the analysis is to determine the distortion of solar oscillations induced by horizontal structural variation and material flow. It should be possible to apply the method directly to sectoral modes. The horizontal phase distortion provides a measure of longitudinally averaged properties of the Sun in the vicinity of the equator, averaged also in radius down to the depth to which the modes penetrate. By combining such averages from different modes, the two-dimensional variation can be inferred by standard inversion techniques. After taking due account of horizontal refraction, it should be possible to apply the technique also to locally sectoral modes that propagate obliquely to the equator and thereby build a network of lateral averages at each radius, from which the full three-dimensional structure of the Sun can, in principle, be determined as an inverse Radon transform.

  8. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, both of which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from synthetic seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
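
    As a rough illustration of how the outer Bregman loop and the proximal forward-backward (soft-thresholding) inner step fit together, the sketch below solves a small l1-synthesis toy problem with a discrepancy-type stopping rule. It is not the paper's constrained TV formulation: substituting a discrete-gradient (TV) term would change the proximal step, and the operator, penalty weight and tolerance used here are hypothetical.

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def bregman_forward_backward(A, b, lam=0.1, n_outer=50, n_inner=100, tol=1e-4):
          """Outer Bregman loop with inexact inner proximal forward-backward solves,
          for min ||x||_1 subject to A x ~ b (toy stand-in for the TV problem)."""
          x = np.zeros(A.shape[1])
          bk = b.copy()
          step = 1.0 / np.linalg.norm(A, 2) ** 2       # forward-backward step size
          for _ in range(n_outer):
              for _ in range(n_inner):                 # inexact subproblem solve
                  x = soft_threshold(x - step * A.T @ (A @ x - bk), step * lam)
              if np.linalg.norm(A @ x - b) <= tol:     # discrepancy stopping criterion
                  break
              bk = bk + (b - A @ x)                    # Bregman update: add back residual
          return x

      # Toy example: recover a sparse model from a few random projections
      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100)) / np.sqrt(40)
      x_true = np.zeros(100)
      x_true[[5, 30, 77]] = [1.0, -2.0, 0.5]
      x_rec = bregman_forward_backward(A, A @ x_true)
      print(np.round(x_rec[[5, 30, 77]], 2))           # ~[1.0, -2.0, 0.5]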

  9. Convergence acceleration in scattering series and seismic waveform inversion using nonlinear Shanks transformation

    NASA Astrophysics Data System (ADS)

    Eftekhar, Roya; Hu, Hao; Zheng, Yingcai

    2018-06-01

    The iterative solution process is fundamental in seismic inversion, for example in full-waveform inversion and in some inverse scattering methods. However, convergence can be slow, or the iteration may even diverge, depending on the initial model used. We propose to apply the Shanks transformation (ST for short) to accelerate the convergence of the iterative solution. ST is a local nonlinear transformation, which transforms a series locally into another series with improved convergence properties. ST works by separating the series into a smooth background trend (the secular term) and an oscillatory transient term, and then accelerating the convergence of the secular term. Since the transformation is local, we do not need to know all the terms in the original series, which is very important in the numerical implementation. The ST performance was tested numerically for both the forward Born series and the inverse scattering series (ISS). The ST has been shown to accelerate the convergence in several examples, including three examples of forward modeling using the Born series and two examples of velocity inversion based on a particular type of the ISS. We observe that ST is effective in accelerating convergence and can even achieve convergence for a weakly divergent scattering series. As such, it provides a useful technique for inverting for a large-contrast medium perturbation in seismic inversion.
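
    For a sequence of partial sums A_{n-1}, A_n, A_{n+1}, the Shanks transformation replaces A_n by S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n). A minimal sketch follows, applied to a slowly converging alternating series rather than to a scattering series; the example series is illustrative only.

      import numpy as np

      def shanks(s):
          """One pass of the Shanks transformation applied to a sequence of
          partial sums s_0, s_1, ..., returning the accelerated sequence."""
          s = np.asarray(s, dtype=float)
          num = s[2:] * s[:-2] - s[1:-1] ** 2
          den = s[2:] + s[:-2] - 2.0 * s[1:-1]
          return num / den

      # Example: partial sums of the slowly converging series ln(2) = 1 - 1/2 + 1/3 - ...
      n = np.arange(1, 12)
      partial = np.cumsum((-1.0) ** (n + 1) / n)
      print(partial[-1])                  # ~0.7365 after 11 terms
      print(shanks(partial)[-1])          # ~0.6932, much closer to ln(2) = 0.6931
      print(shanks(shanks(partial))[-1])  # repeated transformation, closer still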

  10. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds detail on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example of source estimation for synthetic methane emissions from the Barnett shale formation.
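
    A minimal sketch of the core idea, assuming a linear transport operator H, a dictionary D and an l1 penalty on the dictionary coefficients, solved here with plain iterative soft thresholding; the matrices, sizes and regularization weight are hypothetical, and the paper's bound constraints and specific dictionary are omitted.

      import numpy as np

      def sparse_inversion(H, D, y, lam=0.02, n_iter=2000):
          """Estimate emissions x = D @ c with sparse dictionary coefficients c:
          min_c 0.5 * ||H @ D @ c - y||^2 + lam * ||c||_1, solved by ISTA."""
          A = H @ D                                 # forward operator in coefficient space
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          c = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = c - step * (A.T @ (A @ c - y))    # gradient step on the data misfit
              c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
          return D @ c                              # back to emission (grid) space

      # Hypothetical setup: 30 observations, 60 grid cells, identity dictionary
      rng = np.random.default_rng(1)
      H = rng.standard_normal((30, 60)) / np.sqrt(30)   # stand-in transport Jacobian
      D = np.eye(60)                                    # identity dictionary = plain sparsity
      x_true = np.zeros(60)
      x_true[[10, 45]] = [3.0, 1.5]                     # two localized point sources
      x_hat = sparse_inversion(H, D, H @ x_true)
      print(np.flatnonzero(np.abs(x_hat) > 0.5))        # indices of the recovered hot spots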

  11. Full wave two-dimensional modeling of scattering and inverse scattering for layered rough surfaces with buried objects

    NASA Astrophysics Data System (ADS)

    Kuo, Chih-Hao

    Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade reflection and transmission matrices of each rough interface and obtain the composite reflection matrix from the overall scattering medium. The validation of this method against the Method of Moments (MoM) and Small Perturbation Method (SPM) is addressed, and numerical results which investigate the potential of low frequency radar systems in estimating deep soil moisture are presented. Computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF band radar measurements is devised. The top soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time delay echoes from VHF/UHF band radar observations. Numerical studies to investigate the accuracy of the proposed inversion technique in the presence of errors are addressed.

  12. 2D Inversion of Transient Electromagnetic Method (TEM)

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Luís Porsani, Jorge; Acácio Monteiro dos Santos, Fernando

    2017-04-01

    A new methodology was developed for 2D inversion of the Transient Electromagnetic Method (TEM). The methodology consists of a set of routines in Matlab code for modeling and inversion of TEM data and the determination of the most efficient field array for the problem. In this research, the 2D TEM modeling uses a finite-difference discretization. To solve the inverse problem, an algorithm based on the Marquardt technique, also known as ridge regression, was applied. The algorithm is stable and efficient and is widely used in geoelectrical inversion problems. The main advantage of a 1D survey is rapid data acquisition over a large area, but in regions with two-dimensional structures, or where more detail is needed, two-dimensional interpretation methodologies are essential. For efficient field acquisition we used the fixed-loop array in an innovative way, with a square transmitter loop (200 m x 200 m) and 25 m spacing between the sounding points. The TEM surveys were conducted only inside the transmitter loop, in order to avoid negative apparent resistivity values. Although it is possible to model negative values, they make convergence of the inversion more difficult. The methodology was therefore designed to optimize data acquisition, since only one transmitter loop layout on the surface is required for each series of soundings inside the loop. The algorithms were tested with synthetic data, and the results were essential to the interpretation of the real data and will be useful in future work. A 2D TEM inversion was then successfully carried out on real data acquired over the Paraná Sedimentary Basin (PSB). The results indicate a robust geoelectrical characterization of the sedimentary and crystalline aquifers in the PSB. Using this new and relevant approach for 2D TEM inversion, this research therefore effectively contributed to mapping the most promising regions for groundwater exploration. In addition, new geophysical software was developed that can be applied as an important tool in many geological/hydrogeological applications and for educational purposes.
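
    The Marquardt (ridge-regression) update damps the Gauss-Newton step by adding a constant to the diagonal of the normal-equations matrix. A minimal sketch follows using a toy exponential-decay forward model as a stand-in for the 2D TEM finite-difference response; the model, damping factor and data are hypothetical.

      import numpy as np

      def marquardt_step(forward, jacobian, m, d_obs, lam):
          """One Marquardt (ridge-regression) update:
          dm = (J^T J + lam*I)^-1 J^T (d_obs - f(m))."""
          r = d_obs - forward(m)
          J = jacobian(m)
          JtJ = J.T @ J
          dm = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ r)
          return m + dm

      # Toy forward model standing in for the TEM response: d_i = m0 * exp(-m1 * t_i)
      t = np.linspace(0.1, 2.0, 20)
      forward = lambda m: m[0] * np.exp(-m[1] * t)
      jacobian = lambda m: np.column_stack([np.exp(-m[1] * t),
                                            -m[0] * t * np.exp(-m[1] * t)])

      m_true, m = np.array([2.0, 1.5]), np.array([1.0, 0.5])
      d_obs = forward(m_true)
      for _ in range(20):
          m = marquardt_step(forward, jacobian, m, d_obs, lam=1e-2)
      print(np.round(m, 3))   # converges toward [2.0, 1.5]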

  13. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulties when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion quickly brings the inversion close to the 'true' solution and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. Slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of advantages over other methods. Details will be reported at the meeting.
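
    The first step's global search can be illustrated with a bare-bones simulated annealing loop over a low-dimensional parameter vector; the misfit function, bounds, step sizes and cooling schedule below are hypothetical placeholders, not the Adaptive Simulated Annealing implementation or the elastic dislocation misfit used in the study.

      import numpy as np

      def simulated_annealing(misfit, x0, bounds, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
          """Bare-bones simulated annealing: random perturbations, uphill moves
          accepted with probability exp(-delta/T), geometric cooling."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          x = np.array(x0, dtype=float)
          f = misfit(x)
          best_x, best_f = x.copy(), f
          T = t0
          for _ in range(n_iter):
              cand = np.clip(x + rng.normal(scale=0.05 * (hi - lo)), lo, hi)
              fc = misfit(cand)
              if fc < f or rng.random() < np.exp(-(fc - f) / T):
                  x, f = cand, fc
                  if f < best_f:
                      best_x, best_f = x.copy(), f
              T *= cooling
          return best_x, best_f

      # Hypothetical 2-parameter "geometry" misfit with many shallow local minima
      misfit = lambda p: ((p[0] - 45.0) ** 2 / 100.0 + (p[1] - 10.0) ** 2
                          + 0.2 * np.cos(2.0 * np.pi * p[0] / 10.0))
      best, _ = simulated_annealing(misfit, x0=[10.0, 2.0], bounds=[(0.0, 90.0), (0.0, 20.0)])
      print(np.round(best, 1))   # typically lands near the global minimum at (45, 10)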

  14. The transient divided bar method for laboratory measurements of thermal properties

    NASA Astrophysics Data System (ADS)

    Bording, Thue S.; Nielsen, Søren B.; Balling, Niels

    2016-12-01

    Accurate information on the thermal conductivity and thermal diffusivity of materials is of central importance in geoscience and engineering problems involving the transfer of heat. Several methods, including the classical divided bar technique, are available for laboratory measurements of thermal conductivity, but far fewer exist for thermal diffusivity. We have generalized the divided bar technique to the transient case, in which thermal conductivity, volumetric heat capacity and thereby also thermal diffusivity are measured simultaneously. As the density of samples is easily determined independently, specific heat capacity can also be determined. The finite element formulation provides a flexible forward solution for heat transfer across the bar, and thermal properties are estimated by inverse Monte Carlo modelling. This methodology enables a proper quantification of experimental uncertainties on measured thermal properties and information on their origin. The developed methodology was applied to various materials, including a standard ceramic material and different rock samples, and the measured results were compared with results from the traditional steady-state divided bar and an independent line-source method. All measurements show highly consistent results, with excellent reproducibility and high accuracy. For conductivity the obtained uncertainty is typically 1-3 per cent, and for diffusivity the uncertainty may be reduced to about 3-5 per cent. The main uncertainty originates from the presence of thermal contact resistance associated with the internal interfaces in the bar. These resistances are not resolved during inversion and it is imperative that they are minimized. The proposed procedure is simple and may quite easily be implemented in the many steady-state divided bar systems in operation. A thermally controlled bath, as applied here, may not be needed; simpler systems, such as applying temperature-controlled water directly from a tap, may also be used.

  15. Transdimensional, hierarchical, Bayesian inversion of ambient seismic noise: Australia

    NASA Astrophysics Data System (ADS)

    Crowder, E.; Rawlinson, N.; Cornwell, D. G.

    2017-12-01

    We present models of crustal velocity structure in southeastern Australia using a novel transdimensional, hierarchical Bayesian inversion approach. The inversion is applied to long-duration ambient noise cross-correlations. The study area of SE Australia is thought to represent the eastern margin of Gondwana. Conflicting tectonic models have been proposed to explain the formation of eastern Gondwana and the enigmatic geological relationships in Bass Strait, which separates Tasmania and the mainland. A geologically complex area of crustal accretion, Bass Strait may contain part of an exotic continental block entrained in colliding crusts. Ambient noise data recorded by an array of 24 seismometers are used to produce a high-resolution 3D shear wave velocity model of Bass Strait. Phase velocity maps in the period range 2-30 s are produced and subsequently inverted for 3D shear wave velocity structure. The transdimensional, hierarchical Bayesian inversion technique proves far superior to linearised inversion: the inversion model is dynamically parameterised during the process, implicitly controlled by the data, and noise is treated as an inversion unknown. The resulting shear wave velocity model shows three sedimentary basins in Bass Strait constrained by slow shear velocities (2.4-2.9 km/s) at 2-10 km depth. These failed rift basins from the breakup of Australia-Antarctica appear to be overlying thinned crust, where typical mantle velocities of 3.8-4.0 km/s occur at depths greater than 20 km. High shear wave velocities (3.7-3.8 km/s) in our new model also match well with regions of high magnetic and gravity anomalies. Furthermore, we use both Rayleigh and Love wave phase data to construct Vsv and Vsh maps, which are used to estimate crustal radial anisotropy in the Bass Strait. We interpret the structures delineated by our velocity models as supporting the presence and extent of the exotic Precambrian micro-continent (the Selwyn Block) that was most likely entrained during crustal accretion.

  16. Determination of the rCBF in the Amygdala and Rhinal Cortex Using a FAIR-TrueFISP Sequence

    PubMed Central

    Martirosian, Petros; Klose, Uwe; Nägele, Thomas; Schick, Fritz; Ernemann, Ulrike

    2011-01-01

    Objective Brain perfusion can be assessed non-invasively by modern arterial spin labeling MRI. The FAIR (flow-sensitive alternating inversion recovery)-TrueFISP (true fast imaging in steady precession) technique was applied for regional assessment of cerebral blood flow in brain areas close to the skull base, since this approach provides low sensitivity to magnetic susceptibility effects. The investigation of the rhinal cortex and the amygdala is a potentially important feature for the diagnosis and research on dementia in its early stages. Materials and Methods Twenty-three subjects with no structural or psychological impairment were investigated. FAIR-TrueFISP quantitative perfusion data were evaluated in the amygdala on both sides and in the pons. A radiofrequency FOCI (frequency offset corrected inversion) preparation pulse was used for slice-selective inversion. After a time delay of 1.2 sec, data acquisition began. Imaging slice thickness was 5 mm and inversion slab thickness for slice-selective inversion was 12.5 mm. Image matrix size for perfusion images was 64 × 64 with a field of view of 256 × 256 mm, resulting in a spatial resolution of 4 × 4 × 5 mm. Repetition time was 4.8 ms; echo time was 2.4 ms. Acquisition time for the 50 sets of FAIR images was 6:56 min. Data were compared with perfusion data from the literature. Results Perfusion values in the right amygdala, left amygdala and pons were 65.2 (± 18.2) mL/100 g/minute, 64.6 (± 21.0) mL/100 g/minute, and 74.4 (± 19.3) mL/100 g/minute, respectively. These values were higher than formerly published data using continuous arterial spin labeling but similar to 15O-PET (oxygen-15 positron emission tomography) data. Conclusion The FAIR-TrueFISP approach is feasible for the quantitative assessment of perfusion in the amygdala. Data are comparable with formerly published data from the literature. The applied technique provided excellent image quality, even for brain regions located at the skull base in the vicinity of marked susceptibility steps. PMID:21927556

  17. Interpretaion of synthetic seismic time-lapse monitoring data for Korea CCS project based on the acoustic-elastic coupled inversion

    NASA Astrophysics Data System (ADS)

    Oh, J.; Min, D.; Kim, W.; Huh, C.; Kang, S.

    2012-12-01

    CCS (Carbon Capture and Storage) is one of the most promising methods to reduce CO2 emissions. To evaluate the success of a CCS project, various geophysical monitoring techniques have been applied. Among them, time-lapse seismic monitoring is one of the most effective methods to investigate the migration of the CO2 plume. To monitor the injected CO2 plume accurately, it is necessary to interpret seismic monitoring data using not only imaging techniques but also full waveform inversion, because subsurface material properties can be estimated through the inversion. However, previous work on interpreting seismic monitoring data has mainly been based on imaging techniques. In this study, we perform frequency-domain full waveform inversion for synthetic data obtained by acoustic-elastic coupled modeling for a geological model based on the Ulleung Basin, which is one of the CO2 storage prospects in Korea. We suppose the injection layer is located in fault-related anticlines in the Dolgorae Deformed Belt and, for a more realistic situation, we contaminate the synthetic monitoring data with random noise and outliers. We perform the time-lapse full waveform inversion for two scenarios. In one scenario, the injected CO2 plume migrates within the injection layer and is stably captured. In the other scenario, the injected CO2 plume leaks through a weak part of the cap rock. Using the inverted P- and S-wave velocities and Poisson's ratio, we were able to detect the migration of the injected CO2 plume. Acknowledgment: This work was financially supported by the Brain Korea 21 project of Energy Systems Engineering, the "Development of Technology for CO2 Marine Geological Storage" program funded by the Ministry of Land, Transport and Maritime Affairs (MLTM) of Korea, and the Korea CCS R&D Center (KCRC) grant funded by the Korea government (Ministry of Education, Science and Technology) (No. 2012-0008926).

  18. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; MariñO, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with each other and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared with the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.

  19. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be obtained. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and by decreasing the acquisition range the correlations increase and become non-linear. It is further investigated how the waveform and the parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from TD field measurements.
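
    A bare-bones random-walk Metropolis sampler illustrates how such a 1-D MCMC yields full posterior distributions and parameter correlations; the exponential "decay" forward model, noise level and starting point below are hypothetical placeholders, not the Cole-Cole forward response used in the paper.

      import numpy as np

      def metropolis(log_post, m0, step, n_samples=20000, seed=0):
          """Random-walk Metropolis sampler; returns the chain of visited models."""
          rng = np.random.default_rng(seed)
          m = np.array(m0, dtype=float)
          lp = log_post(m)
          chain = np.empty((n_samples, m.size))
          for i in range(n_samples):
              cand = m + step * rng.standard_normal(m.size)
              lp_cand = log_post(cand)
              if np.log(rng.random()) < lp_cand - lp:    # Metropolis acceptance rule
                  m, lp = cand, lp_cand
              chain[i] = m
          return chain

      # Placeholder forward model and synthetic data (NOT the Cole-Cole TDIP response)
      t = np.linspace(0.01, 1.0, 30)
      fwd = lambda p: p[0] * np.exp(-t / p[1])           # chargeability-like decay curve
      rng = np.random.default_rng(1)
      sigma = 0.005
      d_obs = fwd([0.1, 0.3]) + sigma * rng.standard_normal(t.size)
      log_post = lambda p: (-np.inf if np.any(p <= 0)
                            else -0.5 * np.sum((fwd(p) - d_obs) ** 2) / sigma ** 2)
      chain = metropolis(log_post, m0=[0.05, 0.5], step=np.array([0.005, 0.02]))
      samples = chain[5000:]                             # discard burn-in
      print(samples.mean(axis=0), samples.std(axis=0))   # posterior means and uncertainties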

  20. SU-E-T-558: An Exploratory RF Pulse Sequence Technique Used to Induce Differential Heating in Tissues Containing Iron Oxide Nanoparticles for a Possible Hyperthermic Adjuvant Effect to Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, S; Ionascu, D; Wilson, G

    2014-06-01

    Purpose: In pre-clinical trials of cancer thermotherapy, hyperthermia can be induced by exposing localized super-paramagnetic iron oxide nanoparticles (SPION) to external alternating magnetic fields generated by a solenoid electrical circuit (Zhao et al., Theranostics 2012). Alternatively, an RF pulse technique implemented in a regular MRI system is explored here as a possible hyperthermia induction technique. Methods: A new thermal RF pulse sequence was developed using the Philips pulse programming tool for the 3T Ingenia MRI system to provide a sinusoidal magnetic field alternating at a frequency of 1.43 kHz (multiples of sine waves of 0.7 ms period) before each excitation RF pulse for imaging. The duration of each thermal RF pulse routine was approximately 3 min, and the thermal pulse was applied multiple times to a phantom containing different concentrations (high, medium and low) of SPION samples. After each application of the thermal pulse, the temperature change was estimated by measuring the phase changes in a T1-weighted inversion-prepared multi-shot turbo field echo (TFE) sequence (TR = 5.5 ms, TE = 2.7 ms, inversion time = 200 ms). Results: The phase values and the relative differences among them changed as the number of applied thermal RF pulses increased. After the 5th application of the thermal RF pulse, the relative phase differences increased significantly, suggesting thermal activation of the SPION. The increase of the phase difference was approximately linear with the SPION concentration. Conclusion: A sinusoidal RF pulse from the MRI system may be utilized to selectively thermally activate tissues containing super-paramagnetic iron oxide nanoparticles.

  1. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
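
    The alternating structure can be sketched with a consensus-style augmented Lagrangian (ADMM) loop for two quadratic component misfits that must share one model; the matrices and data below are synthetic stand-ins for the body-wave and surface-wave subproblems, and the actual solver, regularization and parameterizations are of course far more involved.

      import numpy as np

      def consensus_inversion(subproblems, n_params, rho=1.0, n_iter=100):
          """Consensus-style augmented Lagrangian loop for several quadratic
          component objectives f_i(m) = 0.5*||A_i m - d_i||^2 sharing one model."""
          z = np.zeros(n_params)                         # common (consensus) model
          u = [np.zeros(n_params) for _ in subproblems]  # scaled Lagrange multipliers
          for _ in range(n_iter):
              ms = []
              for (A, d), ui in zip(subproblems, u):
                  # separate solve of each component problem (e.g. body waves, surface waves)
                  m_i = np.linalg.solve(A.T @ A + rho * np.eye(n_params),
                                        A.T @ d + rho * (z - ui))
                  ms.append(m_i)
              z = np.mean([m_i + ui for m_i, ui in zip(ms, u)], axis=0)   # update common model
              u = [ui + m_i - z for m_i, ui in zip(ms, u)]                # multiplier update
          return z

      # Synthetic test: two data subsets observing the same 5-parameter model
      rng = np.random.default_rng(0)
      m_true = rng.standard_normal(5)
      A1, A2 = rng.standard_normal((20, 5)), rng.standard_normal((15, 5))
      m_joint = consensus_inversion([(A1, A1 @ m_true), (A2, A2 @ m_true)], n_params=5)
      print(float(np.max(np.abs(m_joint - m_true))))   # ~0: the component solves agree on one model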

  2. Recent Advances and Field Trial Results Integrating Cosmic Ray Muon Tomography with Other Data Sources for Mineral Exploration

    NASA Astrophysics Data System (ADS)

    Schouten, D.

    2015-12-01

    CRM GeoTomography Technologies, Inc. is leading the way in applying muon tomography to discovery and definition of dense ore bodies for mineral exploration and resource estimation. We have successfully imaged volcanogenic massive sulfide (VMS) deposits at mines in North America using our suite of field-proven muon tracking detectors, and are at various stages of development for other applications. Recently we developed in-house inversion software that integrates data from assays, surface and borehole gravity, and underground muon flux measurements. We have found that the differing geophysical data sources provide complementary information and that dramatic improvements in inversion results are attained using various inversion performance metrics related to the excess tonnage of the mineral deposits, as well as their spatial extents and locations. This presentation will outline field tests of muon tomography performed by CRM Geotomography in some real world examples, and will demonstrate the effectiveness of joint muon tomography, assay and gravity inversion techniques in field tests (where data are available) and in simulations.

  3. Waveform inversion for 3-D earth structure using the Direct Solution Method implemented on vector-parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Hara, Tatsuhiko

    2004-08-01

    We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation with the maximum angular degree 16. We find significant low velocities under south Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since resolution for Q is not good, we develop a new technique in which power spectra are used as data for inversion. We find good correlation between long wavelength patterns of Vs and Q in the transition zone such as high Vs and high Q under the western Pacific.

  4. 3D linear inversion of magnetic susceptibility data acquired by frequency domain EMI

    NASA Astrophysics Data System (ADS)

    Thiesson, J.; Tabbagh, A.; Simon, F.-X.; Dabas, M.

    2017-01-01

    Low induction number EMI instruments are able to simultaneously measure a soil's apparent magnetic susceptibility and electrical conductivity. This family of dual measurement instruments is highly useful for the analysis of soils and archeological sites. However, the electromagnetic properties of soils are found to vary over considerably different ranges: whereas their electrical conductivity varies from ≤ 0.1 to ≥ 100 mS/m, their relative magnetic permeability remains within a very small range, between 1.0001 and 1.01 SI. Consequently, although apparent conductivity measurements need to be inverted using non-linear processes, the variations of the apparent magnetic susceptibility can be approximated through the use of linear processes, as in the case of the magnetic prospection technique. Our proposed 3D inversion algorithm starts from apparent susceptibility data sets, acquired using different instruments over a given area. A reference vertical profile is defined by considering the mode of the vertical distributions of both the electrical resistivity and of the magnetic susceptibility. At each point of the mapped area, the reference vertical profile response is subtracted to obtain the apparent susceptibility variation dataset. A 2D horizontal Fourier transform is applied to these variation datasets and to the dipole (impulse) response of each instrument, a (vertical) 1D inversion is performed at each point in the spectral domain, and finally the resulting dataset is inverse transformed to restore the apparent 3D susceptibility variations. It has been shown that when applied to synthetic results, this method is able to correct the apparent deformations of a buried object resulting from the geometry of the instrument, and to restore reliable quantitative susceptibility contrasts. It also allows the thin layer solution, similar to that used in magnetic prospection, to be implemented. When applied to field data it initially delivers a level of contrast comparable to that obtained with a non-linear 3D inversion. Over four different sites, this method is able to produce, following an acceptably short computation time, realistic values for the lateral and vertical variations in susceptibility, which are significantly different to those given by a point-by-point 1D inversion.
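
    In the simplest single-instrument case, the per-wavenumber 1-D inversion collapses to a damped (Wiener-style) division by the instrument's impulse-response spectrum. The sketch below illustrates that reduced chain (2-D FFT, per-wavenumber inversion, inverse 2-D FFT) with a hypothetical Gaussian footprint and damping value; the reference-profile subtraction and the multi-instrument, multi-layer solve of the published algorithm are omitted.

      import numpy as np

      def spectral_domain_inversion(d_kappa, footprint, eps=1e-2):
          """Recover a susceptibility-variation map from apparent-susceptibility data:
          2-D FFT -> damped per-wavenumber inversion -> inverse 2-D FFT."""
          D = np.fft.fft2(d_kappa)                   # data in the wavenumber domain
          H = np.fft.fft2(footprint)                 # instrument impulse-response spectrum
          K = np.conj(H) / (np.abs(H) ** 2 + eps)    # per-wavenumber damped least squares
          return np.real(np.fft.ifft2(K * D))

      # Hypothetical example: a compact buried anomaly blurred by a Gaussian footprint
      n = 64
      y, x = np.mgrid[0:n, 0:n]
      footprint = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 18.0)
      footprint = np.fft.ifftshift(footprint / footprint.sum())
      truth = np.zeros((n, n))
      truth[30:34, 30:34] = 1.0
      data = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(footprint)))
      model = spectral_domain_inversion(data, footprint)
      print(round(float(model.max()), 2), "vs true contrast", 1.0)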

  5. High-Temperature Lubricant Analyses Using the System for Thermal Diagnostic Studies (STDS). A Feasibility Study

    DTIC Science & Technology

    1990-07-01

    ...permeation chromatography (GPC) have been applied to lubricant type samples. Most recently the newly introduced supercritical fluid chromatography (SFC)... fluids, such as lubricants and hydraulic fluids, can also be examined using various inverse chromatography procedures. Another mode, known as reaction... introduction of new gaseous extraction techniques, e.g., supercritical fluid extraction, procedures such as IGC will probably be developed for vastly...

  6. New Inversion and Interpretation of Public-Domain Electromagnetic Survey Data from Selected Areas in Alaska

    NASA Astrophysics Data System (ADS)

    Smith, B. D.; Kass, A.; Saltus, R. W.; Minsley, B. J.; Deszcz-Pan, M.; Bloss, B. R.; Burns, L. E.

    2013-12-01

    Public-domain airborne geophysical surveys (combined electromagnetics and magnetics), mostly collected for and released by the State of Alaska, Division of Geological and Geophysical Surveys (DGGS), are a unique and valuable resource for both geologic interpretation and geophysical methods development. A new joint effort by the US Geological Survey (USGS) and the DGGS aims to add value to these data through the application of novel advanced inversion methods and through innovative and intuitive display of data: maps, profiles, voxel-based models, and displays of estimated inversion quality and confidence. Our goal is to make these data even more valuable for interpretation of geologic frameworks, geotechnical studies, and cryosphere studies, by producing robust estimates of subsurface resistivity that can be used by non-geophysicists. The datasets, which are in the public domain, include 39 frequency-domain electromagnetic datasets collected since 1993, and continue to grow with 5 more data releases pending in 2013. The majority of these datasets were flown for mineral resource purposes, with one survey designed for infrastructure analysis. In addition, several USGS datasets are included in this study. The USGS has recently developed new inversion methodologies for airborne EM data and has begun to apply these and other new techniques to the available datasets. These include a trans-dimensional Markov Chain Monte Carlo technique, laterally-constrained regularized inversions, and deterministic inversions which include calibration factors as a free parameter. Incorporation of the magnetic data as an additional constraining dataset has also improved the inversion results. Processing has been completed in several areas, including the Fortymile and Alaska Highway surveys, and continues in others such as the Styx River and Nome surveys. Utilizing these new techniques, we provide models beyond the apparent resistivity maps supplied by the original contractors, allowing us to produce a variety of products, such as maps of resistivity as a function of depth or elevation, cross section maps, and 3D voxel models, which have been treated consistently both in terms of processing and error analysis throughout the state. These products facilitate a more fruitful exchange between geologists and geophysicists and a better understanding of uncertainty, and the process results in iterative development and improvement of geologic models, both on small and large scales.

  7. Topology-optimized dual-polarization Dirac cones

    NASA Astrophysics Data System (ADS)

    Lin, Zin; Christakis, Lysander; Li, Yang; Mazur, Eric; Rodriguez, Alejandro W.; Lončar, Marko

    2018-02-01

    We apply a large-scale computational technique, known as topology optimization, to the inverse design of photonic Dirac cones. In particular, we report on a variety of photonic crystal geometries, realizable in simple isotropic dielectric materials, which exhibit dual-polarization Dirac cones. We present photonic crystals of different symmetry types, such as fourfold and sixfold rotational symmetries, with Dirac cones at different points within the Brillouin zone. The demonstrated and related optimization techniques open avenues to band-structure engineering and manipulating the propagation of light in periodic media, with possible applications to exotic optical phenomena such as effective zero-index media and topological photonics.

  8. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
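
    A toy end-to-end simulation of the idea: each single-pixel measurement corresponds to one DCT coefficient of the scene (obtained in practice by projecting a pair of complementary patterns and differencing the detector readings), and the image is retrieved with an inverse DCT from a sub-Nyquist subset of coefficients. The scene, sizes and kept fraction are hypothetical, and a full-colour image would be handled per channel.

      import numpy as np
      from scipy.fft import dctn, idctn

      def single_pixel_dct_image(scene, keep_fraction=0.25):
          """Simulate single-pixel imaging with DCT structured illumination: each
          measurement is one DCT coefficient of the scene; keeping only low-order
          coefficients gives a sub-Nyquist reconstruction via the inverse DCT."""
          coeffs = dctn(scene, norm='ortho')        # what the detector measurements encode
          ny, nx = scene.shape
          ky, kx = int(ny * keep_fraction), int(nx * keep_fraction)
          measured = np.zeros_like(coeffs)
          measured[:ky, :kx] = coeffs[:ky, :kx]     # only the patterns actually projected
          return idctn(measured, norm='ortho')      # image retrieved by inverse DCT

      # Synthetic grayscale scene (one colour channel of the full-colour case)
      scene = np.zeros((64, 64))
      scene[20:44, 20:44] = 1.0
      image = single_pixel_dct_image(scene)
      print(round(float(image[32, 32]), 2), round(float(image[5, 5]), 2))  # ~1 inside, ~0 outside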

  9. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
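
    A compact illustration of residual bootstrapping for parameter uncertainties, using a linearized forward model so that each refit is a plain least-squares solve; the design matrix, noise level and parameter values are hypothetical, whereas the study refits a nonlinear dislocation/magma-chamber model with the Monte Carlo optimizers at each bootstrap replicate.

      import numpy as np

      def bootstrap_ci(G, d, n_boot=2000, level=95, seed=0):
          """Residual-bootstrap confidence intervals for a least-squares model fit.
          G : design matrix linking source parameters to deformation observations
          d : observed deformation vector (GPS / tilt / leveling)."""
          rng = np.random.default_rng(seed)
          m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
          residuals = d - G @ m_hat
          samples = np.empty((n_boot, G.shape[1]))
          for i in range(n_boot):
              d_star = G @ m_hat + rng.choice(residuals, size=d.size, replace=True)
              samples[i], *_ = np.linalg.lstsq(G, d_star, rcond=None)
          lo, hi = np.percentile(samples, [(100 - level) / 2, (100 + level) / 2], axis=0)
          corr = np.corrcoef(samples, rowvar=False)     # parameter trade-offs (correlations)
          return m_hat, lo, hi, corr

      # Hypothetical linearized problem: 50 observations, 3 source parameters
      rng = np.random.default_rng(2)
      G = rng.standard_normal((50, 3))
      d = G @ np.array([1.0, -0.5, 2.0]) + 0.1 * rng.standard_normal(50)
      m_hat, lo, hi, corr = bootstrap_ci(G, d)
      print(np.round(m_hat, 2), np.round(hi - lo, 2))   # estimates and 95% interval widths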

  10. Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.

    2017-02-01

    In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.

  11. Wavelength modulation spectroscopy--digital detection of gas absorption harmonics based on Fourier analysis.

    PubMed

    Mei, Liang; Svanberg, Sune

    2015-03-20

    This work presents a detailed study of the theoretical aspects of the Fourier analysis method, which has been utilized for gas absorption harmonic detection in wavelength modulation spectroscopy (WMS). The lock-in detection of the harmonic signal is accomplished by studying the phase term of the inverse Fourier transform of the Fourier spectrum that corresponds to the harmonic signal. The mathematics and the corresponding simulation results are given for each procedure when applying the Fourier analysis method. The present work provides a detailed view of the WMS technique when applying the Fourier analysis method.

  12. Monitoring and inversion on land subsidence over mining area with InSAR technique

    USGS Publications Warehouse

    Wang, Y.; Zhang, Q.; Zhao, C.; Lu, Z.; Ding, X.

    2011-01-01

    The town of Wulanmulun, located in Inner Mongolia, hosts some of the main mining areas of the Shendong Company, such as the Shangwan and Bulianta coal mines, and has been suffering serious mine collapse as underground mining proceeds. We use ALOS/PALSAR data to extract land deformation over these regions, applying the Small Baseline Subsets (SBAS) method. We then compared the InSAR results with the underground mining activities and found high correlations between them. Lastly, we applied the distributed dislocation (Okada) model to invert for the mine collapse mechanism. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE).

  13. Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task

    NASA Technical Reports Server (NTRS)

    Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.

    1978-01-01

    Color enhancement techniques were applied to LACIE LANDSAT segments to determine whether such enhancement can assist crop identification analysis. The procedure involved increasing the color range by removing correlation between components. First, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower-order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.
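
    The enhancement chain described (principal-component transform, variance equalization, inverse transform) is essentially a decorrelation stretch; a minimal sketch on a synthetic three-band image follows, leaving out the speckle filtering of the lower-order components. The image and scaling choices are hypothetical.

      import numpy as np

      def decorrelation_stretch(image):
          """Principal-component transform, equalization of component variances,
          inverse transform back to the original bands (colour-range enhancement).
          image : (rows, cols, bands) array, e.g. a LANDSAT false-colour composite."""
          rows, cols, bands = image.shape
          pixels = image.reshape(-1, bands).astype(float)
          mean = pixels.mean(axis=0)
          cov = np.cov(pixels - mean, rowvar=False)
          evals, evecs = np.linalg.eigh(cov)                        # principal axes
          pcs = (pixels - mean) @ evecs                             # forward PC transform
          pcs *= np.sqrt(evals.max() / np.maximum(evals, 1e-12))    # equalize variances
          return (pcs @ evecs.T + mean).reshape(rows, cols, bands)  # inverse transform

      # Tiny synthetic 3-band image with strongly correlated bands
      rng = np.random.default_rng(0)
      base = rng.random((32, 32, 1))
      img = np.concatenate([base, 0.9 * base, 0.8 * base], axis=2) + 0.05 * rng.random((32, 32, 3))
      out = decorrelation_stretch(img)
      print(np.round(np.corrcoef(out.reshape(-1, 3), rowvar=False), 2))   # off-diagonals ~0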

  14. Magnetic-field-induced crossover from the inverse Faraday effect to the optical orientation in EuTe

    NASA Astrophysics Data System (ADS)

    Pavlov, V. V.; Pisarev, R. V.; Nefedov, S. G.; Akimov, I. A.; Yakovlev, D. R.; Bayer, M.; Henriques, A. B.; Rappl, P. H. O.; Abramof, E.

    2018-05-01

    A time-resolved optical pump-probe technique has been applied to study the ultrafast dynamics in the magnetic semiconductor EuTe near the absorption band gap. We show that application of an external magnetic field of up to 6 T results in a crossover from the inverse Faraday effect, taking place on the femtosecond time scale, to the optical orientation phenomenon, with an evolution in the picosecond time domain. We propose a model which includes both of these processes, which possess different spectral and temporal properties. The circularly polarized optical pumping induces the electronic transition 4f⁷5d⁰ → 4f⁶5d¹ forming the absorption band gap in EuTe. The observed crossover is related to a strong magnetic-field shift of the band gap in EuTe at low temperatures. It was found that manipulation of spin states on intrinsic defect levels takes place on a time scale of 19 ps in an applied magnetic field of 6 T.

  15. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  16. Inverse Opal Photonic Crystals as an Optofluidic Platform for Fast Analysis of Hydrocarbon Mixtures.

    PubMed

    Xu, Qiwei; Mahpeykar, Seyed Milad; Burgess, Ian B; Wang, Xihua

    2018-06-13

    Most of the reported optofluidic devices analyze a liquid by measuring its refractive index. Recently, the wettability of a liquid on various substrates has also been used as a key sensing parameter in optofluidic sensors. However, the above-mentioned techniques face challenges in analyzing the relative concentrations of components in an alkane hydrocarbon mixture, because both the refractive indices and the wettabilities of alkane hydrocarbons are very close. Here, we propose to use the volatility of the liquid as the key sensing parameter, correlate it to the optical property of the liquid inside inverse opal photonic crystals, and construct powerful optofluidic sensors for alkane hydrocarbon identification and analysis. We have demonstrated that, via evaporation of hydrocarbons inside the periodic structure of inverse opal photonic crystals and observation of their reflection spectra, an inverse opal film can be used as a fast-response optofluidic sensor to accurately differentiate pure hydrocarbon liquids and the relative concentrations of their binary and ternary mixtures within tens of seconds. In these 3D photonic crystals, pure chemicals with different volatilities have different evaporation rates and can easily be identified via the total drying time. For multicomponent mixtures, the same strategy is applied to determine the relative concentration of each component simply by measuring the drying time at different temperatures. Using this optofluidic sensing platform, we have determined the relative concentrations of ternary hydrocarbon mixtures whose alkane components differ by only one carbon atom, which is a big step toward detailed hydrocarbon analysis for practical use.

  17. Elastic Cherenkov effects in transversely isotropic soft materials-I: Theoretical analysis, simulations and inverse method

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping

    2016-11-01

    A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.

  18. Deciding Termination for Ancestor Match- Bounded String Rewriting Systems

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2005-01-01

    Termination of a string rewriting system can be characterized by termination on suitable recursively defined languages. Termination criteria of this kind have been criticized for their lack of automation. In an earlier paper we showed how to construct an automated termination criterion if the recursion is aligned with the rewrite relation, and we demonstrated the technique with Dershowitz's forward closure criterion. In this paper we show that a different approach is suitable when the recursion is aligned with the inverse of the rewrite relation. We apply this idea to Kurth's ancestor graphs and obtain ancestor match-bounded string rewriting systems. Termination is shown to be decidable for this class. The resulting method improves upon those based on match-boundedness or inverse match-boundedness.

  19. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Chen, Ting; Tan, Sirui

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques for directly and clearly imaging fault zones, particularly steeply dipping fault and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that the new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. The resulting migration images of the Soda Lake geothermal field revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results from the 3D surface seismic data and planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine optimal locations for drilling geothermal production wells and reduce the risk of geothermal exploration.

  20. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    NASA Astrophysics Data System (ADS)

    Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien

    2018-01-01

    Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists assessing the stability of rivers and hydraulic structures. Various methods for describing sediment transport processes have been proposed using conventional or surrogate measurement techniques. This paper addresses the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the shape of the acoustic signal spectrum to the bedload grain sizes involved in elastic impacts with the river bed, treated as a massive slab. The bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectral density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by collisions between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by impacts between a sphere and a slab. The model computes the power spectral density of bedload noise as a linear combination of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least squares optimization with solution regularization, and the inversion leads directly to an estimate of the bedload grain size distribution. The method was applied to acoustic data from passive acoustics experiments carried out on the Isère River in France. Inversion of the in situ measured spectra yields good estimates of the grain size distribution, fairly close to those obtained with physical sampling instruments. These results illustrate the potential of the hydrophone technique as a standalone method that could provide high spatial and temporal resolution measurements of sediment transport in rivers.
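
    The linear spectral mixing model described above (measured power spectral density as a weighted sum of per-size-class spectra) can be inverted with a regularized non-negative least-squares step. The sketch below uses a purely hypothetical Gaussian-band kernel as a stand-in for the authors' sphere-on-slab impact spectra; the frequencies, grain sizes, and regularization weight are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Frequencies and grain-size classes (illustrative values)
    f = np.linspace(1e3, 50e3, 200)                 # Hz
    d = np.logspace(-3, -1, 20)                     # grain diameters, m

    # Hypothetical kernel: each size class radiates a frequency band whose peak
    # decreases with grain size (a stand-in for the impact-spectrum model).
    f_peak = 5e2 / d
    E = np.exp(-((f[:, None] - f_peak[None, :]) / (0.3 * f_peak[None, :])) ** 2)

    # Synthetic "measured" spectrum from a known bimodal distribution plus noise
    w_true = np.exp(-((np.log(d) - np.log(5e-3)) / 0.4) ** 2) \
           + 0.6 * np.exp(-((np.log(d) - np.log(3e-2)) / 0.3) ** 2)
    psd = E @ w_true + 0.01 * np.random.default_rng(1).normal(size=f.size)

    # Tikhonov-regularized non-negative least squares:
    # minimize ||E w - psd||^2 + lam^2 ||w||^2  subject to  w >= 0
    lam = 0.1
    A = np.vstack([E, lam * np.eye(d.size)])
    b = np.concatenate([psd, np.zeros(d.size)])
    w_est, _ = nnls(A, b)
    print(np.round(w_est / w_est.max(), 2))         # recovered (relative) grain size distribution
    ```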

  1. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When the fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining the slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault; in this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to balance, in a natural way, the competing requirements of model resolution and estimation error. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of the observed data to the smoothness constraints is objectively determined. In this study, by also using ABIC to determine the optimal dip angle, we resolved the non-linearity of the inverse problem. We applied the method to InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously reported.

  2. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for recovery of the particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the ABC algorithm alone. In addition, the performance of the proposed algorithm is further tested with actual extinction measurements of real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while requiring nearly the same CPU time. The superiority of the ABC and PS hybridization strategy in reaching a better balance between estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSDs.

  3. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464

  4. A Hydrological Tomography Collocated with Time-varying Gravimetry for Hydrogeology -An Example in Yun-Lin Alluvial Plain and Ming-Ju Basin in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. H.; Cheng, C. C.; Hwang, C.

    2016-12-01

    A new inversion technique featuring the collocation of hydrological modeling and gravimetric observation is presented in this report. This study started from a project attempting to build a sequence of hydrodynamic models of a groundwater system, which was applied to identify the supplement areas of alluvial plains and basins along the west coast of Taiwan. To calibrate suitable hydro-geological parameters for the modeling, the geological evolution was carefully investigated, and absolute gravity observations, along with other on-site hydrological monitoring data, were introduced. It was discovered during data processing that the time-varying gravimetric data are highly sensitive to certain boundary conditions in the hydrodynamic model, which correspond to respective geological features. A new inversion technique, coined "hydrological tomography", is therefore developed by turning the boundary conditions into unknowns to be solved for. An example of an accurate estimate of water storage and precipitation infiltration for the coastal Yun-Lin alluvial plain is presented. Meanwhile, a study of an anticline structure of the upstream Ming-Ju basin is also presented to demonstrate how a geological formation can be outlined when the gravimetric data and hydrodynamic model are re-directed into an inversion.

  5. An algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.

  6. Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.

    PubMed

    Chami, Malik; Robilliard, Denis

    2002-10-20

    A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with a neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than that of traditional techniques such as band ratio algorithms. The application of GP to real satellite data [from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the accuracy of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.

  7. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase, and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore elastic effects in the real seismic wavefield, which makes inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is often applied, but the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency information from field data with a demodulation (envelope) operator, envelope inversion is able to recover the low-wavenumber model even though these low frequencies do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI using the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism: in the first level, the inversion tasks are decomposed and assigned to computation nodes by shot number; in the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation, and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation substantially improves performance.
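
    A quick way to see why the envelope carries the missing low-frequency information is to compute it directly as the magnitude of the analytic signal. The sketch below is only a minimal illustration of that demodulation step, not the authors' 3D elastic implementation; the trace, sampling rate, and frequency threshold are illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # A band-limited "seismic" trace with no energy below a few Hz
    dt = 1e-3
    t = np.arange(0, 2.0, dt)
    carrier = np.sin(2 * np.pi * 15 * t)
    modulation = np.exp(-((t - 1.0) / 0.2) ** 2)     # slow amplitude variation
    trace = modulation * carrier

    # Envelope (demodulation) operator: magnitude of the analytic signal
    envelope = np.abs(hilbert(trace))

    # The envelope recovers the slow modulation, i.e. ultra-low-frequency
    # content that is absent from the raw trace's spectrum.
    spec_trace = np.abs(np.fft.rfft(trace))
    spec_env = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(t.size, dt)
    low = freqs < 3.0
    print("low-frequency energy, raw trace:", spec_trace[low].sum().round(2))
    print("low-frequency energy, envelope :", spec_env[low].sum().round(2))
    ```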

  8. Structural-change localization and monitoring through a perturbation-based inverse problem.

    PubMed

    Roux, Philippe; Guéguen, Philippe; Baillet, Laurent; Hamze, Alaa

    2014-11-01

    Structural-change detection and characterization, or structural-health monitoring, is generally based on modal analysis, for detection, localization, and quantification of changes in structure. Classical methods combine both variations in frequencies and mode shapes, which require accurate and spatially distributed measurements. In this study, the detection and localization of a local perturbation are assessed by analysis of frequency changes (in the fundamental mode and overtones) that are combined with a perturbation-based linear inverse method and a deconvolution process. This perturbation method is applied first to a bending beam with the change considered as a local perturbation of the Young's modulus, using a one-dimensional finite-element model for modal analysis. Localization is successful, even for extended and multiple changes. In a second step, the method is numerically tested under ambient-noise vibration from the beam support with local changes that are shifted step by step along the beam. The frequency values are revealed using the random decrement technique that is applied to the time-evolving vibrations recorded by one sensor at the free extremity of the beam. Finally, the inversion method is experimentally demonstrated at the laboratory scale with data recorded at the free end of a Plexiglas beam attached to a metallic support.
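
    One step described above, extracting the resonance frequencies from ambient-noise vibration with the random decrement technique, can be illustrated with a short sketch. Everything below (the single-mode oscillator, trigger level, and segment length) is an illustrative toy, not the authors' Plexiglas-beam setup.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    # Ambient response of a lightly damped single-degree-of-freedom "structure"
    # driven by white noise (illustrative stand-in for the recorded vibrations).
    rng = np.random.default_rng(0)
    fs, f0, zeta = 200.0, 4.0, 0.02            # sampling rate (Hz), modal freq, damping
    n = 200_000
    w0 = 2 * np.pi * f0
    r = np.exp(-zeta * w0 / fs)
    a = [1.0, -2 * r * np.cos(w0 * np.sqrt(1 - zeta**2) / fs), r**2]
    x = lfilter([1.0], a, rng.normal(size=n))

    # Random decrement technique: average all segments that start where the signal
    # up-crosses one standard deviation; the average approximates a free decay.
    trigger = x.std()
    seg_len = int(4 * fs)
    starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    starts = starts[starts + seg_len < n]
    signature = np.mean([x[s:s + seg_len] for s in starts], axis=0)

    # The spectrum of the random decrement signature peaks at the modal frequency.
    mag = np.abs(np.fft.rfft(signature))
    mag[0] = 0.0                               # ignore the DC bin
    freqs = np.fft.rfftfreq(seg_len, 1 / fs)
    print(f"estimated modal frequency: {freqs[mag.argmax()]:.2f} Hz (true: {f0} Hz)")
    ```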

  9. Male infertility associated with de novo pericentric inversion of chromosome 1.

    PubMed

    Balasar, Özgür; Zamani, Ayşe Gül; Balasar, Mehmet; Acar, Hasan

    2017-12-01

    An inversion occurs when a chromosome breaks in two places and the intervening segment rotates 180° before reinserting. Inversion carriers can produce abnormal gametes if an odd number of crossovers occurs between the inverted and normal homologous chromosomes, causing a duplication or deletion. Reproductive risks such as infertility, abortion, stillbirth, and birth of a malformed child would be expected in that case. A 54-year-old male patient was referred to our clinic for primary infertility. A routine chromosome study was performed using peripheral blood lymphocyte cultures, analyzed by Giemsa-trypsin-Giemsa (GTG) banding and centromere banding (C-banding) stains. Y chromosome microdeletions in the azoospermia factor (AZF) regions were analyzed with polymerase chain reaction. An additional test, fluorescence in situ hybridization (FISH), was used to detect the sex-determining region of the Y chromosome (SRY). Semen analysis showed azoospermia. A large pericentric inversion of chromosome 1, 46,XY,inv(1)(p22q32), was found in the routine chromosome analysis. No microdeletions were seen in the AZF regions. In our patient the presence of the SRY region was confirmed using the FISH technique with an SRY-specific probe. Men who have a pericentric inversion of chromosome 1 appear to be at risk for infertility brought about by spermatogenic breakdown. The etiopathogenic relationship between azoospermia and pericentric inversion of chromosome 1 is discussed.

  10. Large-scale 3D inversion of marine controlled source electromagnetic data using the integral equation method

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.

    2009-12-01

    The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell’s equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of the subsurface targets.

  11. Multiple grid arrangement improves ligand docking with unknown binding sites: Application to the inverse docking problem.

    PubMed

    Ban, Tomohiro; Ohue, Masahito; Akiyama, Yutaka

    2018-04-01

    The identification of comprehensive drug-target interactions is important in drug discovery. Although numerous computational methods have been developed over the years, a gold-standard technique has not been established. Computational ligand docking and structure-based drug design allow researchers to predict the binding affinity between a compound and a target protein, and thus they are often used to virtually screen compound libraries. In addition, docking techniques have also been applied to the virtual screening of target proteins (inverse docking) to predict the target proteins of a drug candidate. Nevertheless, a more accurate docking method is still required. In this study, we propose a method in which a predicted ligand-binding site is covered by multiple grids, termed multiple grid arrangement. Notably, multiple grid arrangement facilitates the conformational search in grid-based ligand docking software and can be applied to the state-of-the-art commercial docking software Glide (Schrödinger, LLC). We validated the proposed method by re-docking with the Astex diverse benchmark dataset and in blind binding-site situations, which improved the correct prediction rate of the top-scoring docking pose from 27.1% to 34.1%; however, only a slight improvement in target prediction accuracy was observed in inverse docking scenarios. These findings highlight the limitations and challenges of current scoring functions and the need for more accurate docking methods. The proposed multiple grid arrangement method was implemented in Glide by modifying a cross-docking script for Glide, xglide.py. The script of our method is freely available online at http://www.bi.cs.titech.ac.jp/mga_glide/. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Refraction traveltime tomography based on damped wave equation for irregular topographic model

    NASA Astrophysics Data System (ADS)

    Park, Yunhui; Pyun, Sukjoon

    2018-03-01

    Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time static is handled appropriately, interpretation of the subsurface structures can be easily distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must consider the irregular topography, as the land seismic data are generally obtained in irregular topography. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. This technique uses unstructured meshes to express an irregular topography, and traveltimes calculated from the frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix were adopted for preconditioning, and the principle of reciprocity was introduced to efficiently calculate the Fréchet derivative. We also included regularization to resolve the ill-posed inverse problem, and used the nonlinear conjugate gradient method to solve the inverse problem. As the damped wavefields were used, there were no issues associated with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem could be circumvented because this method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method was both robust to an initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method was shown to successfully reconstruct shallow velocity structures. To verify our method, static corrections were roughly applied to the field data using the estimated near-surface velocity. By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.

  13. Processing grounded-wire TEM signal in time-frequency-pseudo-seismic domain: A new paradigm

    NASA Astrophysics Data System (ADS)

    Khan, M. Y.; Xue, G. Q.; Chen, W.; Huasen, Z.

    2017-12-01

    Grounded-wire TEM has received great attention in mineral, hydrocarbon and hydrogeological investigations over the last several years. Conventionally, TEM soundings have been presented as apparent resistivity curves as a function of time. With the development of sophisticated computational algorithms, it became possible to extract more realistic geoelectric information by applying inversion programs to 1-D and 3-D problems. Here, we analyze grounded-wire TEM data in the time, frequency and pseudo-seismic domains, supported by borehole information. First, H-, K-, A- and Q-type geoelectric models are processed using a proven inversion program (1-D Occam inversion). Second, a time-to-frequency transformation is conducted from TEM ρa(t) curves to magnetotelluric (MT) ρa(f) curves for the same models based on all-time apparent resistivity curves. Third, Bostick's 1-D algorithm is applied to the transformed resistivity. Finally, the EM diffusion field is transformed into a propagating wave field obeying the standard wave equation using a wavelet transformation technique, and a pseudo-seismic section is constructed. The transformed seismic-like wave field shows that reflection and refraction phenomena appear when the EM wave field interacts with geoelectric interfaces at different depth intervals due to contrasts in resistivity. The resolution of the transformed TEM data is significantly improved in comparison to apparent resistivity plots. A case study illustrates the successful hydrogeophysical application of the proposed approach in recovering a water-filled mined-out area in a coal field located in Ye county, Henan province, China. The results support the introduction of pseudo-seismic imaging technology in the short-offset version of TEM, which can also be a useful aid if integrated with the seismic reflection technique, to explore possibilities for high-resolution EM imaging in the future.

  14. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) at low field. Most samples have a distribution of T2 values, and extracting this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of one inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through this simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
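
    The core of such a simulation-based assessment is to generate CPMG decays from a known T2 distribution at several noise levels and compare the inverted distribution against the truth. The sketch below does this with a generic smoothness-regularized non-negative inversion as a stand-in for UPEN; the T2 grid, noise levels, and regularization weight are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Kernel of decaying exponentials linking a T2 distribution to CPMG echo decay
    t = np.linspace(1e-3, 1.0, 400)                  # echo times, s
    T2 = np.logspace(-3, 0.5, 80)                    # candidate T2 values, s
    K = np.exp(-t[:, None] / T2[None, :])

    p_true = np.exp(-((np.log10(T2) + 1.0) / 0.25) ** 2)   # assumed "true" distribution
    p_true /= p_true.sum()

    # Smoothness (second-difference) regularization with non-negativity via NNLS;
    # a generic penalized inversion, not UPEN itself.
    D = np.diff(np.eye(T2.size), n=2, axis=0)
    rng = np.random.default_rng(0)

    for noise in (1e-4, 1e-3, 1e-2):                 # simulate several noise levels
        decay = K @ p_true + noise * rng.normal(size=t.size)
        A = np.vstack([K, 0.1 * D])
        b = np.concatenate([decay, np.zeros(D.shape[0])])
        p_est, _ = nnls(A, b)
        err = np.abs(p_est / max(p_est.sum(), 1e-12) - p_true).sum()
        print(f"noise={noise:.0e}  L1 error of recovered distribution = {err:.3f}")
    ```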

  15. Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations

    NASA Technical Reports Server (NTRS)

    Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-01-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  16. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-11-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  17. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-07-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2 based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
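
    The discrete Kalman filter (DKF) step used in these inversions adjusts regional emission scale factors from the mismatch between modeled and observed columns, with DDM supplying the sensitivity matrix. Below is a minimal linear-Gaussian sketch of that update loop; the sensitivity matrix, error covariances, and scale factors are synthetic placeholders, not values from the study.

    ```python
    import numpy as np

    # Discrete Kalman filter update for regional emission scale factors,
    # with a DDM-style sensitivity matrix H (d NO2 column / d scale factor).
    rng = np.random.default_rng(0)
    n_obs, n_reg = 50, 4
    H = rng.uniform(0.5, 2.0, (n_obs, n_reg))          # sensitivity of columns to scaling
    x_true = np.array([1.5, 0.8, 1.2, 1.0])            # hypothetical "true" scale factors
    y_obs = H @ x_true + 0.05 * rng.normal(size=n_obs) # observed columns with noise

    x = np.ones(n_reg)                                  # a priori scale factors
    P = 0.5**2 * np.eye(n_reg)                          # a priori error covariance
    R = 0.05**2 * np.eye(n_obs)                         # observation error covariance

    for _ in range(10):                                 # iterate the update until convergence
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K @ (y_obs - H @ x)
        P = (np.eye(n_reg) - K @ H) @ P
    print(np.round(x, 2))                               # should approach the true factors
    ```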

  18. Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).

    PubMed

    Goldfarb, James W

    2010-04-01

    The divided inversion recovery technique is an MRI separation method based on tissue T(1) relaxation differences. When tissue T(1) relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.

  19. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both the lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time-consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach when dependent censoring is ignored.

  20. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively address several noteworthy issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method from two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.

  1. Investigation of inversion polymorphisms in the human genome using principal components analysis.

    PubMed

    Ma, Jianzhong; Amos, Christopher I

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind that of other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparison with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and disease susceptibility.
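
    A toy simulation makes the PCA signature described above concrete: genotypes sampled from two diverged haplotype "orientations" and their 1:1 heterozygous admixture separate into three clusters along the first principal component. All allele frequencies and sample sizes below are invented for illustration and are not from the HapMap analysis.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Simulated SNP genotypes (0/1/2) inside a hypothetical inversion region
    rng = np.random.default_rng(0)
    n_snp = 200
    f_std = rng.uniform(0.1, 0.9, n_snp)                              # standard-orientation allele freqs
    f_inv = np.clip(f_std + rng.normal(0, 0.3, n_snp), 0.01, 0.99)    # diverged inverted haplotypes

    def genotypes(n, fa, fb):
        """Draw n diploid genotypes with one haplotype from each frequency vector."""
        return (rng.random((n, n_snp)) < fa).astype(int) + (rng.random((n, n_snp)) < fb).astype(int)

    G = np.vstack([genotypes(60, f_std, f_std),    # std/std homozygotes
                   genotypes(60, f_inv, f_inv),    # inv/inv homozygotes
                   genotypes(60, f_std, f_inv)])   # heterozygotes (1:1 admixture)

    # Local PCA: the first PC separates the three inversion genotypes into three clusters
    pc1 = PCA(n_components=1).fit_transform(G)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pc1)
    print(np.bincount(labels))    # expect three clusters of ~60 individuals each
    ```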

  2. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.

  3. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    The dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.

  4. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances the correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and with a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure data measured on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.

  5. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    NASA Astrophysics Data System (ADS)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  6. A stress-constrained geodetic inversion method for spatiotemporal slip of a slow slip event with earthquake swarm

    NASA Astrophysics Data System (ADS)

    Hirose, H.; Tanaka, T.

    2017-12-01

    Geodetic inversions using GNSS data and/or tiltmeter data have been performed to estimate spatio-temporal fault slip distributions. They have been applied to slow slip events (SSEs), which are episodes of fault slip lasting for days to years (e.g., Ozawa et al., 2001; Hirose et al., 2014). Although their slip distributions are important information for inferring the strain budget and frictional characteristics of a subduction plate interface, inhomogeneous station coverage generally yields spatially non-uniform slip resolution, and in the worst case, a slip distribution cannot be recovered. It is known that some SSEs are accompanied by an earthquake swarm around the SSE slip area, such as the Boso Peninsula SSEs (e.g., Hirose et al., 2014). Some researchers hypothesize that these earthquakes are triggered by the stress change caused by the accompanying SSE (e.g., Segall et al., 2006). Based on this assumption, a conventional geodetic inversion that imposes a constraint requiring the stress change to promote the earthquake activity may improve the resolution of the slip distribution. Here we develop an inversion method based on the Network Inversion Filter technique (Segall and Matthews, 1997), incorporating a constraint of a positive change in Coulomb failure stress (Delta-CFS) at the accompanying earthquakes. In addition, we apply this new method to synthetic data in order to check its effectiveness and the characteristics of the inverted slip distributions. The results show that there are cases in which a slip distribution is reproduced better with earthquake information than without it. That is, this new inversion method can improve the reproducibility of the slip distribution of an SSE when an earthquake catalog of the accompanying activity is available and the geodetic data alone are insufficient.

  7. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background of spectral analysis as applied to geodetic problems is summarized. The resolution (cut-off frequency) of the GEOS-3 altimeter data is examined by determining the shortest recoverable wavelength (corresponding to the cut-off frequency), using data from some 18 profiles. The total power (variance) in the sea surface topography with respect to the reference ellipsoid, as well as with respect to the GEM-9 surface, is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency-domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS-3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
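
    The computational saving that comes from exploiting Toeplitz structure can be seen in a small sketch: for data on a regular profile, the covariance matrix used in least squares collocation depends only on the lag, so a Levinson-type solver avoids forming and factoring the full matrix. The covariance function below is illustrative, not the one used in the study.

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz

    # Covariance matrices of data sampled on a regular profile are Toeplitz:
    # C[i, j] depends only on the lag |i - j|.
    n = 2000
    lags = np.arange(n)
    c = np.exp(-lags / 50.0)                     # illustrative stationary covariance function
    rhs = np.random.default_rng(0).normal(size=n)

    x_fast = solve_toeplitz(c, rhs)              # O(n^2) Levinson recursion, stores only one column
    x_full = np.linalg.solve(toeplitz(c), rhs)   # O(n^3), stores the full n x n matrix
    print(np.allclose(x_fast, x_full, atol=1e-8))
    ```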

  8. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of DC resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. A weighting matrix is applied to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. We then re-invert two CSAMT datasets collected respectively in a watershed and in a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm (simulated annealing, SA) for the watershed shows that both methods deliver very similar, good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey also identify a conductive water-bearing zone that was not revealed by the previous inversions. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
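
    The sketch below shows one common way such Jacobian preconditioning can be set up: each column of the Jacobian is scaled so that all model parameters contribute with comparable sensitivity before a damped Gauss-Newton step is taken. It is a generic illustration under assumed names, not the specific weighting matrix used in this paper.

```python
import numpy as np

def precondition_jacobian(J, eps=1e-12):
    """Column-scale the Jacobian so each model parameter has comparable sensitivity.
    Returns the weighted Jacobian and the weights needed to undo the scaling."""
    w = 1.0 / (np.linalg.norm(J, axis=0) + eps)   # one weight per model parameter
    return J * w, w

def gauss_newton_step(J, residual, damping=1e-2):
    """One damped least-squares update using the preconditioned Jacobian."""
    Jw, w = precondition_jacobian(J)
    A = Jw.T @ Jw + damping * np.eye(Jw.shape[1])
    dm_w = np.linalg.solve(A, Jw.T @ residual)
    return w * dm_w                               # map back to the un-scaled parameters

# Toy usage: 30 data, 12 model parameters with very different sensitivities.
rng = np.random.default_rng(3)
J = rng.normal(size=(30, 12)) * np.logspace(-3, 2, 12)
print(gauss_newton_step(J, rng.normal(size=30)).shape)
```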

  9. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
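
    A minimal sketch of Chahine-type relaxation for a lower-triangular kernel, as arises in the limb-viewing geometry, is given below; the kernel and profile are synthetic, and the pairing of each measurement with one retrieval level is the simplifying assumption behind the multiplicative update.

```python
import numpy as np

def chahine_relaxation(K, y_obs, x0, n_iter=500):
    """Chahine's nonlinear relaxation for y = K x with a lower-triangular kernel:
    each measurement is paired with the level it is most sensitive to, and that
    level is rescaled by the ratio of observed to computed signal.  Convergence
    requires nonzero diagonal elements of K."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        y_calc = K @ x
        x *= y_obs / y_calc            # multiplicative update, one ratio per level
    return x

rng = np.random.default_rng(2)
n = 8
# Lower-triangular kernel whose diagonal (the paired level) dominates each row.
K = np.tril(rng.uniform(0.1, 1.0, size=(n, n))) + 2.0 * np.eye(n)
x_true = rng.uniform(0.5, 2.0, size=n)
x_ret = chahine_relaxation(K, K @ x_true, x0=np.ones(n))
print(np.max(np.abs(x_ret - x_true) / x_true))   # relative retrieval error (small)
```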

  10. An ionospheric occultation inversion technique based on epoch difference

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun

    2013-09-01

    Among ionospheric radio occultation (IRO) electron density profile (EDP) retrievals, the Abel-based calibrated TEC inversion (CTI) is the most widely used technique. In order to eliminate the contribution from altitudes above the RO satellite, the calibrated TEC is used to retrieve the EDP, which introduces an error due to the coplanar assumption. In this paper, a new technique based on epoch difference inversion (EDI) is proposed for the first time to eliminate this error. Comparisons between CTI and EDI are carried out using simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation side measurements and performs better than the CTI method, especially for lower-orbit missions; whichever technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.

  11. Inversion technique for IR heterodyne sounding of stratospheric constituents from space platforms

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Shapiro, G. L.; Alvarez, J. M.

    1981-01-01

    The techniques which have been employed for inversion of IR heterodyne measurements for remote sounding of stratospheric trace constituents usually rely on either geometric effects based on limb-scan observations (i.e., onion peel techniques) or spectral effects by using weighting functions corresponding to different frequencies of an IR spectral line. An experimental approach and inversion technique are discussed which optimize the retrieval of concentration profiles by combining the geometric and the spectral effects in an IR heterodyne receiver. The results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are presented, indicating considerable improvement in the accuracy of the retrieved profiles. The effects of noise on the accuracy of retrievals are discussed for realistic situations.

  12. Inversion technique for IR heterodyne sounding of stratospheric constituents from space platforms.

    PubMed

    Abbas, M M; Shapiro, G L; Alvarez, J M

    1981-11-01

    The techniques which have been employed for inversion of IR heterodyne measurements for remote sounding of stratospheric trace constituents usually rely on either geometric effects based on limb-scan observations (i.e., onion peel techniques) or spectral effects by using weighting functions corresponding to different frequencies of an IR spectral line. An experimental approach and inversion technique are discussed which optimize the retrieval of concentration profiles by combining the geometric and the spectral effects in an IR heterodyne receiver. The results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are presented, indicating considerable improvement in the accuracy of the retrieved profiles. The effects of noise on the accuracy of retrievals are discussed for realistic situations.

  13. Measuring soil moisture with imaging radars

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Vanzyl, Jakob; Engman, Ted

    1995-01-01

    An empirical model was developed to infer soil moisture and surface roughness from radar data. The accuracy of the inversion technique is assessed by comparing soil moisture obtained with the inversion technique to in situ measurements. The effect of vegetation on the inversion is studied and a method to eliminate the areas where vegetation impairs the algorithm is described.

  14. Large scale GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  15. Large Scale GW Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. We applied the newly developed technique to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  16. Large scale GW calculations

    DOE PAGES

    Govoni, Marco; Galli, Giulia

    2015-01-12

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  17. Learning Inverse Rig Mappings by Nonlinear Regression.

    PubMed

    Holden, Daniel; Saito, Jun; Komura, Taku

    2017-03-01

    We present a framework to design inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production which allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often prevents the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
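
    One of the two regressors named above, Gaussian process regression, can be sketched in a few lines with scikit-learn; the pose features, rig-control dimensions and training data below are synthetic placeholders rather than an actual animation rig.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: rows are frames of an example animation.
# X holds low-level pose features (e.g. joint positions), Y the rig controls set by the animator.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                       # 200 frames, 30 joint-position coordinates
true_map = rng.normal(size=(30, 8))
Y = np.tanh(X @ true_map) + 0.01 * rng.normal(size=(200, 8))   # 8 rig controls (synthetic)

# Gaussian process regression from pose space to rig space (one GP shared across outputs).
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(1e-4),
                               normalize_y=True)
gpr.fit(X, Y)

# Given new skeleton-space motion, estimate the rig controls that reproduce it.
new_pose = rng.normal(size=(1, 30))
rig_controls = gpr.predict(new_pose)
print(rig_controls.shape)    # (1, 8)
```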

  18. Multireference adaptive noise canceling applied to the EEG.

    PubMed

    James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J

    1997-08-01

    The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments and its performance on documented nonstationarities was recorded. The results show that the neural network (nonlinear) implementation gives an improvement in performance (i.e., signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC. In both cases an improvement in the SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse auto-regressive filtering of the EEG, a purely temporal filter.

  19. A fast direct solver for boundary value problems on locally perturbed geometries

    NASA Astrophysics Data System (ADS)

    Zhang, Yabin; Gillman, Adrianna

    2018-03-01

    Many applications, including optimal design and adaptive discretization techniques, involve solving several boundary value problems on geometries that are local perturbations of an original geometry. This manuscript presents a fast direct solver for boundary value problems that are recast as boundary integral equations. The idea is to write the discretized boundary integral equation on a new geometry as a low-rank update to the discretized problem on the original geometry. Using the Sherman-Morrison formula, the inverse can be expressed in terms of the inverse of the original system applied to the low-rank factors and the right-hand side. Numerical results illustrate that, for problems where the perturbation is localized, the fast direct solver is three times faster than building a new solver from scratch.
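
    The core algebraic step can be illustrated with a small sketch of the Sherman-Morrison-Woodbury identity: a solver for the original system is reused, and only a handful of extra solves against the low-rank factors are needed. Matrix sizes and the rank of the perturbation below are arbitrary illustrations, not the boundary-integral operators of the paper.

```python
import numpy as np

def woodbury_solve(solve_A0, U, V, b):
    """Solve (A0 + U V^T) x = b by reusing a factorization/solver for the original
    system A0, via the Sherman-Morrison-Woodbury identity.  Only k extra solves
    with A0 are needed, where k is the rank of the perturbation."""
    A0_inv_b = solve_A0(b)
    A0_inv_U = solve_A0(U)                                  # n x k
    k = U.shape[1]
    S = np.eye(k) + V.T @ A0_inv_U                          # small k x k capacitance matrix
    return A0_inv_b - A0_inv_U @ np.linalg.solve(S, V.T @ A0_inv_b)

rng = np.random.default_rng(3)
n, k = 400, 5
A0 = rng.normal(size=(n, n)) + n * np.eye(n)                # well-conditioned "original" system
U, V = rng.normal(size=(n, k)), rng.normal(size=(n, k))     # low-rank local perturbation
b = rng.normal(size=n)

x = woodbury_solve(lambda rhs: np.linalg.solve(A0, rhs), U, V, b)
print(np.allclose((A0 + U @ V.T) @ x, b))                   # True: same solution as the perturbed system
```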

  20. Inverse design of near unity efficiency perfectly vertical grating couplers

    NASA Astrophysics Data System (ADS)

    Michaels, Andrew; Yablonovitch, Eli

    2018-02-01

    Efficient coupling between integrated optical waveguides and optical fibers is essential to the success of integrated photonics. While many solutions exist, perfectly vertical grating couplers which scatter light out of a waveguide in the direction normal to the waveguide's top surface are an ideal candidate due to their potential to reduce packaging complexity. Designing such couplers with high efficiency, however, has proven difficult. In this paper, we use electromagnetic inverse design techniques to optimize a high efficiency two-layer perfectly vertical silicon grating coupler. Our base design achieves a chip-to-fiber coupling efficiency of over 99% (-0.04 dB) at 1550 nm. Using this base design, we apply subsequent constrained optimizations to achieve vertical couplers with over 96% efficiency which are fabricable using a 65 nm process.

  1. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  2. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used both in formulating the equations of motion and in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  3. Inversion of the perturbation GPS-TEC data induced by tsunamis in order to estimate the sea level anomaly.

    NASA Astrophysics Data System (ADS)

    Rakoto, Virgile; Lognonné, Philippe; Rolland, Lucie; Coïsson, Pierdavide; Drilleau, Mélanie

    2017-04-01

    Large underwater earthquakes (Mw > 7) can transmit part of their energy to the surrounding ocean through large sea-floor motions, generating tsunamis that propagate over long distances. The forcing effect of tsunami waves on the atmosphere generates internal gravity waves, which produce detectable ionospheric perturbations when they reach the upper atmosphere. These perturbations are frequently observed in the total electron content (TEC) measured with multi-frequency Global Navigation Satellite System (GNSS) data (e.g., GPS, GLONASS). In this paper, we perform for the first time an inversion of the sea level anomaly from GPS TEC data, using a least squares (LSQ) inversion built on a normal-mode summation modeling technique. Using the 2012 Haida Gwaii tsunami in the far field as a test case, we show that the peak-to-peak amplitude of the inverted sea level anomaly is recovered with less than 10% error. Nevertheless, we cannot invert the second wave arriving 20 minutes later; this second wave is generally explained by coastal reflection, which the normal-mode modeling does not take into account. Our technique is then applied to two other tsunamis: the 2006 Kuril Islands tsunami in the far field, and the 2011 Tohoku tsunami in the nearer field. This demonstrates that the inversion using a normal-mode approach is able to estimate fairly well the amplitude of the first arrivals of the tsunami. In the future, we plan to invert the TEC data in real time in order to retrieve the tsunami height.

  4. Finite element simulation of Reference Point Indentation on bone.

    PubMed

    Idkaidek, Ashraf; Agarwal, Vineet; Jasiuk, Iwona

    2017-01-01

    Reference Point Indentation (RPI) is a novel technique aimed at assessing bone quality. Measurements are recorded by the BioDent instrument, which applies multiple indents to the same location of cortical bone. Ten RPI parameters are obtained from the resulting force-displacement curves. Using the commercial finite element analysis software Abaqus, we assess the significance of the RPI parameters. We create an axisymmetric model and employ an isotropic viscoelastic-plastic constitutive relation with damage to simulate indentations on a human cortical bone. Fracture of bone tissue is not simulated for simplicity. The RPI outputs are computed for different simulated test cases and then compared with experimental results, measured using the BioDent, found in the literature. The number of cycles, maximum indentation load, indenter tip radius, and the mechanical properties of bone (Young's modulus, compressive yield stress, and viscosity and damage constants) are varied, and the trends in the RPI parameters are investigated. We find that the RPI parameters are sensitive to the mechanical properties of bone. An increase in Young's modulus of bone causes the force-displacement loading and unloading slopes to increase and the total indentation distance (TID) to decrease. The compressive yield stress is inversely proportional to a creep indentation distance (CID1) and the TID. The viscosity constant is proportional to the CID1 and an average of the energy dissipated (AvED). The maximum indentation load is proportional to the TID, CID1, loading and unloading slopes, and AvED. The damage parameter is proportional to the TID, but it is inversely proportional to both the loading and unloading slopes and the AvED. The indenter tip radius is proportional to the CID1 and inversely proportional to the TID. The number of load cycles is inversely proportional to an average of the creep indentation depth (AvCID) and the AvED. The indentation distance increase (IDI) is strongly inversely proportional to the compressive yield stress, and strongly proportional to the viscosity constant and maximum applied load, but has a weak relation with the damage parameter, indenter tip radius, and elastic modulus. This computational study advances our understanding of the RPI outputs and provides a starting point for more comprehensive computational studies of the RPI technique.

  5. Load cell having strain gauges of arbitrary location

    DOEpatents

    Spletzer, Barry [Albuquerque, NM]

    2007-03-13

    A load cell utilizes a plurality of strain gauges mounted upon the load cell body such that there are six independent load-strain relations. Load is determined by applying the inverse of a load-strain sensitivity matrix to a measured strain vector. The sensitivity matrix is determined by performing a multivariate regression technique on a set of known loads correlated to the resulting strains. Temperature compensation is achieved by configuring the strain gauges as co-located orthogonal pairs.
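
    A toy version of the calibration and inversion described above is sketched below: a load-to-strain sensitivity matrix is fitted by multivariate least squares from known calibration loads, and its inverse is then applied to a measured strain vector. All dimensions and numbers are illustrative; the patent's temperature compensation via co-located orthogonal gauge pairs is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Calibration: apply known 6-component loads (Fx, Fy, Fz, Mx, My, Mz) and record
# the strain at each of (here) six arbitrarily placed gauges.
S_true = rng.normal(size=(6, 6))                      # unknown load-to-strain sensitivity
known_loads = rng.uniform(-1.0, 1.0, size=(50, 6))    # 50 calibration load cases
strains = known_loads @ S_true.T + 1e-4 * rng.normal(size=(50, 6))

# Multivariate regression: fit the sensitivity matrix from the calibration set.
S_est, *_ = np.linalg.lstsq(known_loads, strains, rcond=None)
S_est = S_est.T                                       # so that strain = S_est @ load

# In service, recover the applied load from a measured strain vector.
measured_strain = S_true @ np.array([0.3, -0.1, 0.8, 0.0, 0.2, -0.4])
load = np.linalg.solve(S_est, measured_strain)        # inverse of the sensitivity matrix
print(np.round(load, 3))
```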

  6. Inversion of calcite twin data for paleostress (1) : improved Etchecopar technique tested on numerically-generated and natural data

    NASA Astrophysics Data System (ADS)

    Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie

    2015-04-01

    Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance for constraining the results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows reconstruction of the 5 parameters of the deviatoric paleostress tensors (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern, among others, the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid given to the user in defining the best stress tensor solution. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the Euler angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution that explains the maximum number of twinned planes and the whole set of untwinned planes is reached. This new inversion procedure is tested on monophase and polyphase numerically generated as well as natural calcite twin data in order to define more accurately the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence to the samples, to test the impact of strain hardening through changes of the critical resolved shear stress for twinning, and to evaluate the possible bias due to measurement uncertainties or to clustering of grain optical axes in the samples.

  7. Validation of Spherically Symmetric Inversion by Use of a Tomographically Reconstructed Three-Dimensional Electron Density of the Solar Corona

    NASA Technical Reports Server (NTRS)

    Wang, Tongjiang; Davila, Joseph M.

    2014-01-01

    Determining the coronal electron density by the inversion of white-light polarized brightness (pB) measurements from coronagraphs is a classic problem in solar physics. An inversion technique based on the spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. However, to date there has been no study of the uncertainty estimation of this method. We present a detailed assessment of this method using as a model a three-dimensional (3D) electron density in the corona from 1.5 to 4 solar radii, reconstructed by a tomography method from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show in theory and observation that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. Then we assess the SSPA method using synthesized pB images from the 3D density model, and find that the SSPA density values are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the former has a lower peak but extends more in both longitudinal and latitudinal directions than the latter. We estimate that the SSPA method may resolve the coronal density structure near the POS with an angular resolution in longitude of about 50 deg. Our results confirm the suggestion that the SSI method is applicable to the solar minimum streamer (belt), as stated in some previous studies. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density, in rough agreement with the reconstruction by tomography for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to the 3D tomographic technique in some cases, given that the development of the latter is still an ongoing research effort.

  8. Building a 3D faulted a priori model for stratigraphic inversion: Illustration of a new methodology applied on a North Sea field case study

    NASA Astrophysics Data System (ADS)

    Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel

    2018-07-01

    Accurate reservoir characterization is needed throughout the development of an oil and gas field. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behavior. At a later stage of field development, reservoir characterization can also help decide which recovery techniques should be used for fluid extraction. In complex media, such as faulted reservoirs, predicting flow behavior within volumes close to faults can be very challenging. During the development plan, it is necessary to determine which types of communication exist across faults or which potential barriers exist for fluid flow. Solving these issues rests on accurate fault characterization. In most cases, however, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic is lost during seismic inversion and further interpretation of the result. The goal of our study is, first, to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure and, second, to apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated with physical attributes extrapolated from well log data following the deposition mode; usually, however, a priori model building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method to each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been used directly to model synthetic seismic for our case study. Synthetic seismic obtained from our 3D fault network model gives much lower residuals than a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.

  9. Joint Inversion of Source Location and Source Mechanism of Induced Microseismics

    NASA Astrophysics Data System (ADS)

    Liang, C.

    2014-12-01

    The seismic source mechanism is a useful property for indicating the source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid fracturing treatments in the oil and gas industry. For events that are large enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-polarity inversion and many other methods and variants based on them. However, for events that are too small to identify in seismic traces, such as the microseismicity induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at a high computational cost; the resulting algorithm is called the Source Location and Mechanism Scanning Algorithm (SLMSA). In this case, for a given velocity structure, all possible combinations of source location (X, Y, Z) and source mechanism (strike, dip, rake) are used to compute travel times and waveform polarities. After correcting normal-moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only approach can hardly locate events because positive and negative polarized traces cancel out, whereas the SLMSA can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, and the statistics of source mechanisms can provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for the events also picked by the SSA method, the stacking power of the SLMSA is always higher than that obtained with the SSA.

  10. Role of magnetic exchange interaction due to magnetic anisotropy on inverse spin Hall voltage at FeSi3%/Pt thin film bilayer interface

    NASA Astrophysics Data System (ADS)

    Shah, Jyoti; Ahmad, Saood; Chaujar, Rishu; Puri, Nitin K.; Negi, P. S.; Kotnala, R. K.

    2017-12-01

    In our recent studies the inverse spin Hall effect (ISHE) voltage was investigated by ferromagnetic resonance (FMR) using a bilayer FeSi3%/Pt thin film prepared by the pulsed laser deposition (PLD) technique. In the ISHE measurement a microwave signal was applied to the FeSi3% film along with a DC magnetic field. A higher magnetization value along the film plane was measured from the magnetic hysteresis (M-H) loop. The M-H loop also indicates magnetic anisotropy, with the easy direction of magnetization obtained when the applied magnetic field is parallel to the film plane. The main result of this study is that an FMR-induced inverse spin Hall voltage of 12.6 μV at 1.0 GHz was obtained across the Pt layer. The magnetic exchange field at the bilayer interface responsible for the field torque was measured as 6 × 10^14 Ω^-1 m^-2 by spin Hall magnetoresistance. The damping torque and spin Hall angle have been evaluated as 0.084 and 0.071, respectively. The presence of Si atoms in FeSi3% makes the magnetic exchange field among the accumulated spins at the bilayer interface inhomogeneous, so that it is only feebly influenced by the spin torque of the FeSi3% layer. The weak field torque suppresses spin pumping into the Pt layer, and thus a low value of the inverse spin Hall voltage is obtained. This study provides an excellent opportunity to investigate the spin transfer torque effect, motivating a more intensive experimental effort for its utilization at maximum potential. The improvement in spin transfer torque may be useful in spin valve, spin battery and spin transistor applications.

  11. Emissions of organic compounds from produced water ponds II: Evaluation of flux chamber measurements with inverse-modeling techniques.

    PubMed

    Tran, Huy N Q; Lyman, Seth N; Mansfield, Marc L; O'Neil, Trevor; Bowers, Richard L; Smith, Ann P; Keslar, Cara

    2018-07-01

    In this study, the authors apply two different dispersion models to evaluate flux chamber measurements of emissions of 58 organic compounds, including C2-C11 hydrocarbons and methanol, ethanol, and isopropanol, from oil- and gas-produced water ponds in the Uintah Basin. Field measurement campaigns using the flux chamber technique were performed at a limited number of produced water ponds in the basin throughout 2013-2016. Inverse-modeling results showed significantly higher emissions than were measured by the flux chamber. Discrepancies between the two methods vary across hydrocarbon compounds and are largest for alcohols due to their physical chemistries. This finding, in combination with findings in a related study using the WATER9 wastewater emission model, suggests that the flux chamber technique may underestimate organic compound emissions, especially alcohols, due to its limited coverage of the pond area and its alteration of environmental conditions, especially wind speed. Comparisons of inverse-model estimates with flux chamber measurements varied significantly with the complexity of pond facilities and geometries. Both model results and flux chamber measurements suggest significant contributions from produced water ponds to total organic compound emissions from oil and gas production in the basin. This research is a component of an extensive study that showed significant hydrocarbon emissions from produced water ponds in the Uintah Basin, Utah. Such findings have important implications for air quality management agencies in developing control strategies for air pollution in oil and gas fields, especially for the Uintah Basin, in which ozone pollution frequently occurs in winter.

  12. Inverse boundary-layer theory and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1978-01-01

    Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.

  13. Using remote sensing and GIS techniques to estimate discharge and recharge fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.

  14. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally demanding numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
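
    The workflow of the record above (train a fast surrogate, quantify its modeling error, fold that error into the likelihood of a sampler) can be sketched as follows. The sketch uses a small scikit-learn multilayer perceptron as the surrogate and a bare Metropolis sampler on a toy two-parameter forward model; none of the specifics (model, noise levels, architecture) come from the study itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def forward_slow(m):
    """Stand-in for an expensive forward model (e.g. full-waveform traveltimes)."""
    return np.array([np.sin(m[0]) + m[1] ** 2, m[0] * m[1], np.cos(m[1]) - m[0]])

# 1) Train a fast surrogate on prior samples of the model parameters.
M_train = rng.uniform(-1, 1, size=(2000, 2))
D_train = np.array([forward_slow(m) for m in M_train])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(M_train, D_train)

# 2) Quantify the modelling error introduced by the surrogate ...
resid = D_train - surrogate.predict(M_train)
sigma_model = resid.std(axis=0)

# 3) ... and fold it into the data covariance of a Metropolis sampler.
m_true = np.array([0.3, -0.5])
d_obs = forward_slow(m_true) + 0.01 * rng.normal(size=3)
sigma = np.sqrt(0.01 ** 2 + sigma_model ** 2)

def log_like(m):
    r = (d_obs - surrogate.predict(m[None])[0]) / sigma
    return -0.5 * float(r @ r)

m, samples = np.zeros(2), []
for _ in range(5000):
    prop = m + 0.1 * rng.normal(size=2)          # symmetric random-walk proposal
    if np.log(rng.random()) < log_like(prop) - log_like(m):
        m = prop
    samples.append(m)
print(np.mean(samples, axis=0))                  # posterior mean, near m_true
```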

  15. 3-D linear inversion of gravity data: method and application to Basse-Terre volcanic island, Guadeloupe, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Gunawan, Hendra; Deroussi, Sébastien

    2016-04-01

    We use a Bayesian formalism combined with a grid node discretization for the linear inversion of gravimetric data in terms of 3-D density distribution. The forward modelling and the inversion method are derived from seismological inversion techniques in order to facilitate joint inversion or interpretation of density and seismic velocity models. The Bayesian formulation introduces covariance matrices on model parameters to regularize the ill-posed problem and reduce the non-uniqueness of the solution. This formalism favours smooth solutions and allows us to specify a spatial correlation length and to perform inversions at multiple scales. We also extract resolution parameters from the resolution matrix to discuss how well our density models are resolved. This method is applied to the inversion of data from the volcanic island of Basse-Terre in Guadeloupe, Lesser Antilles. A series of synthetic tests are performed to investigate advantages and limitations of the methodology in this context. This study results in the first 3-D density models of the island of Basse-Terre for which we identify: (i) a southward decrease of densities parallel to the migration of volcanic activity within the island, (ii) three dense anomalies beneath Petite Plaine Valley, Beaugendre Valley and the Grande-Découverte-Carmichaël-Soufrière Complex that may reflect the trace of former major volcanic feeding systems, (iii) shallow low-density anomalies in the southern part of Basse-Terre, especially around La Soufrière active volcano, Piton de Bouillante edifice and along the western coast, reflecting the presence of hydrothermal systems and fractured and altered rocks.
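
    The Bayesian linear estimate with model-covariance regularization described above has a standard closed form. A minimal sketch is given below, with an exponential covariance carrying the spatial correlation length and a resolution matrix extracted from the same expressions; the forward operator and all sizes are placeholders rather than the Basse-Terre gravity kernels.

```python
import numpy as np

def bayesian_linear_inversion(G, d, m_prior, C_m, C_d):
    """Linear Bayesian estimate of a density model from gravity data:
        m_post = m_prior + C_m G^T (G C_m G^T + C_d)^{-1} (d - G m_prior)
    C_m encodes the spatial correlation length that smooths the solution."""
    K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
    m_post = m_prior + K @ (d - G @ m_prior)
    R = K @ G               # resolution matrix, used to judge how well cells are resolved
    return m_post, R

# Illustrative sizes: 50 gravity stations, 200 density cells along a profile.
rng = np.random.default_rng(6)
G = rng.normal(size=(50, 200))                      # forward operator (would be Newtonian kernels)
x = np.arange(200)
C_m = 0.05 ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)   # correlation length ~10 cells
C_d = 1e-4 * np.eye(50)
m_true = rng.normal(scale=0.05, size=200)
d = G @ m_true + 0.01 * rng.normal(size=50)

m_post, R = bayesian_linear_inversion(G, d, np.zeros(200), C_m, C_d)
print(m_post.shape, np.trace(R))                    # trace(R): effective number of resolved cells
```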

  16. A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas

    2010-12-01

    In breast elastography, breast tissue usually undergoes large compression, resulting in significant geometric and structural changes. This implies that breast elastography is associated with nonlinear tissue behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications, such as measuring normal tissue hyperelastic parameters in vivo; such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses a constrained iterative inversion and can be viewed as an inverse problem; to solve it, we used a nonlinear finite element model corresponding to its forward problem. We applied the Veronda-Westmann, Yeoh and polynomial models to describe tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while the tissue-mimicking phantom was constructed from polyvinyl alcohol, processed with freeze-thaw cycles, which exhibits nonlinear mechanical behavior. Both phantoms consisted of three types of soft tissue mimicking adipose tissue, fibroglandular tissue and a tumor. The results of the simulations and experiments show the feasibility of accurately reconstructing tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratios of the hyperelastic parameters reasonably accurately: compared to uniaxial test results, the average errors of the parameter ratios of the inclusion to the middle and external layers were 13% and 9.6%, respectively. Given that the parameter ratios of abnormal tissues to normal ones range from three times to more than ten times, this accuracy is sufficient for tumor classification.

  17. pyGIMLi: An open-source library for modelling and inversion in geophysics

    NASA Astrophysics Data System (ADS)

    Rücker, Carsten; Günther, Thomas; Wagner, Florian M.

    2017-12-01

    Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical but also hydrological methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different sorts of coupling between individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data from a simulated tracer experiment is presented, which allows the underlying hydraulic conductivity distribution of the aquifer to be reconstructed directly. Another example demonstrates the improvement achieved by jointly inverting ERT and ultrasonic data with respect to saturation, using a new approach that incorporates petrophysical relations in the inversion. Potential applications of the presented framework are manifold and include time-lapse, constrained, joint, and coupled inversions of various geophysical and hydrological data sets.

  18. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    PubMed

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
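
    As a concrete miniature of the model-calibration problem discussed in this review, the sketch below fits two parameters of an exponential decay model to noisy data with SciPy's differential evolution, a population-based meta-heuristic that needs only parameter bounds; the model and data are synthetic stand-ins for a systems-biology model, not an example from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic data: observations of a decaying transient from a two-parameter model
# y(t) = A * exp(-k * t), standing in for a simple kinetic model.
t = np.linspace(0.0, 10.0, 40)
true_params = np.array([2.0, 0.4])                    # A, k
rng = np.random.default_rng(7)
y_obs = true_params[0] * np.exp(-true_params[1] * t) + 0.05 * rng.normal(size=t.size)

def objective(p):
    """Sum-of-squares misfit between model and data (the model calibration cost)."""
    A, k = p
    return float(np.sum((A * np.exp(-k * t) - y_obs) ** 2))

# Differential evolution: needs only bounds, no gradients and no good starting guess.
result = differential_evolution(objective, bounds=[(0.1, 10.0), (0.01, 5.0)], seed=0)
print(result.x)     # should land near [2.0, 0.4]
```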

  19. Investigation of Inversion Polymorphisms in the Human Genome Using Principal Components Analysis

    PubMed Central

    Ma, Jianzhong; Amos, Christopher I.

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct “populations” of inversion homozygotes of different orientations and their 1∶1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases. PMID:22808122
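
    A toy version of the local-PCA signal described above can be generated directly: genotypes built from two strongly differentiated arrangements produce three clusters along the first principal component, one per inversion genotype. The haplotypes, sample sizes and noise below are simulated for illustration only and do not represent HapMap data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy genotype matrix inside a putative inversion region: 300 individuals x 200 SNPs,
# coded 0/1/2.  Two differentiated arrangements with suppressed recombination give
# homozygotes of each orientation plus their 1:1 heterozygote admixture.
n_ind, n_snp = 300, 200
hapA = rng.integers(0, 2, size=n_snp)
hapB = 1 - hapA                                        # strongly differentiated arrangement
copies_B = rng.choice([0, 1, 2], size=n_ind, p=[0.25, 0.5, 0.25])   # inversion genotype
G = np.array([(2 - c) * hapA + c * hapB for c in copies_B], dtype=float)
G += rng.normal(scale=0.2, size=G.shape)               # genotyping noise / minor variation

# Local PCA: the leading principal component separates the three genotype clusters.
Gc = G - G.mean(axis=0)
_, _, Vt = np.linalg.svd(Gc, full_matrices=False)
pc1 = Gc @ Vt[0]
for c in (0, 1, 2):
    print(c, round(pc1[copies_B == c].mean(), 2))      # three well-separated cluster means
```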

  20. A Sparsity-based Framework for Resolution Enhancement in Optical Fault Analysis of Integrated Circuits

    DTIC Science & Technology

    2015-01-01

    Fragmentary excerpts available for this report: it provides background information on inversion methods and the shortcomings of conventional inversion techniques for IC fault detection; it surveys physical techniques (electron beam imaging/analysis, ion beam techniques, scanning probe techniques) and electrical tests used to detect faults; and it discusses a second-harmonic technique through which duty cycle degradation faults are detected by collecting the magnitude and the phase of ...

  1. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tupek, Michael R.

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.

  2. Airborne remote sensing and in situ measurements of atmospheric CO2 to quantify point source emissions

    NASA Astrophysics Data System (ADS)

    Krings, Thomas; Neininger, Bruno; Gerilowski, Konstantin; Krautwurst, Sven; Buchwitz, Michael; Burrows, John P.; Lindemann, Carsten; Ruhtz, Thomas; Schüttemeyer, Dirk; Bovensmann, Heinrich

    2018-02-01

    Reliable techniques to infer greenhouse gas emission rates from localised sources require accurate measurement and inversion approaches. In this study airborne remote sensing observations of CO2 by the MAMAP instrument and airborne in situ measurements are used to infer emission estimates of carbon dioxide released from a cluster of coal-fired power plants. The study area is complex due to sources being located in close proximity and overlapping associated carbon dioxide plumes. For the analysis of in situ data, a mass balance approach is described and applied, whereas for the remote sensing observations an inverse Gaussian plume model is used in addition to a mass balance technique. A comparison between methods shows that results for all methods agree within 10 % or better with uncertainties of 10 to 30 % for cases in which in situ measurements were made for the complete vertical plume extent. The computed emissions for individual power plants are in agreement with results derived from emission factors and energy production data for the time of the overflight.
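
    The mass balance idea mentioned above (integrate the CO2 enhancement times the wind component normal to a downwind crosswind curtain) can be written in a few lines. The conversion factors, curtain geometry and enhancement field below are rough illustrative assumptions, not values from the MAMAP campaign.

```python
import numpy as np

def mass_balance_emission(xco2_enh, wind_speed, dy, dz):
    """Point-source emission rate from a downwind crosswind curtain of CO2
    enhancements: E = sum over the curtain of c' * u * dA, with c' the mole-fraction
    enhancement converted to a mass density (kg/m^3)."""
    M_CO2 = 44.01e-3                       # kg/mol
    n_air = 41.6                           # mol/m^3, rough near-surface molar density of air
    conc_enh = xco2_enh * 1e-6 * n_air * M_CO2       # ppm enhancement -> kg/m^3
    return float(np.sum(conc_enh * wind_speed) * dy * dz)   # kg/s

# Illustrative curtain: 20 vertical x 40 horizontal cells, each 100 m wide and 25 m tall.
rng = np.random.default_rng(9)
enh = np.clip(rng.normal(1.5, 0.5, size=(20, 40)), 0, None)   # ppm CO2 enhancement
u = np.full((20, 40), 5.0)                                     # m/s wind normal to the curtain
E = mass_balance_emission(enh, u, dy=100.0, dz=25.0)
print(f"{E * 3600 / 1000:.1f} t CO2 per hour")                 # convert kg/s -> t/h
```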

  3. Rainfall assimilation in RAMS by means of the Kuo parameterisation inversion: method and preliminary results

    NASA Astrophysics Data System (ADS)

    Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.

    2004-03-01

    In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System (RAMS) has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique is described for estimating the partitioning between convective and stratiform precipitation without requiring any ancillary measurements. The method is general purpose but is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both from fully simulated experiments and from experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).

  4. Magnetic resonance imaging protocols for examination of the neurocranium at 3 T.

    PubMed

    Schwindt, W; Kugel, H; Bachmann, R; Kloska, S; Allkemper, T; Maintz, D; Pfleiderer, B; Tombach, B; Heindel, W

    2003-09-01

    The increasing availability of high-field (3 T) MR scanners requires adapting and optimizing clinical imaging protocols to exploit the theoretically higher signal-to-noise ratio (SNR) of the higher field strength. Our aim was to establish reliable and stable protocols meeting the clinical demands for imaging the neurocranium at 3 T. Two hundred patients with a broad range of indications received an examination of the neurocranium with an appropriate assortment of imaging techniques at 3 T. Several imaging parameters were optimized. Keeping scan times comparable to those at 1.5 T, we increased the spatial resolution. Contrast-enhanced and non-enhanced T1-weighted imaging was best performed with gradient-echo and inversion-recovery (rather than spin-echo) techniques, respectively. For fluid-attenuated inversion recovery (FLAIR) imaging, a TE of 120 ms yielded the optimum contrast-to-noise ratio (CNR). High-resolution isotropic 3D data sets were acquired within reasonable scan times. Some artifacts were pronounced, but in general imaging profited from the higher SNR. We present a set of optimized examination protocols for neuroimaging at 3 T which proved to be reliable in a clinical routine setting.

  5. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. In shales, however, substantial hydrogen content is associated with both solids and fluids, and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, it can produce physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the material, medical and food sciences.
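
    The following is a minimal illustrative sketch, not the authors' SGE inversion kernel: it fits a single Gaussian plus a single exponential decay component to a synthetic transverse-relaxation signal, with made-up amplitudes and T2 values, simply to show how the two decay shapes are combined in one model.

    ```python
    # Minimal sketch (not the published SGE kernel): fit one Gaussian plus one
    # exponential decay to a synthetic NMR relaxation signal. All amplitudes and
    # time constants below are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def sge_decay(t, a_g, t2_g, a_e, t2_e):
        """Gaussian decay (solid-like signal) plus exponential decay (fluid-like signal)."""
        return a_g * np.exp(-(t / t2_g) ** 2) + a_e * np.exp(-t / t2_e)

    t = np.linspace(1e-5, 0.05, 2000)                      # acquisition times [s]
    truth = (0.6, 2e-4, 0.4, 5e-3)                         # assumed amplitudes / T2s
    data = sge_decay(t, *truth) + 0.005 * np.random.randn(t.size)

    popt, _ = curve_fit(sge_decay, t, data, p0=(0.5, 1e-4, 0.5, 1e-2))
    print("fitted (A_g, T2_g, A_e, T2_e):", popt)
    ```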

  6. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology starting in the 1970’s. This included early work by Aki, Christoffersson, and Husebye who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.

  7. Combination of photogrammetric and geoelectric methods to assess 3d structures associated to natural hazards

    NASA Astrophysics Data System (ADS)

    Fargier, Yannick; Dore, Ludovic; Antoine, Raphael; Palma Lopes, Sérgio; Fauchard, Cyrille

    2016-04-01

    The extraction of subsurface materials is a key element of a nation's economy. However, natural degradation of underground quarries is a major issue from an economic and public-safety point of view. Consequently, quarry stakeholders require relevant tools to define the hazards associated with these structures. Safety assessment methods for underground quarries are recent and mainly based on rock physical properties. This kind of method relies on an assumption of homogeneity of the pillar's internal properties, which can cause an underestimation of the risk. Electrical Resistivity Imaging (ERI) is a widely used method with two advantages for overcoming this limitation. The first is to provide a qualitative understanding for the detection and monitoring of anomalies in the pillar body (e.g. faults). The second is to provide a quantitative description of the electrical resistivity distribution inside the pillar. This quantitative description can be interpreted with constitutive laws to support decision making (for instance, higher water content decreases the mechanical resistance of chalk). However, conventional 2D and 3D imaging techniques are usually applied to flat-surface surveys or to surfaces with moderate topography. A 3D inversion of a more complex medium (such as the pillar) requires a full consideration of the geometry, which had not been taken into account before. The photogrammetric technique is a cost-effective solution for obtaining an accurate description of the external geometry of a complex medium; however, it had never been fully coupled with a geophysical method to improve the inversion process. Consequently, we developed a complete procedure showing that photogrammetric and ERI tools can be efficiently combined to assess a complex 3D structure. The first part of the procedure consists of a photogrammetric survey, a processing stage with open-source software, and a post-processing stage that finalizes a 3D surface model. The second part requires the production of a complete 3D mesh of this surface model to carry out forward modelling of the geoelectrical problem. To solve the inverse problem and obtain a 3D resistivity distribution, we use a double-grid method associated with a regularized Gauss-Newton inversion scheme. We applied this procedure to a synthetic case to demonstrate the impact of the geometry on the inversion result. This study shows that geometrical information between electrodes is necessary to finely reconstruct the "true model". Finally, we apply the methodology to a real underground quarry pillar, involving one photogrammetric survey and three ERI surveys. The results show that the procedure can greatly improve the reconstruction and avoid artifacts due to strong geometry variations.
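
    As a hedged illustration of the inversion machinery mentioned above (not the authors' double-grid ERI code), the sketch below performs one regularized Gauss-Newton model update for a generic resistivity inverse problem; the forward solver and Jacobian are hypothetical placeholders, here replaced by a toy linear operator.

    ```python
    # Minimal sketch of one regularized Gauss-Newton update, as used in many
    # inversion codes. `forward` and `jacobian` stand in for a geoelectrical
    # forward solver; the smoothing operator L is a simple first difference.
    import numpy as np

    def gauss_newton_step(m, d_obs, forward, jacobian, lam=1e-2):
        """One update minimizing ||d_obs - F(m)||^2 + lam * ||L m||^2."""
        J = jacobian(m)                                   # sensitivity matrix
        r = d_obs - forward(m)                            # data residual
        L = np.eye(m.size) - np.eye(m.size, k=1)          # roughness operator
        lhs = J.T @ J + lam * (L.T @ L)
        rhs = J.T @ r - lam * (L.T @ L) @ m
        return m + np.linalg.solve(lhs, rhs)

    # toy linear "forward model" standing in for the geoelectrical solver
    G = np.random.randn(30, 10)
    forward = lambda m: G @ m
    jacobian = lambda m: G
    m_true = np.linspace(1.0, 2.0, 10)
    d_obs = forward(m_true) + 0.01 * np.random.randn(30)
    m_updated = gauss_newton_step(np.ones(10), d_obs, forward, jacobian)
    ```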

  8. Huge Inverse Magnetization Generated by Faraday Induction in Nano-Sized Au@Ni Core@Shell Nanoparticles.

    PubMed

    Kuo, Chen-Chen; Li, Chi-Yen; Lee, Chi-Hung; Li, Hsiao-Chi; Li, Wen-Hsien

    2015-08-25

    We report on the design and observation of huge inverse magnetizations pointing in the direction opposite to the applied magnetic field, induced in nano-sized amorphous Ni shells deposited on crystalline Au nanoparticles by turning the applied magnetic field off. The magnitude of the induced inverse magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before turning the magnetic field off, and can be as high as 54% of the magnetization prior to cutting off the applied magnetic field. Memory effect of the induced inverse magnetization is clearly revealed in the relaxation measurements. The relaxation of the inverse magnetization can be described by an exponential decay profile, with a critical exponent that can be effectively tuned by the wait time right after reaching the designated temperature and before the applied magnetic field is turned off. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction.

  9. Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum

    NASA Astrophysics Data System (ADS)

    Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-11-01

    The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and three-dimensional Earth structure. Although similar methods have been proposed previously, to the best of our knowledge they have not yet been applied to data. We apply this method to image the seasonally varying sources of Earth's hum during Northern and Southern Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during Northern Hemisphere winter, and at South Pacific coasts and several distinct locations in the Southern Ocean during Southern Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.

  10. Magnetotelluric measurements across the southern Barberton greenstone belt, South Africa: data improving strategies and 2-D inversion results

    NASA Astrophysics Data System (ADS)

    Kutter, S.; Chen, X.; Weckmann, U.

    2011-12-01

    Magnetotelluric (MT) measurements in areas with electromagnetic (EM) noise sources such as electric fences, power and railway lines pose severe challenges to the standard processing procedures. In order to significantly improve the data quality, advanced filtering and processing techniques need to be applied. The presented 5-component MT data set from two field campaigns in 2009 and 2010 in the Barberton/Badplaas area, South Africa, was acquired within the framework of the German-South African geo-scientific research initiative Inkaba yeAfrica. Approximately 200 MT sites aligned along six profiles provide good areal coverage of the southern part of the Barberton Greenstone Belt (BGB). Since it is one of the few remaining well-preserved geological formations from the Archean, it presents an ideal area to study the tectonic evolution and the role of plate tectonics on the Early Earth. In terms of electrical properties, the surrounding high- and low-grade metamorphic rocks are characteristically resistive, whereas mineralized shear zones are possible areas of higher electrical conductivity. Mapping their depth extension is a crucial step towards understanding the formation and evolution of the BGB. Unfortunately, numerous noise sources were active in the measurement area, producing severe spikes and steps in the EM fields. These disturbances mainly affect long periods, which are needed for resolving the deepest structures. The Remote Reference technique as well as two filtering techniques are applied to improve the data in different period ranges. Adjusting their parameters for each site is necessary to obtain the best possible results. The improved data set is used for two-dimensional inversion studies for the six profiles, applying the RLM2DI algorithm of Rodi and Mackie (2001), as implemented in WinGlink. In the models, areas of higher conductivity can be traced beneath known faults throughout the entire array along different profiles. Resistive zones seem to correlate well with plutonic intrusions.

  11. Abel inversion using fast Fourier transforms.

    PubMed

    Kalal, M; Nugent, K A

    1988-05-15

    A fast Fourier transform based Abel inversion technique is proposed. The method is faster than previously used techniques, potentially very accurate (even for a relatively small number of points), and capable of handling large data sets. The technique is discussed in the context of its use with 2-D digital interferogram analysis algorithms. Several examples are given.
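
    A minimal sketch of one possible FFT-based route to Abel inversion, via the Fourier-Hankel (projection-slice) relation, is given below; it is an illustrative implementation under simple assumptions, not the algorithm of the paper.

    ```python
    # Sketch: the 1-D FFT of a symmetric Abel projection equals the zeroth-order
    # Hankel transform of the radial profile, which is then inverted by direct
    # quadrature with Bessel functions. Grid sizes below are arbitrary choices.
    import numpy as np
    from scipy.special import j0

    def abel_invert_fft(proj, dy):
        """Invert a symmetric projection sampled on y = (i - n//2) * dy."""
        n = proj.size
        G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj))) * dy   # FT of projection
        q = np.fft.fftshift(np.fft.fftfreq(n, d=dy))                   # spatial frequencies
        pos = q >= 0.0
        qp, Gp = q[pos], G[pos].real          # symmetric input -> real spectrum
        dq = q[1] - q[0]
        r = np.arange(n // 2) * dy            # radial grid
        # inverse zeroth-order Hankel transform: f(r) = 2*pi * int G(q) J0(2*pi*q*r) q dq
        f = np.array([2.0 * np.pi * np.sum(Gp * j0(2.0 * np.pi * qp * ri) * qp) * dq
                      for ri in r])
        return r, f

    # quick check: the Abel transform of exp(-r^2) is sqrt(pi) * exp(-y^2)
    y = np.linspace(-5.0, 5.0, 513)
    r, f = abel_invert_fft(np.sqrt(np.pi) * np.exp(-y ** 2), y[1] - y[0])
    print("max deviation from exp(-r^2):", np.max(np.abs(f - np.exp(-r ** 2))))
    ```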

  12. The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui

    1996-02-01

    Based on forward and inverse Fourier transformation techniques, the authors discuss the design of ordinary differentiators used in the simulation of acoustic and elastic wavefields in isotropic media. To effectively suppress the Gibbs effects caused by truncation, a Hanning window is introduced. Model computations show that the convolutional differentiator method is fast, has low computer memory requirements and high precision, making it a promising method for numerical simulation.
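
    The sketch below illustrates the general idea under simple assumptions (it is not the authors' operator design): the ideal spectral derivative i*k is transformed to a short space-domain stencil, tapered with a Hanning window to suppress the Gibbs oscillations caused by truncation, and applied by convolution.

    ```python
    # Illustrative windowed convolutional differentiator; lengths are arbitrary choices.
    import numpy as np

    def windowed_differentiator(nfft, dx, half_len=8):
        """Short convolution stencil approximating d/dx, derived from the spectrum i*k."""
        k = 2.0 * np.pi * np.fft.fftfreq(nfft, d=dx)              # wavenumbers
        stencil = np.real(np.fft.ifft(1j * k))                     # full-length operator
        stencil = np.roll(stencil, half_len)[: 2 * half_len + 1]   # keep a centred short piece
        return stencil * np.hanning(stencil.size)                  # taper against Gibbs ringing

    n = 256
    x = np.arange(n) * (2.0 * np.pi / n)
    f = np.sin(x)
    d = windowed_differentiator(n, x[1] - x[0])
    dfdx = np.convolve(f, d, mode="same")                          # numerical derivative
    interior = slice(10, -10)                                      # ignore zero-padded edges
    print("max interior error vs cos(x):",
          np.max(np.abs(dfdx[interior] - np.cos(x)[interior])))
    ```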

  13. Application of resistivity monitoring to evaluate cement grouting effect in earth filled dam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jin-Mo; Yoon, Wang-Jung

    In this paper, we applied the electrical resistivity monitoring method to evaluate the effect of cement grouting. There are many ways to evaluate the grouting effect; to perform this evaluation with high safety, high efficiency, and low cost, resistivity monitoring was found to be the most appropriate technique. We selected a dam site in Korea, acquired resistivity monitoring data, and compared the inversion results to estimate the cement grouting effect.

  14. Selective field evaporation in field-ion microscopy for ordered alloys

    NASA Astrophysics Data System (ADS)

    Ge, Xi-jin; Chen, Nan-xian; Zhang, Wen-qing; Zhu, Feng-wu

    1999-04-01

    Semiempirical pair potentials, obtained by applying the Chen-inversion technique to a cohesion equation of Rose et al. [Phys. Rev. B 29, 2963 (1984)], are employed to assess the bonding energies of surface atoms of intermetallic compounds. This provides a new calculational model of selective field evaporation in field-ion microscopy (FIM). Based on this model, a successful interpretation of FIM image contrasts for Fe3Al, PtCo, Pt3Co, Ni4Mo, Ni3Al, and Ni3Fe is given.

  15. Surface-tension-driven flow in a glass melt

    NASA Technical Reports Server (NTRS)

    Mcneil, Thomas J.; Cole, Robert; Shankar Subramanian, R.

    1985-01-01

    Motion driven by surface tension gradients was observed in a vertical capillary liquid bridge geometry in a sodium borate melt. The surface tension gradients were introduced by maintaining a temperature gradient on the free melt surface. The flow velocities at the free surface of the melt, which were measured using a tracer technique, were found to be proportional to the applied temperature difference and inversely proportional to the melt viscosity. The experimentally observed velocities were in reasonable accord with predictions from a theoretical model of the system.

  16. Oil core microcapsules by inverse gelation technique.

    PubMed

    Martins, Evandro; Renard, Denis; Davy, Joëlle; Marquis, Mélanie; Poncelet, Denis

    2015-01-01

    A promising technique for oil encapsulation in Ca-alginate capsules by inverse gelation was proposed by Abang et al. This method consists of emulsifying a calcium chloride solution in oil and then adding it dropwise into an alginate solution to produce Ca-alginate capsules. Spherical capsules with diameters around 3 mm were produced by this technique; however, the production of smaller capsules was not demonstrated. The objective of this study is to propose a new method of oil encapsulation in a Ca-alginate membrane by inverse gelation. Optimisation of the method leads to microcapsules with diameters around 500 μm. In the search for microcapsules with improved diffusion characteristics, size reduction is an essential factor for broadening applications in the food, cosmetics and pharmaceutical areas. This work contributes to a better understanding of the inverse gelation technique and allows the production of microcapsules with a well-defined shell-core structure.

  17. CSI-EPT in Presence of RF-Shield for MR-Coils.

    PubMed

    Arduino, Alessandro; Zilberti, Luca; Chiampi, Mario; Bottauscio, Oriano

    2017-07-01

    Contrast source inversion electric properties tomography (CSI-EPT) is a recently developed technique for electric properties tomography that recovers the electric properties distribution starting from measurements performed by magnetic resonance imaging scanners. This method is an optimal control approach based on the contrast source inversion technique, which distinguishes itself from other electric properties tomography techniques by its capability to also recover the local specific absorption rate distribution, essential for online dosimetry. Up to now, CSI-EPT has only been described in terms of integral equations, limiting its applicability to homogeneous unbounded backgrounds. In order to extend the method to the presence of a shield in the domain, as in the recurring case of shielded radio-frequency coils, a more general formulation of CSI-EPT, based on a functional viewpoint, is introduced here. Two different implementations of CSI-EPT are proposed for a 2-D transverse magnetic model problem, one dealing with an unbounded domain and one considering the presence of a perfectly conductive shield. The two implementations are applied to the same virtual measurements obtained by numerically simulating a shielded radio-frequency coil. The results are compared in terms of both electric properties recovery and local specific absorption rate estimation, in order to investigate the requirement of an accurate modeling of the underlying physical problem.

  18. Three-dimensional mosaicking of the South Korean radar network

    NASA Astrophysics Data System (ADS)

    Berenguer, Marc; Sempere-Torres, Daniel; Lee, GyuWon

    2016-04-01

    Dense radar networks offer the possibility of improved Quantitative Precipitation Estimation thanks to the additional information collected in the overlapping areas, which allows mitigating errors associated with the Vertical Profile of Reflectivity or path attenuation by intense rain. With this aim, Roca-Sancho et al. (2014) proposed a technique to generate 3-D reflectivity mosaics from the multiple radars of a network. The technique is based on an inverse method that simulates the radar sampling of the atmosphere considering the characteristics (location, frequency and scanning protocol) of each individual radar. This technique has been applied to mosaic the observations of the radar network of South Korea (composed of 14 S-band radars) and to integrate the observations of the small X-band network to be installed near Seoul in the framework of a project funded by the Korea Agency for Infrastructure Technology Advancement (KAIA). The evaluation of the generated 3-D mosaics has been done by comparison with point measurements (i.e. rain gauges and disdrometers) and with the observations of independent radars. Reference: Roca-Sancho, J., M. Berenguer, and D. Sempere-Torres (2014), An inverse method to retrieve 3D radar reflectivity composites, Journal of Hydrology, 519, 947-965, doi: 10.1016/j.jhydrol.2014.07.039.

  19. Inverse statistical physics of protein sequences: a key issues review.

    PubMed

    Cocco, Simona; Feinauer, Christoph; Figliuzzi, Matteo; Monasson, Rémi; Weigt, Martin

    2018-03-01

    In the course of evolution, proteins undergo important changes in their amino acid sequences, while their three-dimensional folded structure and their biological function remain remarkably conserved. Thanks to modern sequencing techniques, sequence data accumulate at unprecedented pace. This provides large sets of so-called homologous, i.e. evolutionarily related protein sequences, to which methods of inverse statistical physics can be applied. Using sequence data as the basis for the inference of Boltzmann distributions from samples of microscopic configurations or observables, it is possible to extract information about evolutionary constraints and thus protein function and structure. Here we give an overview over some biologically important questions, and how statistical-mechanics inspired modeling approaches can help to answer them. Finally, we discuss some open questions, which we expect to be addressed over the next years.

  20. Inverse statistical physics of protein sequences: a key issues review

    NASA Astrophysics Data System (ADS)

    Cocco, Simona; Feinauer, Christoph; Figliuzzi, Matteo; Monasson, Rémi; Weigt, Martin

    2018-03-01

    In the course of evolution, proteins undergo important changes in their amino acid sequences, while their three-dimensional folded structure and their biological function remain remarkably conserved. Thanks to modern sequencing techniques, sequence data accumulate at unprecedented pace. This provides large sets of so-called homologous, i.e. evolutionarily related protein sequences, to which methods of inverse statistical physics can be applied. Using sequence data as the basis for the inference of Boltzmann distributions from samples of microscopic configurations or observables, it is possible to extract information about evolutionary constraints and thus protein function and structure. Here we give an overview over some biologically important questions, and how statistical-mechanics inspired modeling approaches can help to answer them. Finally, we discuss some open questions, which we expect to be addressed over the next years.

  1. Full-waveform inversion of GPR data for civil engineering applications

    NASA Astrophysics Data System (ADS)

    van der Kruk, Jan; Kalogeropoulos, Alexis; Hugenschmidt, Johannes; Klotzsche, Anja; Busch, Sebastian; Vereecken, Harry

    2014-05-01

    Conventional GPR ray-based techniques are often limited in their capability to image complex structures due to the approximations involved. With increased computational power, it is becoming easier to use modeling and inversion tools that explicitly take into account the detailed electromagnetic wave propagation characteristics. In this way, new civil engineering application avenues are opening up that enable improved high-resolution imaging of quantitative medium properties. In this contribution, we show recent developments that enable the full-waveform inversion of off-ground, on-ground and crosshole GPR data. For a successful inversion, a proper starting model must be used that generates synthetic data matching the measured data to within half a wavelength. In addition, the GPR system must be calibrated such that an effective wavelet is obtained that encompasses the complexity of the GPR source and receiver antennas. Simple geometries such as horizontal layers can be described with a limited number of model parameters, which enables the use of a combined global and local search using the Simplex search algorithm. This approach has been implemented for the full-waveform inversion of off-ground and on-ground GPR data measured over horizontally layered media. In this way, an accurate 3D frequency-domain forward model of Maxwell's equations can be used, where the integral representation of the electric field is numerically evaluated. The full-waveform inversion (FWI) for a large number of unknowns uses gradient-based optimization methods, where a 3D-to-2D conversion is used to apply the method to experimental data. Off-ground GPR data, measured over homogeneous concrete specimens, were inverted using the full-waveform inversion. In contrast to traditional ray-based techniques, we were able to obtain quantitative values for the permittivity and conductivity and in this way distinguish between moisture and chloride effects. For increasing chloride content, increasing frequency-dependent conductivity values were obtained. The off-ground full-waveform inversion was extended to invert for positive and negative gradients in conductivity, and the conductivity gradient direction could be correctly identified. Experimental specimens containing gradients were generated by exposing a concrete slab to controlled wetting-drying cycles using a saline solution. Full-waveform inversion of the measured data correctly identified the conductivity gradient direction, which was confirmed by destructive analysis. On-ground CMP GPR data measured over a concrete layer overlying a metal plate show interfering multiple reflections, which indicates that the structure acts as a waveguide. Calculation of the phase-velocity spectrum shows the presence of several higher-order modes. Whereas the dispersion inversion returns the thickness and layer height, the full-waveform inversion was also able to estimate quantitative conductivity values. This abstract is a contribution to COST Action TU1208.
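
    As a hedged toy illustration of the parameter-search idea for simple geometries (not the authors' full-waveform code), the sketch below inverts the relative permittivity and conductivity of a homogeneous half-space from a synthetic normal-incidence reflection spectrum using the Nelder-Mead simplex search; the frequency band and material values are assumptions.

    ```python
    # Toy frequency-domain inversion: recover eps_r and sigma of a lossy half-space
    # from a synthetic reflection spectrum with a simplex (Nelder-Mead) search.
    import numpy as np
    from scipy.optimize import minimize

    EPS0, MU0 = 8.854e-12, 4e-7 * np.pi
    freqs = np.linspace(0.5e9, 3.0e9, 60)                  # assumed GPR band [Hz]
    omega = 2.0 * np.pi * freqs

    def reflection(eps_r, sigma):
        """Normal-incidence reflection coefficient, air over a lossy half-space."""
        z_air = np.sqrt(MU0 / EPS0)
        z_med = np.sqrt(1j * omega * MU0 / (sigma + 1j * omega * eps_r * EPS0))
        return (z_med - z_air) / (z_med + z_air)

    d_obs = reflection(9.0, 0.02)                          # synthetic "measured" data

    def misfit(m):
        return np.sum(np.abs(reflection(m[0], m[1]) - d_obs) ** 2)

    res = minimize(misfit, x0=[6.0, 0.005], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-12})
    print("recovered eps_r, sigma:", res.x)
    ```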

  2. Strategies to Enhance the Model Update in Regions of Weak Sensitivities for Use in Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Nuber, André; Manukyan, Edgar; Maurer, Hansruedi

    2014-05-01

    Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say, 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need to separate events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method equilibrates the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. We decided to normalise the columns of the Jacobian based on their absolute column sums, while defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method adjusts the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are on the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are then quite balanced and span a much smaller range than in the case of a regular inversion grid.
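
    A minimal sketch of the column-balancing idea described above, under the assumption of a plain dense Jacobian: each column is scaled by the inverse of its absolute column sum, with an upper threshold on the scaling factors so that insignificant sensitivities are not over-boosted. The threshold value and test matrix are illustrative.

    ```python
    # Illustrative Jacobian column balancing with a cap on the scaling factors.
    import numpy as np

    def balance_jacobian(J, max_scale=1e3):
        col_sum = np.sum(np.abs(J), axis=0)                   # absolute column sums
        scale = 1.0 / np.maximum(col_sum, 1.0 / max_scale)    # cap factors at max_scale
        return J * scale, scale                               # scaled Jacobian and factors

    J = np.array([[10.0, 0.1, 1e-8],
                  [ 8.0, 0.2, 2e-8]])
    J_scaled, s = balance_jacobian(J)
    print("scaling factors:", s)   # the near-zero third column is boosted only up to the cap
    ```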

  3. Real time flaw detection and characterization in tube through partial least squares and SVR: Application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, Shamim; Miorelli, Roberto; Calmon, Pierre; Anselmi, Nicola; Salucci, Marco

    2018-04-01

    This paper describes a Learning-By-Examples (LBE) technique for performing quasi-real-time flaw localization and characterization within a conductive tube based on Eddy Current Testing (ECT) signals. Within the framework of LBE, the combination of full-factorial (i.e., GRID) sampling and Partial Least Squares (PLS) feature extraction (i.e., GRID-PLS) techniques is applied to generate a suitable training set in the offline phase. Support Vector Regression (SVR) is utilized for model development and inversion during the offline and online phases, respectively. The performance and robustness of the proposed GRID-PLS/SVR strategy on a noisy test set is evaluated and compared with the standard GRID/SVR approach.
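
    A hedged sketch of the offline/online workflow using scikit-learn, with random placeholder data instead of simulated ECT signals: PLS reduces the signals to a few features, and an SVR maps those features to a single hypothetical flaw parameter (here a flaw depth).

    ```python
    # Minimal PLS feature extraction + SVR inversion sketch; data are placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 128))        # offline phase: simulated ECT signals
    y_train = rng.uniform(0.1, 2.0, size=200)    # corresponding flaw depths [mm]

    pls = PLSRegression(n_components=8).fit(X_train, y_train)      # feature extraction
    svr = SVR(kernel="rbf", C=10.0).fit(pls.transform(X_train), y_train)

    x_new = rng.normal(size=(1, 128))            # online phase: one measured signal
    depth_estimate = svr.predict(pls.transform(x_new))
    print("estimated flaw depth [mm]:", depth_estimate)
    ```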

  4. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram at high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). TTLS is a regularization technique that reduces the influence of both the measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter was also investigated based on the L-curve. Computer simulations suggested that the estimation accuracy was improved by TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that TTLS provided high spatial resolution for cortical dipole imaging.
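
    A minimal sketch of the TTLS solution of a linear inverse problem A x ≈ b in which both the transfer matrix A and the data b are noisy; the truncation level k plays the role of the regularization parameter. The matrix sizes and noise levels are illustrative assumptions.

    ```python
    # Truncated total least squares via SVD of the augmented matrix [A | b].
    import numpy as np

    def ttls(A, b, k):
        m, n = A.shape
        C = np.column_stack([A, b])                  # augmented matrix [A | b]
        _, _, Vt = np.linalg.svd(C)
        V = Vt.T
        V12 = V[:n, k:]                              # first n rows of discarded right vectors
        V22 = V[n:, k:]                              # last row of discarded right vectors
        return -V12 @ V22.T @ np.linalg.inv(V22 @ V22.T)

    A = np.random.randn(50, 20)
    x_true = np.random.randn(20, 1)
    b = A @ x_true + 0.01 * np.random.randn(50, 1)               # noisy data
    x_est = ttls(A + 0.01 * np.random.randn(50, 20), b, k=15)    # noisy transfer matrix
    print("recovery error:", np.linalg.norm(x_est - x_true))
    ```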

  5. Direct vibro-elastography FEM inversion in Cartesian and cylindrical coordinate systems without the local homogeneity assumption

    NASA Astrophysics Data System (ADS)

    Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.

    2015-05-01

    To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other components. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite-element-based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement at one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 when the proposed method is used instead of the local inversion with the homogeneity assumption, and similarly in the prostate phantom experiment the CNR improved from an average value of 1.6 to about 20.

  6. WE-AB-209-02: A New Inverse Planning Framework with Principle-Based Modeling of Inter-Structural Dosimetric Tradeoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Dong, P; Xing, L

    Purpose: Traditional radiotherapy inverse planning relies on weighting factors to phenomenologically balance the conflicting criteria for different structures. The resulting manual trial-and-error determination of the weights has long been recognized as the most time-consuming part of treatment planning. The purpose of this work is to develop an inverse planning framework that parameterizes the inter-structural dosimetric tradeoff with physically more meaningful quantities to simplify the search for a clinically sensible plan. Methods: A permissible dosimetric uncertainty is introduced for each of the structures to balance their conflicting dosimetric requirements. The inverse planning is then formulated as a convex feasibility problem, which aims to generate plans with acceptable dosimetric uncertainties. A sequential procedure (SP) is derived to decompose the model into three submodels to constrain the uncertainty in the planning target volume (PTV), the critical structures, and all other structures to spare, sequentially. The proposed technique is applied to plan a liver case and a head-and-neck case and compared with a conventional approach. Results: Our results show that the strategy is able to generate clinically sensible plans with little trial-and-error. In the case of liver IMRT, the fractional volumes of liver and heart above 20 Gy are found to be 22% and 10%, respectively, which are 15.1% and 33.3% lower than those of the counterpart conventional plan while maintaining the same PTV coverage. The planning of the head-and-neck IMRT shows the same level of success, with the DVHs for all organs at risk and the PTV very competitive with a counterpart plan. Conclusion: A new inverse planning framework has been established. With physically more meaningful modeling of the inter-structural tradeoff, the technique enables us to substantially reduce the need for trial-and-error adjustment of the model parameters and opens new opportunities for incorporating prior knowledge to facilitate the treatment planning process.

  7. Coupled Hydrogeophysical Inversion and Hydrogeological Data Fusion

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Schwede, R. L.; Li, W.

    2012-12-01

    Tomographic geophysical monitoring methods give the opportunity to observe hydrogeological tests at higher spatial resolution than is possible with classical hydraulic monitoring tools. This has been demonstrated in a substantial number of studies in which electrical resistivity tomography (ERT) has been used to monitor salt-tracer experiments. It is now accepted that inversion of such data sets requires a fully coupled framework, explicitly accounting for the hydraulic processes (groundwater flow and solute transport), the relationship between solute and geophysical properties (a petrophysical relationship such as Archie's law), and the governing equations of the geophysical surveying techniques (e.g., the Poisson equation) as a consistent coupled system. These data sets can be amended with data from other, more direct, hydrogeological tests to infer the distribution of hydraulic aquifer parameters. In the inversion framework, meaningful condensation of data not only contributes to inversion efficiency but also increases the stability of the inversion. In particular, transient concentration data themselves depend only weakly on hydraulic conductivity, and model improvement using gradient-based methods is only possible when a substantial agreement between measurements and model output already exists. The latter also holds when concentrations are monitored by ERT. Tracer arrival times, by contrast, show high sensitivity and a more monotonic dependence on hydraulic conductivity than the concentrations themselves. Thus, even without using temporal-moment generating equations, inverting travel times rather than concentrations or the related geoelectrical signals themselves is advantageous. We have applied this approach to concentrations measured directly or via ERT, and to heat-tracer data. We present a consistent inversion framework including temporal moments of concentrations, geoelectrical signals obtained during salt-tracer tests, drawdown data from hydraulic tomography, and flowmeter measurements to identify mainly the hydraulic-conductivity distribution. By stating the inversion as a geostatistical conditioning problem, we obtain parameter sets together with their correlated uncertainty. While we have applied the quasi-linear geostatistical approach as the inverse kernel, other methods, such as ensemble Kalman methods, may suit the same purpose, particularly when many data points are to be included. In order to identify 3-D fields, discretized by about 50 million grid points, we use the high-performance-computing framework DUNE to solve the involved partial differential equations on a midrange computer cluster. We have quantified the worth of different data types in these inference problems. In practical applications, the constitutive relationships between geophysical, thermal, and hydraulic properties can pose a problem, requiring additional inversion. However, poorly constrained transient boundary conditions may call inversion efforts on larger (e.g., regional) scales even more into question. We envision that future hydrogeophysical inversion efforts will target boundary conditions, such as groundwater recharge rates, in conjunction with, or instead of, aquifer parameters. By this, the distinction between data assimilation and parameter estimation will gradually vanish.

  8. Monte Carlo uncertainty analyses of a bLS inverse-dispersion technique for measuring gas emissions from livestock operations

    USDA-ARS?s Scientific Manuscript database

    The backward Lagrangian stochastic (bLS) inverse-dispersion technique has been used to measure fugitive gas emissions from livestock operations. The accuracy of the bLS technique, as indicated by the percentages of gas recovery in various tracer-release experiments, has generally been within ± 10% o...

  9. Recovery of time-dependent volatility in option pricing model

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Hon, Y. C.; Isakov, V.

    2016-11-01

    In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply a linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely the integral equation method and Landweber iterations, are adopted to obtain a stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region (Project No. CityU 101112), grants from the NNSF of China (Nos. 11261029, 11461039), NSF grants DMS 10-08902 and 15-14886, and the Emylou Keith and Betty Dutcher Distinguished Professorship at Wichita State University (USA).
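
    As a hedged illustration of one of the two strategies named above, the sketch below applies plain Landweber iteration to a linear toy problem; the operator, step size and iteration count are assumptions, not the paper's setup.

    ```python
    # Landweber iteration for a linear(ised) inverse problem K m = d. The step size
    # omega must satisfy 0 < omega < 2 / ||K||^2 for convergence.
    import numpy as np

    def landweber(K, d, n_iter=2000, omega=None, m0=None):
        m = np.zeros(K.shape[1]) if m0 is None else m0.copy()
        if omega is None:
            omega = 1.0 / np.linalg.norm(K, 2) ** 2
        for _ in range(n_iter):
            m = m + omega * K.T @ (d - K @ m)        # gradient step on 0.5 * ||K m - d||^2
        return m

    K = np.tril(np.ones((40, 40))) * 0.05            # toy integration/smoothing operator
    m_true = np.sin(np.linspace(0, np.pi, 40))
    d = K @ m_true + 0.001 * np.random.randn(40)
    m_rec = landweber(K, d)
    print("relative error:", np.linalg.norm(m_rec - m_true) / np.linalg.norm(m_true))
    ```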

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nurhandoko, Bagus Endar B.; Wely, Woen; Setiadi, Herlan

    It is already known that tomography has a great impact on analyzing and mapping unknown objects based on inversion of travel-time as well as waveform data. Therefore, tomography has been used in a wide range of areas, not only in medicine but also in petroleum and mining. Recently, tomographic methods have been applied in several mining industries. A case study of tomographic imaging has been carried out in the DOZ (Deep Ore Zone) block caving mine, Tembagapura, Papua. Many researchers are investigating the properties of the DOZ cave, not only outside but also inside, which is unknown; tomography plays a part in achieving this objective. The sources are natural seismic events caused by mining-induced seismicity and rock deformation activity, which is why the approach is called passive seismic. These microseismic travel-time data are processed with the Simultaneous Iterative Reconstruction Technique (SIRT). The result of the inversion can be used for monitoring the DOZ cave, and this information can be used to identify weak zones inside the cave. In addition, the tomography results can be used to provide DOZ and cave information to support mining activity at PT. Freeport Indonesia.
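
    A minimal sketch of a SIRT update for travel-time tomography, with a random placeholder ray-path matrix standing in for the DOZ microseismic geometry; slowness corrections are back-projected along all rays simultaneously, normalised by row and column sums.

    ```python
    # Simultaneous Iterative Reconstruction Technique (SIRT) sketch for travel times.
    import numpy as np

    def sirt(A, t_obs, n_iter=200, relax=1.0):
        row = 1.0 / np.maximum(np.sum(np.abs(A), axis=1), 1e-12)   # ray normalisation
        col = 1.0 / np.maximum(np.sum(np.abs(A), axis=0), 1e-12)   # cell normalisation
        s = np.zeros(A.shape[1])                                   # slowness model
        for _ in range(n_iter):
            s = s + relax * col * (A.T @ (row * (t_obs - A @ s)))
        return s

    rng = np.random.default_rng(2)
    A = rng.uniform(0.0, 1.0, size=(120, 60))      # placeholder ray-length matrix [m]
    s_true = 0.5 + 0.1 * rng.standard_normal(60)   # true slowness [s/m]
    t_obs = A @ s_true                              # synthetic travel times [s]
    s_est = sirt(A, t_obs)
    ```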

  11. Tomographic reconstruction of atmospheric turbulence with the use of time-dependent stochastic inversion.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M

    2007-09-01

    Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from travel times of sound propagation between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations both in space and time and therefore yields a more accurate reconstruction of these fields in comparison with algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence, which is a more general concept than the widely used hypothesis of frozen turbulence. The developed theory is applied to the reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by the University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented, and errors in the reconstruction of these fields are studied.
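
    The following is a minimal sketch of the stochastic-inversion estimator that underlies TDSI, m_hat = R_md (R_dd + noise)^-1 d, built from model-data and data-data covariances; the Gaussian covariance model and the 1-D geometry are illustrative placeholders, not the locally frozen turbulence model of the paper.

    ```python
    # Best-linear-unbiased (stochastic inversion) estimate of a field from sparse data.
    import numpy as np

    def stochastic_inversion(R_md, R_dd, noise_var, d):
        return R_md @ np.linalg.solve(R_dd + noise_var * np.eye(d.size), d)

    # toy 1-D example: estimate a temperature profile from 5 noisy observations
    x_grid = np.linspace(0.0, 100.0, 50)              # reconstruction points [m]
    x_obs = np.array([10.0, 30.0, 50.0, 70.0, 90.0])  # observation locations [m]
    ell, sigma2 = 20.0, 1.0                           # assumed correlation length / variance
    cov = lambda a, b: sigma2 * np.exp(-((a[:, None] - b[None, :]) / ell) ** 2)

    d = np.random.multivariate_normal(np.zeros(5), cov(x_obs, x_obs)) \
        + 0.1 * np.random.randn(5)
    m_hat = stochastic_inversion(cov(x_grid, x_obs), cov(x_obs, x_obs), 0.01, d)
    ```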

  12. Objective quantification of perturbations produced with a piecewise PV inversion technique

    NASA Astrophysics Data System (ADS)

    Fita, L.; Romero, R.; Ramis, C.

    2007-11-01

    PV inversion techniques have been widely used in numerical studies of severe weather cases. These techniques can be applied to study the sensitivity of the responsible meteorological system to changes in the initial conditions of the simulations. The dynamical effects of a collection of atmospheric features involved in the evolution of the system can be isolated. However, aspects such as the definition of the atmospheric features or the amount of change in the initial conditions are largely case-dependent and/or subjectively defined. An objective way to calculate the modification of the initial fields is proposed to alleviate this problem. The perturbations are quantified as the mean absolute variation of the total energy between the original and modified fields, and a unique energy variation value is fixed for all the perturbations derived from different PV anomalies. Thus, PV features of different dimensions and characteristics introduce the same net modification of the initial conditions from an energetic point of view. The devised quantification method is applied to study the high-impact weather case of 9-11 November 2001 in the Western Mediterranean basin, when a deep and strong cyclone formed. On the Balearic Islands 4 people died, and sustained winds of 30 m s-1 and precipitation higher than 200 mm/24 h were recorded. Moreover, 700 people died in Algiers during the first phase of the event. The sensitivities to perturbations in the initial conditions of a deep upper-level trough, the anticyclonic system related to the North Atlantic high, and the surface thermal anomaly related to the baroclinicity of the environment are determined. Results reveal a high influence of the upper-level trough and the surface thermal anomaly and a minor role of the North Atlantic high during the genesis of the cyclone.

  13. An approximate inverse scattering technique for reconstructing blockage profiles in water pipelines using acoustic transients.

    PubMed

    Jing, Liwen; Li, Zhao; Wang, Wenjie; Dubey, Amartansh; Lee, Pedro; Meniconi, Silvia; Brunone, Bruno; Murch, Ross D

    2018-05-01

    An approximate inverse scattering technique is proposed for reconstructing cross-sectional area variation along water pipelines to deduce the size and position of blockages. The technique allows the reconstructed blockage profile to be written explicitly in terms of the measured acoustic reflectivity. It is based upon the Born approximation and provides good accuracy, low computational complexity, and insight into the reconstruction process. Numerical simulations and experimental results are provided for long pipelines with mild and severe blockages of different lengths. Good agreement is found between the inverse result and the actual pipe condition for mild blockages.

  14. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations come from a data rescue effort that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than previously published. Of the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, of the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country-level deposition very efficiently. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.
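
    A hedged sketch of the Gaussian Bayesian estimator typically used for such source-term inversions: given a source-receptor matrix M, observations y with error covariance R, and a prior x_prior with covariance B, the posterior mean follows from the regularized normal equations. The matrix M and all numbers below are random placeholders, not the Chernobyl data.

    ```python
    # Posterior mean x_post = x_prior + (M^T R^-1 M + B^-1)^-1 M^T R^-1 (y - M x_prior)
    import numpy as np

    def bayesian_source_inversion(M, y, x_prior, B, R):
        A = M.T @ np.linalg.solve(R, M) + np.linalg.inv(B)
        rhs = M.T @ np.linalg.solve(R, y - M @ x_prior)
        return x_prior + np.linalg.solve(A, rhs)

    n_obs, n_src = 300, 24                           # e.g. 24 emission time steps
    M = np.abs(np.random.randn(n_obs, n_src))        # placeholder source-receptor matrix
    x_true = np.random.uniform(0.5, 2.0, n_src)
    y = M @ x_true + 0.05 * np.random.randn(n_obs)
    x_post = bayesian_source_inversion(M, y, np.ones(n_src),
                                       B=0.5 * np.eye(n_src), R=0.01 * np.eye(n_obs))
    ```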

  15. OCT structure, COB location and magmatic type of the SE Brazilian & S Angolan margins from integrated quantitative analysis of deep seismic reflection and gravity anomaly data

    NASA Astrophysics Data System (ADS)

    Cowie, L.; Kusznir, N. J.; Horn, B.

    2013-12-01

    Knowledge of ocean-continent transition (OCT) structure, continent-ocean boundary (COB) location and magmatic type is of critical importance for understanding rifted continental margin formation processes and for evaluating petroleum systems in deep-water frontier oil and gas exploration. The OCT structure, COB location and magmatic type of the SE Brazilian and S Angolan rifted continental margins are much debated; exhumed and serpentinised mantle have been reported at these margins. Integrated quantitative analysis using deep seismic reflection data and gravity inversion has been used to determine OCT structure, COB location and magmatic type for the SE Brazilian and S Angolan margins. Gravity inversion has been used to determine Moho depth, crustal basement thickness and continental lithosphere thinning. Residual Depth Anomaly (RDA) analysis has been used to investigate OCT bathymetric anomalies with respect to expected oceanic bathymetries, and subsidence analysis has been used to determine the distribution of continental lithosphere thinning. These techniques have been validated on the Iberian margin for profiles IAM9 and ISE-01. In addition, a joint inversion technique using deep seismic reflection and gravity anomaly data has been applied to the ION-GXT BS1-575 SE Brazil and ION-GXT CS1-2400 S Angola profiles. The joint inversion method solves for a coincident seismic and gravity Moho in the time domain and calculates the lateral variations in crustal basement densities and velocities along the profile. Gravity inversion, RDA and subsidence analysis along the S Angolan ION-GXT CS1-2400 profile have been used to determine OCT structure and COB location. The analysis suggests that exhumed mantle, corresponding to a magma-poor margin, is absent beneath the allochthonous salt. The thickness of the earliest oceanic crust, derived from gravity and deep seismic reflection data, is approximately 7 km. The joint inversion predicts crustal basement densities and seismic velocities that are slightly less than expected for 'normal' oceanic crust. The difference between the sediment-corrected RDA and that predicted from the gravity-inversion crustal thickness variation implies that this margin is experiencing ~300 m of anomalous uplift attributed to mantle dynamic uplift. Gravity inversion, RDA and subsidence analysis have also been used to determine OCT structure and COB location along the ION-GXT BS1-575 profile, crossing the Sao Paulo Plateau and Florianopolis Ridge of the SE Brazilian margin. Gravity inversion, RDA and subsidence analysis predict the COB to be located SE of the Florianopolis Ridge. The analysis shows no evidence for exhumed mantle on this margin profile. The joint inversion technique predicts normal oceanic basement seismic velocities and densities and, beneath the Sao Paulo Plateau and Florianopolis Ridge, predicts crustal basement thicknesses between 10 and 15 km. The Sao Paulo Plateau and Florianopolis Ridge are separated by a thin region of crustal basement beneath the salt, interpreted as a regional transtensional structure. Sediment-corrected RDAs and gravity-derived 'synthetic' RDAs are of similar magnitude on oceanic crust, implying negligible mantle dynamic topography.

  16. Recovery of surface mass redistribution from a joint inversion of GPS and GRACE data - A methodology and results from the Australian and other continents

    NASA Astrophysics Data System (ADS)

    Han, S. C.; Tangdamrongsub, N.; Razeghi, S. M.

    2017-12-01

    We present a methodology to invert a regional set of vertical displacement data from Global Positioning System (GPS) to determine surface mass redistribution. It is assumed that GPS deformation is a result of the Earth's elastic response to the surface mass load of hydrology, atmosphere, and ocean. The identical assumption is made when global geopotential change data from Gravity Recovery And Climate Experiment (GRACE) are used to determine surface mass changes. We developed an algorithm to estimate the spectral information of displacements from "regional" GPS data through regional spherical (Slepian) basis functions and apply the load Love numbers to estimate the mass load. We rigorously examine all systematic errors caused by various truncations (spherical harmonic series and Slepian series) and the smoothing constraint applied to the GPS-only inversion. We demonstrate the technique by processing 16 years of daily vertical motions determined from 114 GPS stations in Australia. The GPS inverted surface mass changes are validated against GRACE data, atmosphere and ocean models, and a land surface model. Seasonal and inter-annual terrestrial mass variations from GPS are in good agreement with GRACE data and the water storage models. The GPS recovery compares better with the water storage model around the smaller coastal basins of Australia than two different GRACE solutions. The sub-monthly mass changes from GPS provide meaningful results agreeing with atmospheric mass changes in central Australia. Finally, we integrate GPS data from different continents with GRACE in the least-square normal equations and solve for the global surface mass changes by jointly inverting GPS and GRACE data. We present the results of surface mass changes from the GPS-only inversion and from the joint GPS-GRACE inversion.

  17. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose-rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed-dose-rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with study of the clinical relevance of new dosimetric rules.

  18. Feasibility study for automatic reduction of phase change imagery

    NASA Technical Reports Server (NTRS)

    Nossaman, G. O.

    1971-01-01

    The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.

  19. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. As with the gradients, the crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over the iterations. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with the conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated by the simultaneous-source technique. Comparing the inverted results using the pseudo-Hessian matrix with previous inversion results provided by the approximate Hessian matrix, it is noted that the latter are better than the former for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
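
    As a rough illustration of the random phase encoding idea described above (not the authors' implementation), the sketch below sums hypothetical frequency-domain point-source vectors into one encoded supershot; the grid size, source positions and code distribution are invented for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      def encode_sources(source_terms, rng):
          # Sum individual frequency-domain source vectors into one simultaneous
          # "supershot" using random phase codes exp(i*phi), phi ~ U[0, 2*pi).
          phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=len(source_terms)))
          encoded = sum(p * s for p, s in zip(phases, source_terms))
          return phases, encoded

      # hypothetical per-source right-hand sides on a 500-node frequency-domain grid
      n_sources, n_nodes = 8, 500
      sources = [np.zeros(n_nodes, dtype=complex) for _ in range(n_sources)]
      for k, s in enumerate(sources):
          s[10 + 60 * k] = 1.0        # point source at an assumed node position

      codes, supershot = encode_sources(sources, rng)
      # One forward solve with the supershot replaces n_sources separate solves;
      # re-drawing the codes at every iteration averages the crosstalk toward zero.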

  20. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China, earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and the maximum slip of 1.38 m reached at the surface. The seismic moment released is estimated to be 2.32 × 10^19 N m, consistent with the seismic estimate of 2.50 × 10^19 N m.
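
    The first step above is a global search for the posterior mode. The toy sketch below uses SciPy's dual_annealing (a generalized simulated annealing standing in for the adaptive simulated annealing used by the authors) to locate the mode of a hypothetical Gaussian posterior for a two-parameter linear model; the design matrix and noise level are invented for illustration.

      import numpy as np
      from scipy.optimize import dual_annealing

      # Toy negative log-posterior for a two-parameter linear deformation model
      # (flat prior and Gaussian errors assumed; the real rupture model has many
      # more parameters and nonlinear fault-geometry terms).
      def neg_log_posterior(m, d_obs, G, sigma):
          r = d_obs - G @ m
          return 0.5 * np.sum((r / sigma) ** 2)

      rng = np.random.default_rng(1)
      G = rng.normal(size=(50, 2))              # hypothetical design matrix
      m_true = np.array([1.2, 0.4])
      d_obs = G @ m_true + rng.normal(scale=0.05, size=50)

      result = dual_annealing(neg_log_posterior, bounds=[(-5.0, 5.0), (-5.0, 5.0)],
                              args=(d_obs, G, 0.05))
      print(result.x)   # posterior mode, close to m_true; MC refinement would follow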

  1. Pre-stack full-waveform inversion of multichannel seismic data to retrieve thermohaline ocean structure. Application to the Gulf of Cadiz (SW Iberia).

    NASA Astrophysics Data System (ADS)

    Dagnino, Daniel; Jiménez Tejero, Clara-Estela; Meléndez, Adrià; Gras, Clàudia; Sallarès, Valentí; Ranero, César R.

    2016-04-01

    This work demonstrates the feasibility of retrieving high-resolution models of oceanic physical parameters by means of 2D adjoint-state full-waveform inversion (FWI). The proposed method is applied to pre-stack multi-channel seismic (MCS) data acquired in the Gulf of Cadiz (SW Iberia) in the framework of the EU GO (Geophysical Oceanography) project in 2006. We first design and apply a specific data processing flow that reduces data noise without modifying trace amplitudes. This step is shown to be essential to obtain accurate results due to the low signal-to-noise ratio (SNR) of water layer reflections, which are typically three to four orders of magnitude weaker than those in the solid earth. Second, we propose new techniques to improve the inversion results by reducing the artefacts appearing in the gradient and misfit as a consequence of the low SNR. We use a weight and filter operator to focus on the regions where the gradient is reliable. The source wavelet is then inverted together with the sound speed. We demonstrate the efficiency of the proposed method and inversion strategy by retrieving a 2D sound speed model along a 50 km-long MCS profile collected in the Gulf of Cadiz during the GO experiment. In this region, the Mediterranean outflow entrains the Atlantic waters, creating a complex thermohaline structure whose salinity contrasts can be measured as differences in acoustic impedance. The inverted sound speed model has a horizontal resolution of 75 m, which is two orders of magnitude better than the models obtained using conventional, probe-based oceanographic techniques. In a second step, temperature and salinity are derived from the sound speed by minimizing the difference between the inverted and the theoretical sound speed estimated using the thermodynamic equation of seawater (TEOS-10 software). To apply TEOS-10 we first calculate a linear fit between temperature and salinity using regional data from the National Oceanic and Atmospheric Administration (NOAA) compilation. Pressure is calculated from latitude and depth. In the final step, salinity is calculated using the temperature-salinity relation and the previously estimated temperature. The comparison of the inverted temperature and salinity models with measurements from XBT and CTD probes deployed simultaneously with the MCS data acquisition shows that the accuracy of the inverted models is ~0.15°C for temperature and ~0.1 psu for salinity.
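
    The temperature-salinity step described above can be illustrated with a small root-finding sketch. The sound-speed polynomial and the linear T-S fit below are crude placeholders, not the TEOS-10 routines or the NOAA-derived coefficients used in the paper.

      import numpy as np
      from scipy.optimize import brentq

      # Regional linear T-S fit; coefficients are placeholders for illustration only.
      def salinity_from_T(T):
          return 35.0 + 0.1 * (T - 12.0)

      # Crude stand-in for the TEOS-10 sound speed c(S, T, p); in practice the
      # TEOS-10/gsw routines would be called here.
      def sound_speed(S, T, p_dbar):
          return 1449.2 + 4.6 * T - 0.055 * T**2 + 1.34 * (S - 35.0) + 0.016 * p_dbar

      def temperature_from_sound_speed(c_inv, p_dbar):
          # Find T such that the modelled sound speed matches the FWI-inverted value.
          return brentq(lambda T: sound_speed(salinity_from_T(T), T, p_dbar) - c_inv,
                        -2.0, 35.0)

      T = temperature_from_sound_speed(c_inv=1512.0, p_dbar=300.0)  # hypothetical value
      S = salinity_from_T(T)
      print(T, S)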

  2. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  3. Effect of surface-related Rayleigh and multiple waves on velocity reconstruction with time-domain elastic FWI

    NASA Astrophysics Data System (ADS)

    Fang, Jinwei; Zhou, Hui; Zhang, Qingchen; Chen, Hanming; Wang, Ning; Sun, Pengyuan; Wang, Shucheng

    2018-01-01

    It is critically important to assess the effectiveness of elastic full waveform inversion (FWI) algorithms when FWI is applied to real land seismic data that include strong surface and multiple waves related to the air-earth boundary. In this paper, we review the realization of the free surface boundary condition in staggered-grid finite-difference (FD) discretization of the elastic wave equation, and analyze the impact of the free surface on FWI results. To reduce input/output (I/O) operations in the gradient calculation, we adopt the boundary value reconstruction method to rebuild the source wavefields during the backward propagation of the residual data. A time-domain multiscale inversion strategy is conducted by using a convolutional objective function, and a multi-GPU parallel programming technique is used to further accelerate our elastic FWI. Forward simulation and elastic FWI examples without and with the free surface are shown and analyzed, respectively. Numerical results indicate that elastic FWI without the free surface fails to recover a good inversion result from the Rayleigh-wave-contaminated observed data. By contrast, when the free surface is incorporated into FWI, the inversion results improve. We also discuss the dependence of the Rayleigh-waveform-incorporated FWI on the accuracy of the initial models, especially the accuracy of the shallow part of the initial models.

  4. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

    Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution that is adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. Therefore, PSO seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In that way, important geological questions can be answered probabilistically in order to perform risk assessment on the decisions that are made.
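
    A minimal PSO loop of the kind referred to above might look as follows; the "forward model" here is an artificial linear operator, whereas the real prism gravity response is nonlinear in the prism depths.

      import numpy as np

      def pso(misfit, lower, upper, n_particles=40, n_iter=200,
              w=0.7, c1=1.5, c2=1.5, seed=0):
          # Minimal global Particle Swarm Optimization loop; the swarm collected
          # while optimizing can also be used for uncertainty appraisal.
          rng = np.random.default_rng(seed)
          dim = lower.size
          x = rng.uniform(lower, upper, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_f = np.array([misfit(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lower, upper)
              f = np.array([misfit(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      # Toy use: prism "depths" fit to data through an artificially linear operator A.
      rng = np.random.default_rng(7)
      A = rng.normal(size=(30, 5))
      z_true = rng.uniform(0.5, 2.0, size=5)
      g_obs = A @ z_true
      z_est, _ = pso(lambda z: np.sum((A @ z - g_obs) ** 2),
                     lower=np.zeros(5), upper=np.full(5, 3.0))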

  5. Detection of Coal Fires: A Case Study Conducted on Indian Coal Seams Using Neural Network and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Singh, B. B.

    2016-12-01

    India produces the majority of its electricity from coal, but a huge quantity of coal burns every day in coal fires, which also threaten the environment with severe pollution. In the present study we demonstrate the use of a neural network based approach with an integrated Particle Swarm Optimization (PSO) inversion technique. The Self Potential (SP) data set is used for the early detection of coal fires. The study was conducted over the East Basuria colliery, Jharia Coal Field, Jharkhand, India. The causative source was modelled as an inclined sheet-like anomaly and synthetic data were generated. The neural network scheme consists of an input layer, hidden layers and an output layer. The input layer corresponds to the SP data and the output layer is the estimated depth of the coal fire. A synthetic dataset was modelled with some of the known parameters associated with the causative body, such as depth, conductivity, inclination angle and half-width, and gives a very low misfit error of 0.0032%. Therefore, the method was found accurate in predicting the depth of the source body. The technique was applied to the real data set and the model was trained until a very good coefficient of determination (R2) of 0.98 was obtained. The depth of the source body was found to be 12.34 m with a misfit error of 0.242%. The inversion results were compared with the lithologs obtained from a nearby well, which corresponds to the L3 coal seam. The depth of the coal fire matched the half-width of the anomaly, which suggests that the fire is widely spread. The inclination angle of the anomaly was 135.51°, which indicates the development of geometrically complex fracture planes. These fractures may develop due to anisotropic weakness of the ground, which acts as a passage for air. As a result, coal fires spread along these fracture planes. The results obtained from the neural network were compared with the PSO inversion results and found to be in complete agreement. PSO is already a well-established technique for modelling SP anomalies. Therefore, for successful control and mitigation, SP surveys coupled with neural network and PSO techniques prove to be a novel and economical approach alongside other existing geophysical techniques. Keywords: PSO, Coal fire, Self-Potential, Inversion, Neural Network

  6. Time-lapse seismic waveform inversion for monitoring near-surface microbubble injection

    NASA Astrophysics Data System (ADS)

    Kamei, R.; Jang, U.; Lumley, D. E.; Mouri, T.; Nakatsukasa, M.; Takanashi, M.

    2016-12-01

    Seismic monitoring of the Earth provides valuable information regarding the time-varying changes in subsurface physical properties that are caused by natural or man-made processes. However, the resulting changes in subsurface properties are often small both in terms of magnitude and spatial extent, leading to seismic data differences that are difficult to detect at typical non-repeatable noise levels. In order to better extract information from the time-lapse data, exploiting the full seismic waveform information can be critical, since detected amplitude or traveltime changes may be minimal. We explore methods of waveform inversion that estimate an optimal model of time-varying elastic parameters at the wavelength scale to fit the observed time-lapse seismic data with modelled waveforms based on numerical solutions of the wave equation. We apply acoustic waveform inversion to time-lapse cross-well monitoring surveys of 64-m well intervals, and estimate the velocity changes that occur during the injection of microbubble water into shallow unconsolidated Quaternary sediments in the Kanto basin of Japan at a depth of 25 m below the surface. Microbubble water is water infused with air bubbles of diameter less than 0.1 mm, and may be useful for improving resistance to ground liquefaction during major earthquakes. Monitoring the space-time distribution and physical properties of microbubble injection is therefore important to understanding the full potential of the technique. Repeated monitoring surveys (>10) reveal transient behaviours in the waveforms during microbubble injection. Time-lapse waveform inversion detects changes in P-wave velocity of less than 1 percent, initially as velocity increases and subsequently as velocity decreases. The velocity changes are mainly imaged within a thin (1 m) layer between the injection and the receiver well, suggesting the fluid-flow influence of the fluvial depositional environment of the sediments. The resulting velocity models fit the observed waveforms very well, supporting the validity of the estimated velocity changes. In order to further improve the estimation of velocity changes, we investigate the limitations of acoustic waveform inversion, and apply elastic waveform inversion to the time-lapse data set.

  7. Damped regional-scale stress inversions: Methodology and examples for southern California and the Coalinga aftershock sequence

    USGS Publications Warehouse

    Hardebeck, J.L.; Michael, A.J.

    2006-01-01

    We present a new focal mechanism stress inversion technique to produce regional-scale models of stress orientation containing the minimum complexity necessary to fit the data. Current practice is to divide a region into small subareas and to independently fit a stress tensor to the focal mechanisms of each subarea. This procedure may lead to apparent spatial variability that is actually an artifact of overfitting noisy data or nonuniquely fitting data that does not completely constrain the stress tensor. To remove these artifacts while retaining any stress variations that are strongly required by the data, we devise a damped inversion method to simultaneously invert for stress in all subareas while minimizing the difference in stress between adjacent subareas. This method is conceptually similar to other geophysical inverse techniques that incorporate damping, such as seismic tomography. In checkerboard tests, the damped inversion removes the stress rotation artifacts exhibited by an undamped inversion, while resolving sharper true stress rotations than a simple smoothed model or a moving-window inversion. We show an example of a spatially damped stress field for southern California. The methodology can also be used to study temporal stress changes, and an example for the Coalinga, California, aftershock sequence is shown. We recommend use of the damped inversion technique for any study examining spatial or temporal variations in the stress field.
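
    A linear toy analogue of the damping idea (simultaneous inversion of all subareas with a penalty on differences between adjacent subareas) is sketched below; it is not the focal-mechanism stress formulation itself, and the operators and adjacency are invented for the example.

      import numpy as np

      def damped_joint_inversion(G_list, d_list, adjacency, damping):
          # Solve for a parameter vector in every subarea at once while penalizing
          # the difference between parameters of adjacent subareas.
          n_sub = len(G_list)
          n_par = G_list[0].shape[1]
          G = np.zeros((sum(g.shape[0] for g in G_list), n_sub * n_par))
          d = np.concatenate(d_list)
          row = 0
          for i, Gi in enumerate(G_list):
              G[row:row + Gi.shape[0], i * n_par:(i + 1) * n_par] = Gi
              row += Gi.shape[0]
          # one block of damping rows per pair of adjacent subareas
          D_blocks = []
          for i, j in adjacency:
              r = np.zeros((n_par, n_sub * n_par))
              r[:, i * n_par:(i + 1) * n_par] = np.eye(n_par)
              r[:, j * n_par:(j + 1) * n_par] = -np.eye(n_par)
              D_blocks.append(r)
          D = np.vstack(D_blocks)
          A = np.vstack([G, damping * D])
          b = np.concatenate([d, np.zeros(D.shape[0])])
          m, *_ = np.linalg.lstsq(A, b, rcond=None)
          return m.reshape(n_sub, n_par)

      # toy setup: three subareas in a line, two parameters each, 20 data per subarea
      rng = np.random.default_rng(8)
      G_list = [rng.normal(size=(20, 2)) for _ in range(3)]
      m_true = np.array([[1.0, 0.5], [1.1, 0.5], [2.0, -0.3]])
      d_list = [G_list[i] @ m_true[i] + rng.normal(scale=0.1, size=20) for i in range(3)]
      m_est = damped_joint_inversion(G_list, d_list, adjacency=[(0, 1), (1, 2)], damping=0.5)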

  8. Inverse Function: Pre-Service Teachers' Techniques and Meanings

    ERIC Educational Resources Information Center

    Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.

    2018-01-01

    Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…

  9. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate the spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo random binary sequence (PRBS) were produced in sequential order by an ion gate controller and used to control the ion gate of the IMS, and the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were then obtained. A NIBOHT-IMS mobility spectrum was gained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR when measuring the reactant ions. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of larger ion signals but also for the detection of small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.

  11. Implementation of magnetic resonance elastography for the investigation of traumatic brain injuries

    NASA Astrophysics Data System (ADS)

    Boulet, Thomas

    Magnetic resonance elastography (MRE) is a potentially transformative imaging modality allowing local and non-invasive measurement of the mechanical properties of biological tissue. It uses a specific phase contrast MR pulse sequence to measure induced vibratory motion in soft material, from which material properties can be estimated. Compared to other imaging techniques, MRE is able to detect tissue pathology at early stages by quantifying the changes in tissue stiffness associated with disease. In an effort to develop the technique and improve its capabilities, two inversion algorithms were written to evaluate viscoelastic properties from the measured displacement fields. The first was based on a direct algebraic inversion of the differential equation of motion, which decouples under certain simplifying assumptions, and featured a spatio-temporal multi-directional filter. The second relies on a finite element discretization of the governing equations to perform a direct inversion. Several applications of this technique have also been investigated, including the estimation of mechanical parameters in various gel phantoms and polymers, as well as the use of MRE as a diagnostic tool for brain disorders. In this respect, the particular interest was to investigate traumatic brain injury (TBI), a complex and diverse injury affecting 1.7 million Americans annually. The sensitivity of MRE to TBI was first assessed on excised rat brains subjected to a controlled cortical impact (CCI) injury, before execution of in vivo experiments in mice. MRE was also applied in vivo on mouse models of medulloblastoma tumors and multiple sclerosis. These studies showed the potential of MRE in mapping the brain mechanically and providing non-invasive in vivo imaging markers for neuropathology and the pathogenesis of brain diseases. Furthermore, MRE can easily be translated to clinical settings; thus, while this technique may not be used directly to diagnose different abnormalities in the brain at this time, it may be helpful to detect abnormalities, follow therapies, and trace macroscopic changes of clinical relevance that are not seen by conventional methods.

  12. Ionospheric Asymmetry Evaluation using Tomography to Assess the Effectiveness of Radio Occultation Data Inversion

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Notarpietro, R.; Yin, P.; Nava, B.

    2013-12-01

    The Multi-Instrument Data Analysis System (MIDAS) algorithm is based on tomographic imaging techniques first applied to image 2D slices of the ionosphere. The first version of MIDAS (version 1.0) was able to deal with any line-integral data such as GPS-ground or GPS-LEO differential-phase data or inverted ionograms. The current version extends the tomography into four-dimensional (latitude, longitude, height and time) spatio-temporal mapping that combines all observations simultaneously in a single inversion with a minimum of a priori assumptions about the form of the ionospheric electron-concentration distribution. This work is an attempt to investigate Radio Occultation (RO) data assimilation into MIDAS by assessing ionospheric asymmetry and its impact on RO data inversion when the onion-peeling algorithm is used. Ionospheric RO data from the COSMIC mission, specifically data collected during the 24 September 2011 storm over mid-latitudes, have been used for the data assimilation. Using output electron density data from MIDAS (with and without RO assimilation) and ideal RO geometries, we assessed the ionospheric asymmetry. It was observed that the level of asymmetry increased significantly when the storm was active. This was due to the increased ionization, which in turn produced large gradients along the occulted ray paths in the ionosphere. The presence of larger gradients was better observed when MIDAS was used with RO-assimilated data. A very good correlation was found between the evaluated asymmetry and the errors in the inversion products when the inversion is performed with standard techniques based on the assumption of spherical symmetry of the ionosphere. Errors are evaluated considering the peak electron density (NmF2) estimate and the vertical TEC (VTEC) evaluation. This work highlights the importance of having a tool able to assess the effectiveness of Radio Occultation data inversion with standard algorithms, like onion-peeling, that are based on the ionospheric spherical symmetry assumption. The outcome of this work will lead to a better inversion algorithm that deals with ionospheric asymmetry in a more realistic way. This is foreseen as a task for future research. This work has been done under the framework of the TRANSMIT project (ITN Marie Curie Actions - GA No. 264476).

  13. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.

  14. High lift selected concepts

    NASA Technical Reports Server (NTRS)

    Henderson, M. L.

    1979-01-01

    The benefits to high lift system maximum lift and, alternatively, to high lift system complexity, of applying analytic design and analysis techniques to the design of high lift sections for flight conditions were determined, and two high lift sections were designed for flight conditions. The influence of the high lift section on the sizing and economics of a specific energy efficient transport (EET) was clarified using a computerized sizing technique and an existing advanced airplane design data base. The impact of the best design resulting from the design application studies on EET sizing and economics was evaluated. Flap technology trade studies, climb and descent studies, and augmented stability studies are included along with a description of the baseline high lift system geometry, a calculation of lift and pitching moment when separation is present, and an inverse boundary layer technique for pressure distribution synthesis and optimization.

  15. Tracing of paleo-shear zones by self-potential data inversion: case studies from the KTB, Rittsteig, and Grossensees graphite-bearing fault planes

    NASA Astrophysics Data System (ADS)

    Mehanee, Salah A.

    2015-01-01

    This paper describes a new method for tracing paleo-shear zones of the continental crust by self-potential (SP) data inversion. The method falls within the deterministic inversion framework, and it is exclusively applicable to the interpretation of SP anomalies measured along a profile over sheet-type structures such as conductive thin films of interconnected graphite precipitations formed on shear planes. The inverse method fits a residual SP anomaly by a single thin sheet and recovers the characteristic parameters (depth to the top h, extension in depth a, amplitude coefficient k, and amount and direction of dip θ) of the sheet. The method minimizes an objective functional in the space of the log-transformed and non-transformed model parameters (log(h), log(a), log(k), and θ) successively by the steepest descent (SD) and Gauss-Newton (GN) techniques in order to maintain the stability and convergence of the inverse method. Prior to applying the method to real data, its accuracy, convergence, and stability are successfully verified on numerical examples with and without noise. The method is then applied to SP profiles from the German Continental Deep Drilling Program (Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland - KTB), Rittsteig, and Grossensees sites in Germany for tracing paleo-shear planes coated with graphitic deposits. The comparisons of geologic sections constructed in this paper (based on the proposed deterministic approach) against the existing published interpretations (obtained by trial-and-error modeling) for the SP data of the KTB and Rittsteig sites reveal that the deterministic approach suggests some new details that are of geological significance. The findings of the proposed inverse scheme are supported by available drilling and other geophysical data. Furthermore, the real SP data of the Grossensees site have been interpreted (apparently for the first time) by the deterministic inverse scheme, from which interpretive geologic cross sections are suggested. The computational efficiency, the analysis of the numerical examples investigated, and the comparisons of the real data inverted here demonstrate that the developed deterministic approach is advantageous over the existing interpretation methods and is suitable for meaningful interpretation of SP data acquired elsewhere over graphitic occurrences on fault planes.
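
    The inversion described above can be imitated with a generic nonlinear least-squares solver working in log-transformed parameters; the closed-form inclined-sheet SP expression below is a commonly used form adopted only as an illustrative forward model, and the synthetic parameter values are arbitrary.

      import numpy as np
      from scipy.optimize import least_squares

      def sp_sheet(x, h, a, k, theta):
          # SP anomaly of a 2-D inclined thin sheet (a standard closed form,
          # used here purely as an illustrative forward model).
          num = x**2 + h**2
          den = (x - a * np.cos(theta))**2 + (h + a * np.sin(theta))**2
          return k * np.log(num / den)

      def residuals(p, x, v_obs):
          log_h, log_a, log_k, theta = p
          return sp_sheet(x, np.exp(log_h), np.exp(log_a), np.exp(log_k), theta) - v_obs

      x = np.linspace(-200.0, 200.0, 81)
      v_obs = sp_sheet(x, h=20.0, a=60.0, k=80.0, theta=np.deg2rad(50)) \
              + np.random.default_rng(3).normal(scale=2.0, size=x.size)

      p0 = [np.log(10.0), np.log(30.0), np.log(40.0), np.deg2rad(30)]
      fit = least_squares(residuals, p0, args=(x, v_obs))
      h, a, k = np.exp(fit.x[:3])
      theta_deg = np.rad2deg(fit.x[3])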

  16. High-resolution near-surface velocity model building using full-waveform inversion—a case study from southwest Sweden

    NASA Astrophysics Data System (ADS)

    Adamczyk, A.; Malinowski, M.; Malehmir, A.

    2014-06-01

    Full-waveform inversion (FWI) is an iterative optimization technique that provides high-resolution models of subsurface properties. Frequency-domain, acoustic FWI was applied to seismic data acquired over a known quick-clay landslide scar in southwest Sweden. We inverted data from three 2-D seismic profiles, 261-572 m long, two of them shot with small charges of dynamite and one with a sledgehammer. To the best of our knowledge, this is the first published application of FWI to sledgehammer data. Both sources provided data suitable for waveform inversion, with the sledgehammer data containing an even wider frequency spectrum. Inversion was performed for frequency groups between 27.5 and 43.1 Hz for the explosive data and 27.5-51.0 Hz for the sledgehammer data. The lowest inverted frequency was limited by the resonance frequency of the standard 28-Hz geophones used in the survey. The high-velocity granitic bedrock in the area is undulating and very shallow (15-100 m below the surface), and exhibits a large P-wave velocity contrast to the overlying normally consolidated sediments. In order to mitigate the non-linearity of the inverse problem we designed a multiscale layer-stripping inversion strategy. The obtained P-wave velocity models allowed us to delineate the top of the bedrock and revealed distinct layers within the overlying sediments of clays and coarse-grained materials. The models were verified in an extensive set of validating procedures and used for pre-stack depth migration, which confirmed their robustness.

  17. A Tensor-Train accelerated solver for integral equations in complex geometries

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Rahimian, Abtin; Zorin, Denis

    2017-04-01

    We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low-rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
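
    The core compression step, a tensor-train (TT/QTT) decomposition of a long vector by successive truncated SVDs, can be sketched in a few lines; this is a generic TT-SVD, not the solver described in the paper.

      import numpy as np

      def tt_svd(vec, dims, tol=1e-10):
          # Tensor-Train decomposition of a vector by successive truncated SVDs;
          # with all dims equal to 2 this is the quantized (QTT) format.
          cores, r_prev = [], 1
          rest = np.asarray(vec, dtype=float).reshape(r_prev, -1)
          for n in dims[:-1]:
              rest = rest.reshape(r_prev * n, -1)
              U, s, Vt = np.linalg.svd(rest, full_matrices=False)
              r = max(1, int(np.sum(s > tol * s[0])))
              cores.append(U[:, :r].reshape(r_prev, n, r))
              rest = s[:r, None] * Vt[:r]
              r_prev = r
          cores.append(rest.reshape(r_prev, dims[-1], 1))
          return cores

      def tt_to_full(cores):
          out = cores[0]
          for c in cores[1:]:
              out = np.tensordot(out, c, axes=([out.ndim - 1], [0]))
          return out.reshape(-1)

      x = np.sin(np.linspace(0.0, 20.0 * np.pi, 2 ** 12))   # smooth -> low QTT ranks
      cores = tt_svd(x, dims=[2] * 12)
      print(max(c.shape[2] for c in cores),                  # modest TT ranks
            np.max(np.abs(tt_to_full(cores) - x)))           # reconstruction error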

  18. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling

    DOE PAGES

    Fee, David; Izbekov, Pavel; Kim, Keehoon; ...

    2017-10-09

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Furthermore, we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  19. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fee, David; Izbekov, Pavel; Kim, Keehoon

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Furthermore, we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  20. Aerosol properties from spectral extinction and backscatter estimated by an inverse Monte Carlo method.

    PubMed

    Ligon, D A; Gillespie, J B; Pellegrino, P

    2000-08-20

    The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.
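
    A bare-bones version of the stochastic inversion idea (random perturbations of a binned size distribution, kept when the fit to the optical data improves) is sketched below; the kernel matrix is a random stand-in for Mie extinction efficiencies, and the scheme is not the authors' exact IMC algorithm.

      import numpy as np

      def inverse_monte_carlo(K, y_obs, n_bins, n_steps=20000, step=0.05, seed=0):
          # Random-walk search over a nonnegative binned size distribution:
          # propose a small perturbation of one bin and keep it whenever the
          # fit to the spectral extinction/backscatter data improves.
          rng = np.random.default_rng(seed)
          x = np.full(n_bins, y_obs.mean() / K.sum(axis=1).mean() / n_bins)
          best = np.sum((K @ x - y_obs) ** 2)
          for _ in range(n_steps):
              trial = x.copy()
              i = rng.integers(n_bins)
              trial[i] = max(0.0, trial[i] + step * rng.normal())
              m = np.sum((K @ trial - y_obs) ** 2)
              if m < best:
                  x, best = trial, m
          return x

      # toy kernel (n_wavelengths x n_bins) standing in for Mie efficiencies
      rng = np.random.default_rng(9)
      K = np.abs(rng.normal(size=(12, 30)))
      x_true = np.exp(-0.5 * ((np.arange(30) - 12.0) / 4.0) ** 2)
      y_obs = K @ x_true * (1 + 0.03 * rng.normal(size=12))   # noisy synthetic data
      x_est = inverse_monte_carlo(K, y_obs, n_bins=30)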

  1. Source localization in electromyography using the inverse potential problem

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.

  2. Fast and error-resilient coherent control in an atomic vapor

    NASA Astrophysics Data System (ADS)

    He, Yizun; Wang, Mengbing; Zhao, Jian; Qiu, Liyang; Wang, Yuzhuo; Fang, Yami; Zhao, Kaifeng; Wu, Saijun

    2017-04-01

    Nanosecond chirped pulses from an optical arbitrary waveform generator are applied to both invert and coherently split the D1 line population of potassium vapor within a laser focal volume of 2×10^5 μm^3. An inversion fidelity of f > 96%, mainly limited by spontaneous emission during the nanosecond pulse, is inferred from both probe light transmission and superfluorescence emission. The nearly perfect inversion is uniformly achieved for laser intensity varying over an order of magnitude, and is tolerant to detuning errors of more than 1000 times the D1 transition linewidth. We further demonstrate enhanced intensity error resilience with multiple chirped pulses and ``universal composite pulses''. This fast and robust coherent control technique should find wide applications in the fields of quantum optics, laser cooling, and atom interferometry. This work is supported by the National Key Research Program of China under Grant No. 2016YFA0302000, and NNSFC under Grant No. 11574053.

  3. Robust Inversion and Data Compression in Control Allocation

    NASA Technical Reports Server (NTRS)

    Hodel, A. Scottedward

    2000-01-01

    We present an off-line computational method for control allocation design. The control allocation function δ = F(z)τ + δ0(z), mapping commanded body-frame torques τ to actuator commands δ, is implicitly specified by the trim condition δ0(z) and by a robust pseudo-inverse problem ‖I − G(z)F(z)‖ < ε(z), where G(z) is a system Jacobian evaluated at operating point z, ẑ is an estimate of z, and ε(z) < 1 is a specified error tolerance. The allocation function F(z) = Σi ψi(z) Fi is computed using a heuristic technique for selecting wavelet basis functions ψi and a constrained least-squares criterion for selecting the allocation matrices Fi. The method is applied to entry trajectory control allocation for a reusable launch vehicle (X-33).
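
    Ignoring the wavelet parameterization and the constrained least-squares selection of the allocation matrices, the pseudo-inverse condition can be illustrated with a plain Moore-Penrose inverse; the Jacobian below is random and purely hypothetical.

      import numpy as np

      # Hypothetical 3x6 control-effectiveness Jacobian (3 body torques, 6 actuators)
      rng = np.random.default_rng(10)
      G = rng.normal(size=(3, 6))

      F = np.linalg.pinv(G)                    # least-squares allocation matrix
      err = np.linalg.norm(np.eye(3) - G @ F)  # robust pseudo-inverse residual
      tau_cmd = np.array([0.1, -0.05, 0.02])   # commanded body-frame torques
      delta = F @ tau_cmd                      # actuator commands about trim
      print(err)                               # ~0 when G has full row rank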

  4. Two dimensional distribution measurement of electric current generated in a polymer electrolyte fuel cell using 49 NMR surface coils.

    PubMed

    Ogawa, Kuniyasu; Sasaki, Tatsuyoshi; Yoneda, Shigeki; Tsujinaka, Kumiko; Asai, Ritsuko

    2018-05-17

    In order to increase the current density generated in a PEFC (polymer electrolyte fuel cell), a method for measuring the spatial distribution of both the current and the water content of the MEA (membrane electrode assembly) is necessary. Based on the frequency shifts of NMR (nuclear magnetic resonance) signals acquired from the water contained in the MEA using 49 NMR coils in a 7 × 7 arrangement inserted in the PEFC, a method for measuring the two-dimensional spatial distribution of electric current generated in a unit cell with a power generation area of 140 mm × 160 mm was devised. We also developed an inverse analysis method to determine the two-dimensional electric current distribution that can be applied to actual PEFC connections. Two analytical techniques, namely coarse graining of segments and stepwise search, were used to shorten the calculation time required for inverse analysis of the electric current map. Using this method and techniques, spatial distributions of electric current and water content in the MEA were obtained when the PEFC generated electric power at 100 A. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Estimation of flow properties using surface deformation and head data: A trajectory-based approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, D.W.

    2004-07-12

    A trajectory-based algorithm provides an efficient and robust means to infer flow properties from surface deformation and head data. The algorithm is based upon the concept of an ''arrival time'' of a drawdown front, which is defined as the time corresponding to the maximum slope of the drawdown curve. The technique involves three steps: the inference of head changes as a function of position and time, the use of the estimated head changes to define arrival times, and the inversion of the arrival times for flow properties. Trajectories, computed from the output of a numerical simulator, are used to relate the drawdown arrival times to flow properties. The inversion algorithm is iterative, requiring one reservoir simulation for each iteration. The method is applied to data from a set of 14 tiltmeters located at the Raymond Quarry field site in California. Using the technique, I am able to image a high-conductivity channel which extends to the south of the pumping well. The presence of this permeable pathway is supported by an analysis of earlier cross-well transient pressure test data.
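
    The arrival-time concept is easy to state in code: it is the time at which the drawdown curve is steepest. The sketch below uses an invented smooth-step drawdown curve.

      import numpy as np

      def arrival_time(t, drawdown):
          # "Arrival time" of the drawdown front: the time at which the
          # drawdown curve has its maximum slope.
          slope = np.gradient(drawdown, t)
          return t[np.argmax(slope)]

      # toy drawdown curve: smooth step arriving near t = 12 (arbitrary units)
      t = np.linspace(0.0, 50.0, 501)
      dd = 0.5 * (1.0 + np.tanh((t - 12.0) / 3.0))
      print(arrival_time(t, dd))      # ~12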

  6. A new approach to the inverse problem for current mapping in thin-film superconductors

    NASA Astrophysics Data System (ADS)

    Zuber, J. W.; Wells, F. S.; Fedoseev, S. A.; Johansen, T. H.; Rosenfeld, A. B.; Pan, A. V.

    2018-03-01

    A novel mathematical approach has been developed to complete the inversion of the Biot-Savart law in one- and two-dimensional cases from measurements of the perpendicular component of the magnetic field using the well-developed Magneto-Optical Imaging technique. Our approach, especially in the 2D case, is provided in great detail to allow a straightforward implementation as opposed to those found in the literature. Our new approach also refines our previous results for the 1D case [Johansen et al., Phys. Rev. B 54, 16264 (1996)], and streamlines the method developed by Jooss et al. [Physica C 299, 215 (1998)] deemed as the most accurate if compared to that of Roth et al. [J. Appl. Phys. 65, 361 (1989)]. We also verify and streamline the iterative technique, which was developed following Laviano et al. [Supercond. Sci. Technol. 16, 71 (2002)] to account for in-plane magnetic fields caused by the bending of the applied magnetic field due to the demagnetising effect. After testing on magneto-optical images of a high quality YBa2Cu3O7 superconducting thin film, we show that the procedure employed is effective.

  7. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  8. Measurement of leaky Lamb wave dispersion curves with application on coating characterization

    NASA Astrophysics Data System (ADS)

    Lee, Yung-Chun; Cheng, Sheng Wen

    2001-04-01

    This paper describes a new measurement system for measuring dispersion curves of leaky Lamb waves. The measurement system is based on a focusing PVDF transducer, the defocusing measurement, the V(f,z) waveform processing method, and an image displaying technique. The measurement system is applied for the determination of thin-film elastic properties, namely Young's modulus and shear modulus, by the inversion of dispersion curves measured from a thin-film/plate configuration. Elastic constants of electro-deposited nickel layers are determined with this method.

  9. Techniques for Accelerating Iterative Methods for the Solution of Mathematical Problems

    DTIC Science & Technology

    1989-07-01

  10. Quantitative Seismic Interpretation: Applying Rock Physics Tools to Reduce Interpretation Risk

    NASA Astrophysics Data System (ADS)

    Sondergeld, Carl H.

    This book is divided into seven chapters that cover rock physics, statistical rock physics, seismic inversion techniques, case studies, and work flows. On balance, the emphasis is on rock physics. Included are 56 color figures that greatly help in the interpretation of more complicated plots and displays.The domain of rock physics falls between petrophysics and seismics. It is the basis for interpreting seismic observations and therefore is pivotal to the understanding of this book. The first two chapters are dedicated to this topic (109 pages).

  11. Electro-magneto interaction in fractional Green-Naghdi thermoelastic solid with a cylindrical cavity

    NASA Astrophysics Data System (ADS)

    Ezzat, M. A.; El-Bary, A. A.

    2018-01-01

    A unified mathematical model of Green-Naghdi's thermoelasticity theories (GN), based on a fractional time-derivative of heat transfer, is constructed. The model is applied to solve a one-dimensional problem of a perfectly conducting unbounded body with a cylindrical cavity subjected to sinusoidal pulse heating in the presence of an axial uniform magnetic field. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Comparisons are made with the results predicted by the two theories. The effects of the fractional derivative parameter on the thermoelastic fields for the different theories are discussed.
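
    A generic Fourier-series (trapezoidal Bromwich-integral) numerical inversion of a Laplace transform, of the same family as the Fourier-expansion inversion mentioned above, can be sketched as follows; the shift and truncation parameters are typical choices, not those of the paper.

      import numpy as np

      def invlap_fourier(F, t, T=None, a=None, N=2000):
          # Numerical inverse Laplace transform by a Fourier-series (trapezoidal
          # Bromwich-integral) expansion; F is a callable returning F(s) for
          # complex s, and t is an array of positive times.
          t = np.atleast_1d(np.asarray(t, dtype=float))
          if T is None:
              T = 2.0 * t.max()            # the series is valid for 0 < t < 2T
          if a is None:
              a = 6.0 / T                  # contour shift; a*T of about 6 is typical
          k = np.arange(1, N + 1)
          Fk = np.array([F(a + 1j * kk * np.pi / T) for kk in k])
          w = np.outer(t, k) * np.pi / T
          series = (0.5 * np.real(F(a + 0j))
                    + np.cos(w) @ np.real(Fk) - np.sin(w) @ np.imag(Fk))
          return np.exp(a * t) / T * series

      # quick check against a known pair: L^{-1}{1/(s+1)} = exp(-t)
      t = np.array([0.5, 1.0, 2.0])
      print(invlap_fourier(lambda s: 1.0 / (s + 1.0), t))
      print(np.exp(-t))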

  12. Detection of phase synchronization from the data: Application to physiology

    NASA Astrophysics Data System (ADS)

    Rosenblum, Michael G.; Pikovsky, Arkady S.; Schäfer, Carsten; Tass, Peter; Kurths, Jürgen

    2000-02-01

    Synchronization of coupled oscillating systems means the appearance of certain relations between their phases and frequencies. Here we use this concept in order to address the inverse problem and to reveal interaction between systems from experimental data. We discuss how the phases and frequencies can be estimated from time series and present techniques for the detection and quantification of synchronization. We apply our approach to multichannel magnetoencephalography data and records of muscle activity of a Parkinsonian patient, and also use it to analyze the cardiorespiratory interaction in humans. By means of these examples we demonstrate that our method is effective for the analysis of the interrelation of systems from noisy nonstationary bivariate data and provides information beyond that of traditional correlation (spectral) techniques.
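
    A common way to estimate the phases mentioned above is through the analytic signal, with an n:m synchronization index quantifying phase locking; the sketch below applies this to two invented noisy, phase-locked test signals rather than to physiological data.

      import numpy as np
      from scipy.signal import hilbert

      def instantaneous_phase(x):
          # Phase of a (narrow-band) signal via the analytic signal.
          return np.unwrap(np.angle(hilbert(x - x.mean())))

      def sync_index(x, y, n=1, m=1):
          # n:m phase synchronization index in [0, 1] (1 = perfect locking).
          dphi = n * instantaneous_phase(x) - m * instantaneous_phase(y)
          return np.abs(np.mean(np.exp(1j * dphi)))

      fs = 250.0
      t = np.arange(0.0, 60.0, 1.0 / fs)
      rng = np.random.default_rng(4)
      x = np.sin(2 * np.pi * 1.1 * t) + 0.3 * rng.normal(size=t.size)
      y = np.sin(2 * np.pi * 1.1 * t + 0.8) + 0.3 * rng.normal(size=t.size)
      print(sync_index(x, y))    # close to 1 for these phase-locked toy signals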

  13. Living specimen tomography by digital holographic microscopy: morphometry of testate amoeba

    NASA Astrophysics Data System (ADS)

    Charrière, Florian; Pavillon, Nicolas; Colomb, Tristan; Depeursinge, Christian; Heger, Thierry J.; Mitchell, Edward A. D.; Marquet, Pierre; Rappaz, Benjamin

    2006-08-01

    This paper presents an optical diffraction tomography technique based on digital holographic microscopy. Quantitative 2-dimensional phase images are acquired for regularly spaced angular positions of the specimen covering a total angle of π, allowing 3-dimensional quantitative refractive index distributions to be built by an inverse Radon transform. A 20x magnification allows a resolution better than 3 μm in all three dimensions, with an accuracy better than 0.01 for the refractive index measurements. This technique is, for the first time to our knowledge, applied to a living specimen (a testate amoeba, Protista). Morphometric measurements are extracted from the tomographic reconstructions, showing that the commonly used method for testate amoeba biovolume evaluation leads to systematic underestimation by about 50%.
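
    Slice-by-slice reconstruction from projections over an angle of π can be illustrated with the filtered back-projection (inverse Radon) routines in scikit-image; the phantom below stands in for one measured phase sinogram and is not derived from the paper's data.

      import numpy as np
      from skimage.transform import radon, iradon

      # Toy phantom standing in for one slice of the specimen's refractive-index contrast
      phantom = np.zeros((128, 128))
      phantom[40:90, 50:80] = 0.02

      angles = np.linspace(0.0, 180.0, 90, endpoint=False)  # positions over pi
      sinogram = radon(phantom, theta=angles)               # forward model (phase projections)
      recon = iradon(sinogram, theta=angles)                # filtered back-projection
      # stacking such per-slice reconstructions yields the 3-D index distribution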

  14. GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION

    NASA Astrophysics Data System (ADS)

    Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič

    2015-10-01

    In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and we also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, Koch curve, dragon curve, and Fibonacci fractal, among others, and obtain new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
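
    Assuming the definition d_p(O,P)·d_p(O,P') = r^2 along the ray from the center O through P (with d_p the l_p distance), the p-circle inversion of a point cloud is a one-liner; below it is applied to a chaos-game Sierpinski triangle with arbitrarily chosen center, radius and p.

      import numpy as np

      def p_circle_inversion(points, center, r, p):
          # Map each point P to P' on the ray from the center O through P with
          # d_p(O, P) * d_p(O, P') = r**2, where d_p is the l_p distance
          # (p = 1 taxicab, p = 2 Euclidean).
          v = points - center
          d = np.sum(np.abs(v) ** p, axis=1) ** (1.0 / p)
          return center + (r ** 2 / d ** 2)[:, None] * v

      # Sierpinski triangle by the chaos game, then its p-inversive image
      rng = np.random.default_rng(6)
      verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
      pts = np.zeros((20000, 2))
      x = np.array([0.2, 0.2])
      for i in range(20000):
          x = 0.5 * (x + verts[rng.integers(3)])
          pts[i] = x

      inv_pts = p_circle_inversion(pts, center=np.array([0.5, 0.3]), r=0.4, p=1.0)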

  15. A New Normalized Difference Cloud Retrieval Technique Applied to Landsat Radiances Over the Oklahoma ARM Site

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Cahalan, Robert; Marshak, Alexander; Wen, Guoyong

    1999-01-01

    We suggest a new approach to cloud retrieval, using a normalized difference of nadir reflectivities (NDNR) constructed from a non-absorbing and an absorbing (with respect to liquid water) wavelength. Using Monte Carlo simulations we show that this quantity has the potential of removing first-order scattering effects caused by cloud side illumination and shadowing at oblique Sun angles. Application of the technique to TM (Thematic Mapper) radiance observations from Landsat-5 over the Southern Great Plains site of the ARM (Atmospheric Radiation Measurement) program gives very similar regional statistics and histograms, but significant differences at the pixel level. NDNR can also be combined with the inverse NIPA (Nonlocal Independent Pixel Approximation) of Marshak (1998), which is applied for the first time to overcast Landsat subscenes. We demonstrate the sensitivity of the NIPA-retrieved cloud fields to the parameters of the method and discuss practical issues related to the optimal choice of these parameters.
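
    The index itself is straightforward to compute; the reflectivity values below are invented, and in a retrieval the NDNR would subsequently be mapped to cloud properties through precomputed radiative-transfer look-up tables.

      import numpy as np

      def ndnr(r_nonabsorbing, r_absorbing):
          # Normalized difference of nadir reflectivities between a non-absorbing
          # and a liquid-water-absorbing wavelength.
          r1 = np.asarray(r_nonabsorbing, dtype=float)
          r2 = np.asarray(r_absorbing, dtype=float)
          return (r1 - r2) / (r1 + r2)

      # hypothetical nadir reflectivities for a few pixels
      r_na = np.array([0.62, 0.55, 0.70])
      r_ab = np.array([0.48, 0.46, 0.51])
      print(ndnr(r_na, r_ab))   # larger values indicate stronger liquid-water absorption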

  16. Ambient Noise Tomography of central Java, with Transdimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Zulhan, Zulfakriza; Saygin, Erdinc; Cummins, Phil; Widiyantoro, Sri; Nugraha, Andri Dian; Luehr, Birger-G.; Bodin, Thomas

    2014-05-01

    Delineating the crustal structure of central Java is crucial for understanding its complex tectonic setting. However, seismic imaging of the strong heterogeneity typical of such a tectonically active region can be challenging, particularly in the upper crust where velocity contrasts are strongest and steep body wave ray-paths provide poor resolution. We have applied ambient noise cross-correlation of station pairs in central Java, Indonesia, using the MERapi Amphibious EXperiment (MERAMEX) dataset. The data were collected between May and October 2004. We used 120 of 134 temporary seismic stations for about 150 days of observation, which covered central Java. More than 5000 Rayleigh wave Green's functions were extracted by cross-correlating the noise simultaneously recorded at available station pairs. We applied a fully nonlinear 2D Bayesian inversion technique to the retrieved travel times. Features in the derived tomographic images correlate well with previous studies, and some shallow structures that were not evident in previous studies are clearly imaged with Ambient Noise Tomography. The Kendeng Basin and several active volcanoes appear with very low group velocities, and anomalies with relatively high velocities can be interpreted in terms of crustal sutures and/or surface geological features.
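
    A bare-bones sketch of the noise cross-correlation step described above: windowed cross-correlations of two station records are stacked to approximate the inter-station Green's function. Preprocessing such as whitening and one-bit normalization, and the Bayesian travel-time inversion itself, are omitted; the synthetic traces are illustrative only.

        import numpy as np
        from scipy.signal import correlate

        def noise_cross_correlation(tr_a, tr_b, win_len, fs):
            # Stack windowed cross-correlations of ambient noise recorded at two stations.
            n_win = min(len(tr_a), len(tr_b)) // win_len
            stack = np.zeros(2 * win_len - 1)
            for k in range(n_win):
                a = tr_a[k * win_len:(k + 1) * win_len]
                b = tr_b[k * win_len:(k + 1) * win_len]
                stack += correlate(a - a.mean(), b - b.mean(), mode='full')
            lags = np.arange(-(win_len - 1), win_len) / fs
            return lags, stack / n_win

        # Synthetic example: two traces sharing the same noise field with a 5 s offset.
        fs = 10.0
        noise = np.random.randn(int(3600 * fs) + 50)
        tr_a, tr_b = noise[:-50], noise[50:]
        lags, ccf = noise_cross_correlation(tr_a, tr_b, win_len=int(600 * fs), fs=fs)
        print(lags[np.argmax(np.abs(ccf))])   # correlation peak at the ~5 s offset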

  17. Simulating polarized light scattering in terrestrial snow based on bicontinuous random medium and Monte Carlo ray tracing

    NASA Astrophysics Data System (ADS)

    Xiong, Chuan; Shi, Jiancheng

    2014-01-01

    To date, light scattering models of snow take little account of the real snow microstructure. The idealized spherical or other single-shaped particle assumptions in previous snow light scattering models can cause errors in the light scattering modeling of snow and, in turn, in remote sensing inversion algorithms. This paper builds a snow polarized reflectance model based on a bicontinuous medium, with which the real snow microstructure is considered. An accurate specific surface area of the bicontinuous medium can be derived analytically. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) was predicted, and this relationship can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is validated and proved accurate, and can be further applied in polarized remote sensing.

  18. Toward 2D and 3D imaging of magnetic nanoparticles using EPR measurements.

    PubMed

    Coene, A; Crevecoeur, G; Leliaert, J; Dupré, L

    2015-09-01

    Magnetic nanoparticles (MNPs) are an important asset in many biomedical applications. An effective working of these applications requires an accurate knowledge of the spatial MNP distribution. A promising, noninvasive, and sensitive technique to visualize MNP distributions in vivo is electron paramagnetic resonance (EPR). Currently only 1D MNP distributions can be reconstructed. In this paper, the authors propose extending 1D EPR toward 2D and 3D using computer simulations to allow accurate imaging of MNP distributions. To find the MNP distribution belonging to EPR measurements, an inverse problem needs to be solved. The solution of this inverse problem highly depends on the stability of the inverse problem. The authors adapt 1D EPR imaging to realize the imaging of multidimensional MNP distributions. Furthermore, the authors introduce partial volume excitation in which only parts of the volume are imaged to increase stability of the inverse solution and to speed up the measurements. The authors simulate EPR measurements of different 2D and 3D MNP distributions and solve the inverse problem. The stability is evaluated by calculating the condition measure and by comparing the actual MNP distribution to the reconstructed MNP distribution. Based on these simulations, the authors define requirements for the EPR system to cope with the added dimensions. Moreover, the authors investigate how EPR measurements should be conducted to improve the stability of the associated inverse problem and to increase reconstruction quality. The approach used in 1D EPR can only be employed for the reconstruction of small volumes in 2D and 3D EPRs due to numerical instability of the inverse solution. The authors performed EPR measurements of increasing cylindrical volumes and evaluated the condition measure. This showed that a reduction of the inherent symmetry in the EPR methodology is necessary. By reducing the symmetry of the EPR setup, quantitative images of larger volumes can be obtained. The authors found that, by selectively exciting parts of the volume, the authors could increase the reconstruction quality even further while reducing the amount of measurements. Additionally, the inverse solution of this activation method degrades slower for increasing volumes. Finally, the methodology was applied to noisy EPR measurements: using the reduced EPR setup's symmetry and the partial activation method, an increase in reconstruction quality of ≈ 80% can be seen with a speedup of the measurements with 10%. Applying the aforementioned requirements to the EPR setup and stabilizing the EPR measurements showed a tremendous increase in noise robustness, thereby making EPR a valuable method for quantitative imaging of multidimensional MNP distributions.
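
    The stability argument above hinges on the conditioning of the forward operator. A minimal, generic sketch follows; the sensitivity matrix of an actual EPR setup is not modeled and the matrix sizes are arbitrary.

        import numpy as np

        # Hypothetical linear forward model: measurements = A @ mnp_distribution.
        rng = np.random.default_rng(0)
        A_small = rng.standard_normal((200, 50))     # few unknowns: small imaged volume
        A_large = rng.standard_normal((200, 180))    # many unknowns: larger volume

        # A larger condition number means noise is amplified more in the reconstruction.
        print(np.linalg.cond(A_small), np.linalg.cond(A_large))

        # Effect on a least-squares reconstruction from noisy synthetic measurements.
        x_true = rng.random(50)
        b = A_small @ x_true + 0.01 * rng.standard_normal(200)
        x_rec, *_ = np.linalg.lstsq(A_small, b, rcond=None)
        print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))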

  19. The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.

    DTIC Science & Technology

    1985-06-10

    The method of Levi Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse... problem is not thought to present much of a challenge at zero cavitation number. In this case, the classical method of Levi Civita [7] can be...

  20. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

    NASA Technical Reports Server (NTRS)

    Berger, B. S.; Duangudom, S.

    1973-01-01

    A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.

  1. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. A method in which the action of the inversion MRM (model resolution matrix) is approximated as a convolution with a PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To test the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed, and the overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more information and detailed structure of the actual model are recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of inversions and make better-informed decisions.
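
    A toy illustration of the underlying idea, treating the PSF as a known low-pass filter and partially undoing it with a frequency-domain (Wiener-type) deconvolution; the paper's total-variation regularized blind deconvolution, in which the PSF itself is estimated, is more elaborate and is not reproduced here.

        import numpy as np

        def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
            # Frequency-domain deconvolution of a 2D model with a known, centered PSF.
            H = np.fft.fft2(psf)
            W = np.conj(H) / (np.abs(H)**2 + noise_to_signal)
            return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

        # Blocky "true model" smoothed by a Gaussian PSF (a stand-in for the inversion MRM).
        i, j = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
        true_model = np.where((i > 24) & (i < 40) & (j > 24) & (j < 40), 1.0, 0.0)
        psf = np.exp(-((i - 32)**2 + (j - 32)**2) / (2 * 4.0**2))
        psf = np.fft.ifftshift(psf / psf.sum())         # center the kernel at the origin
        smoothed = np.real(np.fft.ifft2(np.fft.fft2(true_model) * np.fft.fft2(psf)))
        enhanced = wiener_deconvolve(smoothed, psf)

        # The deconvolved model should sit closer to the true model than the smoothed one.
        print(np.abs(smoothed - true_model).mean(), np.abs(enhanced - true_model).mean())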

  2. Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy

    2016-01-01

    The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients), and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in large earthquakes used for the analysis of Earth's normal modes is not simple. A point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently, the splitting functions) using a derivative-free parameter search, known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, and this allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of NA in synthetic cases, we apply it to observations of the inner core sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree 2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the Bolivia, 1994 earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode. Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish if elastic structure is sufficient.

  3. Location of Sinabung volcano magma chamber in 2013 using Levenberg-Marquardt inversion scheme

    NASA Astrophysics Data System (ADS)

    Kumalasari, R.; Srigutomo, W.; Djamal, M.; Meilano, I.; Gunawan, H.

    2018-05-01

    Sinabung Volcano has been monitored using GPS since its eruption in August 2010. We applied the Levenberg-Marquardt inversion scheme to GPS data from 2013 because the deformation of Sinabung Volcano in that year showed both inflation and deflation: first we applied the scheme to velocity data from 23 January 2013, and then to data from 31 December 2013. From our analysis, the modeled depths of the pressure source indicate that Sinabung may have a deep magma chamber at about 15 km and a shallow magma chamber at about 1 km below the surface.
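
    A hedged sketch of the Levenberg-Marquardt step with SciPy, fitting the depth of a simplified point pressure (Mogi-type) source to synthetic vertical displacements; the actual deformation model, data, and parameterization used in the study may differ.

        import numpy as np
        from scipy.optimize import least_squares

        def point_source_uz(params, r_km):
            # Vertical surface displacement of a simplified point pressure source at
            # depth d (km) with lumped strength c: uz = c * d / (r^2 + d^2)^(3/2).
            depth, strength = params
            return strength * depth / (r_km**2 + depth**2) ** 1.5

        def residuals(params, r_km, uz_obs):
            return point_source_uz(params, r_km) - uz_obs

        # Synthetic GPS-like verticals around the edifice (true depth 15 km).
        rng = np.random.default_rng(1)
        r_km = np.linspace(1.0, 20.0, 15)
        uz_obs = point_source_uz((15.0, 5.0), r_km) + 5e-4 * rng.standard_normal(r_km.size)

        # Levenberg-Marquardt solution of the nonlinear least-squares inverse problem.
        fit = least_squares(residuals, x0=(5.0, 1.0), args=(r_km, uz_obs), method='lm')
        print(fit.x)   # recovered (depth, strength), close to (15, 5)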

  4. Comparative evaluation between anatomic and non-anatomic lateral ligament reconstruction techniques in the ankle joint: A computational study.

    PubMed

    Purevsuren, Tserenchimed; Batbaatar, Myagmarbayar; Khuyagbaatar, Batbayar; Kim, Kyungsoo; Kim, Yoon Hyuk

    2018-03-12

    Biomechanical studies have indicated that the conventional non-anatomic reconstruction techniques for lateral ankle sprain (LAS) tend to restrict subtalar joint motion compared to intact ankle joints. Excessive restriction of subtalar motion may lead to chronic pain, functional difficulties, and the development of osteoarthritis. Therefore, various anatomic surgical techniques to reconstruct both the anterior talofibular and calcaneofibular ligaments have been introduced. In this study, ankle joint stability was evaluated using a multibody computational ankle joint model to assess two new anatomic reconstruction techniques and three popular non-anatomic reconstruction techniques. An LAS injury, three popular non-anatomic reconstruction models (Watson-Jones, Evans, and Chrisman-Snook), and two common types of anatomic reconstruction models were developed based on the intact ankle model. The stability of the ankle in both the talocrural and subtalar joints was evaluated under an anterior drawer test (150 N anterior force), an inversion test (3 Nm inversion moment), an internal rotation test (3 Nm internal rotation moment), and a combined loading test (9 Nm inversion and internal rotation moment as well as a 1800 N compressive force). Our overall results show that the two anatomic reconstruction techniques were superior to the non-anatomic reconstruction techniques in stabilizing both the talocrural and subtalar joints. Restricted subtalar joint motion, which was mainly observed in the Watson-Jones and Chrisman-Snook techniques, was not seen in the anatomical reconstructions. The Evans technique was beneficial for the subtalar joint as it does not restrict subtalar motion, though it was insufficient for restoring talocrural joint inversion. The anatomical reconstruction techniques best recovered ankle stability.

  5. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
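
    For reference, the conventional three-parameter MOLLI fit mentioned above can be sketched as follows; the inversion times and signal values are synthetic, and the IG fitting model with two-plus-number-of-groupings parameters is not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        def molli_signal(ti_ms, a, b, t1_star_ms):
            # Conventional three-parameter model: S(TI) = A - B * exp(-TI / T1*).
            return a - b * np.exp(-ti_ms / t1_star_ms)

        # Simulated inversion-recovery samples (arbitrary units, illustrative TIs in ms).
        ti = np.array([100., 180., 260., 1100., 1180., 2100., 2180., 3100.])
        signal = molli_signal(ti, 1.0, 2.0, 800.0) + 0.005 * np.random.randn(ti.size)

        (a, b, t1_star), _ = curve_fit(molli_signal, ti, signal, p0=(1.0, 2.0, 1000.0))
        t1 = t1_star * (b / a - 1.0)   # commonly used Look-Locker correction
        print(t1_star, t1)             # apparent T1* and corrected T1 (~800 ms here)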

  6. Inverse problems in quantum chemistry

    NASA Astrophysics Data System (ADS)

    Karwowski, Jacek

    Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.

  7. Bayesian Inference in Satellite Gravity Inversion

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.

    2005-01-01

    To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as a Bayesian inference, and Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 km over the southern part of the Pannonian Basin. The interpretation model is a right vertical cylinder, and its parameters are obtained from the minimization problem solved by the simplex method.
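
    A minimal sketch of the final step, the simplex (Nelder-Mead) search for the model parameters that minimize a misfit; the toy forward model below is not the actual gravity response of a right vertical cylinder, and the parameter values are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def misfit(params, observed):
            # Sum-of-squares misfit for a toy two-parameter forward model (radius, depth).
            radius, depth = params
            predicted = radius**2 / (np.arange(1, observed.size + 1) + depth) ** 2
            return np.sum((predicted - observed) ** 2)

        observed = 2.0**2 / (np.arange(1, 11) + 5.0) ** 2       # synthetic "measurements"
        result = minimize(misfit, x0=[1.0, 1.0], args=(observed,), method='Nelder-Mead')
        print(result.x)   # parameters recovered by the derivative-free simplex search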

  8. Moho Modeling Using FFT Technique

    NASA Astrophysics Data System (ADS)

    Chen, Wenjin; Tenzer, Robert

    2017-04-01

    To improve numerical efficiency, the Fast Fourier Transform (FFT) technique has been employed in Parker-Oldenburg's method for regional gravimetric Moho recovery, which assumes a planar approximation of the Earth. In this study, we extend this approach to global applications while assuming a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is practically realized in two numerical steps. Gravimetric forward modeling is first applied, based on methods for spherical harmonic analysis and synthesis of global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively to determine the Moho depth. The application of the FFT technique to both numerical steps reduces the computation time to a fraction of that required without this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated against the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals relatively good agreement between the gravimetric and seismic models, with the RMS of differences (4-5 km) at the level of the expected uncertainties of the input datasets and without significant systematic bias.
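
    The planar Parker forward step that the method builds on can be sketched with NumPy FFTs as below; the spherical extension, the iterative inversion for Moho depth, and the real density model are not reproduced, and the sign convention assumes the undulation h is measured positive toward the observer.

        import math
        import numpy as np

        def parker_forward_gravity(h, dx, rho_contrast, z0, n_terms=3):
            # FFT-based gravity effect of interface undulations h(x, y) about mean depth z0
            # (planar approximation, SI units), truncated after n_terms of Parker's series.
            G = 6.674e-11
            ny, nx = h.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
            k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
            series = np.zeros((ny, nx), dtype=complex)
            for n in range(1, n_terms + 1):
                series += k**(n - 1) / math.factorial(n) * np.fft.fft2(h**n)
            spectrum = 2.0 * np.pi * G * rho_contrast * np.exp(-k * z0) * series
            return np.real(np.fft.ifft2(spectrum))

        # Toy example: a 200 m Moho bump at 30 km mean depth, 5 km grid, 480 kg/m^3 contrast.
        x = np.arange(128) * 5000.0
        h = 200.0 * np.exp(-((x[None, :] - 3.2e5)**2 + (x[:, None] - 3.2e5)**2) / (2 * 5.0e4**2))
        dg = parker_forward_gravity(h, dx=5000.0, rho_contrast=480.0, z0=30000.0)
        print(dg.max() * 1e5)   # peak gravity effect in mGal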

  9. Technical Note: Atmospheric CO2 inversions on the mesoscale using data-driven prior uncertainties: methodology and system evaluation

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas Frank; Heimann, Martin

    2018-03-01

    Atmospheric inversions are widely used in the optimization of surface carbon fluxes on a regional scale using information from atmospheric CO2 dry mole fractions. In many studies the prior flux uncertainty applied to the inversion schemes does not directly reflect the true flux uncertainties but is used to regularize the inverse problem. Here, we implement an inversion scheme using the Jena inversion system, applying a prior flux error structure derived from a model-data residual analysis at high spatial and temporal resolution over a full-year period in the European domain. We analyzed the performance of the inversion system with a synthetic experiment, in which the flux constraint is derived following the same residual analysis but applied to the model-model mismatch. The synthetic study showed quite good agreement between posterior and true fluxes on European, country, annual, and monthly scales. Posterior monthly, country-aggregated fluxes improved their correlation coefficient with the known truth by 7% compared to the prior estimates, with a mean correlation of 0.92. The ratio of the SD between posterior and reference to that between prior and reference was also reduced by 33%, with a mean value of 1.15. We identified the temporal and spatial scales on which the inversion system maximizes the derived information; monthly temporal scales at around 200 km spatial resolution seem to maximize the information gain.

  10. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  11. Global Ocean Circulation in Thermohaline Coordinates and Small-scale and Mesoscale mixing: An Inverse Estimate.

    NASA Astrophysics Data System (ADS)

    Groeskamp, S.; Zika, J. D.; McDougall, T. J.; Sloyan, B.

    2016-02-01

    I will present results of a new inverse technique that infers small-scale turbulent diffusivities and mesoscale eddy diffusivities from an ocean climatology of salinity (S) and temperature (T) in combination with surface freshwater and heat fluxes. First, the ocean circulation is represented in (S,T) coordinates by the diathermohaline streamfunction. Framing the ocean circulation in (S,T) coordinates isolates the component of the circulation that is directly related to water-mass transformation. Because water-mass transformation is directly related to fluxes of salt and heat, this framework allows for the formulation of an inverse method in which the diathermohaline streamfunction is balanced with known air-sea forcing and unknown mixing. When applying this inverse method to observations, we obtain observationally based estimates for both the streamfunction and the mixing. The results reveal new information about the component of the global ocean circulation due to water-mass transformation and its relation to surface freshwater and heat fluxes and small-scale and mesoscale mixing. The results provide global constraints on spatially varying patterns of diffusivities, needed to obtain a realistic overturning circulation. We find that mesoscale isopycnal mixing is much smaller than expected. These results are important for our understanding of the relation between global ocean circulation and mixing and may lead to improved parameterisations in numerical ocean models.

  12. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, by those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  13. An interactive Bayesian geostatistical inverse protocol for hydraulic tomography

    USGS Publications Warehouse

    Fienen, Michael N.; Clemo, Tom; Kitanidis, Peter K.

    2008-01-01

    Hydraulic tomography is a powerful technique for characterizing heterogeneous hydrogeologic parameters. An explicit trade-off between characterization based on measurement misfit and subjective characterization using prior information is presented. We apply a Bayesian geostatistical inverse approach that is well suited to accommodate a flexible model with the level of complexity driven by the data and explicitly considering uncertainty. Prior information is incorporated through the selection of a parameter covariance model characterizing continuity and providing stability. Often, discontinuities in the parameter field, typically caused by geologic contacts between contrasting lithologic units, necessitate subdivision into zones across which there is no correlation among hydraulic parameters. We propose an interactive protocol in which zonation candidates are implied from the data and are evaluated using cross validation and expert knowledge. Uncertainty introduced by limited knowledge of dynamic regional conditions is mitigated by using drawdown rather than native head values. An adjoint state formulation of MODFLOW-2000 is used to calculate sensitivities which are used both for the solution to the inverse problem and to guide protocol decisions. The protocol is tested using synthetic two-dimensional steady state examples in which the wells are located at the edge of the region of interest.

  14. Inversion of ground-motion data from a seismometer array for rotation using a modification of Jaeger's method

    USGS Publications Warehouse

    Chi, Wu-Cheng; Lee, W.H.K.; Aston, J.A.D.; Lin, C.J.; Liu, C.-C.

    2011-01-01

    We develop a new way to invert 2D translational waveforms using Jaeger's (1969) formula to derive rotational ground motions about one axis and estimate the errors in them using techniques from statistical multivariate analysis. This procedure can be used to derive rotational ground motions and strains using arrayed translational data, thus providing an efficient way to calibrate the performance of rotational sensors. This approach does not require a priori information about the noise level of the translational data and elastic properties of the media. This new procedure also provides estimates of the standard deviations of the derived rotations and strains. In this study, we validated this code using synthetic translational waveforms from a seismic array. The results after the inversion of the synthetics for rotations were almost identical with the results derived using a well-tested inversion procedure by Spudich and Fletcher (2009). This new 2D procedure can be applied three times to obtain the full, three-component rotations. Additional modifications can be implemented to the code in the future to study different features of the rotational ground motions and strains induced by the passage of seismic waves.

  15. Solution of Inverse Kinematics for 6R Robot Manipulators With Offset Wrist Based on Geometric Algebra.

    PubMed

    Fu, Zhongtao; Yang, Wenyu; Yang, Zhen

    2013-08-01

    In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with offset wrist. Because the IKP is difficult to solve when the kinematics equations of these manipulators are complex, highly nonlinear, coupled, and admit multiple solutions, we apply the theory of geometric algebra to the kinematic modeling of 6R robot manipulators, generate closed-form kinematics equations, reformulate the problem as a generalized eigenvalue problem using a symbolic elimination technique, and then obtain the 16 solutions. Finally, a spray painting robot, which is a manipulator of this type, is used as an implementation example to demonstrate the effectiveness and real-time performance of the method. The experimental results show that this method has a large advantage over classical methods in geometric intuition, computational cost, and real-time performance, can be directly extended to all serial robot manipulators, and can be completely automated, which provides a new tool for the analysis and application of general robot manipulators.

  16. One-dimensional inversion of geo-electrical resistivity sounding data using artificial neural networks—a case study

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Tiwari, R. K.; Singh, S. B.

    2005-02-01

    This paper deals with the application of the artificial neural network (ANN) technique to a case history using 1-D inversion of vertical electrical resistivity sounding (VES) data from the Puga valley, Kashmir, India. The study area is important for its rich geothermal resources as well as from the tectonic point of view, as it is located near the collision boundary of the Indo-Asian crustal plates. In order to understand the resistivity structure and layer thicknesses, we used three-layer feedforward neural networks to model and predict the measured VES data. Three algorithms, namely back-propagation (BP), adaptive back-propagation (ABP), and the Levenberg-Marquardt algorithm (LMA), were applied to synthetic as well as real VES field data, and the efficiencies of the supervised training networks are compared. Analyses suggest that LMA is computationally faster and gives results that are more accurate and consistent than those of BP and ABP. The results obtained using the ANN inversions correlate remarkably well with the available borehole lithologs. The feasibility study suggests that ANN methods offer an excellent complementary tool for the direct detection of layered resistivity structure.

  17. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  18. Fat suppression with short inversion time inversion-recovery and chemical-shift selective saturation: a dual STIR-CHESS combination prepulse for turbo spin echo pulse sequences.

    PubMed

    Tanabe, Koji; Nishikawa, Keiichi; Sano, Tsukasa; Sakai, Osamu; Jara, Hernán

    2010-05-01

    The aim was to test, relative to pure CHESS and STIR, a newly developed fat-suppression magnetic resonance imaging (MRI) prepulse that synergistically uses the principles of fat suppression via inversion recovery (STIR) and spectral fat saturation (CHESS). This new technique is termed dual fat suppression (Dual-FS). To determine whether Dual-FS could be chemically specific for fat, a phantom consisting of a fat-mimicking NiCl(2) aqueous solution, porcine fat, porcine muscle, and water was imaged with the three fat-suppression techniques. For Dual-FS and STIR, several inversion times were used. Signal intensities of each image obtained with each technique were compared. To determine whether Dual-FS could be robust to magnetic field inhomogeneities, a phantom consisting of different NiCl(2) aqueous solutions, porcine fat, porcine muscle, and water was imaged with Dual-FS and CHESS at several off-resonance frequencies. To compare fat suppression efficiency in vivo, 10 volunteer subjects were also imaged with the three fat-suppression techniques. Dual-FS could suppress fat sufficiently within an inversion time of 110-140 msec, thus enabling differentiation between fat and fat-mimicking aqueous structures. Dual-FS was as robust to magnetic field inhomogeneities as STIR and less vulnerable than CHESS. The same fat-suppression results were obtained in volunteers. Dual-FS-STIR-CHESS is an alternative and promising fat suppression technique for turbo spin echo MRI. Copyright 2010 Wiley-Liss, Inc.

  19. An iterative hyperelastic parameters reconstruction for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Samani, Abbas

    2008-03-01

    In breast elastography, breast tissues usually undergo large compressions, resulting in significant geometric and structural changes and consequently nonlinear mechanical behavior. In this study, an elastography technique is presented in which parameters characterizing tissue nonlinear behavior are reconstructed. Such parameters can be used for tumor tissue classification. To model the nonlinear behavior, tissues are treated as hyperelastic materials. The proposed technique uses a constrained iterative inversion method to reconstruct the tissue hyperelastic parameters. The reconstruction technique uses a nonlinear finite element (FE) model for solving the forward problem. In this research, we applied Yeoh and polynomial models to model the tissue hyperelasticity. To mimic the breast geometry, we used a computational phantom, which comprises a hemisphere connected to a cylinder. This phantom consists of two types of soft tissue, mimicking adipose and fibroglandular tissues, and a tumor. Simulation results show the feasibility of the proposed method in reconstructing the hyperelastic parameters of the tumor tissue.

  20. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.

  1. Determining the Size of Pores in a Partially Transparent Ceramics from Total-Reflection Spectra

    NASA Astrophysics Data System (ADS)

    Mironov, R. A.; Zabezhailov, M. O.; Georgiu, I. F.; Cherepanov, V. V.; Rusin, M. Yu.

    2018-03-01

    A technique is proposed for determining the pore-size distribution based on measuring the spectral dependence of total reflectance in the domain of partial transparency of a material. It assumes equality between the scattering-coefficient spectra determined by solving the inverse radiative transfer problem and those calculated theoretically with the Mie theory. The technique is applied to the study of a quartz ceramic. The pore-size distribution is also determined using mercury and gas porosimetry. All three methods are shown to produce close results for pores with diameters of <180 nm, which occupy 90% of the void volume. In the domain of pore dimensions of >180 nm, the methods show differences that might be related both to specific procedural features and to the structural properties of the ceramic. The spectral-scattering method has a number of advantages over traditional porosimetry, and it can be viewed as a routine industrial technique.

  2. A comparative study of surface waves inversion techniques at strong motion recording sites in Greece

    USGS Publications Warehouse

    Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.

    2015-01-01

    The surface wave method was used to estimate the Vs versus depth profile at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν=0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations among the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison showed that the average variation of the SIM profiles is 9% and 4.9% relative to the NA and Occam's profiles respectively, whilst the average difference of the Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.
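
    Since the comparison above is cast partly in terms of Vs30, the standard time-averaged definition is easy to sketch; the layer values below are illustrative, not one of the ten station profiles.

        import numpy as np

        def vs30(thicknesses_m, vs_m_s):
            # Vs30 = 30 / sum(h_i / Vs_i), with the layer stack truncated at 30 m depth.
            depths = np.cumsum(thicknesses_m)
            travel_time, top = 0.0, 0.0
            for bottom, vs in zip(depths, vs_m_s):
                thickness = min(bottom, 30.0) - top
                if thickness <= 0.0:
                    break
                travel_time += thickness / vs
                top = min(bottom, 30.0)
            return 30.0 / travel_time

        # Example profile: 5 m at 180 m/s over 10 m at 300 m/s over a 600 m/s half-space.
        print(vs30([5.0, 10.0, 100.0], [180.0, 300.0, 600.0]))   # ~348 m/s (EC8 class C)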

  3. Individually Watermarked Information Distributed Scalable by Modified Transforms

    DTIC Science & Technology

    2009-10-01

    inverse of the secret transform is needed. Each trusted recipient has a unique inverse transform that is similar to the inverse of the original... transform. The elements of this individual inverse transform are given by the individual descrambling key. After applying the individual inverse... transform, the retrieved image is embedded with a recipient-individual watermark.

  4. Query-based learning for aerospace applications.

    PubMed

    Saad, E W; Choi, J J; Vian, J L; Wunsch, D C Ii

    2003-01-01

    Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. In order to overcome the challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving the overall learning/generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL, and introduces an original heuristic to select the inversion target values for the continuous network inversion method. Efficiency and generalization were further enhanced by employing node decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.

  5. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol, and the possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model with a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  6. A progress report on the ARRA-funded geotechnical site characterization project

    NASA Astrophysics Data System (ADS)

    Martin, A. J.; Yong, A.; Stokoe, K.; Di Matteo, A.; Diehl, J.; Jack, S.

    2011-12-01

    For the past 18 months, the 2009 American Recovery and Reinvestment Act (ARRA) has funded geotechnical site characterizations at 189 seismographic station sites in California and the central U.S. This ongoing effort applies methods involving surface-wave techniques, which include the horizontal-to-vertical spectral ratio (HVSR) technique and one or more of the following: spectral analysis of surface wave (SASW), active and passive multi-channel analysis of surface wave (MASW) and passive array microtremor techniques. From this multi-method approach, shear-wave velocity profiles (VS) and the time-averaged shear-wave velocity of the upper 30 meters (VS30) are estimated for each site. To accommodate the variability in local conditions (e.g., rural and urban soil locales, as well as weathered and competent rock sites), conventional field procedures are often modified ad-hoc to fit the unanticipated complexity at each location. For the majority of sites (>80%), fundamental-mode Rayleigh wave dispersion-based techniques are deployed and where complex geology is encountered, multiple test locations are made. Due to the presence of high velocity layers, about five percent of the locations require multi-mode inversion of Rayleigh wave (MASW-based) data or 3-D array-based inversion of SASW dispersion data, in combination with shallow P-wave seismic refraction and/or HVSR results. Where a strong impedance contrast (i.e. soil over rock) exists at shallow depth (about 10% of sites), dominant higher modes limit the use of Rayleigh wave dispersion techniques. Here, use of the Love wave dispersion technique, along with seismic refraction and/or HVSR data, is required to model the presence of shallow bedrock. At a small percentage of the sites, surface wave techniques are found not suitable for stand-alone deployment and site characterization is limited to the use of the seismic refraction technique. A USGS Open File Report-describing the surface geology, VS profile and the calculated VS30 for each site-will be prepared after the completion of the project in November 2011.

  7. Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique

    NASA Astrophysics Data System (ADS)

    Järvinen, S. P.; Berdyugina, S. V.

    2010-10-01

    Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), when low signal-to-noise spectral data have been improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in the data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which affects the obtained temperature range of the map. This limitation is critical for small unresolved cool spots and is common to all DI techniques. Finally, the LSD method was carried out for high signal-to-noise observations of the young active star V889 Her: the maps obtained with and without LSD are found to be consistent. Conclusions: Our new technique provides meaningful information on the temperature distribution on stellar surfaces, which was previously inaccessible in DI with LSD. Our approach can be easily adopted for any other multi-line technique.

  8. Constraint Embedding Technique for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been a considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthrough on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited only to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus in essence, the new technique allows conversion of a system with closure-constraints into an equivalent tree-topology system, and thus allows one to take advantage of the host of techniques available to the latter class of systems. This technology is highly suitable for the class of multibody systems where the closure-constraints are local, i.e., where they are confined to small groupings of bodies within the system. Important examples of such local closure-constraints are constraints associated with four-bar linkages, geared motors, differential suspensions, etc. One can eliminate these closure-constraints and convert the system into a tree-topology system by embedding the constraints directly into the system dynamics and effectively replacing the body groupings with virtual aggregate bodies. Once eliminated, one can apply the well-known results and algorithms for tree-topology systems to solve the dynamics of such closed-chain system.

  9. The use of forest stand age information in an atmospheric CO2 inversion applied to North America

    Treesearch

    F. Deng; J.M. Chen; Y. Pan; W. Peters; R. Birdsey; K. McCullough; J. Xiao

    2013-01-01

    Atmospheric inversions have become an important tool in quantifying carbon dioxide (CO2) sinks and sources at a variety of spatiotemporal scales, but associated large uncertainties restrain the inversion research community from reaching agreement on many important subjects. We enhanced an atmospheric inversion of the CO2...

  10. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  11. A Forward Glimpse into Inverse Problems through a Geology Example

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)

  12. Prediction of cause of death from forensic autopsy reports using text classification techniques: A comparative study.

    PubMed

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa

    2018-07-01

    Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting or feature value representation, text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed, and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures, i.e. overall accuracy, macro precision, macro F-measure, and macro recall. From the experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Furthermore, among the feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results compared with binary frequency and normalized term frequency with inverse document frequency. The chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among the text classification algorithms, the support vector machine classifier outperformed the random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future work. Moreover, the comparison outputs will serve as a state-of-the-art baseline against which future proposals can be compared with existing automated text classification techniques. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
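
    A compact sketch of the best-performing combination reported above (unigram TF-IDF features with a linear support vector machine), using toy stand-ins for the autopsy report texts; the 43 feature-vector variants and the chi-square reduction step are not reproduced.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy stand-ins for preprocessed report texts and their cause-of-death labels.
        reports = ["blunt force trauma to the head",
                   "myocardial infarction with coronary occlusion",
                   "asphyxia by hanging with ligature mark",
                   "gunshot wound to the chest"]
        labels = ["trauma", "cardiac", "asphyxia", "trauma"]

        # Unigram features, TF-IDF weighting, linear SVM classifier.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), LinearSVC())
        model.fit(reports, labels)
        print(model.predict(["stab wound to the chest"]))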

  13. On the Salt Water Intrusion into the Durusu Lake, Istanbul: A Joint Central Loop TEM And Multi-Electrode ERT Field Survey

    NASA Astrophysics Data System (ADS)

    Ardali, Ayça Sultan; Tezkan, Bülent; Gürer, Aysan

    2018-02-01

    Durusu Lake is the biggest and most important freshwater source supplying drinking water to the European side of Istanbul. In this study, electrical resistivity tomography (ERT) and transient electromagnetic (TEM) measurements were applied to detect a possible salt water intrusion into the lake and to delineate the subsurface structure in the north of Durusu Lake. The ERT and TEM measurements were carried out along six parallel profiles extending from the sea coast to the lake shore on the dune barrier. TEM data were interpreted using different 1-D inversion methods such as Occam, Marquardt, and laterally constrained inversion (LCI). ERT data were interpreted using 2-D inversion techniques. The inversion results of ERT and TEM data were shown as resistivity depth sections including topography. The sand layer spreading over the basin has a resistivity of 150-400 Ωm with a thickness of 5-10 m. The sandy layer with clay, silt, and gravel has a resistivity of 15-100 Ωm and a thickness of 10-40 m, followed by a clay layer with a resistivity below 10 Ωm. When the inversion of these data is interpreted along with the hydrogeology of the area, it is concluded that the salt water intrusion along the dune barrier is not common and occurs only in a particular area where the distance between the lake and the sea is very small. Using information from boreholes around the lake, it was verified that the common conductive region at depths of 30 m or more consists of clay layers and clay lenses.

  14. Modeling the Absorbing Aerosol Index

    NASA Technical Reports Server (NTRS)

    Penner, Joyce; Zhang, Sophia

    2003-01-01

    We propose a scheme to model the absorbing aerosol index and improve the biomass carbon inventories by optimizing the difference between the TOMS aerosol index (AI) and modeled AI with an inverse model. Two absorbing aerosol types are considered, including biomass carbon and mineral dust. The a priori biomass carbon source was generated by Liousse et al. [1996]. Mineral dust emission is parameterized according to surface wind and soil moisture using the method developed by Ginoux [2000]. In this initial study, the coupled CCM1 and GRANTOUR model was used to determine the aerosol spatial and temporal distribution. With modeled aerosol concentrations and optical properties, we calculate the radiance at the top of the atmosphere at 340 nm and 380 nm with a radiative transfer model. The contrast of radiance at these two wavelengths will be used to calculate AI. Then we compare the modeled AI with TOMS AI. This paper reports our initial modeling for AI and its comparison with TOMS Nimbus 7 AI. For our follow-on project we will model the global AI with the aerosol spatial and temporal distribution recomputed from the IMPACT model and DAO GEOS-1 meteorology fields. Then we will build an inverse model, which applies a Bayesian inverse technique to optimize the agreement between model and observational data. The inverse model will tune the biomass burning source strength to reduce the difference between modeled AI and TOMS AI. Further simulations with a posteriori biomass carbon sources from the inverse model will be carried out. Results will be compared to available observations such as surface concentration and aerosol optical depth.

  15. Joint inversion of geophysical data using petrophysical clustering and facies deformation with the level set technique

    NASA Astrophysics Data System (ADS)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The method is applied to a second synthetic case, showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of gravity and galvanometric resistivity data. For this 2D synthetic example, we note that the positions of the facies are well recovered except far from the ground surface, where the sensitivity is too low. The figure shows the evolution of the shape of the facies during the inversion, iteration by iteration.

  16. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    NASA Astrophysics Data System (ADS)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RF inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The approach we envisage for managing the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. Our first focus of application is the Central Alps, where a 20-year long dataset of high-quality teleseismic events recorded at 81 stations is available, and a high-resolution P-wave velocity model is also available (Diehl et al., 2009). We plan to extend the 3D shear-wave velocity inversion method to the entire Alpine domain in the frame of the AlpArray project, and to apply it to other areas with a dense network of broadband seismometers.

  17. Time reversal imaging, Inverse problems and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.

    2010-12-01

    With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately, and new applications are being developed, in particular time reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, in order to understand the time-reversal method theoretically on the global scale. It is shown how to relate time-reversal methods, on one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time reversal techniques make possible an automatic location in time and space as well as the retrieval of the focal mechanism of earthquakes or unknown environmental sources. We present here some applications of these techniques at the global scale, on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004), Haiti (Jan. 2010), as well as glacial earthquakes and the seismic hum.

  18. M-Band Analysis of Chromosome Aberrations in Human Epithelial Cells Induced by Low- and High-LET Radiations

    NASA Technical Reports Server (NTRS)

    Hada, M.; Gersey, B.; Saganti, P. B.; Wilkins, R.; Gonda, S. R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Energetic primary and secondary particles pose a health risk to astronauts in extended ISS and future Lunar and Mars missions. High-LET radiation is much more effective than low-LET radiation in the induction of various biological effects, including cell inactivation, genetic mutations, cataracts and cancer. Most of these biological endpoints are closely correlated to chromosomal damage, which can be utilized as a biomarker for radiation insult. In this study, human epithelial cells were exposed in vitro to gamma rays, 1 GeV/nucleon Fe ions and secondary neutrons whose spectrum is similar to that measured inside the Space Station. Chromosomes were condensed using a premature chromosome condensation technique and chromosome aberrations were analyzed with the multi-color banding (mBAND) technique. With this technique, individually painted chromosomal bands on one chromosome allowed the identification of both interchromosomal (translocation to unpainted chromosomes) and intrachromosomal aberrations (inversions and deletions within a single painted chromosome). Results of the study confirmed the observation of higher incidence of inversions for high-LET irradiation. However, detailed analysis of the inversion type revealed that all of the three radiation types in the study induced a low incidence of simple inversions. Half of the inversions observed in the low-LET irradiated samples were accompanied by other types of intrachromosome aberrations, but few inversions were accompanied by interchromosome aberrations. In contrast, Fe ions induced a significant fraction of inversions that involved complex rearrangements of both the inter- and intrachromosome exchanges.

  19. Selected inversion as key to a stable Langevin evolution across the QCD phase boundary

    NASA Astrophysics Data System (ADS)

    Bloch, Jacques; Schenk, Olaf

    2018-03-01

    We present new results of full QCD at nonzero chemical potential. In PRD 92, 094516 (2015) the complex Langevin method was shown to break down when the inverse coupling decreases and enters the transition region from the deconfined to the confined phase. We found that the stochastic technique used to estimate the drift term can be very unstable for indefinite matrices. This may be avoided by using the full inverse of the Dirac operator, which is, however, too costly for four-dimensional lattices. The major breakthrough in this work was achieved by realizing that the inverse elements necessary for the drift term can be computed efficiently using the selected inversion technique provided by the parallel sparse direct solver package PARDISO. In our new study we show that no breakdown of the complex Langevin method is encountered and that simulations can be performed across the phase boundary.

  20. Using Classification and Regression Trees (CART) and random forests to analyze attrition: Results from two simulations.

    PubMed

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J

    2015-12-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. (c) 2015 APA, all rights reserved.
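
    As a rough illustration of the weighting idea described above, the sketch below fits a depth-limited classification tree to the probability that a case is retained at follow-up and uses the inverse of that predicted probability as a sampling weight; the data, covariates, and tuning values are invented for the example and are not taken from the authors' simulations.

        # Hedged sketch of CART-based inverse sampling weights; all data are synthetic.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))                                     # baseline covariates
        p_stay = 1 / (1 + np.exp(-(0.5 * X[:, 0] - X[:, 1] * X[:, 2])))   # nonlinear, interactive selection
        observed = rng.binomial(1, p_stay)                                # 1 = retained, 0 = attrited

        tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25)   # depth/leaf limits stand in for pruning
        tree.fit(X, observed)
        p_hat = tree.predict_proba(X)[:, 1]                               # predicted retention probability

        weights = np.where(observed == 1, 1.0 / np.clip(p_hat, 0.05, None), 0.0)
        print(weights[observed == 1][:5])                                 # weights for the retained cases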

  1. Invited commentary: G-computation--lost in translation?

    PubMed

    Vansteelandt, Stijn; Keiding, Niels

    2011-04-01

    In this issue of the Journal, Snowden et al. (Am J Epidemiol. 2011;173(7):731-738) give a didactic explanation of G-computation as an approach for estimating the causal effect of a point exposure. The authors of the present commentary reinforce the idea that their use of G-computation is equivalent to a particular form of model-based standardization, whereby reference is made to the observed study population, a technique that epidemiologists have been applying for several decades. They comment on the use of standardized versus conditional effect measures and on the relative predominance of the inverse probability-of-treatment weighting approach as opposed to G-computation. They further propose a compromise approach, doubly robust standardization, that combines the benefits of both of these causal inference techniques and is not more difficult to implement.
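
    To make the contrast discussed above concrete, the sketch below computes both estimators on synthetic data: a G-computation (model-based standardization) estimate and an inverse probability-of-treatment weighted estimate of the same point-exposure effect. The data-generating model and variable names are illustrative assumptions, not taken from the commentary or from Snowden et al.

        # Hedged sketch: G-computation vs inverse probability-of-treatment weighting on synthetic data.
        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(1)
        L = rng.normal(size=(2000, 2))                                   # confounders
        A = rng.binomial(1, 1 / (1 + np.exp(-L[:, 0])))                  # point exposure
        Y = 1.0 * A + L[:, 0] + 0.5 * L[:, 1] + rng.normal(size=2000)    # outcome, true effect = 1.0

        # G-computation: fit an outcome model, then standardize to the observed population
        out = LinearRegression().fit(np.column_stack([A, L]), Y)
        mu1 = out.predict(np.column_stack([np.ones(2000), L])).mean()
        mu0 = out.predict(np.column_stack([np.zeros(2000), L])).mean()

        # IPTW: fit a propensity model, then compare weighted outcome means
        ps = LogisticRegression().fit(L, A).predict_proba(L)[:, 1]
        w = A / ps + (1 - A) / (1 - ps)
        iptw = np.average(Y[A == 1], weights=w[A == 1]) - np.average(Y[A == 0], weights=w[A == 0])

        print(mu1 - mu0, iptw)                                           # both should be close to 1.0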

  2. Vorticity field measurement using digital inline holography

    NASA Astrophysics Data System (ADS)

    Mallery, Kevin; Hong, Jiarong

    2017-11-01

    We demonstrate the direct measurement of a 3D vorticity field using digital inline holographic microscopy. Microfiber tracer particles are illuminated with a 532 nm continuous diode laser and imaged using a single CCD camera. The recorded holographic images are processed using a GPU-accelerated inverse problem approach to reconstruct the 3D structure of each microfiber in the imaged volume. The translation and rotation of each microfiber are measured using a time-resolved image sequence - yielding velocity and vorticity point measurements. The accuracy and limitations of this method are investigated using synthetic holograms. Measurements of solid body rotational flow are used to validate the accuracy of the technique under known flow conditions. The technique is further applied to a practical turbulent flow case for investigating its 3D velocity field and vorticity distribution.

  3. Using Classification and Regression Trees (CART) and Random Forests to Analyze Attrition: Results From Two Simulations

    PubMed Central

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J.

    2016-01-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. PMID:26389526

  4. The Prediction-Focused Approach: An opportunity for hydrogeophysical data integration and interpretation

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2017-04-01

    Hydrogeophysics is an interdisciplinary field of science aiming at a better understanding of subsurface hydrological processes. While geophysical surveys have been successfully used to qualitatively characterize the subsurface, two important challenges remain for a better quantification of hydrological processes: (1) the inversion of geophysical data and (2) their integration in hydrological subsurface models. The classical inversion approach using regularization suffers from spatially and temporally varying resolution and yields geologically unrealistic solutions without uncertainty quantification, making their utilization for hydrogeological calibration less consistent. More advanced techniques such as coupled inversion allow a direct use of geophysical data for conditioning groundwater and solute transport model calibration. However, the technique is difficult to apply in complex cases and remains computationally demanding when estimating uncertainty. In a recent study, we investigated a prediction-focused approach (PFA) to directly estimate subsurface physical properties from geophysical data, circumventing the need for classic inversions. In PFA, we seek a direct relationship between the data and the subsurface variables we want to predict (the forecast). This relationship is obtained through a prior set of subsurface models for which both data and forecast are computed. A direct relationship can often be derived through dimension reduction techniques. PFA offers a framework for both hydrogeophysical "inversion" and hydrogeophysical data integration. For hydrogeophysical "inversion", the considered forecast variable is the subsurface variable, such as the salinity. An ensemble of possible solutions is generated, allowing uncertainty quantification. For hydrogeophysical data integration, the forecast variable becomes the prediction we want to make with our subsurface models, such as the concentration of contaminant in a drinking water production well. Geophysical and hydrological data are combined to derive a direct relationship between data and forecast. We illustrate the process for the design of an aquifer thermal energy storage (ATES) system. An ATES system can theoretically recover in winter the heat stored in the aquifer during summer. In practice, the energy efficiency is often lower than expected due to spatial heterogeneity of hydraulic properties combined with an unfavorable hydrogeological gradient. A proper design of ATES systems should consider the uncertainty of the prediction related to those parameters. With a global sensitivity analysis, we identify sensitive parameters for heat storage prediction and validate the use of a short-term heat tracing experiment monitored with geophysics to generate informative data. First, we illustrate how PFA can be used to successfully derive the distribution of temperature in the aquifer from ERT during the heat tracing experiment. Then, we successfully integrate the geophysical data to predict medium-term heat storage in the aquifer using PFA. The result is a full quantification of the posterior distribution of the prediction conditioned to the observed data within a relatively limited time budget.

  5. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.

  6. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  7. Axisymmetric deformation in a micropolar thermoelastic medium under fractional order theory of thermoelasticity

    NASA Astrophysics Data System (ADS)

    Kumar, Rajneesh; Singh, Kulwinder; Pathania, Devinder Singh

    2017-07-01

    The purpose of this paper is to study the variations in temperature, radial and normal displacement, normal stress, shear stress and couple stress in a micropolar thermoelastic solid in the context of the fractional order theory of thermoelasticity. An eigenvalue approach together with Laplace and Hankel transforms is employed to obtain the general solution of the problem. The field variables corresponding to different fractional order theories of thermoelasticity have been obtained in the transformed domain. The general solution is applied to an infinite space subjected to a concentrated load at the origin. To obtain the solution in the physical domain, a numerical inversion technique has been applied, and the numerically computed results are depicted graphically to analyze the effects of the fractional order parameter on the field variables.
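
    The abstract does not specify which numerical Laplace-inversion scheme is used; one common choice for problems of this type is the Gaver-Stehfest algorithm, sketched below on a transform whose time-domain solution is known, purely as an illustration of how such an inversion step can be carried out.

        # Hedged sketch of numerical Laplace inversion (Gaver-Stehfest), not the authors' scheme.
        # Test transform: F(s) = 1/(s+1), whose inverse is f(t) = exp(-t).
        import math
        import numpy as np

        def stehfest_inverse(F, t, N=12):
            """Approximate f(t) from its Laplace transform F(s); N must be even."""
            half = N // 2
            V = np.zeros(N)
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, half) + 1):
                    s += (j ** half * math.factorial(2 * j) /
                          (math.factorial(half - j) * math.factorial(j) *
                           math.factorial(j - 1) * math.factorial(k - j) *
                           math.factorial(2 * j - k)))
                V[k - 1] = (-1) ** (k + half) * s
            ln2_t = math.log(2.0) / t
            return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

        print(stehfest_inverse(lambda s: 1.0 / (s + 1.0), t=1.0))   # close to exp(-1) = 0.3679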

  8. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarization measurements have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance.

    Contributions. A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method to TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is plugged in. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM) and suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from the adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of Green's function, even though the media involved are lossless. A Fourier-Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both real and imaginary parts of the permittivity if no prior information is included. M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, an adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any other technique devoted to improving resolution remains to be studied. As done by some other contributors, the model of the incident field is chosen to fit the Fourier-Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. It is suggested to minimize a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods based on the second-order Born approximation, which is nonlinear in terms of contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost. O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to collect the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. The reliability of the data is demonstrated through comparisons between measurements and computed scattered fields for both fundamental polarizations. In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level-set representation. It is assumed that the constitutive materials of the obstacles under test are known, and the shape is retrieved. Two approaches are reported. In the first one, the obstacles of different constitutive materials are represented with a single level set, while in the second approach several level sets are combined. The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum-phase functions to 2D problems. In the kind of inverse problems we are concerned with, this consists of separating the contributions from the field and from the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA-CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is a solution deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors exploit the fast convergence of the DTA method to generate an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data.

    Conclusion. In this special section, various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from 'aspect-limited' configurations. Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem.

    Acknowledgments. The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.

  9. 3D aquifer characterization using stochastic streamline calibration

    NASA Astrophysics Data System (ADS)

    Jang, Minchul

    2007-03-01

    In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Based on the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by multiplication with a factor chosen so that the flow and transport properties of the streamline are matched. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique supportively calibrates the identified field in a stochastic manner, while incorporating spatial information into the field. This prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work by Agarwal et al. owing to the limitation of large modifications along streamlines made to match production data only. The model constructed by stochastic streamline calibration forecasted plume transport similar to that of a reference model. On this basis, the proposed approach can be expected to be applied to the construction of aquifer models and the forecasting of aquifer performance measures of interest.

  10. Branchio-otic syndrome caused by a genomic rearrangement: clinical findings and molecular cytogenetic studies in a patient with a pericentric inversion of chromosome 8.

    PubMed

    Schmidt, T; Bierhals, T; Kortüm, F; Bartels, I; Liehr, T; Burfeind, P; Shoukier, M; Frank, V; Bergmann, C; Kutsche, K

    2014-01-01

    Branchio-oto-renal (BOR) syndrome is an autosomal dominantly inherited developmental disorder, which is characterized by anomalies of the ears, the branchial arches and the kidneys. It is caused by mutations in the genes EYA1, SIX1 and SIX5. Genomic rearrangements of chromosome 8 affecting the EYA1 gene have also been described. Owing to this fact, methods for the identification of abnormal copy numbers such as multiplex ligation-dependent probe amplification (MLPA) have been introduced as routine laboratory techniques for molecular diagnostics of BOR syndrome. The advantages of these techniques are clear compared to standard cytogenetic and array approaches as well as Southern blot. MLPA detects deletions or duplications of a part or the entire gene of interest, but not balanced structural aberrations such as inversions and translocations. Consequently, disruption of a gene by a genomic rearrangement may escape detection by a molecular genetic analysis, although this gene interruption results in haploinsufficiency and, therefore, causes the disease. In a patient with clinical features of BOR syndrome, such as hearing loss, preauricular fistulas and facial dysmorphisms, but no renal anomalies, neither sequencing of the 3 genes linked to BOR syndrome nor array comparative genomic hybridization and MLPA were able to uncover a causative mutation. By routine cytogenetic analysis, we finally identified a pericentric inversion of chromosome 8 in the affected female. High-resolution multicolor banding confirmed the chromosome 8 inversion and narrowed down the karyotype to 46,XX,inv(8)(p22q13). By applying fluorescence in situ hybridization, we narrowed down both breakpoints on chromosome 8 and found the EYA1 gene in q13.3 to be directly disrupted. We conclude that standard karyotyping should not be neglected in the genetic diagnostics of BOR syndrome or other Mendelian disorders, particularly when molecular testing failed to detect any causative alteration in patients with a convincing phenotype. © 2013 S. Karger AG, Basel.

  11. Volcanic geothermal system in the Main Ethiopian Rift: insights from 3D MT finite-element inversion and other exploration methods

    NASA Astrophysics Data System (ADS)

    Samrock, F.; Grayver, A.; Eysteinsson, H.; Saar, M. O.

    2017-12-01

    In search for geothermal resources, especially in exploration for high-enthalpy systems found in regions with active volcanism, the magnetotelluric (MT) method has proven to be an efficient tool. Electrical conductivity of the subsurface, imaged by MT, is used for detecting layers of electrically highly conductive clays which form around the surrounding strata of hot circulating fluids and for delineating magmatic heat sources such as zones with partial melting. We present a case study using a novel 3-D inverse solver, based on adaptive local mesh refinement techniques, applied to decoupled forward and inverse mesh parameterizations. The flexible meshing allows accurate representation of surface topography, while keeping computational costs at a reasonable level. The MT data set we analyze was measured at 112 sites, covering an area of 18 by 11 km at a geothermal prospect in the Main Ethiopian Rift. For inverse modelling, we tested a series of different settings to ensure that the recovered structures are supported by the data. Specifically, we tested different starting models, regularization functionals, sets of transfer functions, with and without inclusion of topography. Several robust subsurface structures were revealed. These are prominent features of a high-enthalpy geothermal system: A highly conductive shallow clay cap occurs in an area with high fumarolic activity, and is underlain by a more resistive zone, which is commonly interpreted as a propylitic reservoir and is the main geothermal target for drilling. An interesting discovery is the existence of a channel-like conductor connecting the geothermal field at the surface with an off-rift conductive zone, whose existence was proposed earlier as being related to an off-rift volcanic belt along the western shoulder of the Main Ethiopian Rift. The electrical conductivity model is interpreted together with results from other geoscientific studies and outcomes from satellite remote sensing techniques.

  12. New algorithm and system for measuring size distribution of blood cells

    NASA Astrophysics Data System (ADS)

    Yao, Cuiping; Li, Zheng; Zhang, Zhenxi

    2004-06-01

    In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near-forward scattering, which has been adopted in the measurement of blood cells. In this paper, a new method for counting and classifying blood cells, based on laser light scattering from stationary suspensions, is presented. A genetic algorithm combined with a non-negative least-squares algorithm is employed to invert the size distribution of blood cells. Numerical tests show that these techniques can be successfully applied to measuring the size distribution of blood cells with high stability.

  13. Enhancement of multispectral thermal infrared images - Decorrelation contrast stretching

    NASA Technical Reports Server (NTRS)

    Gillespie, Alan R.

    1992-01-01

    Decorrelation contrast stretching is an effective method for displaying information from multispectral thermal infrared (TIR) images. The technique involves transformation of the data to principal components ('decorrelation'), independent contrast 'stretching' of data from the new 'decorrelated' image bands, and retransformation of the stretched data back to the approximate original axes, based on the inverse of the principal component rotation. The enhancement is robust in that colors of the same scene components are similar in enhanced images of similar scenes, or the same scene imaged at different times. Decorrelation contrast stretching is reviewed in the context of other enhancements applied to TIR images.
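
    The transform-stretch-retransform sequence described above can be written compactly. The sketch below applies it to a small synthetic three-band cube (not a real TIR image), using the band covariance matrix for the principal-component rotation and scaling each component to a common variance before rotating back.

        # Hedged sketch of a decorrelation stretch on a (bands, rows, cols) cube; synthetic data.
        import numpy as np

        base = np.random.rand(64, 64)
        cube = np.stack([base + 0.1 * np.random.rand(64, 64) for _ in range(3)])  # three highly correlated bands

        bands, rows, cols = cube.shape
        X = cube.reshape(bands, -1)                          # pixels as columns
        mean = X.mean(axis=1, keepdims=True)
        evals, evecs = np.linalg.eigh(np.cov(X))             # principal-component ('decorrelation') rotation

        pcs = evecs.T @ (X - mean)                           # rotate into decorrelated space
        pcs *= np.sqrt(evals.mean()) / np.sqrt(evals)[:, None]       # independent contrast stretch per component
        stretched = (evecs @ pcs + mean).reshape(bands, rows, cols)  # inverse rotation back to band space
        print(stretched.shape)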

  14. A regularized clustering approach to brain parcellation from functional MRI data

    NASA Astrophysics Data System (ADS)

    Dillon, Keith; Wang, Yu-Ping

    2017-08-01

    We consider a data-driven approach for the subdivision of an individual subject's functional Magnetic Resonance Imaging (fMRI) scan into regions of interest, i.e., brain parcellation. The approach is based on a computational technique for calculating resolution from inverse problem theory, which we apply to neighborhood selection for brain connectivity networks. This can be efficiently calculated even for very large images, and explicitly incorporates regularization in the form of spatial smoothing and a noise cutoff. We demonstrate the reproducibility of the method on multiple scans of the same subjects, as well as the variations between subjects.

  15. Nitrogen Oxide Emission, Economic Growth and Urbanization in China: a Spatial Econometric Analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Zhimin; Zhou, Yanli; Ge, Xiangyu

    2018-01-01

    This research studies the nexus between nitrogen oxide emissions and economic development/urbanization. Under the environmental Kuznets curve (EKC) hypothesis, we apply spatial panel data analysis techniques in the STIRPAT framework, and thus obtain systematic estimates of the impacts of income and urbanization on nitrogen oxide emissions. The empirical findings suggest that spatial dependence in the distribution of nitrogen oxide emissions exists at the provincial level, and that an inverse N-shaped EKC describes both the income-nitrogen oxide and urbanization-nitrogen oxide nexuses. In addition, some targeted policy recommendations are made for reducing nitrogen oxide emissions in the future.

  16. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions) results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the narrowest and most discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
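
    The bounding step can be illustrated on a toy linear problem. The sketch below is not the authors' NNLS formulation; it simply shows the idea of bracketing the average value of non-negative model parameters inside a chosen region, subject to the forward-modeled data fitting the observations within a tolerance, using a linear program.

        # Hedged illustration of the bounding idea only (linear program, not the paper's NNLS setup).
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(2)
        n_data, n_model = 8, 20
        G = rng.random((n_data, n_model))                         # toy linear forward operator
        m_true = np.full(n_model, 0.1); m_true[5:10] = 1.0        # a conductive block
        d = G @ m_true                                            # synthetic data
        eps = 0.05 * np.abs(d)                                    # data tolerance
        region = np.zeros(n_model); region[5:10] = 1.0 / 5.0      # averaging vector for the chosen region

        A_ub = np.vstack([G, -G])                                 # enforce |G m - d| <= eps, elementwise
        b_ub = np.concatenate([d + eps, -(d - eps)])
        lower = linprog( region, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        upper = linprog(-region, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        print(lower.fun, -upper.fun)                              # bounds bracketing the true region average (1.0)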

  17. Moment tensor inversions using strong motion waveforms of Taiwan TSMIP data, 1993–2009

    USGS Publications Warehouse

    Chang, Kaiwen; Chi, Wu-Cheng; Gung, Yuancheng; Dreger, Douglas; Lee, William H K.; Chiu, Hung-Chie

    2011-01-01

    Earthquake source parameters are important for earthquake studies and seismic hazard assessment. Moment tensors are among the most important earthquake source parameters, and are now routinely derived using modern broadband seismic networks around the world. Similar waveform inversion techniques can also be applied to other available data, including strong-motion seismograms. Strong-motion waveforms are also broadband, and have been recorded in many regions since the 1980s. Thus, strong-motion data can be used to augment moment tensor catalogs with a much larger dataset than that available from the high-gain, broadband seismic networks. However, a systematic comparison between the moment tensors derived from strong-motion waveforms and those derived from high-gain broadband waveforms has not been available. In this study, we inverted the source mechanisms of Taiwan earthquakes between 1993 and 2009 using the regional moment tensor inversion method and digital data from several hundred stations in the Taiwan Strong Motion Instrumentation Program (TSMIP). By testing different velocity models and filter passbands, we were able to successfully derive moment tensor solutions for 107 earthquakes of Mw >= 4.8. The solutions for large events agree well with other available moment tensor catalogs derived from local and global broadband networks. However, for events of Mw = 5.0 or smaller, we consistently overestimated the moment magnitudes by 0.5 to 1.0. We tested both accelerograms and velocity waveforms integrated from accelerograms for the inversions, and found the results to be similar. In addition, we used part of the catalog to study important seismogenic structures in the area near Meishan, Taiwan, which was the site of a very damaging earthquake a century ago, and found that the structures were dominated by events with complex right-lateral strike-slip faulting during the recent decade. The procedures developed in this study may be applied to other strong-motion datasets to complement or fill gaps in catalogs from regional broadband and teleseismic networks.

  18. On the electromagnetic scattering from infinite rectangular conducting grids

    NASA Technical Reports Server (NTRS)

    Christodoulou, C.

    1985-01-01

    The study and development of two numerical techniques for the analysis of electromagnetic scattering from a rectangular wire mesh are described. Both techniques follow from one basic formulation, and both are solved in the spectral domain. These techniques were developed as a result of an investigation towards more efficient numerical computation of mesh scattering. They are efficient for the following reasons: (1) they make use of the Fast Fourier Transform; (2) they avoid convolution problems by converting integrodifferential equations into algebraic equations; and (3) they do not require the inversion of any matrices. The first method, the SIT or Spectral Iteration Technique, is applied in regimes where the spacing between wires is not less than two wavelengths. The second method, the SDCG or Spectral Domain Conjugate Gradient approach, can be used for any spacing between adjacent wires. A study of electromagnetic wave properties, such as the reflection coefficient, induced currents and aperture fields, as functions of frequency, angle of incidence, polarization and wire thickness, is presented. Examples and comparisons of results with other methods are also included to support the validity of the new algorithms.

  19. 3D motion picture of transparent gas flow by parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Fukuda, Takahito; Wang, Yexin; Xia, Peng; Kakue, Takashi; Nishio, Kenzo; Matoba, Osamu

    2018-03-01

    Parallel phase-shifting digital holography is a technique capable of quantitatively recording a three-dimensional (3D) motion picture of a dynamic object. This technique records a single hologram of an object with an image sensor equipped with a phase-shift array device and reconstructs the instantaneous 3D image of the object with a computer. In this technique, a single hologram is recorded in which the multiple holograms required for phase-shifting digital holography are multiplexed pixel by pixel using a space-division multiplexing technique. We demonstrate a 3D motion picture of a dynamic, transparent gas flow recorded and reconstructed by this technique. A compressed-air duster was used to generate the gas flow. A motion picture of the hologram of the gas flow was recorded at 180,000 frames/s by parallel phase-shifting digital holography. The phase motion picture of the gas flow was reconstructed from the motion picture of the hologram. The Abel inversion was applied to the phase motion picture, and the 3D motion picture of the gas flow was thereby obtained.
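
    The Abel inversion step, which turns each line-of-sight phase projection of an axisymmetric flow into a radial profile, can be discretized in several ways; the abstract does not say which was used. The sketch below uses a simple onion-peeling discretization on a synthetic radial profile, purely to illustrate the operation.

        # Hedged sketch of an onion-peeling Abel inversion; synthetic radial profile, not the
        # measured gas-flow data.
        import numpy as np

        n = 100
        dr = 0.01
        edges = np.arange(n + 1) * dr                      # radii of the shell boundaries
        f_true = np.exp(-(edges[:-1] / 0.3) ** 2)          # radial field (e.g. refractive-index change)

        # Forward projection matrix: chord length of line-of-sight i through annular shell j
        y = edges[:-1]                                     # impact parameters of the chords
        A = np.zeros((n, n))
        for i in range(n):
            outer = np.sqrt(np.maximum(edges[1:] ** 2 - y[i] ** 2, 0.0))
            inner = np.sqrt(np.maximum(edges[:-1] ** 2 - y[i] ** 2, 0.0))
            A[i] = 2.0 * (outer - inner)

        projection = A @ f_true                            # what a phase measurement integrates
        f_recovered = np.linalg.solve(A, projection)       # onion-peeling inversion (A is triangular)
        print(np.max(np.abs(f_recovered - f_true)))        # near machine precision for this consistent test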

  20. Targeted next generation sequencing for the detection of ciprofloxacin resistance markers using molecular inversion probes

    DTIC Science & Technology

    2016-07-06

    Targeted next-generation sequencing for the detection of ciprofloxacin resistance markers using molecular inversion probes. Christopher P... development and evaluation of a panel of 44 single-stranded molecular inversion probes (MIPs) coupled to next-generation sequencing (NGS) for the... padlock and molecular inversion probes as upfront enrichment steps for use with NGS showed the specificity and multiplexability of these techniques.

  1. Methods to control phase inversions and enhance mass transfer in liquid-liquid dispersions

    DOEpatents

    Tsouris, Constantinos; Dong, Junhang

    2002-01-01

    The present invention is directed to the effects of applied electric fields on liquid-liquid dispersions. In general, the present invention is directed to the control of phase inversions in liquid-liquid dispersions. Because of polarization and deformation effects, coalescence of aqueous drops is facilitated by the application of electric fields. As a result, with an increase in the applied voltage, the ambivalence region is narrowed and shifted toward higher volume fractions of the dispersed phase. This permits the invention to be used to ensure that the aqueous phase remains continuous, even at a high volume fraction of the organic phase. Additionally, the volume fraction of the organic phase may be increased without causing phase inversion, and may be used to correct a phase inversion which has already occurred. Finally, the invention may be used to enhance mass transfer rates from one phase to another through the use of phase inversions.

  2. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time-step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
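
    For reference, the discrete multiplicative update that the continuous-time system is an analog of can be written in a few lines; the sketch below runs a plain row-action MART (blocks of one row each) on a small consistent toy system, not the paper's tomographic setup.

        # Hedged sketch of a row-action multiplicative ART update on a consistent toy system.
        import numpy as np

        A = np.array([[1.0, 0.5, 0.0],
                      [0.0, 1.0, 0.5],
                      [0.5, 0.0, 1.0]])
        x_true = np.array([1.0, 2.0, 3.0])
        b = A @ x_true                                    # consistent, positive data

        x = np.ones(3)                                    # positive initial image
        gamma = 1.0                                       # relaxation (time-step-like) parameter
        for sweep in range(200):
            for i in range(A.shape[0]):                   # one block per row, as in row-action MART
                ratio = b[i] / (A[i] @ x)
                x *= ratio ** (gamma * A[i])              # multiplicative update preserves non-negativity
        print(x)                                          # converges toward [1, 2, 3]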

  3. High-Resolution Light Transmission Spectroscopy of Nanoparticles in Real Time

    NASA Astrophysics Data System (ADS)

    Tanner, Carol; Sun, Nan; Deatsch, Alison; Li, Frank; Ruggiero, Steven

    2017-04-01

    As implemented here, Light Transmission Spectroscopy (LTS) is a high-resolution, real-time technique for eliminating spectral noise and systematic effects in wide-band spectroscopic measurements of nanoparticles. In this work, we combine LTS with spectral inversion for the purpose of characterizing the size, shape, and number of nanoparticles in solution. The apparatus employs a wide-band multi-wavelength light source and grating spectrometers coupled to CCD detectors. The light source ranges from 210 to 2000 nm, and the wavelength-dependent light detection system ranges from 200 to 1100 nm with <=1 nm resolution. With this system, nanoparticles ranging from 1 to 3000 nm in diameter can be studied. The nanoparticles are typically suspended in pure water or water-based buffer solutions. For testing and calibration purposes, results are presented for nanoparticles composed of polystyrene and gold. Mie theory is used to model the total extinction cross-section, and spectral inversion is employed to obtain quantitative particle size distributions. The precision, accuracy, resolution, and sensitivity of our results are discussed. The technique is quite versatile and can be applied to spectroscopic investigations where wide-band, accurate, low-noise, real-time spectra are desired. University of Notre Dame Office of Research, College of Science, Department of Physics, and USDA.
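
    The spectral-inversion step mentioned above amounts to solving a linear system for a non-negative size distribution given a kernel of extinction cross-sections. The sketch below shows that step with a smooth placeholder kernel standing in for a real Mie calculation, and with non-negative least squares as one possible unregularized solver; none of this is claimed to be the authors' implementation.

        # Hedged sketch: recover a non-negative size distribution n(d) from an extinction
        # spectrum, given a kernel K[wavelength, diameter]. The kernel here is a smooth
        # placeholder, not a Mie computation.
        import numpy as np
        from scipy.optimize import nnls

        wavelengths = np.linspace(300, 900, 60)            # nm
        diameters = np.linspace(50, 1000, 40)              # nm
        K = 1.0 / (1.0 + ((wavelengths[:, None] - diameters[None, :]) / 200.0) ** 2)

        n_true = np.exp(-0.5 * ((diameters - 400.0) / 80.0) ** 2)     # assumed size distribution
        spectrum = K @ n_true + 1e-3 * np.random.default_rng(3).normal(size=wavelengths.size)

        n_est, resid = nnls(K, spectrum)                   # non-negative least-squares inversion
        print(resid)                                       # data misfit; n_est may be spiky without regularization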

  4. SERS-based inverse molecular sentinel (iMS) nanoprobes for multiplexed detection of microRNA cancer biomarkers in biological samples

    NASA Astrophysics Data System (ADS)

    Crawford, Bridget M.; Wang, Hsin-Neng; Fales, Andrew M.; Bowie, Michelle L.; Seewaldt, Victoria L.; Vo-Dinh, Tuan

    2017-02-01

    The development of sensitive and selective biosensing techniques is of great interest for clinical diagnostics. Here, we describe the development and application of a surface enhanced Raman scattering (SERS) sensing technology, referred to as "inverse Molecular Sentinel (iMS)" nanoprobes, for the detection of nucleic acid biomarkers in biological samples. This iMS nanoprobe involves the use of plasmonic-active nanostars as the sensing platform for a homogeneous assay for multiplexed detection of nucleic acid biomarkers, including DNA, RNA and microRNA (miRNA). The "OFF-to-ON" signal switch is based on a non-enzymatic strand-displacement process and the conformational change of stem-loop (hairpin) oligonucleotide probes upon target binding. Here, we demonstrate the development of iMS nanoprobes for the detection of DNA sequences as well as a modified design of the nanoprobe for the detection of short (22-nt) microRNA sequences. The application of iMS nanoprobes to detect miRNAs in real biological samples was performed with total small RNA extracted from breast cancer cell lines. The multiplex capability of the iMS technique was demonstrated using a mixture of the two differently labeled nanoprobes to detect miR-21 and miR-34a miRNA biomarkers for breast cancer. The results of this study demonstrate the feasibility of applying the iMS technique for multiplexed detection of nucleic acid biomarkers, including short miRNA molecules.

  5. Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan. L.

    2005-01-01

    NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structure degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an Inverse Finite Element Method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.

  6. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
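
    A bare-bones version of a multi-view Richardson-Lucy update of the kind described above can be written in a few lines. The 1-D Python sketch below is an illustration under simplifying assumptions (normalized per-view PSFs, 'same'-mode convolutions, an averaged correction across views), not the authors' code.

        import numpy as np

        def rl_multiview(images, psfs, n_iters=100):
            """Minimal multi-view Richardson-Lucy sketch (1-D, 'same'-mode convolutions).

            images : list of blurred/noisy observations of the same object,
            psfs   : matching list of point-spread functions, each normalized to sum 1.
            """
            est = np.full_like(images[0], images[0].mean())
            for _ in range(n_iters):
                update = np.zeros_like(est)
                for d, h in zip(images, psfs):
                    blurred = np.convolve(est, h, mode="same")
                    ratio = d / np.maximum(blurred, 1e-12)
                    update += np.convolve(ratio, h[::-1], mode="same")   # correlate with the PSF
                est *= update / len(images)                              # average the per-view corrections
            return est

        # toy object observed through a sharp-but-noisy and a broad-but-clean PSF
        x = np.zeros(200); x[60] = 1.0; x[120:140] = 0.5
        g = lambda s: np.exp(-0.5 * (np.arange(-25, 26) / s) ** 2)
        psfs = [g(1.5) / g(1.5).sum(), g(6.0) / g(6.0).sum()]
        rng = np.random.default_rng(2)
        images = [np.convolve(x, psfs[0], "same") + 0.05 * rng.random(200),
                  np.convolve(x, psfs[1], "same") + 0.005 * rng.random(200)]
        est = rl_multiview(images, psfs, n_iters=200)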

  7. Integrated Analysis Seismic Inversion and Rockphysics for Determining Secondary Porosity Distribution of Carbonate Reservoir at “FR” Field

    NASA Astrophysics Data System (ADS)

    Rosid, M. S.; Augusta, F. F.; Haidar, M. W.

    2018-05-01

    In general, carbonate secondary pore structure is very complex due to the significant diagenesis process. Therefore, the determination of carbonate secondary pore types is an important factor related to production studies. This paper aims not only to identify the secondary pore types, but also to predict their distribution within the carbonate reservoir. We apply the Differential Effective Medium (DEM) model for analyzing the pore types of carbonate rocks. The input parameter of the DEM inclusion model is the porosity fraction, and the outputs are the bulk and shear moduli as functions of porosity, which are then used as inputs for modelling Vp and Vs. We also apply a post-stack seismic inversion technique to map the pore-type distribution from 3D seismic data. Afterward, we create a porosity cube using a geostatistical method, which is better suited to the complexity of the carbonate reservoir. Thus, the results of this study show the secondary porosity distribution of the carbonate reservoir at the “FR” field. In this case, the north-northwest of the study area is dominated by interparticle and crack pores. Hence, that area has the highest permeability, where hydrocarbon accumulation is most likely.

  8. Electrical Imaging of Roots and Trunks

    NASA Astrophysics Data System (ADS)

    Al Hagrey, S.; Werban, U.; Meissner, R.; Ismaeil, A.; Rabbel, W.

    2005-05-01

    We applied geoelectric and GPR techniques to analyze problems of botanical structures and even processes, e.g., mapping root zones, internal structure of trunks, and water uptake by roots. The dielectric nature of root zones and trunks is generally a consequence of relatively high moisture content. The electric method, applied to root zones, can discriminate between old, thick, isolated roots (high resistivity) and the network of young, active, and hydraulically conductive zones (low resistivity). Both types of roots show low radar velocity and a strong attenuation caused by the dominant effect of moisture (high dielectric constant) on the electromagnetic wave propagation. Single root branches could be observed in radargrams by their reflection and diffraction parabolas. We have refined the inversion method for perfect and imperfect cylindrical objects, such as trunks, and developed a new multi-electrode (needle or gel) ring array for fast applications on living trees and discs. Using synthetic models we tested the technique successfully and analyzed it as a function of total electrode number and configuration. Measurements on a trunk show a well-established inverse relationship between the imaged resistivity and the moisture content determined from cores. The central resistivity maximum of healthy trees strongly decreases toward the rim. This agrees with the moisture decrease to the outside where active sap flow processes take place. Branching, growth anomalies (new or old shoots) and meteorological effects (sunshine and wind direction) lead to deviations of the concentric electric structure. The strongest anomalies are related to infections causing wet, rotting spots or cavities. The heartwood resistivity is highest in olive and oak trunks, intermediate in young fruit trees and lowest in cork oak trunks that are considered to be anomalously wet. Compared to acoustic tomography our electric technique shows a better resolution in imaging internal ring structures where moisture is the most dominating factor. We conclude that our imaging resistivity technique is applicable for investigating or controlling the botanical and physical conditions of endangered trees (health inspection) and is capable of monitoring dynamic sap-flow processes if adequate tracers are used.

  9. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expressions are concise and convenient to use, they have certain limitations. For example, they are valid only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small. In addition, the inversion for density is not stable. Therefore, we develop a time-lapse joint AVO inversion method based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion requires fewer assumptions and can estimate more parameters simultaneously, so it is more broadly applicable. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its calculation cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in a real-world setting.
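
    The generalized linear step the authors describe, Taylor-expanding the forward operator and solving for a model update, can be illustrated generically. In the Python sketch below the forward function is a made-up placeholder standing in for the exact Zoeppritz PP reflectivity, and the damping value and perturbation size are likewise assumptions.

        import numpy as np

        def gauss_newton(forward, m0, d_obs, n_iters=10, damping=1e-3):
            """Generalized-linear (Gauss-Newton) inversion sketch: repeatedly linearize
            a nonlinear forward operator with a finite-difference Jacobian and solve
            the damped normal equations for a model update."""
            m = np.asarray(m0, dtype=float).copy()
            for _ in range(n_iters):
                d = forward(m)
                J = np.empty((d.size, m.size))          # J[i, j] = d(forward_i)/d(m_j)
                for j in range(m.size):
                    dm = np.zeros_like(m); dm[j] = 1e-6 * max(1.0, abs(m[j]))
                    J[:, j] = (forward(m + dm) - d) / dm[j]
                r = d_obs - d
                m += np.linalg.solve(J.T @ J + damping * np.eye(m.size), J.T @ r)
            return m

        # placeholder nonlinear forward model standing in for exact Zoeppritz PP reflectivity
        angles = np.radians(np.arange(0, 40, 5))
        def forward(m):                      # m = (vp, vs, rho) contrasts, purely illustrative
            return m[0] * np.cos(angles) + m[1] * np.sin(angles) ** 2 + m[2] * angles
        m_true = np.array([0.12, -0.05, 0.03])
        d_obs = forward(m_true)
        print(gauss_newton(forward, np.zeros(3), d_obs))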

  10. Simulation studies of phase inversion in agitated vessels using a Monte Carlo technique.

    PubMed

    Yeo, Leslie Y; Matar, Omar K; Perez de Ortiz, E Susana; Hewitt, Geoffrey F

    2002-04-15

    A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed.

  11. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source be known. However, it is challenging to know the source details, particularly for the big events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach using data collected for earthquakes from 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.

  12. 3D Seismic Experimentation and Advanced Processing/Inversion Development for Investigations of the Shallow Subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levander, Alan Richard; Zelt, Colin A.

    2015-03-17

    The work plan for this project was to develop and apply advanced seismic reflection and wide-angle processing and inversion techniques to high resolution seismic data for the shallow subsurface to seismically characterize the shallow subsurface at hazardous waste sites as an aid to containment and cleanup activities. We proposed to continue work on seismic data that we had already acquired under a previous DoE grant, as well as to acquire additional new datasets for analysis. The project successfully developed and/or implemented the use of 3D reflection seismology algorithms, waveform tomography and finite-frequency tomography using compressional and shear waves for high-resolution characterization of the shallow subsurface at two waste sites. These two sites have markedly different near-surface structures, groundwater flow patterns, and hazardous waste problems. This is documented in the list of refereed documents, conference proceedings, and Rice graduate theses, listed below.

  13. A homogenization approach for the effective drained viscoelastic properties of 2D porous media and an application for cortical bone.

    PubMed

    Nguyen, Sy-Tuan; Vu, Mai-Ba; Vu, Minh-Ngoc; To, Quy-Dong

    2018-02-01

    Closed-form solutions for the effective rheological properties of a 2D viscoelastic drained porous medium made of a Generalized Maxwell viscoelastic matrix and pore inclusions are developed and applied to cortical bone. The in-plane (transverse) effective viscoelastic bulk and shear moduli of the Generalized Maxwell rheology of the homogenized medium are expressed as functions of the porosity and the viscoelastic properties of the solid phase. When deriving these functions, the classical inverse Laplace-Carson transformation technique is avoided, due to its complexity, by considering short- and long-term approximations. The approximated results are validated against exact solutions obtained from the inverse Laplace-Carson transform for a simple configuration when the latter is available. An application to cortical bone, assuming circular pores in the transverse plane, shows that the proposed approximation fits experimental data very well. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Parallel Fortran-MPI software for numerical inversion of the Laplace transform and its application to oscillatory water levels in groundwater environments

    USGS Publications Warehouse

    Zhan, X.

    2005-01-01

    A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving computationally intensive problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (The Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
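
    The Fourier-series idea behind such codes can be shown with an unaccelerated trapezoidal (Bromwich) sum. The Python sketch below is only a minimal illustration and not the parallel TOMS 796 package, which additionally applies series acceleration; the contour shift gamma, the truncation length, and the period T are heuristic choices, and many terms are needed for modest accuracy without acceleration.

        import numpy as np

        def laplace_invert_fourier(F, t, T=None, gamma=None, n_terms=20000):
            """Minimal Fourier-series (trapezoidal Bromwich) Laplace inversion sketch.

            F : callable returning the transform F(s) for complex s; valid for 0 < t < T.
            """
            t = np.atleast_1d(np.asarray(t, dtype=float))
            if T is None:
                T = 2.0 * t.max()
            if gamma is None:
                gamma = -np.log(1e-9) / (2.0 * T)   # shift so wrap-around error is roughly 1e-9
            k = np.arange(1, n_terms + 1)
            s = gamma + 1j * k * np.pi / T
            Fk = np.array([F(sk) for sk in s])
            series = 0.5 * np.real(F(gamma)) + (
                np.real(Fk)[None, :] * np.cos(np.outer(t, k) * np.pi / T)
                - np.imag(Fk)[None, :] * np.sin(np.outer(t, k) * np.pi / T)
            ).sum(axis=1)
            return np.exp(gamma * t) / T * series

        # check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
        t = np.linspace(0.1, 5.0, 10)
        approx = laplace_invert_fourier(lambda s: 1.0 / (s + 1.0), t)
        print(np.max(np.abs(approx - np.exp(-t))))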

  15. Removal of Stationary Sinusoidal Noise from Random Vibration Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian; Cap, Jerome S.

    In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02 only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
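
    One common way to realize a "matrix inversion" style tone removal is to least-squares fit in-phase and quadrature components at a tone frequency identified from an FFT peak and subtract the fit. The Python sketch below follows that generic recipe under assumed settings (sampling rate, tone level, record length); the report's exact algorithm may differ.

        import numpy as np

        def remove_sine_tone(signal, fs, f_tone=None):
            """Least-squares removal of one stationary sine tone from a record.

            If f_tone is not given, the tone frequency is taken from the FFT peak,
            mirroring the use of Fourier transforms to identify tonal noise."""
            n = signal.size
            t = np.arange(n) / fs
            if f_tone is None:
                spec = np.abs(np.fft.rfft(signal - signal.mean()))
                f_tone = np.fft.rfftfreq(n, 1.0 / fs)[np.argmax(spec)]
            # design matrix with in-phase and quadrature columns at the tone frequency
            A = np.column_stack([np.cos(2 * np.pi * f_tone * t), np.sin(2 * np.pi * f_tone * t)])
            coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
            return signal - A @ coeffs, f_tone

        # random vibration plus a 120 Hz tone at a modest sine-to-random ratio
        fs = 2048.0
        rng = np.random.default_rng(3)
        random_part = rng.standard_normal(int(4 * fs))
        tone = 0.5 * np.sin(2 * np.pi * 120.0 * np.arange(random_part.size) / fs + 0.3)
        cleaned, f_found = remove_sine_tone(random_part + tone, fs)
        print(f_found)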

  16. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. Our approach requires neither an initial solution nor the curvature information of the objective function that gradient methods need. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid-search methods.
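
    A minimal real-coded genetic algorithm of the kind used for such searches is sketched below in Python. The selection, crossover, and mutation choices are generic illustrations, and the misfit function is a placeholder: a real focal-mechanism application would count mismatched P-wave polarities predicted from double-couple radiation patterns over (strike, dip, rake).

        import numpy as np

        rng = np.random.default_rng(4)

        def genetic_minimize(misfit, bounds, pop_size=60, n_gen=100, p_mut=0.2):
            """Minimal real-coded genetic algorithm: tournament selection,
            arithmetic crossover, and Gaussian mutation within the given bounds."""
            lo, hi = np.array(bounds).T
            pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
            for _ in range(n_gen):
                fit = np.array([misfit(ind) for ind in pop])
                new_pop = [pop[np.argmin(fit)]]                      # elitism: keep the best model
                while len(new_pop) < pop_size:
                    i, j = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
                    pa = pop[i[np.argmin(fit[i])]]                   # tournament selection
                    pb = pop[j[np.argmin(fit[j])]]
                    w = rng.random()
                    child = w * pa + (1 - w) * pb                    # arithmetic crossover
                    mutate = rng.random(lo.size) < p_mut
                    child = np.where(mutate, child + 0.1 * (hi - lo) * rng.standard_normal(lo.size), child)
                    new_pop.append(np.clip(child, lo, hi))
                pop = np.array(new_pop)
            fit = np.array([misfit(ind) for ind in pop])
            return pop[np.argmin(fit)], fit.min()

        # placeholder misfit standing in for the polarity-mismatch count over (strike, dip, rake)
        target = np.array([220.0, 45.0, 90.0])
        misfit = lambda m: np.sum((m - target) ** 2)
        best, best_fit = genetic_minimize(misfit, bounds=[(0, 360), (0, 90), (-180, 180)])
        print(best, best_fit)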

  17. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is difficult because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

  18. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was implemented using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling code. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
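
    The truncation step can be illustrated with a small Python sketch: discard singular values below a relative threshold before forming the pseudo-inverse used to solve the linearized system J m = d. The threshold and the toy Jacobian below are assumptions for illustration only.

        import numpy as np

        def tsvd_solve(J, d, rel_threshold=1e-3):
            """Truncated-SVD solution of J m = d: singular values below
            rel_threshold * s_max are discarded before forming the pseudo-inverse."""
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            keep = s > rel_threshold * s[0]
            return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

        # ill-conditioned toy Jacobian: truncation stabilizes the model update
        rng = np.random.default_rng(5)
        J = rng.standard_normal((80, 30))
        J[:, -1] = J[:, 0] + 1e-8 * rng.standard_normal(80)    # nearly dependent column
        m_true = rng.standard_normal(30)
        d = J @ m_true + 1e-3 * rng.standard_normal(80)
        m_tsvd = tsvd_solve(J, d, rel_threshold=1e-4)
        print(np.linalg.norm(J @ m_tsvd - d))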

  19. Clinical knowledge-based inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Xing, Lei

    2004-11-01

    Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.

  20. Bayesian Approach to the Joint Inversion of Gravity and Magnetic Data, with Application to the Ismenius Area of Mars

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.

    2004-01-01

    This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints in the solutions to be incorporated in the inversion, with the "best" solutions being those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges - the forward predictions of the data have a linear dependence on some of the quantities of interest, and non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov
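
    For the "linear" variables with Gaussian noise, the posterior mean and its explicitly computable error matrix follow from standard linear-Gaussian algebra. The Python sketch below illustrates that special case only (it does not address the non-linear variables or the posterior sampling mentioned above); the toy operator, prior, and noise level are assumed values.

        import numpy as np

        def linear_gaussian_inversion(G, d, Cd, m_prior, Cm):
            """Posterior mean and covariance for a linear forward model d = G m with
            Gaussian noise covariance Cd and Gaussian prior (m_prior, Cm): the
            "linear filtering" case where the error matrix is explicitly computable."""
            Cd_inv = np.linalg.inv(Cd)
            Cm_inv = np.linalg.inv(Cm)
            C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
            m_post = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)
            return m_post, C_post

        # toy joint data set: two "observation types" sensitive to the same model vector
        rng = np.random.default_rng(6)
        G = np.vstack([rng.standard_normal((15, 5)), rng.standard_normal((10, 5))])
        m_true = rng.standard_normal(5)
        d = G @ m_true + 0.05 * rng.standard_normal(25)
        m_post, C_post = linear_gaussian_inversion(
            G, d, Cd=0.05**2 * np.eye(25), m_prior=np.zeros(5), Cm=np.eye(5))
        print(m_post, np.sqrt(np.diag(C_post)))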

  1. Acoustic Full Waveform Inversion to Characterize Near-surface Chemical Explosions

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rodgers, A. J.

    2015-12-01

    Recent high-quality, atmospheric overpressure data from chemical high-explosive experiments provide a unique opportunity to characterize near-surface explosions, specifically estimating yield and source time function. Typically, yield is estimated from measured signal features, such as peak pressure, impulse, duration and/or arrival time of acoustic signals. However, the application of full waveform inversion to acoustic signals for yield estimation has not been fully explored. In this study, we apply a full waveform inversion method to local overpressure data to extract accurate pressure-time histories of acoustic sources during chemical explosions. A robust and accurate inversion technique for the acoustic source is investigated using numerical Green's functions that take into account atmospheric and topographic propagation effects. The inverted pressure-time history represents the pressure fluctuation at the source region associated with the explosion, and thus provides valuable information about acoustic source mechanisms and characteristics in greater detail. We compare acoustic source properties (i.e., peak overpressure, duration, and non-isotropic shape) of a series of explosions having different emplacement conditions and investigate the relationship of the acoustic sources to the yields of explosions. The time histories of acoustic sources may refine our knowledge of sound-generation mechanisms of shallow explosions, and thereby allow for accurate yield estimation based on acoustic measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  2. Tsunami Wave Height Estimation from GPS-Derived Ionospheric Data

    NASA Astrophysics Data System (ADS)

    Rakoto, Virgile; Lognonné, Philippe; Rolland, Lucie; Coïsson, P.

    2018-05-01

    Large underwater earthquakes (Mw>7) can transmit part of their energy to the surrounding ocean through large seafloor motions, generating tsunamis that propagate over long distances. The forcing effect of tsunami waves on the atmosphere generates internal gravity waves that, when they reach the upper atmosphere, produce ionospheric perturbations. These perturbations are frequently observed in the total electron content (TEC) measured by multifrequency Global Navigation Satellite Systems (GNSS) such as GPS, GLONASS, and, in the future, Galileo. This paper describes the first inversion of the variation in sea level derived from GPS TEC data. We used a least squares inversion based on normal-mode summation modeling. This technique was applied in the far field to three tsunamis associated with the 2012 Haida Gwaii, 2006 Kuril Islands, and 2011 Tohoku events, and for Tohoku also in the near field. With the exception of the Tohoku far-field case, for which the tsunami reconstruction by the TEC inversion is less effective due to the ionospheric noise background associated with the geomagnetic storm that occurred on the day of the earthquake, we show that the peak-to-peak amplitude of the sea level variation inverted by this method can be compared to the tsunami wave height measured by a DART buoy with an error of less than 20%. This demonstrates that the inversion of TEC data with a tsunami normal-mode summation approach is able to estimate quite accurately the amplitude and waveform of the first tsunami arrival.

  3. Surface wave tomography of Europe from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European-scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured at each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model. The two steps refer to a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth (and its uncertainty) is determined over most of our study region by identifying and analysing sharp velocity discontinuities and their sharpness. The resulting velocity model shows good agreement with main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.

  4. Upper crustal structure of central Java, Indonesia, from transdimensional seismic ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Zulfakriza, Z.; Saygin, E.; Cummins, P. R.; Widiyantoro, S.; Nugraha, A. D.; Lühr, B.-G.; Bodin, T.

    2014-04-01

    Delineating the crustal structure of central Java is crucial for understanding its complex tectonic setting. However, seismic imaging of the strong heterogeneity typical of such a tectonically active region can be challenging, particularly in the upper crust where velocity contrasts are strongest and steep body wave ray paths provide poor resolution. To overcome these difficulties, we apply the technique of ambient noise tomography (ANT) to data collected during the Merapi Amphibious Experiment (MERAMEX), which covered central Java with a temporary deployment of over 120 seismometers during 2004 May-October. More than 5000 Rayleigh wave Green's functions were extracted by cross-correlating the noise simultaneously recorded at available station pairs. We applied a fully non-linear 2-D Bayesian probabilistic inversion technique to the retrieved traveltimes. Features in the derived tomographic images correlate well with previous studies, and some shallow structures that were not evident in previous studies are clearly imaged with ANT. The Kendeng Basin and several active volcanoes appear with very low group velocities, and anomalies with relatively high velocities can be interpreted in terms of crustal sutures and/or surface geological features.

  5. Inversion layer MOS solar cells

    NASA Technical Reports Server (NTRS)

    Ho, Fat Duen

    1986-01-01

    Inversion layer (IL) Metal Oxide Semiconductor (MOS) solar cells were fabricated. The fabrication technique and problems are discussed. A plan for modeling IL cells is presented. Future work in this area is addressed.

  6. Inversion methods for interpretation of asteroid lightcurves

    NASA Technical Reports Server (NTRS)

    Kaasalainen, Mikko; Lamberg, L.; Lumme, K.

    1992-01-01

    We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.

  7. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  8. Hybrid-dual-fourier tomographic algorithm for a fast three-dimensionial optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing computation burden in the 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  9. Pseudo 2D elastic waveform inversion for attenuation in the near surface

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Zhang, Jie

    2017-08-01

    Seismic waveform propagation could be significantly affected by heterogeneities in the near surface zone (0 m-500 m depth). As a result, it is important to obtain as much near surface information as possible. Seismic attenuation, characterized by QP and QS factors, may affect the seismic waveform in both phase and amplitude; however, it is rarely estimated and applied to the near surface zone for seismic data processing. Applying a 1D elastic full waveform modelling program, we demonstrate that such effects cannot be overlooked in the waveform computation if the value of the Q factor is lower than approximately 100. Further, we develop a pseudo 2D elastic waveform inversion method in the common midpoint (CMP) domain that jointly inverts early arrivals for QP and surface waves for QS. In this method, although the forward problem is in 1D, by applying 2D model regularization, we obtain 2D QP and QS models through simultaneous inversion. A cross-gradient constraint between the QP and QS models is applied to ensure structural consistency of the 2D inversion results. We present synthetic examples and a real case study from an oil field in China.

  10. The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory.

    DTIC Science & Technology

    1983-01-25

    The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem.

  11. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

    2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.

  12. Remote sensing of phytoplankton chlorophyll-a concentration by use of ridge function fields.

    PubMed

    Pelletier, Bruno; Frouin, Robert

    2006-02-01

    A methodology is presented for retrieving phytoplankton chlorophyll-a concentration from space. The data to be inverted, namely, vectors of top-of-atmosphere reflectance in the solar spectrum, are treated as explanatory variables conditioned by angular geometry. This approach leads to a continuum of inverse problems, i.e., a collection of similar inverse problems continuously indexed by the angular variables. The resolution of the continuum of inverse problems is studied from the least-squares viewpoint and yields a solution expressed as a function field over the set of permitted values for the angular variables, i.e., a map defined on that set and valued in a subspace of a function space. The function fields of interest, for reasons of approximation theory, are those valued in nested sequences of subspaces, such as ridge function approximation spaces, the union of which is dense. Ridge function fields constructed on synthetic yet realistic data for case I waters handle well situations of both weakly and strongly absorbing aerosols, and they are robust to noise, showing improvement in accuracy compared with classic inversion techniques. The methodology is applied to actual imagery from the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS); noise in the data is taken into account. The chlorophyll-a concentration obtained with the function field methodology differs from that obtained by use of the standard SeaWiFS algorithm by 15.7% on average. The results empirically validate the underlying hypothesis that the inversion is solved in a least-squares sense. They also show that large levels of noise can be managed if the noise distribution is known or estimated.

  13. Optical spectroscopy of nanoscale and heterostructured oxides

    NASA Astrophysics Data System (ADS)

    Senty, Tess R.

    Through careful analysis of a material's properties, devices are continually getting smaller, faster and more efficient each day. Without a complete scientific understanding of material properties, devices cannot continue to improve. This dissertation uses optical spectroscopy techniques to understand light-matter interactions in several oxide materials with promising uses mainly in light harvesting applications. Linear absorption, photoluminescence and transient absorption spectroscopy are primarily used on europium doped yttrium vanadate nanoparticles, copper gallium oxide delafossites doped with iron, and cadmium selenide quantum dots attached to titanium dioxide nanoparticles. Europium doped yttrium vanadate nanoparticles have promising applications for linking to biomolecules. Using Fourier-transform infrared spectroscopy, it was shown that organic ligands (benzoic acid, 3-nitro 4-chloro-benzoic acid and 3,4-dihydroxybenzoic acid) can be attached to the surface of these molecules using metal-carboxylate coordination. Photoluminescence spectroscopy displays little difference in the position of the dominant photoluminescence peaks between samples with different organic ligands, although there is a strong decrease in their intensity when 3,4-dihydroxybenzoic acid is attached. It is shown that this strong quenching is due to the presence of high-frequency hydroxide vibrational modes within the organic linker. Ultraviolet/visible linear absorption measurements on delafossites show that doping copper gallium oxide with iron allows the previously forbidden fundamental gap transition to be accessed. Using Tauc plots, it is shown that doping with iron lowers the bandgap from 2.8 eV for pure copper gallium oxide to 1.7 eV for samples with 1-5% iron doping. Using terahertz transient absorption spectroscopy measurements, it was also determined that doping with iron reduces the charge mobility relative to the pure delafossite samples. A comparison of cadmium selenide quantum dots, both with and without capping ligands, attached to titanium dioxide nanoparticles is performed using a new transient absorption analysis technique. Multiple exponential fit models were applied to the system and compared with the new inversion analysis technique. It is shown how the new inversion analysis can map out the charge carrier dynamics, providing carrier recombination rates and lifetimes as a function of carrier concentration, whereas the multiple exponential fit technique is not dependent on the carrier concentration. With the inversion analysis technique it is shown that capping ligands allow for increased charge transfer due to traps being passivated on the quantum dot surface.

  14. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
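
    The partitioned idea can be illustrated with a single split: invert the leading block and the Schur complement, then assemble the full inverse; the recursive algorithm applies the same step to the two smaller inversions. The Python sketch below shows one such step for a symmetric positive definite matrix, with an arbitrary split point and test matrix chosen purely for illustration.

        import numpy as np

        def partitioned_inverse(M, k):
            """One step of partitioned inversion for a symmetric positive-definite matrix.

            M is split as [[A, B], [B.T, D]] with A of size k; the inverse is assembled
            from A^{-1} and the Schur complement S = D - B.T A^{-1} B."""
            A, B, D = M[:k, :k], M[:k, k:], M[k:, k:]
            A_inv = np.linalg.inv(A)
            S_inv = np.linalg.inv(D - B.T @ A_inv @ B)
            top_left = A_inv + A_inv @ B @ S_inv @ B.T @ A_inv
            top_right = -A_inv @ B @ S_inv
            return np.block([[top_left, top_right], [top_right.T, S_inv]])

        rng = np.random.default_rng(7)
        X = rng.standard_normal((12, 12))
        M = X @ X.T + 12 * np.eye(12)                 # symmetric positive definite test matrix
        print(np.allclose(partitioned_inverse(M, 5), np.linalg.inv(M)))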

  15. Numerical convergence and validation of the DIMP inverse particle transport model

    DOE PAGES

    Nelson, Noel; Azmy, Yousry

    2017-09-01

    The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor, and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.

  16. Application of a moment tensor inversion code developed for mining-induced seismicity to fracture monitoring of civil engineering materials

    NASA Astrophysics Data System (ADS)

    Linzer, Lindsay; Mhamdi, Lassaad; Schumacher, Thomas

    2015-01-01

    A moment tensor inversion (MTI) code originally developed to compute source mechanisms from mining-induced seismicity data is now being used in the laboratory in a civil engineering research environment. Quantitative seismology methods designed for geological environments are being tested with the aim of developing techniques to assess and monitor fracture processes in structural concrete members such as bridge girders. In this paper, we highlight aspects of the MTI_Toolbox programme that make it applicable to performing inversions on acoustic emission (AE) data recorded by networks of uniaxial sensors. The influence of the configuration of a seismic network on the conditioning of the least-squares system and subsequent moment tensor results for a real, 3-D network are compared to a hypothetical 2-D version of the same network. This comparative analysis is undertaken for different cases: for networks consisting entirely of triaxial or uniaxial sensors; for both P and S-waves, and for P-waves only. The aim is to guide the optimal design of sensor configurations where only uniaxial sensors can be installed. Finally, the findings of recent laboratory experiments where the MTI_Toolbox has been applied to a concrete beam test are presented and discussed.

  17. Moment Tensor Analysis of Shallow Sources

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.; Yoo, S. H.

    2015-12-01

    A potential issue for moment tensor inversion of shallow seismic sources is that some moment tensor components have vanishing amplitudes at the free surface, which can result in bias in the moment tensor solution. The effects of the free surface on the stability of the moment tensor method become important as we continue to investigate and improve the capabilities of regional full moment tensor inversion for source-type identification and discrimination. It is important to understand these free surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have shallow seismicity such as volcanoes and geothermal systems. In this study, we apply the moment tensor based discrimination method to the HUMMING ALBATROSS quarry blasts. These shallow chemical explosions, detonated at approximately 10 m depth and recorded at distances of up to several kilometers, represent a rather severe source-station geometry in terms of vanishing traction issues. We show that the method is capable of recovering a predominantly explosive source mechanism, and the combined waveform and first motion method enables the unique discrimination of these events. Recovering the correct yield using seismic moment estimates from moment tensor inversion remains challenging but we can begin to put error bounds on our moment estimates using the NSS technique.

  18. Electroencephalographic inverse localization of brain activity in acute traumatic brain injury as a guide to surgery, monitoring and treatment

    PubMed Central

    Irimia, Andrei; Goh, S.-Y. Matthew; Torgerson, Carinna M.; Stein, Nathan R.; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.

    2013-01-01

    Objective To inverse-localize epileptiform cortical electrical activity recorded from severe traumatic brain injury (TBI) patients using electroencephalography (EEG). Methods Three acute TBI cases were imaged using computed tomography (CT) and multimodal magnetic resonance imaging (MRI). Semi-automatic segmentation was performed to partition the complete TBI head into 25 distinct tissue types, including 6 tissue types accounting for pathology. Segmentations were employed to generate a finite element method model of the head, and EEG activity generators were modeled as dipolar currents distributed over the cortical surface. Results We demonstrate anatomically faithful localization of EEG generators responsible for epileptiform discharges in severe TBI. By accounting for injury-related tissue conductivity changes, our work offers the most realistic implementation currently available for the inverse estimation of cortical activity in TBI. Conclusion Whereas standard localization techniques are available for electrical activity mapping in uninjured brains, they are rarely applied to acute TBI. Modern models of TBI-induced pathology can inform the localization of epileptogenic foci, improve surgical efficacy, contribute to the improvement of critical care monitoring and provide guidance for patient-tailored treatment. With approaches such as this, neurosurgeons and neurologists can study brain activity in acute TBI and obtain insights regarding injury effects upon brain metabolism and clinical outcome. PMID:24011495

  19. Electroencephalographic inverse localization of brain activity in acute traumatic brain injury as a guide to surgery, monitoring and treatment.

    PubMed

    Irimia, Andrei; Goh, S-Y Matthew; Torgerson, Carinna M; Stein, Nathan R; Chambers, Micah C; Vespa, Paul M; Van Horn, John D

    2013-10-01

    To inverse-localize epileptiform cortical electrical activity recorded from severe traumatic brain injury (TBI) patients using electroencephalography (EEG). Three acute TBI cases were imaged using computed tomography (CT) and multimodal magnetic resonance imaging (MRI). Semi-automatic segmentation was performed to partition the complete TBI head into 25 distinct tissue types, including 6 tissue types accounting for pathology. Segmentations were employed to generate a finite element method model of the head, and EEG activity generators were modeled as dipolar currents distributed over the cortical surface. We demonstrate anatomically faithful localization of EEG generators responsible for epileptiform discharges in severe TBI. By accounting for injury-related tissue conductivity changes, our work offers the most realistic implementation currently available for the inverse estimation of cortical activity in TBI. Whereas standard localization techniques are available for electrical activity mapping in uninjured brains, they are rarely applied to acute TBI. Modern models of TBI-induced pathology can inform the localization of epileptogenic foci, improve surgical efficacy, contribute to the improvement of critical care monitoring and provide guidance for patient-tailored treatment. With approaches such as this, neurosurgeons and neurologists can study brain activity in acute TBI and obtain insights regarding injury effects upon brain metabolism and clinical outcome. Published by Elsevier B.V.

  20. Propeller sheet cavitation noise source modeling and inversion

    NASA Astrophysics Data System (ADS)

    Lee, Keunhwa; Lee, Jaehyuk; Kim, Dongho; Kim, Kyungseop; Seong, Woojae

    2014-02-01

    Propeller sheet cavitation is the main contributor to high levels of noise and vibration in the after body of a ship. Full measurement of the cavitation-induced hull pressure over the entire surface of the affected area is desired but not practical. Therefore, using a few measurements on the outer hull above the propeller in a cavitation tunnel, empirical or semi-empirical techniques based on physical models have been used to predict the hull-induced pressure (or hull-induced force). In this paper, with the analytic source model for sheet cavitation, a multi-parameter inversion scheme to find the positions of noise sources and their strengths is suggested. The inversion is posed as a nonlinear optimization problem, which is solved with an optimizer based on an adaptive simplex simulated annealing algorithm. Then, the resulting hull pressure can be modeled with a boundary element method from the inverted cavitation noise sources. The suggested approach is applied to the hull pressure data measured in a cavitation tunnel of the Samsung Heavy Industry. Two monopole sources are adequate to model the propeller sheet cavitation noise. The inverted source information is consistent with the cavitation dynamics of the propeller, and the modeled hull pressure shows good agreement with cavitation tunnel experimental data.

  1. Inverse problem analysis for identification of reaction kinetics constants in microreactors for biodiesel synthesis

    NASA Astrophysics Data System (ADS)

    Pontes, P. C.; Naveira-Cotta, C. P.

    2016-09-01

    The theoretical analysis for the design of microreactors in biodiesel production is a complicated task due to the complex liquid-liquid flow and mass transfer processes, and the transesterification reaction that takes place within these microsystems. Thus, computational simulation is an important tool that aids in understanding the physical-chemical phenomena and, consequently, in determining the suitable conditions that maximize the conversion of triglycerides during the biodiesel synthesis. A diffusive-convective-reactive coupled nonlinear mathematical model, which governs the mass transfer process during the transesterification reaction in parallel-plate microreactors under isothermal conditions, is here described. A hybrid numerical-analytical solution via the Generalized Integral Transform Technique (GITT) for this partial differential system is developed and the eigenfunction expansion convergence rates are extensively analyzed and illustrated. The heuristic method of Particle Swarm Optimization (PSO) is applied in the inverse analysis of the proposed direct problem, to estimate the reaction kinetics constants, which is a critical step in the design of such microsystems. The results present a good agreement with the limited experimental data in the literature, and indicate that the GITT methodology combined with the PSO approach provides a reliable computational algorithm for direct-inverse analysis in such reactive mass transfer problems.
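
    The following is a minimal, generic global-best PSO sketch with a toy objective (recovering a single first-order rate constant from synthetic conversion data); in the actual study the objective would call the GITT solution of the coupled transport-reaction model, which is not reproduced here. Bounds, swarm size and tuning constants are placeholders.

    ```python
    import numpy as np

    def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal global-best particle swarm optimizer over box-constrained parameters."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, float).T
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, *x.shape))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    # Toy inverse problem: recover a first-order rate constant from noisy conversion data.
    t = np.linspace(0.0, 10.0, 20)
    k_true = 0.35
    data = 1.0 - np.exp(-k_true * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
    misfit = lambda p: np.sum((1.0 - np.exp(-p[0] * t) - data) ** 2)
    k_est, _ = pso_minimize(misfit, bounds=[(0.01, 2.0)])
    ```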

  2. Three-dimensional artificial spin ice in nanostructured Co on an inverse opal-like lattice

    NASA Astrophysics Data System (ADS)

    Mistonov, A. A.; Grigoryeva, N. A.; Chumakova, A. V.; Eckerlebe, H.; Sapoletova, N. A.; Napolskii, K. S.; Eliseev, A. A.; Menzel, D.; Grigoriev, S. V.

    2013-06-01

    The evolution of the magnetic structure for an inverse opal-like structure under an applied magnetic field is studied by small-angle neutron scattering. The samples were produced by filling the voids of an artificial opal film with Co. It is shown that the local configuration of magnetization is inhomogeneous over the basic element of the inverse opal-like lattice structure (IOLS) but follows its periodicity. Applying the “ice-rule” concept to the structure, we describe the local magnetization of this ferromagnetic three-dimensional lattice. We have developed a model of the remagnetization process predicting the occurrence of an unusual perpendicular component of the magnetization in the IOLS which is defined only by the direction and strength of the applied magnetic field.

  3. Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3

    NASA Astrophysics Data System (ADS)

    Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.

    2007-05-01

    In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this application, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value for the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic regions of the ionosphere such as the Equatorial region) could significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of this technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function, where the shape function carries all the height dependency while the VTEC data keeps the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on height and to use VTEC information to take into account the horizontal variation, rather than considering spherical symmetry in the electron density function as in the classical approach of the Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful tool to obtain a vertical description of the ionospheric electron density (see García-Fernández et al. 2003), a natural next step is to extend the use of this technique to the recently available COSMIC data. The COSMIC satellite constellation, formed by 6 micro-satellites, has been deployed since April 2006 in circular orbit around the Earth, with a final altitude of about 700-800 kilometers. Its global and almost uniform coverage will overcome one of the main limitations of this technique, namely the sparsity of data related to the lack of GPS receivers in some regions. This can significantly stimulate the development of radio occultation techniques, with the huge volume of data provided by the COSMIC constellation to be processed and analysed, updating the current knowledge of the ionosphere's nature and behaviour. In this context, a summary of the improved Abel transform inversion technique and the first results based on COSMIC constellation data are presented. Moreover, future improvements, taking into account the higher temporal and global spatial coverage, are discussed. References: M. Hernández-Pajares, J. M. Juan and J. Sanz, Improving the Abel inversion by adding ground GPS data to LEO radio occultations in ionospheric sounding, Geophysical Research Letters, Vol. 27, No. 16, pp. 2473-2476, 2000. M. García-Fernández, M. Hernández-Pajares, M. Juan and J. Sanz, Improvement of ionospheric electron density estimation with GPSMET occultations using Abel inversion and VTEC information, Journal of Geophysical Research, Vol. 108, No. A9, 1338, doi:10.1029/2003JA009952, 2003.
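
    A minimal sketch of the separability idea, assuming the ray geometry has already been discretized into height shells and that VTEC values along each ray are available from ground GPS maps; matrix and variable names are hypothetical.

    ```python
    import numpy as np

    def retrieve_shape_function(stec, path_lengths, vtec_along_ray):
        """Least-squares retrieval of the electron-density shape function F(h).

        Separability assumes Ne(h, lat, lon) = VTEC(lat, lon) * F(h), so each
        occultation ray gives  STEC_i = sum_j L_ij * VTEC_ij * F_j,
        which is linear in the unknown shape-function samples F_j.

        stec           : (n_rays,) observed slant TEC values
        path_lengths   : (n_rays, n_heights) ray-path length within each height shell
        vtec_along_ray : (n_rays, n_heights) VTEC (e.g. from ground GPS maps) at the
                         horizontal location where each ray crosses each shell
        """
        A = path_lengths * vtec_along_ray      # elementwise product builds the operator
        F, *_ = np.linalg.lstsq(A, stec, rcond=None)
        return F
    ```

    The three-dimensional electron density is then reassembled as Ne = VTEC × F at any location of interest.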

  4. Further results on open-loop compensation of rate-dependent hysteresis in a magnetostrictive actuator with the Prandtl-Ishlinskii model

    NASA Astrophysics Data System (ADS)

    Al Janaideh, Mohammad; Aljanaideh, Omar

    2018-05-01

    Apart from output-input hysteresis loops, magnetostrictive actuators also exhibit asymmetry and saturation, particularly under moderate to large magnitude inputs and at relatively high frequencies. Such nonlinear input-output characteristics can be effectively characterized by a rate-dependent Prandtl-Ishlinskii model in conjunction with a function of deadband operators. In this study, an inverse model is formulated to seek real-time compensation of the rate-dependent and asymmetric hysteresis nonlinearities of a Terfenol-D magnetostrictive actuator. The inverse model is formulated from the inverse of the rate-dependent Prandtl-Ishlinskii model, which satisfies the threshold dilation condition, together with the inverse of the deadband function. The inverse model was applied as a feedforward compensator, first to the hysteresis model and subsequently to the actuator hardware, to study its potential for compensating rate-dependent and asymmetric hysteresis loops. The experimental results, obtained under harmonic and complex harmonic inputs, further revealed that the inverse compensator can substantially suppress the hysteresis and output asymmetry nonlinearities over the entire frequency range considered in the study.
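
    The compensator in the paper is the analytical inverse of a rate-dependent Prandtl-Ishlinskii model combined with a deadband inverse; that construction is not reproduced here. The sketch below only illustrates the classical rate-independent building block, the play operator, and its weighted superposition, with hypothetical thresholds and weights.

    ```python
    import numpy as np

    def play_operator(x, r, y0=0.0):
        """Rate-independent play (backlash) operator with threshold r."""
        y = np.empty_like(x, dtype=float)
        prev = y0
        for k, xk in enumerate(x):
            prev = max(xk - r, min(xk + r, prev))
            y[k] = prev
        return y

    def prandtl_ishlinskii(x, thresholds, weights):
        """Classical PI hysteresis model: a weighted superposition of play operators."""
        return sum(w * play_operator(x, r) for w, r in zip(weights, thresholds))

    # A sinusoidal input traces a hysteresis loop in the (x, y) plane.
    t = np.linspace(0.0, 2.0 * np.pi, 500)
    x = np.sin(t)
    y = prandtl_ishlinskii(x, thresholds=[0.0, 0.2, 0.4], weights=[1.0, 0.6, 0.3])
    ```

    For the classical model with positive weights, the inverse is again a Prandtl-Ishlinskii model with transformed (dilated) thresholds and weights, which is what makes an analytical feedforward compensator possible.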

  5. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
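
    Of the two sparse solvers mentioned, orthogonal matching pursuit is the simpler to sketch; the version below assumes a generic dictionary matrix A mapping candidate source amplitudes to array data and is not tied to the paper's particular beamforming setup.

    ```python
    import numpy as np

    def omp(A, b, n_nonzero):
        """Orthogonal matching pursuit: greedy sparse solution of A x ~ b."""
        residual = b.copy()
        support, x = [], np.zeros(A.shape[1])
        coef = np.zeros(0)
        for _ in range(n_nonzero):
            # Select the dictionary column most correlated with the residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # Re-fit all selected amplitudes jointly by least squares.
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ coef
        x[support] = coef
        return x
    ```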

  6. Research on ionospheric tomography based on variable pixel height

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui

    2016-05-01

    A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the results of the statistical analysis of the average root mean square errors, the proposed model offers an improvement of 15% compared with conventional CIT models.

  7. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
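
    A minimal sketch of the minimum weighted-norm construction referred to above, assuming the source-receptor sensitivity matrix and a weight matrix are given; the iterative choice of weights that enforces the renormalization condition is not reproduced here.

    ```python
    import numpy as np

    def minimum_weighted_norm(A, mu, W):
        """Minimum weighted-norm source estimate s = W^-1 A^T (A W^-1 A^T)^-1 mu.

        A  : (n_obs, n_grid) source-receptor (adjoint transport) matrix
        mu : (n_obs,) measured concentrations
        W  : (n_grid, n_grid) positive-definite weight matrix; in the
             renormalization method the weights are chosen so that a
             renormalization condition on the resolution is satisfied
        """
        Winv_At = np.linalg.solve(W, A.T)            # W^-1 A^T
        return Winv_At @ np.linalg.solve(A @ Winv_At, mu)
    ```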

  8. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
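
    A sketch of the two ingredients such an inversion needs: a standard 1-D MT forward model (the Wait/Schmucker layered-earth recursion) and a sampler. For brevity a plain random-walk Metropolis sampler stands in for NUTS, which additionally requires gradients of the log posterior; layer thicknesses, priors and step sizes would be problem-specific.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def mt1d_forward(resistivities, thicknesses, freqs):
        """1-D magnetotelluric forward model (layered-earth impedance recursion).
        Returns apparent resistivity (ohm.m) and phase (degrees) per frequency."""
        rho_a, phase = [], []
        for f in freqs:
            w = 2.0 * np.pi * f
            k = np.sqrt(1j * w * MU0 / np.asarray(resistivities, dtype=complex))
            c = 1.0 / k[-1]                       # halfspace transfer function
            for j in range(len(thicknesses) - 1, -1, -1):
                t = np.tanh(k[j] * thicknesses[j])
                c = (1.0 / k[j]) * (k[j] * c + t) / (1.0 + k[j] * c * t)
            z = 1j * w * MU0 * c                  # surface impedance
            rho_a.append(np.abs(z) ** 2 / (w * MU0))
            phase.append(np.degrees(np.angle(z)))
        return np.array(rho_a), np.array(phase)

    def metropolis(log_post, x0, n_samples=5000, step=0.1, seed=0):
        """Plain random-walk Metropolis sampler (a simple stand-in for NUTS)."""
        rng = np.random.default_rng(seed)
        x, lp = np.array(x0, float), log_post(np.array(x0, float))
        chain = []
        for _ in range(n_samples):
            cand = x + step * rng.normal(size=x.size)
            lp_c = log_post(cand)
            if np.log(rng.random()) < lp_c - lp:
                x, lp = cand, lp_c
            chain.append(x.copy())
        return np.array(chain)
    ```

    A log posterior would combine a Gaussian data misfit on log apparent resistivity and phase with the probabilistic smoothing prior on adjacent layer resistivities described in the record.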

  9. Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard

    1991-08-01

    Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, like the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole. Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnétique, Commissariat à l'Energie Atomique, Saclay, France.

  10. On the recovery of missing low and high frequency information from bandlimited reflectivity data

    NASA Astrophysics Data System (ADS)

    Sacchi, M. D.; Ulrych, T. J.

    2007-12-01

    During the last two decades, an important effort in the seismic exploration community has been made to retrieve broad-band seismic data by means of deconvolution and inversion. In general, the problem can be stated as a spectral reconstruction problem. In other words, given limited spectral information about the earth's reflectivity sequence, one attempts to create a broadband estimate of the Fourier spectra of the unknown reflectivity. Techniques based on the principle of parsimony can be effectively used to retrieve a sparse spike sequence and, consequently, a broad-band signal. Alternatively, continuation methods, e.g., autoregressive modeling, can be used to extrapolate the recorded bandwidth of the seismic signal. The goal of this paper is to examine under what conditions the recovery of low and high frequencies from band-limited and noisy signals is possible. At the heart of the methods we discuss is the celebrated non-Gaussian assumption so important in many modern signal processing methods, such as ICA, for example. Spectral recovery from limited information tends to work when the reflectivity consists of a few well isolated events. Results degrade with the number of reflectors, decreasing SNR and decreasing bandwidth of the source wavelet. Constraints and information-based priors can be used to stabilize the recovery but, as in all inverse problems, the solution is nonunique and effort is required to understand the level of recovery that is achievable, always keeping the physics of the problem in mind. We provide in this paper a survey of methods to recover broad-band reflectivity sequences and examine the role that these techniques can play in processing and inversion as applied to exploration and global seismology.
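
    As one concrete example of the continuation methods mentioned, the sketch below fits an autoregressive model to the reliable part of the complex reflectivity spectrum and extrapolates it to neighbouring frequencies; the model order, and the implicit assumption that the spectrum behaves as a sum of complex exponentials (i.e., a sparse-spike reflectivity), are hypothetical choices.

    ```python
    import numpy as np

    def ar_fit(spectrum, order):
        """Least-squares forward-prediction AR fit: X[n] ~ sum_k a[k] * X[n-1-k]."""
        rows = [spectrum[n - 1::-1][:order] for n in range(order, len(spectrum))]
        A = np.array(rows)
        a, *_ = np.linalg.lstsq(A, spectrum[order:], rcond=None)
        return a

    def ar_extrapolate(spectrum, a, n_extra):
        """Extend the known band upward by recursive AR prediction; extending it
        downward works the same way on the frequency-reversed samples."""
        x = list(spectrum)
        order = len(a)
        for _ in range(n_extra):
            x.append(np.dot(a, x[-1:-order - 1:-1]))
        return np.array(x)
    ```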

  11. Polymer tensiometer with ceramic cones: a case study for a Brazilian soil.

    NASA Astrophysics Data System (ADS)

    Durigon, A.; de Jong van Lier, Q.; van der Ploeg, M. J.; Gooren, H. P. A.; Metselaar, K.; de Rooij, G. H.

    2009-04-01

    Laboratory outflow experiments, in combination with inverse modeling techniques, allow the retention and hydraulic conductivity functions to be determined simultaneously. A numerical model solves the pressure-head-based form of the Richards equation for unsaturated flow in a rigid porous medium. Applying adequate boundary conditions, the cumulative outflow is calculated at prescribed times as a function of the set of optimized parameters. These parameters are evaluated by nonlinear least-squares fitting of predicted to observed cumulative outflow over time. An objective function quantifies the difference between calculated and observed cumulative outflow and between predicted and measured soil water retention data. Using outflow data only in the objective function, the multistep outflow method results in unique estimates of the retention and hydraulic conductivity functions. To obtain more reliable estimates of the hydraulic conductivity as a function of the water content using the inverse method, the outflow data must be supplemented with soil retention data. To do so, tensiometers filled with a polymer solution instead of water were used. The measurement range of these tensiometers is larger than that of conventional tensiometers, and they are able to measure the entire pressure head range over which crops take up water, down to values in the order of -1.6 MPa. The objective of this study was to physically characterize a Brazilian red-yellow oxisol using polymer tensiometer measurements in outflow experiments and to process these data with the inverse modeling technique, for use in the analysis of a field experiment and in modeling. The soil was collected at an experimental site located in Piracicaba, Brazil, 22°42'S, 47°38'W, 550 m above sea level.

  12. Acoustic classification of zooplankton

    NASA Astrophysics Data System (ADS)

    Martin Traykovski, Linda V.

    1998-11-01

    Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well understood. This thesis describes the development of both feature-based and model-based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature-based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model-based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model-based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature-based and model-based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and in the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys.

  13. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wavenumber technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty in the adopted crustal model. Finally, we apply this inverse technique to the well recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, in contrast to previous studies, we image a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.

  14. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models can avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces in the observed and calculated data are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  15. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods, single-attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher resolution porosity sections. A low impedance (6000-8000 (m/s)(g/cc)) and high porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as the reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.

  16. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    PubMed

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations use the Austrian Gaussian regulatory dispersion model. Following this method, emission data can be obtained, though only for periods when the measurement station lies leeward (downwind) of the plant. Using inverse transform sampling, which is a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
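
    A minimal sketch of inverse transform sampling from an empirical cumulative distribution, as could be used to generate synthetic emission rates for wind directions lacking measurements; the numerical values are hypothetical and the per-wind-sector conditioning used in the study is omitted.

    ```python
    import numpy as np

    def inverse_transform_sample(values, cdf, n, seed=0):
        """Draw n samples from an empirical CDF by inverse transform sampling."""
        rng = np.random.default_rng(seed)
        u = rng.random(n)                      # uniform deviates in [0, 1)
        return np.interp(u, cdf, values)       # map them through the inverse CDF

    # Hypothetical emission rates obtained for leeward wind directions define
    # the empirical distribution; synthetic rates fill the remaining sectors.
    observed_rates = np.sort(np.array([0.2, 0.5, 0.8, 1.1, 1.6, 2.4]))  # kg/h
    cdf = np.arange(1, observed_rates.size + 1) / observed_rates.size
    synthetic_rates = inverse_transform_sample(observed_rates, cdf, n=1000)
    ```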

  17. Evaluation of Uncertainties in Measuring Particulate Matter Emission Factors from Atmospheric Fugitive Sources Using Optical Remote Sensing

    NASA Astrophysics Data System (ADS)

    Yuen, W.; Ma, Q.; Du, K.; Koloutsou-Vakakis, S.; Rood, M. J.

    2015-12-01

    Measurements of particulate matter (PM) emissions generated from fugitive sources are of interest in air pollution studies, since such emissions vary widely both spatially and temporally. This research focuses on determining the uncertainties in quantifying fugitive PM emission factors (EFs) generated from mobile vehicles using a vertical scanning micro-pulse lidar (MPL). The goal of this research is to identify the greatest sources of uncertainty of the applied lidar technique in determining fugitive PM EFs, and to recommend methods to reduce the uncertainties in this measurement. The MPL detects the PM plume generated by mobile fugitive sources that is carried downwind to the MPL's vertical scanning plane. Range-resolved MPL signals are measured, corrected, and converted to light extinction coefficients through inversion of the lidar equation and calculation of the lidar ratio. In this research, both the near-end and far-end lidar equation inversion methods are considered. Range-resolved PM mass concentrations are then determined from the extinction coefficient measurements using the measured mass extinction efficiency (MEE) value, which is an intensive PM property. MEE is determined by collocated PM mass concentration and light extinction measurements, provided respectively by a DustTrak and an open-path laser transmissometer. These PM mass concentrations are then integrated with wind information, the duration of the plume event, and the vehicle distance travelled to obtain fugitive PM EFs. To obtain the uncertainty of the PM EFs, uncertainties in the MPL signals, lidar ratio, MEE, and wind variation are considered. The error propagation method is applied to each of the above intermediate steps to aggregate the uncertainty sources. Results include the determination of uncertainties in each intermediate step, and a comparison of uncertainties between the use of the near-end and far-end lidar equation inversion methods.

  18. Models of brachial to finger pulse wave distortion and pressure decrement.

    PubMed

    Gizdulich, P; Prentza, A; Wesseling, K H

    1997-03-01

    The aim of this study was to model the pulse wave distortion and pressure decrement occurring between the brachial and finger arteries, and to revert the distortion and correct the decrement. Brachial artery pressure was recorded intra-arterially and finger pressure was recorded non-invasively by the Finapres technique in 53 adult human subjects. Mean pressure was subtracted from each pressure waveform and Fourier analysis was applied to the pulsations. A distortion model was estimated for each subject and averaged over the group. The average inverse model was applied to the full finger pressure waveform. The pressure decrement was modelled by multiple regression on finger systolic and diastolic levels. Waveform distortion could be described by a general, frequency-dependent model having a resonance at 7.3 Hz. The general inverse model has an anti-resonance at this frequency. It converts finger to brachial pulsations, thereby reducing average waveform distortion from 9.7 (s.d. 3.2) mmHg per sample for the finger pulse to 3.7 (1.7) mmHg for the converted pulse. Systolic and diastolic level differences between finger and brachial arterial pressures changed from -4 (15) and -8 (11) to +8 (14) and +8 (12) mmHg, respectively, after inverse modelling, with pulse pressures correct on average. The pressure decrement model reduced both the mean and the standard deviation of the systolic and diastolic level differences to 0 (13) and 0 (8) mmHg. Diastolic differences were thus reduced most. Brachial to finger pulse wave distortion due to wave reflection in arteries is almost identical in all subjects and can be modelled by a single resonance. The pressure decrement due to flow in arteries is greatest for high pulse pressures superimposed on low means.

  19. Evaluation of Observation-Fused Regional Air Quality Model Results for Population Air Pollution Exposure Estimation

    PubMed Central

    Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline

    2014-01-01

    In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse distance weighting based method was applied to produce exposure estimates based on observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusion technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRR regions are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated by the inverse-distance-interpolated observations, raw CMAQ and fused CMAQ results can differ significantly. A population-weighted average should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
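
    A minimal sketch of the observation-fusion step, assuming an inverse-distance-weighted interpolation of observation-minus-model residuals added back onto the raw model field; the coordinate arrays and the distance power are placeholders.

    ```python
    import numpy as np

    def idw_fusion(grid_xy, model_grid, monitor_xy, monitor_obs, model_at_monitors,
                   power=2.0):
        """Observation-fused field: the raw model field plus inverse-distance-weighted
        interpolation of the (observation - model) residuals at monitor locations.

        grid_xy    : (n_cells, 2) grid-cell coordinates
        monitor_xy : (n_mon, 2) monitor coordinates
        """
        d = np.linalg.norm(grid_xy[:, None, :] - monitor_xy[None, :, :], axis=-1)
        d = np.maximum(d, 1e-6)                 # guard against zero distance
        w = 1.0 / d ** power
        w /= w.sum(axis=1, keepdims=True)       # normalize weights per grid cell
        residual = monitor_obs - model_at_monitors
        return model_grid + w @ residual
    ```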

  20. The importance of ground magnetic data in specifying the state of magnetosphere-ionosphere coupling: a personal view

    NASA Astrophysics Data System (ADS)

    Kamide, Y.; Balan, Nanan

    2016-12-01

    In the history of geomagnetism, geoelectricity and space science, including solar-terrestrial physics, ground magnetic records have been demonstrated to be a powerful tool for monitoring the levels of overall geomagnetic activity. For example, the Kp and ap indices, perhaps the geomagnetic indices with the longest history, have been and are being used as space weather parameters, where "p" stands for "planetary", implying that these indices express average geomagnetic disturbances over the entire Earth on a planetary scale. To quantify the intensity level of geomagnetic storms, however, it is common to rely on the Dst index, which is supposed to show the magnitude of the storm-time ring current. Efforts were also made to inter-calibrate the various activity indices. Different indices were proposed to express different aspects of phenomena in the near-Earth space. In the early 1980s, several research groups in Japan, Russia, Europe and the US independently developed the so-called magnetogram-inversion techniques. Subsequent improvements of the magnetogram-inversion algorithms allowed the technology to be applied to a number of different datasets for magnetospheric convection and substorms. In the present review, we demonstrate how important it is to make full use of ground magnetic data covering a large extent in both latitudinal and longitudinal directions. It is now possible to map a number of electrodynamic parameters in the polar ionosphere on an instantaneous basis. By applying these new inverse methods to a number of ground-based geomagnetic observations, it was found that two basic elements in the spatial patterns can be viewed as two physical processes for solar wind-magnetosphere energy coupling.

  1. Characterization of the passive and active material parameters of the pubovisceralis muscle using an inverse numerical method.

    PubMed

    Silva, M E T; Parente, M P L; Brandão, S; Mascarenhas, T; Natal Jorge, R M

    2018-04-11

    The mechanical characteristics of the female pelvic floor are relevant to understanding pelvic floor dysfunctions (PFD) and how they are related to changes in biomechanical behavior. Urinary incontinence (UI) and pelvic organ prolapse (POP) are the most common pathologies, and they can be associated with changes in the mechanical properties of the supportive structures in the female pelvic cavity. PFD have been studied through different methods, ranging from experimental tensile tests using tissues from fresh female cadavers or tissues collected at the time of a transvaginal hysterectomy procedure, to imaging techniques. In this work, an inverse finite element analysis (FEA) was applied to understand the passive and active behavior of the pubovisceralis muscle (PVM) during the Valsalva maneuver and muscle active contraction, respectively. Individual numerical models of women without pathology, with stress UI (SUI) and with POP were built based on magnetic resonance images, including the PVM and surrounding structures. The passive and active material parameters of a transversely isotropic hyperelastic constitutive model were estimated for the three groups. The values of the material constants were significantly higher for the women with POP when compared with the other two groups. The PVM of women with POP showed the highest stiffness. Additionally, the influence of these parameters was analyzed by evaluating the stress-strain and force-displacement responses. The force produced by the PVM in women with POP was 47% and 82% higher when compared to women without pathology and with SUI, respectively. The inverse FEA allowed estimation of the material parameters of the PVM using input information acquired non-invasively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Simulating contrast inversion in atomic force microscopy imaging with real-space pseudopotentials

    NASA Astrophysics Data System (ADS)

    Lee, Alex J.; Sakai, Yuki; Chelikowsky, James R.

    2017-02-01

    Atomic force microscopy (AFM) measurements have reported contrast inversions for systems such as Cu2N and graphene that can hamper image interpretation and characterization. Here, we apply a simulation method based on ab initio real-space pseudopotentials to gain an understanding of the tip-sample interactions that influence the inversion. We find that chemically reactive tips induce an attractive binding force that results in the contrast inversion. We find that the inversion is tip height dependent and not observed when using less reactive CO-functionalized tips.

  3. Ultrafast magnetic vortex core switching driven by the topological inverse Faraday effect.

    PubMed

    Taguchi, Katsuhisa; Ohe, Jun-ichiro; Tatara, Gen

    2012-09-21

    We present a theoretical discovery of an unconventional mechanism of inverse Faraday effect which acts selectively on topological magnetic structures. The effect, topological inverse Faraday effect, is induced by the spin Berry's phase of the magnetic structure when a circularly polarized light is applied. Thus a spin-orbit interaction is not necessary unlike that in the conventional inverse Faraday effect. We demonstrate by numerical simulation that topological inverse Faraday effect realizes ultrafast switching of a magnetic vortex within a switching time of 150 ps without magnetic field.

  4. Joint Application of Concentrations and Isotopic Signatures to Investigate the Global Atmospheric Carbon Monoxide Budget: Inverse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Emmons, L. K.; Mak, J. E.

    2007-12-01

    Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, to estimate the global CO budget, most top-down estimation techniques have applied the concentrations of CO alone. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain its sources. Thus, coupling the concentration and isotope information makes it possible to tightly constrain the CO flux from each source and allows better estimation of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. A tracer version of MOZART4, which tags C16O and C18O from each region and each source, was also developed to evaluate their contributions to the atmosphere efficiently. Based on the nine-year simulation results we analyze the influence of each CO source on the isotopic signature and the concentration. In particular, the evaluations focus on the oxygen isotope of CO (δ18O), which has not yet been extensively studied. To validate the model performance, CO concentrations and isotopic signatures measured by MPI, NIWA and our lab are compared with the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude northern hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations. However, previous studies show significant differences in their estimates of CO source strengths. Because isotopic signatures, in addition to the CO mixing ratio, are independent tracers that contain source information, jointly applying the isotope and concentration information is expected to provide more precise optimization results in CO budget estimation. Our accumulated long-term CO isotope measurement data also add confidence to the inversions. Beyond the benefit of adding isotope data to the inverse modeling, each isotope of CO (oxygen and carbon) offers a further advantage in top-down estimation of the CO budget. δ18O and δ13C each carry distinctive signatures for specific sources: combustion sources such as fossil fuel use show δ18O values clearly different from those of other natural sources, and the methane-derived source can be easily separated using δ13C information. Therefore, inversions of the two major sources of CO respond with different sensitivities to the different isotopes. To maximize the strengths of using isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O and δ13C have been investigated to enhance the credibility of the CO budget optimization.
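
    A minimal sketch of a linear Bayesian (synthesis) inversion in which CO mixing ratios and isotope observations are stacked into one observation vector; the Jacobian from tagged-tracer runs, the linearization of the delta values, and the covariance choices are all assumptions not specified in the record.

    ```python
    import numpy as np

    def bayesian_source_inversion(y, H, x_prior, B, R):
        """Linear Bayesian (synthesis) inversion for source strengths.

        y       : stacked observation vector (e.g., CO mixing ratios together with
                  linearized d18O and d13C signatures)
        H       : Jacobian mapping source strengths to observations, e.g. built
                  from tagged-tracer MOZART4 runs
        x_prior : prior (bottom-up) source strengths
        B, R    : prior and observation error covariance matrices
        """
        HtRinv = H.T @ np.linalg.inv(R)
        P = np.linalg.inv(HtRinv @ H + np.linalg.inv(B))   # posterior covariance
        x_post = x_prior + P @ HtRinv @ (y - H @ x_prior)  # posterior mean
        return x_post, P
    ```

    Stacking the isotope constraints simply appends rows to y, H and R, which is how the joint use of concentration and isotope information would tighten the posterior on combustion and methane-derived sources.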

  5. Joint Application of Concentrations and Isotopic Signatures to Investigate the Global Atmospheric Carbon Monoxide Budget: Inverse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Mak, J. E.; Emmons, L. K.

    2008-12-01

    Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, to estimate the global CO budget, most top-down estimation techniques have applied the concentrations of CO alone. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain its sources. Thus, coupling the concentration and isotope information makes it possible to tightly constrain the CO flux from each source and allows better estimation of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. A tracer version of MOZART4, which tags C16O and C18O from each region and each source, was also developed to evaluate their contributions to the atmosphere efficiently. Based on the nine-year simulation results we analyze the influence of each CO source on the isotopic signature and the concentration. In particular, the evaluations focus on the oxygen isotope of CO (δ18O), which has not yet been extensively studied. To validate the model performance, CO concentrations and isotopic signatures measured by MPI, NIWA and our lab are compared with the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude northern hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations. However, previous studies show significant differences in their estimates of CO source strengths. Because isotopic signatures, in addition to the CO mixing ratio, are independent tracers that contain source information, jointly applying the isotope and concentration information is expected to provide more precise optimization results in CO budget estimation. Our accumulated long-term CO isotope measurement data also add confidence to the inversions. Beyond the benefit of adding isotope data to the inverse modeling, each isotope of CO (oxygen and carbon) offers a further advantage in top-down estimation of the CO budget. δ18O and δ13C each carry distinctive signatures for specific sources: combustion sources such as fossil fuel use show δ18O values clearly different from those of other natural sources, and the methane-derived source can be easily separated using δ13C information. Therefore, inversions of the two major sources of CO respond with different sensitivities to the different isotopes. To maximize the strengths of using isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O and δ13C have been investigated to enhance the credibility of the CO budget optimization.

  6. Electrocaloric effect in BaTiO3 at all three ferroelectric transitions: Anisotropy and inverse caloric effects

    NASA Astrophysics Data System (ADS)

    Marathe, Madhura; Renggli, Damian; Sanlialp, Mehmet; Karabasov, Maksim O.; Shvartsman, Vladimir V.; Lupascu, Doru C.; Grünebohm, Anna; Ederer, Claude

    2017-07-01

    We study the electrocaloric (EC) effect in bulk BaTiO3 (BTO) using molecular dynamics simulations of a first principles-based effective Hamiltonian, combined with direct measurements of the adiabatic EC temperature change in BTO single crystals. We examine in particular the dependence of the EC effect on the direction of the applied electric field at all three ferroelectric transitions, and we show that the EC response is strongly anisotropic. Most strikingly, an inverse caloric effect, i.e., a temperature increase under field removal, can be observed at both ferroelectric-ferroelectric transitions for certain orientations of the applied field. Using the generalized Clausius-Clapeyron equation, we show that the inverse effect occurs exactly for those cases where the field orientation favors the higher temperature/higher entropy phase. Our simulations show that temperature changes of around 1 K can, in principle, be obtained at the tetragonal-orthorhombic transition close to room temperature, even for small applied fields, provided that the applied field is strong enough to drive the system across the first-order transition line. Our direct EC measurements for BTO single crystals at the cubic-tetragonal and at the tetragonal-orthorhombic transitions are in good qualitative agreement with our theoretical predictions, and in particular confirm the occurrence of an inverse EC effect at the tetragonal-orthorhombic transition for electric fields applied along the [001] pseudocubic direction.

  7. Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio

    2017-07-01

    This paper presents a dense ray tracing reconstruction technique for a single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio and the effect of the velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow which revealed that for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.
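
    A minimal sketch of the multiplicative algebraic reconstruction technique (MART) update itself; in DRT-MART the weight matrix would be restricted to the voxels pre-selected by the inverse dense ray tracing step, which is what yields the reported speed-up. The relaxation factor and matrix shapes here are placeholders.

    ```python
    import numpy as np

    def mart(W, p, n_iter=10, mu=0.5):
        """Multiplicative algebraic reconstruction technique.

        W : (n_rays, n_voxels) ray/voxel weighting matrix; in DRT-MART only the
            voxels pre-selected by inverse dense ray tracing carry nonzero weights,
            which shrinks this system considerably.
        p : (n_rays,) recorded pixel intensities
        mu: relaxation factor (0 < mu <= 1)
        """
        v = np.ones(W.shape[1])                        # start from a uniform volume
        for _ in range(n_iter):
            for i in range(W.shape[0]):
                proj = W[i] @ v                        # current projection of ray i
                if proj > 0.0:
                    v *= (p[i] / proj) ** (mu * W[i])  # voxels off this ray stay unchanged
        return v
    ```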

  8. Using remote sensing and GIS techniques to estimate discharge and recharge. fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.

  9. High-resolution resistivity imaging of marine gas hydrate structures by combined inversion of CSEM towed and ocean-bottom receiver data

    NASA Astrophysics Data System (ADS)

    Attias, Eric; Weitemeyer, Karen; Hölz, Sebastian; Naif, Samer; Minshull, Tim A.; Best, Angus I.; Haroon, Amir; Jegen-Kulcsar, Marion; Berndt, Christian

    2018-06-01

    We present high-resolution resistivity imaging of gas hydrate pipe-like structures, as derived from marine controlled-source electromagnetic (CSEM) inversions that combine towed and ocean-bottom electric field receiver data, acquired from the Nyegga region, offshore Norway. Two-dimensional CSEM inversions applied to the towed receiver data detected four new prominent vertical resistive features that are likely gas hydrate structures, located in proximity to a major gas hydrate pipe-like structure, known as the CNE03 pockmark. The resistivity model resulting from the CSEM data inversion resolved the CNE03 hydrate structure in high resolution, as inferred by comparison to seismically constrained inversions. Our results indicate that shallow gas hydrate vertical features can be delineated effectively by inverting both ocean-bottom and towed receiver CSEM data simultaneously. The approach applied here can be utilised to map and monitor seafloor mineralisation, freshwater reservoirs, CO2 sequestration sites and near-surface geothermal systems.

  10. Model selection and Bayesian inference for high-resolution seabed reflection inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2009-02-01

    This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
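
    A minimal sketch of the Bayesian information criterion used for the layer-number selection described above; the log-likelihood values for each trial model are assumed to come from separate maximum-likelihood fits, and the per-layer parameter count is a placeholder.

    ```python
    import numpy as np

    def bic(log_likelihood, n_params, n_data):
        """Bayesian information criterion: BIC = -2 ln L + k ln N (lower is better)."""
        return -2.0 * log_likelihood + n_params * np.log(n_data)

    def select_n_layers(log_likelihoods, params_per_layer, n_data):
        """Pick the number of sediment layers whose best-fit likelihood minimizes the BIC."""
        scores = [bic(ll, (i + 1) * params_per_layer, n_data)
                  for i, ll in enumerate(log_likelihoods)]
        return int(np.argmin(scores)) + 1, scores
    ```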

  11. Numerical methods for the inverse problem of density functional theory

    DOE PAGES

    Jensen, Daniel S.; Wasserman, Adam

    2017-07-17

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  12. Numerical methods for the inverse problem of density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Daniel S.; Wasserman, Adam

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  13. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    Earthquake observation is used routinely to monitor tectonic activity, and also at local scales such as volcano-tectonic and geothermal monitoring. Precise hypocenter determination requires finding the hypocenter location that minimizes the misfit between the observed and calculated travel times. For this nonlinear inverse problem, simulated annealing can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and a geothermal field. Travel times were calculated using a ray-tracing shooting method. We then compared the results with those from Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with geological structure in the study area. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
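    A minimal sketch of the idea, assuming a constant-velocity straight-ray travel-time model instead of the ray-tracing shooting method used in the study, and SciPy's generalized simulated annealing rather than the authors' adaptive Matlab code:

        import numpy as np
        from scipy.optimize import dual_annealing

        # Hypothetical station coordinates (km) and observed P travel times (s)
        stations = np.array([[0.0, 0.0, 0.0], [10.0, 2.0, 0.0], [4.0, 9.0, 0.0], [8.0, 8.0, 0.0]])
        t_obs = np.array([2.1, 1.4, 1.9, 1.3])
        v_p = 5.5   # assumed constant P velocity (km/s); the study uses ray tracing instead

        def rms_misfit(m):
            # m = (x, y, z, t0): hypocentre coordinates and origin time
            x, y, z, t0 = m
            dist = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
            t_calc = t0 + dist / v_p
            return np.sqrt(np.mean((t_obs - t_calc) ** 2))

        bounds = [(-20, 20), (-20, 20), (0, 30), (-5, 5)]
        result = dual_annealing(rms_misfit, bounds)
        print("hypocentre (x, y, z, t0):", result.x, " RMS misfit:", result.fun)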

  14. Next-generation seismic experiments: wide-angle, multi-azimuth, three-dimensional, full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Morgan, Joanna; Warner, Michael; Bell, Rebecca; Ashley, Jack; Barnes, Danielle; Little, Rachel; Roele, Katarina; Jones, Charles

    2013-12-01

    Full-waveform inversion (FWI) is an advanced seismic imaging technique that has recently become computationally feasible in three dimensions, and that is being widely adopted and applied by the oil and gas industry. Here we explore the potential for 3-D FWI, when combined with appropriate marine seismic acquisition, to recover high-resolution high-fidelity P-wave velocity models for subsedimentary targets within the crystalline crust and uppermost mantle. We demonstrate that FWI is able to recover detailed 3-D structural information within a radially faulted dome using a field data set acquired with a standard 3-D petroleum-industry marine acquisition system. Acquiring low-frequency seismic data is important for successful FWI; we show that current acquisition techniques can routinely acquire field data from airguns at frequencies as low as 2 Hz, and that 1 Hz acquisition is likely to be achievable using ocean-bottom hydrophones in deep water. Using existing geological and geophysical models, we construct P-wave velocity models over three potential subsedimentary targets: the Soufrière Hills Volcano on Montserrat and its associated crustal magmatic system, the crust and uppermost mantle across the continent-ocean transition beneath the Campos Basin offshore Brazil, and the oceanic crust and uppermost mantle beneath the East Pacific Rise mid-ocean ridge. We use these models to generate realistic multi-azimuth 3-D synthetic seismic data, and attempt to invert these data to recover the original models. We explore resolution and accuracy, sensitivity to noise and acquisition geometry, ability to invert elastic data using acoustic inversion codes, and the trade-off between low frequencies and starting velocity model accuracy. We show that FWI applied to multi-azimuth, refracted, wide-angle, low-frequency data can resolve features in the deep crust and uppermost mantle on scales that are significantly better than can be achieved by any other geophysical technique, and that these results can be obtained using relatively small numbers (60-90) of ocean-bottom receivers combined with large numbers of airgun shots. We demonstrate that multi-azimuth 3-D FWI is robust in the presence of noise, that acoustic FWI can invert elastic data successfully, and that the typical errors to be expected in starting models derived using traveltimes will not be problematic for FWI given appropriately designed acquisition. FWI is a rapidly maturing technology; its transfer from the petroleum sector to tackle a much broader range of targets now appears to be entirely achievable.

  15. Nonlinear Stimulated Raman Exact Passage by Resonance-Locked Inverse Engineering

    NASA Astrophysics Data System (ADS)

    Dorier, V.; Gevorgyan, M.; Ishkhanyan, A.; Leroy, C.; Jauslin, H. R.; Guérin, S.

    2017-12-01

    We derive an exact and robust stimulated Raman process for nonlinear quantum systems driven by pulsed external fields. The external fields are designed with closed-form expressions from the inverse engineering of a given efficient and stable dynamics. This technique allows one to induce a controlled population inversion which surpasses the usual nonlinear stimulated Raman adiabatic passage efficiency.

  16. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
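    A toy illustration (Python/SymPy; the polynomial system is invented for this example) of how computational algebra exposes non-uniqueness: an exactly determined polynomial inverse problem can still admit several distinct real solutions.

        import sympy as sp

        m1, m2 = sp.symbols('m1 m2', real=True)
        d1, d2 = 5, 2                                   # hypothetical "observed" data
        equations = [sp.Eq(m1**2 + m2**2, d1),          # polynomial forward relations
                     sp.Eq(m1 * m2, d2)]

        solutions = sp.solve(equations, [m1, m2], dict=True)
        print(len(solutions), "exact solutions:", solutions)
        # Four distinct real models reproduce the data exactly, so this
        # discretized inverse problem is non-unique despite being even-determined.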

  17. Formation, Physicochemical Characterization, and Thermodynamic Stability of the Amorphous State of Drugs and Excipients.

    PubMed

    Martino, Piera Di; Magnoni, Federico; Peregrina, Dolores Vargas; Gigliobianco, Maria Rosa; Censi, Roberta; Malaj, Ledjan

    2016-01-01

    Drugs and excipients used for pharmaceutical applications generally exist in the solid (crystalline or amorphous) state, more rarely as liquid materials. In some cases, according to the physicochemical nature of the molecule, or as a consequence of specific technological processes, a compound may exist exclusively in the amorphous state. In other cases, as a consequence of specific treatments (freezing and spray drying, melting and co-melting, grinding and compression), the crystalline form may convert into a completely or partially amorphous form. An amorphous material shows physical and thermodynamic properties different from the corresponding crystalline form, with profound repercussions on its technological performance and biopharmaceutical properties. Several physicochemical techniques such as X-ray powder diffraction, thermal methods of analysis, spectroscopic techniques, gravimetric techniques, and inverse gas chromatography can be applied to characterize the amorphous form of a compound (drug or excipient), and to evaluate its thermodynamic stability. This review offers a survey of the technologies used to convert a crystalline solid into an amorphous form, and describes the most important techniques for characterizing the amorphous state of compounds of pharmaceutical interest.

  18. Imaging of the native inversion layer in Silicon-On-Insulator wafers via Scanning Surface Photovoltage: Implications for RF device performance

    NASA Astrophysics Data System (ADS)

    Dahanayaka, Daminda; Wong, Andrew; Kaszuba, Philip; Moszkowicz, Leon; Slinkman, James; IBM SPV Lab Team

    2014-03-01

    Silicon-On-Insulator (SOI) technology has proved beneficial for RF cell phone technologies, which have equivalent performance to GaAs technologies. However, there is an evident parasitic inversion layer under the Buried Oxide (BOX) at its interface with the high-resistivity Si substrate. The latter is inferred from capacitance-voltage measurements on MOSCAPs. The inversion layer has adverse effects on RF device performance. We present data which, for the first time, show the extent of the inversion layer in the underlying substrate. This knowledge has driven processing techniques to suppress the inversion.

  19. Synthesis of nanostructured materials in inverse miniemulsions and their applications.

    PubMed

    Cao, Zhihai; Ziener, Ulrich

    2013-11-07

    Polymeric nanogels, inorganic nanoparticles, and organic-inorganic hybrid nanoparticles can be prepared via the inverse miniemulsion technique. Hydrophilic functional cargos, such as proteins, DNA, and macromolecular fluoresceins, may be conveniently encapsulated in these nanostructured materials. In this review, the progress of inverse miniemulsions since 2000 is summarized on the basis of the types of reactions carried out in inverse miniemulsions, including conventional free radical polymerization, controlled/living radical polymerization, polycondensation, polyaddition, anionic polymerization, catalytic oxidation reaction, sol-gel process, and precipitation reaction of inorganic precursors. In addition, the applications of the nanostructured materials synthesized in inverse miniemulsions are also reviewed.

  20. Creating Fidelitious Climate Data Records from Meteosat First Generation Observations

    NASA Astrophysics Data System (ADS)

    Quast, Ralf; Govaerts, Yves; Ruthrich, Frank; Giering, Ralf; Roebeling, Rob

    2016-08-01

    A novel method for reconstructing the spectral response function of the Meteosat visible (VIS) channels is presented and applied to the Meteosat-10 Spinning Enhanced Visible and Infrared Imager (SEVIRI) high-resolution visible (HRV) channel as the first real-world benchmark. The method incorporates advanced radiative transfer modelling and inverse modelling techniques. Once established, EUMETSAT will use the reconstructed spectral response and uncertainty information to increase the calibration accuracy of Meteosat First Generation VIS observations, which will provide the basis for the Fidelity and Uncertainty in Climate data records from Earth Observations (FIDUCEO) Horizon 2020 project to produce new fundamental (reflectance) and thematic (albedo and aerosol) climate data records.

  1. [The reconstruction of welding arc 3D electron density distribution based on Stark broadening].

    PubMed

    Zhang, Wang; Hua, Xue-Ming; Pan, Cheng-Gang; Li, Fang; Wang, Min

    2012-10-01

    The three-dimensional electron density is very important for welding arc quality control. In the present paper, side-on characteristic line profiles were collected with a spectrometer, and the lateral experimental data were approximated by polynomial fitting. By applying an Abel inversion technique, we obtained the radial intensity distribution at each wavelength and thus constructed the line profile at each radial position. The Fourier transform was used to separate the Lorentzian component from the reconstructed spectrum, yielding an accurate Stark width, from which we calculated the three-dimensional electron density distribution of the TIG welding arc plasma.
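    A minimal numerical Abel inversion in Python (the grid, discretization, and test profile are illustrative only; the paper combines this step with polynomial smoothing of the lateral data):

        import numpy as np

        def abel_invert(y, I):
            # Inverse Abel transform of a lateral intensity profile I(y):
            #   eps(r) = -(1/pi) * integral_r^R (dI/dy) / sqrt(y^2 - r^2) dy
            # evaluated with a simple midpoint rule to avoid the singularity at y = r.
            dIdy = np.gradient(I, y)
            eps = np.zeros_like(I)                    # outermost point is left at zero
            for i, r in enumerate(y[:-1]):
                ym = 0.5 * (y[i:-1] + y[i+1:])        # midpoints strictly above r
                dym = np.diff(y[i:])
                integrand = np.interp(ym, y, dIdy) / np.sqrt(ym**2 - r**2)
                eps[i] = -np.sum(integrand * dym) / np.pi
            return eps

        # quick check: a uniform cylinder eps = 1 for r < 1 gives I(y) = 2*sqrt(1 - y^2)
        y = np.linspace(0.0, 1.0, 200)
        I = 2.0 * np.sqrt(np.clip(1.0 - y**2, 0.0, None))
        print(abel_invert(y, I)[:5])                  # close to 1 near the axis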

  2. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    PubMed

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
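    The convolution (filtered back-projection) algorithm can be sketched as follows; this uses a plain ramp filter and parallel-beam geometry, whereas the paper's filter adds an empirical noise-suppression window and an attenuation correction, both omitted here.

        import numpy as np

        def filtered_back_projection(sinogram, angles_deg):
            # sinogram: (n_views, n_detectors), one projection per gamma-camera view
            n_views, n_det = sinogram.shape
            ramp = np.abs(np.fft.fftfreq(n_det))                     # ideal ramp filter
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

            centre = (n_det - 1) / 2.0
            xs = np.arange(n_det) - centre
            X, Y = np.meshgrid(xs, xs)
            image = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
                # detector coordinate of each pixel for this view, then back-project
                t = X * np.cos(theta) + Y * np.sin(theta) + centre
                image += np.interp(t.ravel(), np.arange(n_det), proj,
                                   left=0.0, right=0.0).reshape(t.shape)
            return image * np.pi / n_views                           # approximate normalization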

  3. Radar cross section lectures

    NASA Astrophysics Data System (ADS)

    Fuhs, A. E.

    A comprehensive account is given of the principles that can be applied in military aircraft configuration studies to minimize the radar cross section (RCS) that will be presented by the resulting design to advanced radars under various mission circumstances. It is noted that, while certain ECM techniques can be nullified by improved enemy electronics in a very short time, RCS reductions may require as much as a decade of radar development before prior levels of detectability can be reestablished by enemy defenses. Attention is given to RCS magnitude determinants, inverse scattering, the polarization and scattering matrix, the RCSs of flat plates and conducting cylinders, and antenna geometry and beam patterns.

  4. The evolution and discharge of electric fields within a thunderstorm

    NASA Technical Reports Server (NTRS)

    Hager, William W.; Nisbet, John S.; Kasha, John R.

    1989-01-01

    An analysis of the present three-dimensional thunderstorm electrical model and its finite-difference approximations indicates unconditional stability for the discretization that results from the approximation of the spatial derivatives by a box-scheme-like method and of the temporal derivative by either a backward-difference or Crank-Nicolson scheme. Lightning propagation is treated through numerical techniques based on the inverse-matrix modification formula and Cholesky updates. The model is applied to a storm observed at the Kennedy Space Center in 1978, and numerical comparisons are conducted between the model and the theoretical results obtained by Wilson (1920) and Holzer and Saxon (1952).
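    The inverse-matrix modification formula referred to here is of Sherman-Morrison type; a generic rank-one version (not the model's actual update, which also exploits Cholesky factors) looks like this:

        import numpy as np

        def sherman_morrison_solve(A_inv, u, v, b):
            # Solve (A + u v^T) x = b given A^{-1}, without refactorizing A
            Ainv_b = A_inv @ b
            Ainv_u = A_inv @ u
            return Ainv_b - Ainv_u * (v @ Ainv_b) / (1.0 + v @ Ainv_u)

        # consistency check against a dense solve
        rng = np.random.default_rng(0)
        A = rng.normal(size=(5, 5)) + 5 * np.eye(5)
        u, v, b = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
        x = sherman_morrison_solve(np.linalg.inv(A), u, v, b)
        print(np.allclose(x, np.linalg.solve(A + np.outer(u, v), b)))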

  5. Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids

    NASA Astrophysics Data System (ADS)

    Tan, Maojin; Wang, Peng; Mao, Keyu

    2014-04-01

    Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and inversion methods including Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) are studied in detail. In a simulation test, a multi-echo CPMG sequence activation is designed first, echo trains of the ideal fluid models are synthesized, an inversion algorithm is then applied to these synthetic echo trains, and finally the T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are each applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are used, the inversion results are in good agreement with the assumed fluid model, which indicates that 3D NMR inversion is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, respectively, and the sensitivity to the fluids at different magnetic field gradients is examined in detail. The effect of the magnetic field gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.
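    The flavour of the SVD-based inversion can be conveyed with a one-dimensional T2 example (all grids, the noise level, and the Tikhonov weight below are arbitrary choices; the 3D problem uses a tensor-product kernel in T2, T1, and D but has the same structure):

        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(1e-3, 1.0, 500)                # echo times (s)
        T2 = np.logspace(-3, 0, 60)                    # relaxation-time grid (s)
        K = np.exp(-t[:, None] / T2[None, :])          # Fredholm kernel

        f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.15) ** 2)   # assumed T2 distribution
        data = K @ f_true + 0.01 * np.random.default_rng(1).normal(size=t.size)

        # truncated SVD compresses the problem before the non-negative fit
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        k = np.sum(s > 1e-3 * s[0])                    # keep the well-conditioned part
        K_c, d_c = np.diag(s[:k]) @ Vt[:k], U[:, :k].T @ data

        alpha = 0.1                                    # Tikhonov weight (hand-picked here)
        A = np.vstack([K_c, np.sqrt(alpha) * np.eye(T2.size)])
        b = np.concatenate([d_c, np.zeros(T2.size)])
        f_est, _ = nnls(A, b)                          # non-negative regularized solution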

  6. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
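    A compact sketch of the workflow on an invented one-dimensional flow problem (the analytic head solution, observation values, and starting guesses are all hypothetical), showing the best-fit parameters and the approximate confidence information that regression-based inverse modeling provides:

        import numpy as np
        from scipy.optimize import least_squares

        x_obs = np.array([100.0, 250.0, 400.0, 600.0, 800.0])      # observation locations (m)
        h_obs = np.array([9.6, 8.9, 8.1, 6.8, 5.2])                # observed heads (m), noisy

        def simulated_heads(params, x, L=1000.0, h_L=4.0):
            # 1-D confined flow with uniform recharge W, no-flow at x=0, fixed head at x=L
            logT, W = params                                       # log10 transmissivity, recharge
            T = 10.0 ** logT
            return h_L + W * (L**2 - x**2) / (2.0 * T)

        def residuals(params):
            return simulated_heads(params, x_obs) - h_obs

        fit = least_squares(residuals, x0=[2.0, 1e-3])
        J = fit.jac
        cov = np.linalg.inv(J.T @ J) * np.sum(fit.fun**2) / (h_obs.size - 2)
        print("estimates:", fit.x, " approx. std. errors:", np.sqrt(np.diag(cov)))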

  7. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Yajun

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward to apply and effective. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  8. On the computation of molecular surface correlations for protein docking using fourier techniques.

    PubMed

    Sakk, Eric

    2007-08-01

    The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
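    The distinction can be demonstrated in a few lines (NumPy; the toy signals stand in for molecular surface functions): zero-padding the DFTs to at least the combined signal length removes the wrap-around contribution and turns the cyclic correlation into the desired linear one.

        import numpy as np

        def linear_correlation_fft(a, b):
            # linear cross-correlation via the FFT, using zero-padding to length
            # len(a) + len(b) - 1 so that no circular wrap-around occurs
            n = len(a) + len(b) - 1
            return np.real(np.fft.ifft(np.fft.fft(a, n) * np.conj(np.fft.fft(b, n))))

        a = np.array([1.0, 2.0, 3.0, 0.0])
        b = np.array([0.0, 1.0, 0.5, 0.0])
        circular = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
        print("circular:", circular)                       # wrap-around corrupts the lags
        print("linear  :", linear_correlation_fft(a, b))   # matches np.correlate(a, b, 'full') up to lag ordering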

  9. Reducing uncertainties in the velocities determined by inversion of phase velocity dispersion curves using synthetic seismograms

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Mehrdad

    Characterizing the near-surface shear-wave velocity structure using Rayleigh-wave phase velocity dispersion curves is widespread in the context of reservoir characterization, exploration seismology, earthquake engineering, and geotechnical engineering. This surface seismic approach provides a feasible and low-cost alternative to the borehole measurements. Phase velocity dispersion curves from Rayleigh surface waves are inverted to yield the vertical shear-wave velocity profile. A significant problem with the surface wave inversion is its intrinsic non-uniqueness, and although this problem is widely recognized, there have not been systematic efforts to develop approaches to reduce the pervasive uncertainty that affects the velocity profiles determined by the inversion. Non-uniqueness cannot be easily studied in a nonlinear inverse problem such as Rayleigh-wave inversion and the only way to understand its nature is by numerical investigation which can get computationally expensive and inevitably time consuming. Regarding the variety of the parameters affecting the surface wave inversion and possible non-uniqueness induced by them, a technique should be established which is not controlled by the non-uniqueness that is already affecting the surface wave inversion. An efficient and repeatable technique is proposed and tested to overcome the non-uniqueness problem; multiple inverted shear-wave velocity profiles are used in a wavenumber integration technique to generate synthetic time series resembling the geophone recordings. The similarity between synthetic and observed time series is used as an additional tool along with the similarity between the theoretical and experimental dispersion curves. The proposed method is proven to be effective through synthetic and real world examples. In these examples, the nature of the non-uniqueness is discussed and its existence is shown. Using the proposed technique, inverted velocity profiles are estimated and effectiveness of this technique is evaluated; in the synthetic example, final inverted velocity profile is compared with the initial target velocity model, and in the real world example, final inverted shear-wave velocity profile is compared with the velocity model from independent measurements in a nearby borehole. Real world example shows that it is possible to overcome the non-uniqueness and distinguish the representative velocity profile for the site that also matches well with the borehole measurements.

  10. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.

  11. Program manual for the Eppler airfoil inversion program

    NASA Technical Reports Server (NTRS)

    Thomson, W. G.

    1975-01-01

    A computer program is described for calculating the profile of an airfoil as well as the boundary layer momentum thickness and energy form parameter. The theory underlying the airfoil inversion technique developed by Eppler is discussed.

  12. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.

  13. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  14. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  15. A new method for the inversion of atmospheric parameters of A/Am stars

    NASA Astrophysics Data System (ADS)

    Gebran, M.; Farah, W.; Paletou, F.; Monier, R.; Watson, V.

    2016-05-01

    Context. We present an automated procedure that simultaneously derives the effective temperature Teff, surface gravity log g, metallicity [Fe/H], and equatorial projected rotational velocity vsini for "normal" A and Am stars. The procedure is based on the principal component analysis (PCA) inversion method, which we published in a recent paper. Aims: A sample of 322 high-resolution spectra of F0-B9 stars, retrieved from the Polarbase, SOPHIE, and ELODIE databases, were used to test this technique with real data. We selected the spectral region from 4400-5000 Å as it contains many metallic lines and the Balmer Hβ line. Methods: Using three data sets at resolving powers of R = 42 000, 65 000 and 76 000, about ~6.6 × 10^6 synthetic spectra were calculated to build a large learning database. The online power iteration algorithm was applied to these learning data sets to estimate the principal components (PC). The projection of spectra onto the few PCs offered an efficient comparison metric in a low-dimensional space. The spectra of the well-known A0- and A1-type stars, Vega and Sirius A, were used as control spectra in the three databases. Spectra of other well-known A-type stars were also employed to characterize the accuracy of the inversion technique. Results: We inverted all of the observational spectra and derived the atmospheric parameters. After removal of a few outliers, the PCA-inversion method appeared to be very efficient in determining Teff, [Fe/H], and vsini for A/Am stars. The derived parameters agree very well with previous determinations. Using a statistical approach, deviations of around 150 K, 0.35 dex, 0.15 dex, and 2 km s^-1 were found for Teff, log g, [Fe/H], and vsini with respect to literature values for A-type stars. Conclusions: The PCA inversion proves to be a very fast, practical, and reliable tool for estimating stellar parameters of FGK and A stars and for deriving effective temperatures of M stars. Based on data retrieved from the Polarbase, SOPHIE, and ELODIE archives. Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/589/A83
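    The core of the PCA inversion can be sketched as follows (Python; the learning database, parameter table, and the use of a plain SVD plus nearest-neighbour lookup are simplifications of the published procedure, which relies on an online power iteration for the PCs):

        import numpy as np

        def pca_invert(learning_spectra, learning_params, observed, n_pc=12):
            # learning_spectra: rows are synthetic spectra; learning_params: their (Teff, logg, [Fe/H], vsini)
            mean = learning_spectra.mean(axis=0)
            X = learning_spectra - mean
            _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal components from the SVD
            P = Vt[:n_pc]                                      # PC basis
            coeffs = X @ P.T                                   # database projected to the low-dimensional space
            c_obs = (observed - mean) @ P.T
            nearest = np.argmin(np.sum((coeffs - c_obs) ** 2, axis=1))
            return learning_params[nearest]                    # parameters of the closest synthetic spectrum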

  16. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using Python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This approach of first-optimize-then-discretize avoids the assembly of the (in general dense) sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for inversion and appropriate solution schemes in escript. We will also give a brief introduction to escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.

  17. Universal field matching in craniospinal irradiation by a background-dose gradient-optimized method.

    PubMed

    Traneus, Erik; Bizzocchi, Nicola; Fellin, Francesco; Rombi, Barbara; Farace, Paolo

    2018-01-01

    Gradient-optimized methods are overcoming the traditional feathering methods for planning field junctions in craniospinal irradiation. In this note, a new gradient-optimized technique, based on the use of a background dose, is described. Treatment planning was performed by RayStation (RaySearch Laboratories, Stockholm, Sweden) on the CT scans of a pediatric patient. Both proton (by pencil beam scanning) and photon (by volumetric modulated arc therapy) treatments were planned with three isocenters. An 'in silico' ideal background dose was created first to cover the upper-spinal target and to produce a perfect dose gradient along the upper and lower junction regions. Using it as background, the cranial and the lower-spinal beams were planned by inverse optimization to obtain dose coverage of their relevant targets and of the junction volumes. Finally, the upper-spinal beam was inversely planned after removal of the background dose and with the previously optimized beams switched on. In both proton and photon plans, the optimized cranial and lower-spinal beams produced a perfect linear gradient in the junction regions, complementary to that produced by the optimized upper-spinal beam. The final dose distributions showed a homogeneous coverage of the targets. Our simple technique allowed us to obtain high-quality gradients in the junction region. The technique works universally for photons as well as protons and could be applicable to TPSs that allow a background dose to be managed. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  18. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  19. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such datasets requires the ability to efficiently solve and optimize the nonlinear time domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature for the Richards equation inversion explicitly calculates the sensitivity matrix using finite difference or automatic differentiation, however, for large scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
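    The matrix-free idea can be sketched with SciPy's LinearOperator (the forward map F, the finite-difference probes, and the use of LSQR for the model update are illustrative stand-ins; the paper forms these products with an implicit sensitivity algorithm rather than finite differences):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, lsqr

        def jacobian_operator(F, m, n_data, eps=1e-6):
            # apply J = dF/dm and its transpose without ever storing the Jacobian matrix
            Fm = F(m)
            def matvec(v):                       # J v via a forward finite difference
                return (F(m + eps * v) - Fm) / eps
            def rmatvec(w):                      # J^T w; a real implementation would use the
                # adjoint of the discretized Richards equation instead of this loop
                return np.array([(F(m + eps * e) - Fm) @ w / eps for e in np.eye(m.size)])
            return LinearOperator((n_data, m.size), matvec=matvec, rmatvec=rmatvec, dtype=float)

        # One Gauss-Newton-style update for hydraulic parameters m given data d_obs:
        #   J_op = jacobian_operator(F, m, d_obs.size)
        #   dm = lsqr(J_op, d_obs - F(m))[0]
        #   m = m + dm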

  20. Break Point Distribution on Chromosome 3 of Human Epithelial Cells exposed to Gamma Rays, Neutrons and Fe Ions

    NASA Technical Reports Server (NTRS)

    Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Most of the reported studies of break point distribution on chromosomes damaged by radiation exposure were carried out with the G-banding technique or determined based on the relative length of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the later-developed multicolor banding in situ hybridization (mBAND) technique that is generally used for analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to either low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory, and high dose rate 600 MeV/u Fe ions at the NASA Space Radiation Laboratory. Detailed analysis of the inversion types revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosome exchanges. We further compared the distribution of break points on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustered break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.

  1. Nonbinary quantification technique accounting for myocardial infarct heterogeneity: Feasibility of applying percent infarct mapping in patients.

    PubMed

    Mastrodicasa, Domenico; Elgavish, Gabriel A; Schoepf, U Joseph; Suranyi, Pal; van Assen, Marly; Albrecht, Moritz H; De Cecco, Carlo N; van der Geest, Rob J; Hardy, Rayphael; Mantini, Cesare; Griffith, L Parkwood; Ruzsics, Balazs; Varga-Szemes, Akos

    2018-02-15

    Binary threshold-based quantification techniques ignore myocardial infarct (MI) heterogeneity, yielding substantial misquantification of MI. To assess the technical feasibility of MI quantification using percent infarct mapping (PIM), a prototype nonbinary algorithm, in patients with suspected MI. Prospective cohort. POPULATION: Patients (n = 171) with suspected MI referred for cardiac MRI. Inversion recovery balanced steady-state free-precession for late gadolinium enhancement (LGE) and modified Look-Locker inversion recovery (MOLLI) T1-mapping on a 1.5T system. Infarct volume (IV) and infarct fraction (IF) were quantified by two observers based on manual delineation, binary approaches (2-5 standard deviations [SD] and full-width at half-maximum [FWHM] thresholds) in LGE images, and by applying the PIM algorithm to T1 and LGE images (PIM-T1; PIM-LGE). IV and IF were analyzed using repeated measures analysis of variance (ANOVA). Agreement between the approaches was determined with Bland-Altman analysis. Interobserver agreement was assessed by intraclass correlation coefficient (ICC) analysis. MI was observed in 89 (54.9%) patients, and 185 (38%) short-axis slices. IF with the 2, 3, 4, 5SD and FWHM techniques were 15.7 ± 6.6, 13.4 ± 5.6, 11.6 ± 5.0, 10.8 ± 5.2, and 10.0 ± 5.2%, respectively. The 5SD and FWHM techniques had the best agreement with manual IF (9.9 ± 4.8%) determination (bias 1.0 and 0.2%; P = 0.1426 and P = 0.8094, respectively). The 2SD and 3SD algorithms significantly overestimated manual IF (9.9 ± 4.8%; both P < 0.0001). PIM-LGE measured significantly lower IF (7.8 ± 3.7%) compared to manual values (P < 0.0001). PIM-LGE, however, showed the best agreement with the PIM-T1 reference (7.6 ± 3.6%, P = 0.3156). Interobserver agreement was rated good to excellent for IV (ICCs between 0.727 and 0.820) and fair to good for IF (0.589-0.736). The application of the PIM-LGE technique for MI quantification in patients is feasible. PIM-LGE, with its ability to account for voxelwise MI content, provides significantly smaller IF than any thresholding technique and shows excellent agreement with the T1-based reference. 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
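    The difference between a binary threshold and a nonbinary, voxelwise weighting can be illustrated with a toy intensity array (all numbers below are invented, and the linear interpolation between remote and core-infarct references is only a stand-in for the proprietary PIM algorithm):

        import numpy as np

        # toy late-gadolinium-enhancement intensities for one myocardial slice
        signal = np.array([310., 480., 650., 820., 990., 400., 1050., 560.])
        remote_mean, remote_sd, infarct_mean = 350., 40., 1000.   # hypothetical references

        # binary 5SD threshold: a voxel counts as either 0% or 100% infarct
        if_binary = (signal > remote_mean + 5 * remote_sd).mean()

        # nonbinary weighting: interpolate each voxel's infarct content between references
        pim = np.clip((signal - remote_mean) / (infarct_mean - remote_mean), 0.0, 1.0)
        if_pim = pim.mean()
        print(f"binary IF = {if_binary:.2%}, nonbinary IF = {if_pim:.2%}")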

  2. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and uses a trace-by-trace multivariate regression analysis of seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain a higher-frequency, high-resolution seismic velocity to be used as the velocity input for seismic pressure prediction, and a density dataset to calculate an accurate Overburden Pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and the similarity of seismic waveforms are used to incorporate well log data and characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs, and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are then determined by a trace-by-trace multivariate regression analysis of the petrophysical data. The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well log pressure prediction and give predicted pressure values close to pressure measurements from well testing.

  3. Applied Mathematics in EM Studies with Special Emphasis on an Uncertainty Quantification and 3-D Integral Equation Modelling

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexey

    2016-01-01

    Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed—uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous amount of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, if a 3-D set up is invoked. This urges development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest for fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to be sampled. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
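    One simple way to use local gradient information in an M-H sampler is a Metropolis-adjusted Langevin proposal; the sketch below (Python, with a generic log-posterior and gradient supplied by the caller) shows the drifted proposal and the asymmetric acceptance ratio, while the Hessian-aware and low-rank refinements discussed in the paper are omitted.

        import numpy as np

        def mala_sample(log_post, grad_log_post, m0, step, n_steps, rng=np.random.default_rng(0)):
            # Metropolis-adjusted Langevin algorithm: propose along the gradient, then accept/reject
            m, lp = m0.copy(), log_post(m0)
            samples = []
            for _ in range(n_steps):
                drift = 0.5 * step**2 * grad_log_post(m)
                prop = m + drift + step * rng.normal(size=m.size)
                lp_prop = log_post(prop)
                # asymmetric proposal densities enter the acceptance ratio
                fwd = -np.sum((prop - m - drift) ** 2) / (2 * step**2)
                drift_p = 0.5 * step**2 * grad_log_post(prop)
                back = -np.sum((m - prop - drift_p) ** 2) / (2 * step**2)
                if np.log(rng.uniform()) < lp_prop - lp + back - fwd:
                    m, lp = prop, lp_prop
                samples.append(m.copy())
            return np.array(samples)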

  4. A dynamic mechanical analysis technique for porous media

    PubMed Central

    Pattison, Adam J; McGarry, Matthew; Weaver, John B; Paulsen, Keith D

    2015-01-01

    Dynamic mechanical analysis (DMA) is a common way to measure the mechanical properties of materials as functions of frequency. Traditionally, a viscoelastic mechanical model is applied and current DMA techniques fit an analytical approximation to measured dynamic motion data by neglecting inertial forces and adding empirical correction factors to account for transverse boundary displacements. Here, a finite element (FE) approach to processing DMA data was developed to estimate poroelastic material properties. Frequency-dependent inertial forces, which are significant in soft media and often neglected in DMA, were included in the FE model. The technique applies a constitutive relation to the DMA measurements and exploits a non-linear inversion to estimate the material properties in the model that best fit the model response to the DMA data. A viscoelastic version of this approach was developed to validate the approach by comparing complex modulus estimates to the direct DMA results. Both analytical and FE poroelastic models were also developed to explore their behavior in the DMA testing environment. All of the models were applied to tofu as a representative soft poroelastic material that is a common phantom in elastography imaging studies. Five samples of three different stiffnesses were tested from 1 – 14 Hz with rough platens placed on the top and bottom surfaces of the material specimen under test to restrict transverse displacements and promote fluid-solid interaction. The viscoelastic models were identical in the static case, and nearly the same at frequency with inertial forces accounting for some of the discrepancy. The poroelastic analytical method was not sufficient when the relevant physical boundary constraints were applied, whereas the poroelastic FE approach produced high quality estimates of shear modulus and hydraulic conductivity. These results illustrated appropriate shear modulus contrast between tofu samples and yielded a consistent contrast in hydraulic conductivity as well. PMID:25248170

  5. Two-Port Representation of a Linear Transmission Line in the Time Domain.

    DTIC Science & Technology

    1980-01-01

    which is a rational function. To use the Prony procedure it is necessary to inverse transform the admittance functions. For the transmission line, most... impulse is a constant, the inverse transform of Y0(s) contains an impulse of value... Therefore, if we were to numerically inverse transform Y0(s), we... would remove this impulse and inverse transform the remaining admittance term (equation (23)). The Prony procedure would then be applied to the result. Of course, an impulse

  6. Arterial spin labeling in combination with a look-locker sampling strategy: inflow turbo-sampling EPI-FAIR (ITS-FAIR).

    PubMed

    Günther, M; Bock, M; Schad, L R

    2001-11-01

    Arterial spin labeling (ASL) permits quantification of tissue perfusion without the use of MR contrast agents. With standard ASL techniques such as flow-sensitive alternating inversion recovery (FAIR) the signal from arterial blood is measured at a fixed inversion delay after magnetic labeling. As no image information is sampled during this delay, FAIR measurements are inefficient and time-consuming. In this work the FAIR preparation was combined with a Look-Locker acquisition to sample not one but a series of images after each labeling pulse. This new method allows monitoring of the temporal dynamics of blood inflow. To quantify perfusion, a theoretical model for the signal dynamics during the Look-Locker readout was developed and applied. Also, the imaging parameters of the new ITS-FAIR technique were optimized using an expression for the variance of the calculated perfusion. For the given scanner hardware the parameters were: temporal resolution 100 ms, 23 images, flip-angle 25.4 degrees. In a normal volunteer experiment with these parameters an average perfusion value of 48.2 +/- 12.1 ml/100 g/min was measured in the brain. With the ability to obtain ITS-FAIR time series with high temporal resolution, arterial transit times in the range of -138 to 1054 ms were measured, where nonphysical negative values were found in voxels containing large vessels. Copyright 2001 Wiley-Liss, Inc.

  7. Surface wave inversion of central Texas quarry blasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonner, J.L.; Goforth, T.T.

    1993-02-01

    Compressional and shear wave models of the upper crust in central Texas were obtained by inverting Rayleigh and Love waves recorded at the new W.M. Keck Foundation Seismological Observatory at Baylor University. The Keck Observatory, which became operational in April 1992, consists of a three-component, broadband Geotech seismometer located at a depth of 130 feet in a borehole 17 miles from the Baylor campus. The field station is solar powered, and the 140-dB dynamic range digital data are transmitted to the Baylor analysis lab via radio, where they are analyzed and archived. Limestone quarries located in all directions from the Keck Observatory detonate two to four tons of explosives per blast several times a week. Recordings of these blasts show sharp onsets of P and S waves, as well as dispersed Rayleigh and Love waves in the period band 1 to 3 seconds. Multiple filter analysis and phase matched filtering techniques were used to obtain high quality dispersion curves for the surface waves, and inversion techniques were applied to produce shear velocity models of the upper crust. A rapid increase in shear velocity at a depth of about 1.5 km is associated with the Ouachita Overthrust Belt. Portable seismic recording systems were placed at the quarries to monitor start times and initial wave forms. These data were combined with the Keck recordings to produce attenuation and compressional velocity models.
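    Multiple filter analysis can be sketched as follows (Python; the Gaussian filter width and period grid are arbitrary choices, and instrument response, tapering, and the phase-matched filtering step are ignored):

        import numpy as np
        from scipy.signal import hilbert

        def multiple_filter_analysis(trace, dt, distance_km, periods):
            # Group-velocity dispersion: narrow Gaussian filters are applied around each
            # period and the envelope maximum gives the group arrival time.
            n = trace.size
            freqs = np.fft.rfftfreq(n, dt)
            spec = np.fft.rfft(trace)
            t = np.arange(n) * dt
            group_vel = []
            for T in periods:
                fc = 1.0 / T
                alpha = 50.0                                   # hand-picked filter width
                gauss = np.exp(-alpha * ((freqs - fc) / fc) ** 2)
                narrow = np.fft.irfft(spec * gauss, n)
                envelope = np.abs(hilbert(narrow))
                t_group = t[np.argmax(envelope)]               # assumes the peak is not at t = 0
                group_vel.append(distance_km / t_group)
            return np.array(group_vel)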

  8. Top-down estimate of dust emissions through integration of MODIS and MISR aerosol retrievals with the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-04-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint constrained by aerosol optical depth (AOD) that are derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with the aerosol single scattering properties consistent with GEOS-chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.

  9. Top-down Estimate of Dust Emissions Through Integration of MODIS and MISR Aerosol Retrievals With the Geos-chem Adjoint Model

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-01-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance, with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in the prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD, MODIS Deep Blue AOD over the dust source region, and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data to constrain dust emissions at various temporal and spatial scales, and hence to improve the quantification of dust effects on climate, air quality, and human health.

  10. Iterative electromagnetic Born inversion applied to earth conductivity imaging

    NASA Astrophysics Data System (ADS)

    Alumbaugh, D. L.

    1993-08-01

    This thesis investigates the use of a fast imaging technique to deduce the spatial conductivity distribution in the earth from low frequency (less than 1 MHz), cross well electromagnetic (EM) measurements. The theory embodied in this work is the extension of previous strategies and is based on the Born series approximation to solve both the forward and inverse problem. Nonlinear integral equations are employed to derive the series expansion which accounts for the scattered magnetic fields that are generated by inhomogeneities embedded in either a homogeneous or a layered earth. A sinusoidally oscillating, vertically oriented magnetic dipole is employed as a source, and it is assumed that the scattering bodies are azimuthally symmetric about the source dipole axis. The use of this model geometry reduces the 3-D vector problem to a more manageable 2-D scalar form. The validity of the cross well EM method is tested by applying the imaging scheme to two sets of field data. Images of the data collected at the Devine, Texas test site show excellent correlation with the well logs. Unfortunately there is a drift error present in the data that limits the accuracy of the results. A more complete set of data collected at the Richmond field station in Richmond, California demonstrates that cross well EM can be successfully employed to monitor the position of an injected mass of salt water. Both the data and the resulting images clearly indicate the plume migrates toward the north-northwest. The plausibility of these conclusions is verified by applying the imaging code to synthetic data generated by a 3-D sheet model.
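
    At its core, a first-order Born step linearizes the scattering problem so that the conductivity perturbation can be recovered from the scattered fields by a damped least-squares solve, which is then iterated. The sketch below illustrates that structure with a fixed, assumed sensitivity matrix standing in for the integral-equation forward operator; it is not the thesis' imaging code, and all numbers are invented.

```python
import numpy as np

def born_inversion(G, scattered, n_iter=5, damping=1e-3):
    """Iterative Born-style inversion for a conductivity perturbation.

    G         : (m, n) linearised (first Born) sensitivity matrix mapping the
                conductivity perturbation in each cell to scattered-field data
    scattered : (m,) measured scattered-field samples
    Each iteration solves a damped least-squares problem for the model update;
    in the thesis the forward operator comes from nonlinear integral equations,
    which is replaced here by a fixed matrix purely for illustration.
    """
    n = G.shape[1]
    model = np.zeros(n)
    for _ in range(n_iter):
        residual = scattered - G @ model
        lhs = G.T @ G + damping * np.eye(n)
        model += np.linalg.solve(lhs, G.T @ residual)
    return model

# Toy example: 20 data points, 8 model cells, one anomalous cell.
rng = np.random.default_rng(4)
G = rng.normal(size=(20, 8))
true_model = np.zeros(8)
true_model[3] = 1.0
data = G @ true_model + 0.01 * rng.standard_normal(20)
print(np.round(born_inversion(G, data), 2))
```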

  11. Earthquake source tensor inversion with the gCAP method and 3D Green's functions

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.

    2013-12-01

    We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCAP) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a 1 km³ grid using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish robustness of the inversion results using the gCAP method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings from ~45 stations up to ~0.2 Hz. Both the best fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher frequency data for this and other earthquakes is in progress.
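
    The ISO/CLVD/DC description of a source comes from an eigenvalue decomposition of the potency or moment tensor: the isotropic part is the trace, and the CLVD content is measured by the ratio of the smallest to the largest deviatoric eigenvalue. The sketch below follows one common convention for this decomposition; it is illustrative only and may differ in detail from the convention used by gCAP.

```python
import numpy as np

def decompose_moment_tensor(m):
    """Split a symmetric moment tensor into ISO, DC and CLVD fractions.

    Uses a standard eigenvalue-based decomposition; conventions differ between
    codes, so this is an illustrative variant, not gCAP itself.
    """
    iso = np.trace(m) / 3.0
    dev = m - iso * np.eye(3)
    eig = np.sort(np.linalg.eigvalsh(dev))            # deviatoric eigenvalues
    # epsilon measures the CLVD content of the deviatoric part.
    eps = -eig[np.argmin(np.abs(eig))] / np.max(np.abs(eig))
    m_iso = abs(iso)
    m_dev = np.max(np.abs(eig))
    f_iso = m_iso / (m_iso + m_dev) if (m_iso + m_dev) > 0 else 0.0
    f_clvd = (1.0 - f_iso) * 2.0 * abs(eps)
    f_dc = 1.0 - f_iso - f_clvd
    return f_iso, f_dc, f_clvd

# Nearly pure double-couple tensor with a small isotropic part (illustrative).
m = np.array([[0.02, 1.0, 0.0],
              [1.0,  0.02, 0.0],
              [0.0,  0.0,  0.02]])
print("ISO/DC/CLVD fractions:", decompose_moment_tensor(m))
```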

  12. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement error covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials at Dugway Proving Ground, Utah.
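
    The family of estimators being compared can be summarized by the weighted minimum-norm solution s = W Gᵀ (G W Gᵀ)⁻¹ μ, in which a weight matrix W plays the role of a background error covariance and G W Gᵀ is the weighted Gram matrix. The sketch below implements that generic estimator with a small damping term; the diagonal weight form and all numbers are assumptions for illustration, not the renormalization algorithm itself.

```python
import numpy as np

def weighted_minimum_norm(G, mu, W, damping=1e-6):
    """Weighted minimum-norm source estimate s = W G^T (G W G^T + damping*I)^-1 mu.

    G  : (m, n) source-receptor sensitivity matrix
    mu : (m,) measured concentrations
    W  : (n, n) weight matrix playing the role of a background error covariance
    """
    m = G.shape[0]
    gram = G @ W @ G.T + damping * np.eye(m)      # weighted Gram matrix
    return W @ G.T @ np.linalg.solve(gram, mu)

# Illustrative example: 3 monitors, 50 candidate source cells, one true point source.
rng = np.random.default_rng(2)
G = np.abs(rng.normal(size=(3, 50)))
s_true = np.zeros(50)
s_true[17] = 2.0
mu = G @ s_true
# A simple diagonal weight emphasising cells the network is sensitive to (assumed form).
W = np.diag(G.sum(axis=0) / G.sum())
s_hat = weighted_minimum_norm(G, mu, W)
print("largest reconstructed cell:", int(np.argmax(s_hat)))
```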

  13. Numerical studies in geophysics

    NASA Astrophysics Data System (ADS)

    Hier Majumder, Catherine Anne

    2003-10-01

    This thesis focuses on the use of modern numerical techniques in the geo- and environmental sciences. Four topics are discussed in this thesis: finite Prandtl number convection, wavelet analysis, inverse methods and data assimilation, and nuclear waste tank mixing. The finite Prandtl number convection studies examine how convection behavior changes as Prandtl numbers are increased to as high as 2 x 10^4, on the order of Prandtl numbers expected in very hot magmas or mushy ice diapirs. I found that there are significant differences in the convection style between finite Prandtl number convection and the infinite Prandtl number approximation even for Prandtl numbers on the order of 10^4. This indicates that the infinite Prandtl convection approximation might not accurately model behavior in fluids with large, but finite Prandtl numbers. The section on inverse methods and data assimilation used the technique of four-dimensional variational data assimilation (4D-VAR) developed by meteorologists to integrate observations into forecasts. It was useful in studying the predictability and dependence on initial conditions of finite Prandtl simulations. This technique promises to be useful in a wide range of geological and geophysical fields, including mantle convection, hydrogeology, and sedimentology. Wavelet analysis was used to help image and scrutinize, at small scales, both temperature and vorticity fields from convection simulations as well as the geoid. It was found to be extremely helpful in both cases. It allowed us to separate the information in the data into various spatial scales without losing the locations of the signals in space. This proved to be essential in understanding the processes producing the total signal in the datasets. The nuclear waste study showed that techniques developed in geology and geophysics can be used to solve scientific problems in other fields. I applied state-of-the-art techniques currently employed in geochemistry, sedimentology, and mantle mixing to simulate dynamical processes occurring in the course of mixing nuclear waste tanks.
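
    As an illustration of the multiscale separation that wavelets provide, the sketch below decomposes a synthetic 2-D field into spatial scales while keeping track of where each feature lives; it uses the PyWavelets package, and the field, wavelet choice, and decomposition depth are assumptions, not taken from the thesis.

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative 2-D "temperature field": a smooth background plus a localised anomaly.
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
field = np.sin(2 * np.pi * x) + 0.5 * np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2) / 0.002)

# Multi-level 2-D wavelet decomposition separates the field into spatial scales
# while retaining where each signal lives (unlike a global Fourier transform).
coeffs = pywt.wavedec2(field, wavelet="db4", level=4)
approx, detail_levels = coeffs[0], coeffs[1:]        # coarsest approximation + details
for lvl, (ch, cv, cd) in enumerate(detail_levels, start=1):
    energy = np.sum(ch**2) + np.sum(cv**2) + np.sum(cd**2)
    print(f"detail set {lvl} (coarse -> fine): shape {ch.shape}, energy {energy:.3f}")

# The decomposition is invertible, so no information is lost in the separation.
recon = pywt.waverec2(coeffs, wavelet="db4")[:field.shape[0], :field.shape[1]]
print("max reconstruction error:", np.max(np.abs(recon - field)))
```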

  14. Composite pulses for interferometry in a thermal cold atom cloud

    NASA Astrophysics Data System (ADS)

    Dunning, Alexander; Gregory, Rachel; Bateman, James; Cooper, Nathan; Himsworth, Matthew; Jones, Jonathan A.; Freegarde, Tim

    2014-09-01

    Atom interferometric sensors and quantum information processors must maintain coherence while the evolving quantum wave function is split, transformed, and recombined, but suffer from experimental inhomogeneities and uncertainties in the speeds and paths of these operations. Several error-correction techniques have been proposed to isolate the variable of interest. Here we apply composite pulse methods to velocity-sensitive Raman state manipulation in a freely expanding thermal atom cloud. We compare several established pulse sequences, and follow the state evolution within them. The agreement between measurements and simple predictions shows the underlying coherence of the atom ensemble, and the inversion infidelity in a ~80 μK atom cloud is halved. Composite pulse techniques, especially if tailored for atom interferometric applications, should allow greater interferometer areas, larger atomic samples, and longer interaction times, and hence improve the sensitivity of quantum technologies from inertial sensing and clocks to quantum information processors and tests of fundamental physics.
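
    The compensation idea can be seen with 2x2 rotation operators: a composite sequence such as 90x-180y-90x still inverts the population while partially cancelling a pulse-amplitude error common to its constituent rotations. The sketch below compares the inversion infidelity of a simple π pulse and of that standard composite sequence under a fractional amplitude error; it assumes an idealized two-level model and is not the specific set of sequences studied in the paper.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pulse(theta, axis):
    """SU(2) rotation by angle theta about the x or y axis."""
    s = sx if axis == "x" else sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * s

def inversion_probability(unitary):
    """Probability of transferring |0> to |1> (ideal inversion gives 1)."""
    return abs(unitary[1, 0]) ** 2

errors = np.linspace(-0.2, 0.2, 5)            # fractional pulse-amplitude errors
for eps in errors:
    scale = 1.0 + eps
    simple = pulse(np.pi * scale, "x")
    # 90x - 180y - 90x composite pulse: each constituent rotation carries the
    # same amplitude error, but the sequence partially cancels it.
    composite = (pulse(np.pi / 2 * scale, "x")
                 @ pulse(np.pi * scale, "y")
                 @ pulse(np.pi / 2 * scale, "x"))
    print(f"error {eps:+.2f}: simple infidelity {1 - inversion_probability(simple):.4f}  "
          f"composite infidelity {1 - inversion_probability(composite):.4f}")
```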

  15. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β⁻ detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
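
    A Maximum-Likelihood (expectation-maximization) deconvolution iteratively updates the incident-spectrum estimate by the ratio of measured to predicted counts back-projected through the response matrix. The sketch below shows that standard MLEM update on an invented toy response containing two photopeaks and a crude beta continuum; it is not the paper's Geant4-derived response or its identification algorithm.

```python
import numpy as np

def mlem_deconvolve(measured, response, n_iter=200):
    """Maximum-Likelihood (EM) deconvolution of a pulse-height spectrum.

    measured : (m,) measured channel counts
    response : (m, n) detector response matrix (column j = response to component j)
    Returns the estimated incident component activities (n,). Generic MLEM update,
    not the specific implementation described in the paper.
    """
    sensitivity = response.sum(axis=0)                  # column sums
    estimate = np.ones(response.shape[1])
    for _ in range(n_iter):
        predicted = response @ estimate
        ratio = np.where(predicted > 0, measured / predicted, 0.0)
        estimate *= (response.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return estimate

# Toy example: two overlapping photopeaks plus a broad 90Sr-like beta continuum.
channels = np.arange(100)
def peak(c):
    return np.exp(-0.5 * ((channels - c) / 3.0) ** 2)
beta_continuum = np.clip(1.0 - channels / 80.0, 0.0, None)     # crude beta shape
R = np.column_stack([peak(30), peak(55), beta_continuum])
R /= R.sum(axis=0)
true_activities = np.array([50.0, 20.0, 100.0])
measured = np.random.default_rng(3).poisson(R @ true_activities)
print("estimated activities:", np.round(mlem_deconvolve(measured, R), 1))
```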

  16. Modeling of endoluminal and interstitial ultrasound hyperthermia and thermal ablation: applications to device design, feedback control, and treatment planning

    PubMed Central

    Prakash, Punit; Salgaonkar, Vasant A.; Diederich, Chris J.

    2014-01-01

    Endoluminal and catheter-based ultrasound applicators are currently under development and are in clinical use for minimally invasive hyperthermia and thermal ablation of various tissue targets. Computational models play a critical role in device design and optimization, assessment of therapeutic feasibility and safety, devising treatment monitoring and feedback control strategies, and performing patient-specific treatment planning with this technology. The critical aspects of theoretical modeling, applied specifically to endoluminal and interstitial ultrasound thermotherapy, are reviewed. Principles and practical techniques for modeling acoustic energy deposition, bioheat transfer, thermal tissue damage, and dynamic changes in the physical and physiological state of tissue are reviewed. The integration of these models and applications of simulation techniques in identification of device design parameters, development of real time feedback-control platforms, assessing the quality and safety of treatment delivery strategies, and optimization of inverse treatment plans are presented. PMID:23738697
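
    Bioheat transfer in such models is usually described by the Pennes equation, ρc ∂T/∂t = k∇²T − ω_b ρ_b c_b (T − T_a) + q_ac, where the last term is the deposited acoustic power. The sketch below takes explicit finite-difference steps of a 1-D version with typical soft-tissue parameter values; the geometry, source term, and boundary conditions are hypothetical illustrations, not the authors' treatment-planning models.

```python
import numpy as np

def pennes_bioheat_step(T, dt, dx, k=0.5, rho=1050.0, c=3600.0,
                        w_b=0.002, rho_b=1050.0, c_b=3800.0, T_a=37.0, q_ac=None):
    """One explicit finite-difference step of the 1-D Pennes bioheat equation.

    rho*c dT/dt = k d2T/dx2 - w_b*rho_b*c_b*(T - T_a) + q_ac
    w_b is the blood perfusion rate (1/s); values are typical soft-tissue numbers
    used here only for illustration.
    """
    if q_ac is None:
        q_ac = np.zeros_like(T)
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt = (k * lap - w_b * rho_b * c_b * (T - T_a) + q_ac) / (rho * c)
    T_new = T + dt * dTdt
    T_new[0], T_new[-1] = T_a, T_a           # simple fixed-temperature boundaries
    return T_new

# Illustrative heating of tissue by an interstitial ultrasound source near x = 5 mm.
x = np.linspace(0.0, 0.02, 201)              # 2 cm of tissue on a 0.1 mm grid
T = np.full_like(x, 37.0)
q = 2.0e6 * np.exp(-((x - 0.005) / 0.002) ** 2)   # acoustic power deposition, W/m^3
for _ in range(2000):                        # 2000 steps of 10 ms = 20 s of heating
    T = pennes_bioheat_step(T, dt=0.01, dx=x[1] - x[0], q_ac=q)
print(f"peak temperature after 20 s: {T.max():.1f} °C")
```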

  17. Melt Flow Control in the Directional Solidification of Binary Alloys

    NASA Technical Reports Server (NTRS)

    Zabaras, Nicholas

    2003-01-01

    Our main project objectives are to develop computational techniques based on inverse problem theory that can be used to design directional solidification processes that lead to desired temperature gradient and growth conditions at the freezing front at various levels of gravity. It is known that control of these conditions plays a significant role in the selection of the form and scale of the obtained solidification microstructures. Emphasis is given to controlling the effects of various melt flow mechanisms on the conditions local to the solidification front. The thermal boundary conditions (furnace design) as well as the magnitude and direction of an externally applied magnetic field are the main design variables. We will highlight computational design models for sharp front solidification models and briefly discuss work in progress toward the development of design techniques for multi-phase volume-averaging based solidification models.

  18. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving nonconvex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in trust region literature. The conditions concerning allowable error are remarkably relaxed: in particular, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
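
    The trust-region mechanism compares the reduction actually achieved by a trial step with the reduction predicted by a local model, and uses that ratio to accept or reject the step and to resize the trust radius. The sketch below is a bare-bones version of that loop using a simple gradient step, a linear model, and exact derivatives, applied to a small least-squares parameter-estimation problem; it only illustrates the ratio test and radius update whose behaviour under inexact evaluations the thesis analyses.

```python
import numpy as np

def trust_region_minimize(f, grad, x0, delta0=1.0, delta_max=10.0,
                          eta=0.1, tol=1e-6, max_iter=500):
    """Basic trust-region loop with a gradient step capped at the trust radius."""
    x, delta = np.asarray(x0, float), delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        step = -min(1.0, delta / gnorm) * g         # crude Cauchy-type step
        predicted = -(g @ step)                     # reduction predicted by the linear model
        actual = f(x) - f(x + step)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                           # model was poor: shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(step), delta):
            delta = min(2.0 * delta, delta_max)     # model did well at the boundary: expand
        if rho > eta:
            x = x + step                            # accept the step
    return x

# Toy parameter estimation: recover x from b = A x (exact gradients are used here;
# the thesis' theory covers the case where they are only known approximately).
A = np.array([[0.8, 0.2], [0.1, 0.7], [0.3, 0.4]])
x_true = np.array([2.0, -1.0])
b = A @ x_true
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
print(trust_region_minimize(f, grad, x0=[0.0, 0.0]))
```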

  19. Forward problem studies of electrical resistance tomography system on concrete materials

    NASA Astrophysics Data System (ADS)

    Ang, Vernoon; Rahiman, M. H. F.; Rahim, R. A.; Aw, S. R.; Wahab, Y. A.; Thomas W. K., T.; Siow, L. T.

    2017-03-01

    Electrical resistance tomography (ERT) is a well-known non-invasive imaging technique that is inexpensive, radiation free, and suited to visualization measurements of multiphase flows, and it is frequently applied in geophysical, medical and Industrial Process Tomography (IPT) applications. Application of ERT to concrete is a new field of exploration, in which the technique can be used to monitor and assess the health and condition of concrete without destroying it. In this paper, an ERT model of concrete is studied, in which the sensitivity field model is produced and simulated using COMSOL software. The effects of different current injection values and different concrete conductivities are studied in detail. This study provides an important direction for further study of the inverse problem in the ERT system. Moreover, the results of this technique may open a new inspection method for concrete structures, helping to maintain the health of concrete structures for civilian safety.
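
    The ERT forward problem amounts to solving ∇·(σ∇u) = 0 for the potential u given an assumed conductivity distribution and the injected currents, from which sensitivity fields can then be derived. A minimal sketch of that forward step on a 2-D resistor-network (finite-volume) grid is given below; the grid, electrode positions, and concrete conductivity values are hypothetical, and the paper's own model is built in COMSOL.

```python
import numpy as np

def ert_forward(sigma, i_src, i_snk, current=1.0):
    """Solve the discrete ERT forward problem on a 2-D node-based resistor network.

    sigma : (ny, nx) conductivity of each cell (S/m), uniform unit spacing assumed
    i_src, i_snk : flat node indices of the current injection and extraction electrodes
    Returns the node potentials with node 0 used as the potential reference.
    """
    ny, nx = sigma.shape
    n = ny * nx
    K = np.zeros((n, n))
    idx = lambda iy, ix: iy * nx + ix
    for iy in range(ny):
        for ix in range(nx):
            for jy, jx in ((iy, ix + 1), (iy + 1, ix)):     # right and down neighbours
                if jy < ny and jx < nx:
                    g = 0.5 * (sigma[iy, ix] + sigma[jy, jx])   # average of neighbouring cells
                    a, b = idx(iy, ix), idx(jy, jx)
                    K[a, a] += g
                    K[b, b] += g
                    K[a, b] -= g
                    K[b, a] -= g
    rhs = np.zeros(n)
    rhs[i_src], rhs[i_snk] = current, -current
    # Fix the reference potential at node 0 to remove the nullspace of K.
    K[0, :] = 0.0
    K[0, 0] = 1.0
    rhs[0] = 0.0
    return np.linalg.solve(K, rhs)

# Hypothetical 10 x 10 concrete section (~0.01 S/m) with a low-conductivity defect.
sigma = np.full((10, 10), 0.01)
sigma[4:6, 4:6] = 0.001
u = ert_forward(sigma, i_src=3, i_snk=96)
print("potential difference across two measurement nodes:", u[30] - u[69])
```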

  20. Linear Water Waves

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'
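
    The time-harmonic problems treated in the book rest on the linear dispersion relation ω² = gk tanh(kh), which links wave frequency, wavenumber, and water depth. The sketch below solves it for the wavenumber by Newton iteration; the wave period and depth are illustrative values only.

```python
import numpy as np

def wavenumber(omega, depth, g=9.81, tol=1e-12, max_iter=100):
    """Solve the linear water-wave dispersion relation omega^2 = g*k*tanh(k*h) for k.

    Newton iteration starting from the deep-water wavenumber k0 = omega^2 / g.
    """
    k = omega**2 / g                        # deep-water initial guess
    for _ in range(max_iter):
        f = g * k * np.tanh(k * depth) - omega**2
        df = g * np.tanh(k * depth) + g * k * depth / np.cosh(k * depth) ** 2
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

# An 8-second swell in 20 m of water (illustrative values):
omega = 2 * np.pi / 8.0
k = wavenumber(omega, depth=20.0)
print(f"wavelength {2 * np.pi / k:.1f} m, phase speed {omega / k:.2f} m/s")
```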
